
Artificial intelligence (AI) remains underutilised in finance, despite its ability to enhance forecasting, reduce errors and improve decision-making: only 8% of finance teams use AI, compared with 15% in other functions.
This is despite a review of AI in accounting showing that AI-driven models can, for example, improve revenue forecast accuracy by up to 10% compared to traditional statistical methods.
Despite these proven benefits, finance teams remain hesitant to adopt the new technology. A common assumption is that making AI more explainable – by clarifying how it generates outputs – will automatically build trust.
However, research shows that in high-risk decisions, accuracy, reliability and performance matter more than explainability. Simply understanding how AI makes decisions does not guarantee confidence in its outputs.
To bridge this gap, trust in AI needs to be built systemically – through the continuous adaptation of governance, transparency and human oversight, ensuring alignment with business needs and evolving risks.
Supporting this effort is the Pragmatic Trust Model (PTM), which is based on a continuous cycle that tackles the real barriers to AI adoption, helping ensure AI is reliable, trusted and embedded in decision-making.
The model treats AI as an ‘enabler’, not just a tool – in other words, the technology is viewed as a way of gaining a competitive edge rather than as a risk to be managed. This gives finance teams confidence in its value.
Retail example
To explain how the PTM works, let’s take a real-life example of a global retailer that has used the model to integrate AI-driven sales forecasting using sales, marketing, customer and external data.
The CFO and chief revenue officer (CRO) jointly sponsor the initiative, emphasising AI’s role in driving financial performance and sales effectiveness.
The chief data officer (CDO) chairs the steering committee to ensure robust governance and alignment with data strategy. The CDO oversees AI integration, data governance and cross-functional collaboration.
The initiative is launched at the retailer’s company-wide town hall, followed by regular updates through progress reports and stakeholder forums.
The data science team validates historical data, identifying and resolving anomalies and inconsistencies to improve data quality and reliability.
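A validation step of this kind can be sketched in a few lines. The Python below is illustrative only (not the retailer's actual pipeline): it flags outliers in historical sales using a median-based modified z-score, which stays robust to the very anomalies being hunted.

```python
from statistics import median

def flag_anomalies(sales, threshold=3.5):
    """Flag values whose modified z-score exceeds the threshold.

    Uses the median absolute deviation (MAD) rather than the standard
    deviation, so the check is not skewed by the outliers themselves."""
    med = median(sales)
    mad = median(abs(v - med) for v in sales)
    if mad == 0:
        return []  # no spread to measure against
    return [(i, v) for i, v in enumerate(sales)
            if 0.6745 * abs(v - med) / mad > threshold]

# A data-entry error (99000) stands out against normal weekly sales.
weekly_sales = [1020, 980, 1050, 1010, 99000, 995, 1030]
print(flag_anomalies(weekly_sales))  # → [(4, 99000)]
```

Flagged records would then go to a human reviewer rather than being corrected automatically, preserving an audit trail.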
The retailer implements a governance framework to:
- oversee the data, with a data trust dashboard allowing visibility of AI-generated insights, providing transparency, traceability and confidence in decision-making
- oversee the model, including continuous monitoring to detect and mitigate bias, drift and accuracy deviations, ensuring AI models remain reliable, fair and aligned with business objectives over time
- ensure regulatory compliance, with AI models adhering to industry regulations and audit standards, mitigating financial and ethical risks.
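The model-oversight point above – continuous monitoring for drift and accuracy deviations – can be illustrated with a minimal check. The function name, figures and tolerance below are assumptions for illustration, not part of the retailer's framework:

```python
def detect_drift(recent_errors, baseline_mape, tolerance=0.25):
    """Flag drift when the mean absolute percentage error (MAPE) of
    recent forecasts exceeds the accepted baseline by more than the
    tolerance fraction, signalling the model may need retraining."""
    recent_mape = sum(recent_errors) / len(recent_errors)
    return recent_mape > baseline_mape * (1 + tolerance)

# Hypothetical figures: baseline MAPE of 8%, recent window averaging 11%.
recent = [0.10, 0.12, 0.11, 0.11]
print(detect_drift(recent, baseline_mape=0.08))  # → True
```

In practice a check like this would run on a schedule, with alerts routed to the steering committee when it trips.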
Cross-functional collaboration
A dedicated AI taskforce – including finance, IT, sales, supply chain and data science functions – is set up to drive adoption, aligning AI-driven insights with business strategy, customer demand and operational execution. By integrating these diverse perspectives, the taskforce reduces bias and enhances decision-making, drawing on sales, finance and supply chain expertise to refine the AI-driven insights.
Weekly working sessions drive real-time problem-solving and alignment. Finance defines key variables, IT integrates AI with existing systems, and sales and supply chain account for market demand and logistics. This collaborative approach fosters shared ownership and builds trust.
Transparency
The retailer delivers AI forecasts through interactive dashboards, providing real-time insights into seasonality, pricing and economic trends. Users can drill down into forecast assumptions.
To ensure transparency and accountability, the company logs AI-related decisions, discussions and model adjustments, enabling stakeholders to track forecast evolution and business impact. A ‘human-in-the-loop’ mechanism allows leaders to review, adjust or override AI forecasts via interactive approval workflows. Override justifications are logged, ensuring transparency and continuous model refinement.
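A rough sketch of such an override log follows; the class, function names and in-memory list are all hypothetical, standing in for whatever approval workflow and audit store the organisation actually uses:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ForecastDecision:
    region: str
    ai_forecast: float
    final_forecast: float
    approver: str
    justification: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    @property
    def overridden(self) -> bool:
        return self.final_forecast != self.ai_forecast

audit_log: list[ForecastDecision] = []

def approve(region, ai_forecast, approver, override=None, justification=""):
    """Log an approval; any override must carry a justification."""
    if override is not None and not justification:
        raise ValueError("overrides must include a justification")
    final = override if override is not None else ai_forecast
    decision = ForecastDecision(region, ai_forecast, final,
                                approver, justification)
    audit_log.append(decision)
    return decision

d = approve("UK", 1_200_000.0, "cfo", override=1_100_000.0,
            justification="Store closures not yet reflected in training data")
print(d.overridden)  # → True
```

The key design choice is that an override without a justification is rejected outright, so the log can never contain an unexplained deviation from the model.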
Test and trial
The retailer pilots AI forecasting in five regions, chosen for their varying sales patterns. Running the AI's predictions in parallel with traditional models reveals an 18% reduction in forecasting errors, leading to improved inventory turnover, fewer stockouts and more predictable revenue.
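A parallel-run comparison of this kind typically measures forecast error for both approaches over the same period. The sketch below uses made-up figures rather than the retailer's data, so the computed reduction is purely illustrative:

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error of a forecast series."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

# Illustrative figures only, not the retailer's data.
actuals     = [100, 120, 130, 110]
traditional = [ 90, 135, 118, 121]
ai_model    = [ 97, 125, 126, 113]

trad_err, ai_err = mape(actuals, traditional), mape(actuals, ai_model)
reduction = (trad_err - ai_err) / trad_err
print(f"Forecast error reduced by {reduction:.0%}")
```

Running both models against the same actuals makes the comparison like-for-like, which is what gives the headline error-reduction figure its credibility.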
These insights can help shape a scalable rollout, ensuring seamless AI adoption and long-term business impact.
Continuously review
The retailer monitors and refines AI effectiveness at every stage, from pilot testing to full-scale adoption, to ensure it remains accurate and relevant.
The AI taskforce and steering committee regularly assess forecast performance, integrating new data, market trends and user feedback to keep AI adaptive and valuable to the business. Benchmarking AI predictions against human forecasts during trials, and against real-world performance post-adoption, helps fine-tune models, refine override mechanisms and maintain trust. An AI repository logs insights, decisions and adjustments.
By embedding AI governance and continuous learning into daily operations, organisations can ensure AI remains a dynamic, evolving asset that drives strategic decision-making and long-term growth.