Artificial intelligence (AI) is rapidly redefining the world of internal audit, promising faster analytics, deeper insights and predictive assurance (see the AB article ‘AI moves into internal audit’). However, the real test is how to harness its power while ensuring that it ultimately strengthens, rather than weakens, the trust, accuracy and accountability on which assurance depends.
Mary Shelley’s warning in Frankenstein, that the creator can become the slave of his own creation, rings true here, too. AI will offer long-lasting value if we can use it wisely and sustainably. But specific challenges are likely to arise.
Accelerate or eliminate?
One major danger is using AI tools to accelerate outdated or inefficient audit methods instead of seizing the opportunity to rethink and simplify them. Rather than simply making old processes faster, AI should be used to modernise and improve the internal audit workflow.
‘Doing the same, but faster’ may not necessarily enhance output quality
For example, generative AI tools are now capable of producing coherent, well-structured paragraphs from specific inputs and observations. This enables the automation of audit report production in conventional, static formats. While this approach is technically sound, relying on AI merely to produce traditional reports more quickly may come at the cost of more radical innovation and added value through, for example, dynamic and interactive reporting.
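To make the capability concrete, here is a minimal sketch of this kind of automated report drafting, assuming the OpenAI Python client; the model name, prompt wording and observation fields are illustrative assumptions rather than a recommended reporting approach.

```python
# Minimal sketch: drafting a conventional audit finding paragraph from
# structured observations. Assumes the OpenAI Python client and a configured
# API key; the model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()

observation = {
    "area": "Accounts payable",
    "finding": "12 of 45 sampled invoices were approved after payment was made",
    "risk": "Payments may be released for unauthorised or incorrect amounts",
    "recommendation": "Enforce approval before payment release in the workflow tool",
}

prompt = (
    "Draft a concise internal audit report paragraph covering the finding, "
    f"risk and recommendation from these observations: {observation}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The output is simply a faster version of the same static paragraph an auditor would otherwise have written, which is precisely the trade-off at issue here.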
‘Doing the same, but faster’ may not necessarily enhance output quality or significantly improve stakeholder engagement. AI in audit offers scope for genuine simplification, the removal of inefficient practices and the reallocation of human effort to critical tasks.
Blind spots
The often-used phrase ‘training your AI’ simply indicates that models, no matter how sophisticated, are only as reliable as the information they are fed. A significant risk is underestimating the role that organisational data quality plays in training internal audit AI models.
In practice, many internal audit environments rely on data drawn from complex, fragmented systems with less-than-optimal data governance. This can mean incomplete datasets, legacy platforms with inconsistent structures, or transactions misclassified through human error. When flawed data is used as the foundation for AI-driven analysis, the resulting insights can be misleading, masking real issues while highlighting false anomalies.
Embedding data-cleansing checkpoints helps identify discrepancies early
This creates a dangerous illusion of accuracy, in which automation amplifies embedded errors, leading to flawed observations or even model bias. In the audit context, the Orwellian ‘fading of truth’ comes to mind: when flawed data is accepted as fact, truth is progressively lost.
Internal audit functions must therefore establish rigorous data governance within their AI workflows. Conducting data lineage reviews helps trace the origin and transformation of key data elements, ensuring that inputs are complete and reliable. Similarly, embedding data-cleansing checkpoints throughout the audit cycle helps identify discrepancies early, before they pollute outputs.
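As an illustration, a minimal sketch of such a checkpoint is shown below, assuming a transaction extract in CSV form with hypothetical column names (transaction_id, amount, account, posting_date); the checks are illustrative, not an exhaustive data governance regime.

```python
# Minimal sketch of a data-cleansing checkpoint run before AI-driven analysis.
# Assumes a CSV extract with hypothetical columns transaction_id, amount,
# account and posting_date; the checks shown are illustrative only.
import pandas as pd

def data_quality_checkpoint(path: str, period_start: str, period_end: str) -> dict:
    df = pd.read_csv(path, parse_dates=["posting_date"])
    return {
        # Incomplete records: missing values in key fields
        "missing_values": int(df[["transaction_id", "amount", "account"]].isna().any(axis=1).sum()),
        # Potential duplicates: the same transaction ID appearing more than once
        "duplicate_ids": int(df.duplicated(subset=["transaction_id"]).sum()),
        # Records posted outside the period under review
        "out_of_period": int((~df["posting_date"].between(period_start, period_end)).sum()),
    }

issues = data_quality_checkpoint("gl_extract.csv", "2024-01-01", "2024-12-31")
for check, count in issues.items():
    print(f"{check}: {count} records flagged")
```

Records flagged at this stage can be investigated and corrected before the extract is fed into any model, rather than discovered only after the analysis has shaped an audit conclusion.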
Skills gaps
There is often a significant gap between internal auditors’ skillsets and the technical demands of prompting, interpreting and challenging outcomes in an AI-native environment. It is also common to underestimate the human and behavioural investment that an ambitious transformation such as AI adoption requires. This can create unrealistic expectations that ultimately encourage behaviour driven by short-term results across the organisation.
Unexamined AI outputs are not worth trusting
Modern AI tools, though powerful, often function as black boxes, producing results through complex algorithms that can be difficult to understand or challenge. This can lead organisations to over-rely on AI outputs – or, conversely, dismiss valuable insights due to lack of confidence in the technology.
Stakeholders may assume that AI guarantees objectivity and accuracy, but unexamined AI outputs are not worth trusting. Human oversight and good training remain essential to validate, contextualise and communicate audit results with professional scepticism.
‘Native’ literacy
Audit teams therefore need to invest in developing a ‘native’ AI literacy. This does not necessarily mean turning all auditors into data scientists, but it does mean equipping internal audit teams with the technical skills to evaluate AI outputs with appropriate critical thinking.
Ongoing learning, including practical workshops and sharing of best practice, can also demystify AI. It will enable auditors to identify new use cases (for internal audit or for the wider organisation) and focus on continuous improvement, which in turn will drive a high-performing, insight-driven internal audit culture.
As the AI journey continues, other challenges – such as how to manage ethical concerns, transparency and responsible use – will also emerge and require equally careful attention.
Despite these pitfalls, however, AI as a tool for enhancing – not replacing – judgment offers immense potential to strengthen productivity and the quality of assurance. By approaching AI adoption with curiosity and a commitment to learning, teams can ensure that technology reinforces, rather than eclipses, the professional judgment at the heart of internal audit.