Artificial intelligence (AI) continues to significantly advance the automation of organisations’ processes and procedures. But although AI can improve operational efficiency and support internal audit, the limitations inherent in machine learning and the consequences of biased algorithms mean that, without human supervision, it is an incomplete solution that could leave organisations exposed to significant risk.
There are a number of reasons why AI alone may prove insufficient for fraud detection. Prominent among these is the fact that AI-driven audit analytics are only as good as the data they are given. If the data fed into the system during the development and testing phases is incomplete, inaccurate or outdated, the resulting analysis and conclusions are likely to be flawed.
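As a simple illustration of this ‘garbage in, garbage out’ problem, the sketch below shows the kind of basic completeness, validity and freshness checks an audit team might run before any data reaches a model. It is a minimal example using the pandas library; the column names, thresholds and sample records are invented purely for illustration.

```python
import pandas as pd

# Hypothetical transaction extract; column names and values are illustrative only.
transactions = pd.DataFrame({
    "txn_id": [1001, 1002, 1003, 1004],
    "amount": [250.0, None, 99000.0, -40.0],
    "posted_date": ["2024-01-05", "2024-01-06", "1999-12-31", "2024-01-08"],
})
transactions["posted_date"] = pd.to_datetime(transactions["posted_date"])

# Completeness, validity and freshness checks before any model sees the data.
issues = {
    "missing_amount": int(transactions["amount"].isna().sum()),
    "negative_amount": int((transactions["amount"] < 0).sum()),
    "stale_records": int((transactions["posted_date"] < "2020-01-01").sum()),
}
print(issues)  # {'missing_amount': 1, 'negative_amount': 1, 'stale_records': 1}

# Only well-formed, current rows go forward; the rest are routed to a human reviewer.
clean = transactions.dropna(subset=["amount"])
clean = clean[(clean["amount"] >= 0) & (clean["posted_date"] >= "2020-01-01")]
```

A model trained or run on the unfiltered extract would silently absorb the missing, negative and stale records, which is precisely how flawed inputs become flawed conclusions.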
Algorithm limitations
AI software helps detect unusual transactions that could indicate fraud by analysing them and comparing them to known fraud patterns. However, it can struggle to identify sophisticated and novel fraud. In 2020, Chinese coffee chain Luckin Coffee was found to have fabricated around US$300m of revenue through fake documents and invented sales figures. Despite the company’s use of AI-driven financial analytics tools, the fraud was flagged by an anonymous human whistleblower, not by the software.
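The pattern-matching nature of this kind of detection is easy to see in a toy example. The sketch below uses scikit-learn’s IsolationForest, a common anomaly-detection technique; it is not the tooling used by any company mentioned here, and the features and figures are invented. The point it illustrates is that a transaction engineered to resemble the historical norm sails straight through.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Invented features: [order amount, hour of day] for historical 'normal' sales.
normal_sales = np.column_stack([
    rng.normal(30, 8, 1000),   # typical order amounts
    rng.normal(13, 3, 1000),   # typical transaction times
])

# Train the detector on past behaviour only.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sales)

# A crude outlier (a huge order at 3am) is likely flagged as anomalous (-1)...
print(model.predict([[5000.0, 3.0]]))
# ...but a fabricated sale crafted to look routine is scored as normal (1).
print(model.predict([[31.0, 12.5]]))
```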
Because AI is restricted to the parameters defined when the software was developed, it cannot take into account the wider business context or the implications of activity that falls outside those pre-defined parameters. Moreover, most AI algorithms are not designed to decode unstructured, highly complex data such as legal contracts or financial agreements spanning multiple jurisdictions.
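A hypothetical rule-based check makes the parameter problem concrete: the system can only flag what its designers anticipated. The rules, field names and thresholds below are invented for illustration.

```python
# Hypothetical rule set: the system only knows what it was told to look for.
RULES = {
    "max_single_payment": 10_000,
    "blocked_countries": {"XX"},
}

def breaches_rules(payment: dict) -> bool:
    """Return True if a payment violates a pre-defined rule."""
    return (
        payment["amount"] > RULES["max_single_payment"]
        or payment["country"] in RULES["blocked_countries"]
    )

# A web of related-party payments kept just under the threshold breaches no
# individual rule, so each one passes unflagged.
print(breaches_rules({"amount": 9_500, "country": "DE"}))  # False
```

Detecting that kind of structuring requires context the rules never encode, which is exactly where human judgment comes in.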
The 2020 Wirecard scandal provides an example of AI’s limitations in this area. Despite using financial reporting software that was apparently compliant with established financial reporting standards, the German payment processor was able to fraudulently report ‘ghost money’ totalling €1.9bn through a highly complex web of sophisticated fraudulent transactions involving fake documents. AI fraud systems, had they been in place, would not have been able to detect the irregularities, because the transactions were so complex and unusual that they deviated far from common patterns.
Unquantifiable strategic factors, such as organisational culture, management tone and philosophy, and the intricacies of stakeholder relationships, also remain outside the scope of AI.
The €200bn Danske Bank money-laundering scandal, which hit the headlines in 2018, provides another example of a situation where AI alone could not have provided a solution, because of its inability to assess and respond to the interpersonal dynamics at play. Despite multiple red flags raised by the internal audit team and regulatory authorities, irregularities were not reported to head office in a timely way, a failure subsequently attributed to a domineering tone at the top and a lack of proper governance. Red flags and control weaknesses in operational processes often come to light during an audit through an experienced auditor’s close scrutiny of process controls; AI-driven audit software cannot match that intuition and experience.
Keeping pace
AI software must also be continuously updated to keep pace with rapidly changing regulations and standards, and human judgment and expertise are required for those changes to be fully understood and accurately interpreted.
The NMC Health scandal in 2020 is a case in point: the London-listed UAE healthcare provider was able to understate its debts by around US$4bn despite having a sophisticated ERP financial reporting system in place. Had an AI fraud-detection tool been embedded in the ERP software and used under the supervision of an experienced professional, the chances of such misreporting going undetected would have been reduced.
Given the constraints of AI and the risks they introduce, the technology should be viewed as a blunt tool, for use only as a support and with human oversight. It should complement, rather than replace, the judgment and expertise of auditors in providing independent assurance over organisational risk management, governance and internal controls.