Author

Rachael Johnson, global head of risk management and corporate governance, Policy & Insights, ACCA

From romance scams to multimillion-pound deepfake video conference frauds, criminal methodology is evolving dramatically thanks to AI advancements.

The well-documented US$25m Hong Kong case illustrates the scale of the deception: a finance professional at a leading engineering firm joined what appeared to be a legitimate Teams call with senior colleagues, including the CFO. All the other participants were deepfakes. Convinced, the employee authorised 15 transactions totalling HK$200m before the scheme was detected.

The impact of procurement fraud is often underestimated because of its prevalence

This is not an isolated incident. Our research for a recent report, Combatting fraud in a perfect storm – based on a survey of more than 2,000 cybersecurity, compliance, audit and risk professionals, as well as 31 roundtable events worldwide and one-to-one interviews with experts in the field – reveals a startling level of fraud activity.

Trigger warning

In the report we highlight that fraud is no longer a collection of separate risks, but an interconnected storm system. Cyber tools are amplifying all types of fraud and a single weak link in a value chain can trigger cascading failures across multiple entities. The convergence of Web 3.0, accelerating AI and sophisticated global crime networks creates threats that outpace most organisations’ ability to respond.

We set out a ‘prevalence versus materiality’ axis, designed to help professionals avoid simplistic assumptions about fraud. This shows, for example, that cyber fraud consistently rates as far more material because of its unpredictable, catastrophic potential – while the impact of procurement fraud is often underestimated precisely because it is so prevalent: routine, low-value losses accumulate unnoticed. The axis also illustrates how fraud manifests differently in different regions and sectors.

Fraud is being amplified by a lack of ethical leadership and appropriate response

‘Organisations must navigate sector-specific terrains shaped by multiple pressures and idiosyncratic vulnerabilities,’ says the report.

The sophistication of modern AI-generated content defeats conventional verification. When asked if they could spot high-quality AI avatars of people they know on Teams calls, even seasoned risk professionals expressed doubt.

‘AI is destroying the barrier between truth and fiction, and the implication for finance, audit and risk professionals is profound,’ adds Dr Roger Miles, a behavioural scientist and former auditor, in the report. ‘We’ve reached a watershed moment where we’ve got to deeply question the truth of the bookkeeping in front of us.’

Fighting back

For criminals, AI has brought the opportunity to commit fraud at scale, using technology that can adapt in real time and offers massive returns for minimal investment. So where does that leave organisations?

The same technology that is driving a worldwide increase in fraud is also leading the fightback, and AI-driven solutions are helping with instant scam detection, image analysis and pattern recognition. The UK government, for example, recovered £480m in fraud losses between April 2024 and April 2025 – its largest annual recovery ever.

Every fraudulent transaction leaves a trail of harm

But while AI solutions are helping detect fraud, the response can leave much to be desired. The uncomfortable truth is that fraud is being amplified not only by technology, but in cultural terms by a lack of ethical leadership, accountability and appropriate response.

In the report we also warn of governance gaps that give fraudsters and criminals too much opportunity. Fraud is everyone’s issue but no one’s job. While the survey shows that boards believe they should own oversight, fewer than 30% of respondents say this happens in practice.

Our study found that organisations are generating more fraud alerts than they can process, eroding confidence that reports lead to action. Counting cases is not the same as managing risk. Just 57% of survey respondents believe that their own organisation actively looks for fraud – which is why ‘common’ risks such as procurement erode resources steadily, while lower-frequency risks such as cyber can prove existential.

Critical link

Accountants and auditors provide the critical link between fraud risk and board comprehension of fraud data, but it is clear from the research that the sheer volume of fraud reporting can be overwhelming. This highlights the urgent need for all our professions to provide concise, causal, decision-oriented analysis rather than voluminous compliance reports.

We are in an AI arms race between fraudsters and defenders

As ACCA chief executive Helen Brand points out in her introduction to the report, every fraudulent transaction leaves a trail of harm to people, businesses and society at large. And the losses are staggering – trillions of dollars globally. We are in an AI arms race between fraudsters and defenders. Those who treat AI as optional – or as a problem to avoid – will be increasingly vulnerable.

The choice is not whether to engage with AI – criminals have already made that decision. The choice is how to deploy it intelligently, ethically and effectively before the next deepfake call costs your organisation its reputation, its resources, or both.

More information

Listen to the podcast ‘Risk culture: AI, avatars and the war on fraud’, featuring ACCA’s Rachael Johnson and David Clarke, a former City of London Police officer and co-founder of Guildhawk.
