Author

Liz Fisher, journalist

As the use of artificial intelligence (AI) and machine learning-based tools continues to spread at pace throughout the workplace, fundamental questions remain: How safe is this technology? How far can it develop? And will it help us, or simply remove the need to employ us?

A new ACCA report, The smart alliance: accounting expertise meets machine intelligence, seeks to understand how extensively AI is already used by accounting professionals, and looks at the risks and benefits of the technology. Throughout, it stresses that AI is ‘seen as a tool to enhance the skills of accounting professionals’, rather than replace them – and that AI is helping the profession reimagine the role of finance professionals by allowing them to focus on interpretation, strategy and high-level decision-making.

Risky business

The report covers the AI family of technologies: machine learning, natural language processing and generative AI (GenAI). The capabilities of GenAI in particular are still at a relatively early stage of development, and these models have so far proven effective only in translating language, generating or summarising text, and answering questions. The potential and use cases of GenAI will continue to expand, but the report warns that at this stage the models have limitations that bring considerable risks.

These are probabilistic models that are not reliably accurate, and which are unable to recognise the truth or objective facts. This means they are subject to ‘hallucinations’ as a result of flawed data, a lack of context, misinterpretation of data or poor reasoning.

‘There may well be valuable uses for GenAI within strict and limited circumstances,’ says the report, ‘but not without serious consideration to the potential for inaccuracies, biases or misleading references to proliferate and cause harm or reputational damage.’

This reinforces the report’s assertion that it is essential that the use case for AI tools, and GenAI in particular, is right – and it stresses that AI is not always the solution. There are ‘very practical limitations and implications’ of using predictive methods, says the report: ‘Good predictions may not lead to good decisions. The context of the decision and its potential consequences are crucial.’

What to choose

Indeed, the survey carried out as part of the study found that uncertainty about choosing the right AI solution is seen as the biggest challenge for organisations adopting AI, along with integration challenges with existing systems; 29% of respondents identified these as their biggest concerns.

The survey also shows that AI is being adopted at a rapid pace, across a wide range of applications. Data analysis and reporting is the most popular area for AI adoption, but the survey shows an extensive selection of use cases.

Predictably, large corporate firms have been the first to adopt AI, with more than 40% already using it for data analysis and reporting. So far, fewer than 30% of sole practitioners or smaller practices have followed suit. But it is clear from the survey that organisations of all sizes are committed to investing in AI. Among the largest organisations, a quarter invested more than US$0.5m in the past year alone.

Finance role

The report highlights the important role that finance departments are playing in shaping organisations’ AI and data strategies. In 59% of Big Four firms, and 68% of small and medium-sized firms, for example, finance departments are actively involved in an advisory role. This role is crucial, says the report: ‘Successful AI implementation is not just about the technology; it’s about alignment across the organisation and tying technological capabilities to financial goals and regulatory requirements.’

However, it is clear that most respondents are not confident that the risk and control measures for AI within their organisation are good enough. The report notes that there is a ‘growing awareness of AI risks’ across the industry, but many organisations are still in the early stages of developing risk management strategies and the culture to support them.

Best practice

Informed by detailed case studies, the report identifies an extensive list of best practices in AI adoption. This includes, for example, beginning with well-defined, high-impact use cases where AI can provide clear value, and running AI systems in parallel with existing processes to validate performance, before scaling up.

On the question of governance and ethics, the report recommends that clear governance structures and policies be based on actual uses rather than generalised risks. Organisations should have in place appropriate policies and standards, such as a clear scope for the use of AI and adequate control of data inputs. But, the report adds, one of the most effective safeguards is the individual.

‘Personal integrity remains at the forefront,’ it says. ‘This means that the human user must always use their judgment and remain accountable for decisions made using AI-generated content.’

Overall, the report notes that the profession is ‘on the cusp of significant change, driven by advancing AI technologies and evolving business needs’. The level of investment by firms, it adds, ‘reflects a growing recognition of AI’s potential to drive real business value in the accounting sphere. By embracing AI technologies thoughtfully and strategically, the profession can enhance its capabilities, deliver greater value to clients, and play an even more crucial role in guiding business decision-making.’

More information

ACCA’s annual virtual conference Accounting for the Future, from 26 to 28 November, has a range of sessions looking at the impact of technology on the finance profession. Register to attend live or on demand, and earn up to 21 CPD units.
