Author

Dr Patrick Buckley, Dr Brendan McCarthy and Professor Elaine Doyle, Kemmy Business School, University of Limerick

1 unit CPD

Studying this article and answering the related questions can count towards your verifiable CPD if you are following the unit route to CPD, and the content is relevant to your learning and development needs. One hour of learning equates to one unit of CPD.
Multiple-choice questions

In our previous AB article, we examined how artificial intelligence (AI) is poised to change traditional career pathways into and through the profession, influencing the skills and practice structures required in future. This article explores how AI is also very likely to change some of the tasks that accountants undertake.

Developing, training and operating large language models (LLMs) like ChatGPT requires enormous resources, so most organisations will use systems provided by third-party providers, who will control both the software and the data. Current AI systems are opaque, even to their developers, making them fundamentally different to other forms of software. They are vulnerable to factual errors, logical errors and hallucinations; the briefest web search will reveal multiple reports of LLMs performing the addition of two numbers incorrectly.
Accountants will need to adjust the rules the AI systems use

Usually when an error is observed in other forms of software, it can be replicated, traced to a particular part of the source code and corrected. With current LLMs, there is no linear set of steps leading from a set of inputs to a particular output, so problems must be fixed after the output is generated, on a case-by-case basis. Increased computational power, memory and data may result in LLMs that generate fewer errors in the future, but developing general solutions to these issues remains challenging.

Work in collaboration

As AI technology is employed to build a set of accounts, accountants will need to work in collaboration with the technology, customising the large, general-purpose AI systems provided by third parties to operate in specific organisational contexts, effectively altering the behaviour of an AI system from its baseline operation. As legislation and accounting rules change, accountants will need to adjust the rules the AI systems use to process financial data.

Skills such as prompt engineering – writing the instructions that AI systems will follow – will become vital. Accounting will become less about constructing a balance sheet and more about describing how to create one.

Verification shift

Increased automation will also orientate an accountant’s role towards the verification of accounts created by AI systems. This change from a discipline largely focused on construction to one more concerned with verification raises several interesting questions.

We ask human accountants to be perfect, but in practice imperfection must be accepted

Imagine an organisation using an AI system that generates a set of financial accounts from raw financial data. One time in 10,000, a non-obvious but financially significant error occurs. Is this a good reason to reject the use of AI systems in creating accounts? After all, we ask human accountants to be perfect, but in practice human imperfection must be accepted and managed.

It seems pragmatic to accept a certain degree of error from these systems, particularly if they make fewer mistakes than humans at an infinitesimal fraction of the time and cost. Nonetheless, explicitly acknowledging that accounts may be incorrect, however unlikely, marks a significant change.

A question of error

Perhaps the problem is the error rate itself. We wouldn't accept a system that makes an error one time in 10,000 but would accept one that makes an error one time in a million. This may be a pragmatic response to the imperfection of AI systems, but it raises the question of how we establish the acceptable error rate and whether it should be set by the individual organisation or by accounting standards. Indeed, the only organisations likely to have enough data to establish an acceptable error rate are the ones providing the systems.
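
By way of illustration (the volumes and rates below are hypothetical, not drawn from this article), a short calculation shows how far apart those two error rates sit once applied across a large number of accounts:

```python
# Illustrative sketch with hypothetical figures: the chance that at
# least one financially significant error appears somewhere in a
# batch of AI-generated accounts, at two per-account error rates.

def prob_at_least_one_error(error_rate: float, n_accounts: int) -> float:
    # P(at least one error) = 1 - P(no error in any of n independent accounts)
    return 1 - (1 - error_rate) ** n_accounts

for rate in (1 / 10_000, 1 / 1_000_000):
    p = prob_at_least_one_error(rate, n_accounts=50_000)
    print(f"error rate 1 in {round(1 / rate):,}: "
          f"chance of at least one error = {p:.1%}")
```

Across 50,000 sets of accounts, a one-in-10,000 error rate makes at least one error all but certain, while one in a million keeps the chance below 5% — which is why where the threshold is set matters.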

There is a risk that accountants become moral ‘crumple zones’

We may decide the solution is to have a human accountant verify the accounts after creation. However, if we ask them to review only a sample, we can't be sure the accounts are error free. If we ask them to verify the entire process, this is likely to take as long as creating the accounts manually, negating any efficiency gained through automation.
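
The limits of sample-based review can be made concrete with a small, purely illustrative calculation (the error rate and sample size here are assumptions, not figures from this article):

```python
# Illustrative sketch: probability that reviewing a random sample of
# entries detects at least one error, given a hypothetical per-entry
# error rate. A clean sample is weak evidence of error-free accounts.

def prob_sample_detects(error_rate: float, sample_size: int) -> float:
    # P(detect) = 1 - P(every sampled entry is error free)
    return 1 - (1 - error_rate) ** sample_size

# With a one-in-10,000 error rate, even checking 1,000 entries
# detects an error less than 10% of the time.
print(f"{prob_sample_detects(1 / 10_000, 1_000):.1%}")
```

In other words, a reviewer who samples and finds nothing has learned very little, while checking enough entries to gain real assurance rapidly approaches the cost of full verification.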

Moral hazards

More generally, if accounting becomes a verification-orientated discipline, significant new moral hazards arise.

If an accountant’s role becomes verifying accounts created by third-party autonomous and opaque systems, there is a risk that accountants become moral ‘crumple zones’ – like the part of a car frame designed to collapse in a collision. By verifying accounts, they take responsibility for their accuracy, absolving the third-party creators of blame.

Given the speed differential between AI systems creating accounts and humans verifying them, constant pressure to increase the speed of verification, with the attendant risks in terms of care and attention, seems inevitable.

Societal challenges

Part of the role of accounting involves applying rules to data to create financial models that serve the needs of stakeholders. AI systems that surpass humans in terms of precision and speed in analysing and implementing accounting rules therefore raise questions about the role of the profession.

Automating the constructive aspects of the accountant’s role will create space
Other professions have faced similar challenges in the past. For example, rather than indexing and storing data, librarians today assist people to find information. The disciplinary knowledge base of understanding how to manage information has been successfully adapted to address new societal challenges such as information overload and misinformation.

Accounting may need to evolve similarly, and automating the constructive aspects of the role will create space. One potential future is that the profession moves increasingly towards combining accounting skills with judgment-orientated skills such as critical thinking and value prioritisation.

In this, the goal would be to create a profession that is less focused on describing how organisations are today, and more concerned with using the analytical and modelling knowledge base of the discipline to engineer organisations that demonstrate sustainable and ethical values.

More information

Read AB's Big Tech special edition

Take a look at ACCA’s Quick guide to AI
