Author

Peter Reilly is a member of the Bailey Network, a group of former analysts and investors who are now consulting in the reporting space.

From a very early age, I have been fascinated with understanding how things work. Back in a largely mechanical world, this was usually an achievable goal. Disassembling a device would often reveal its secrets, although reassembly was sometimes a little more challenging.

As technology has evolved from mechanical to electronic, this goal has become less and less realistic. With many devices now having some sort of embedded software, inner workings are increasingly opaque. The inputs and outputs are still visible, but the bit in between is often a mystery – to me, at least.

Broadly speaking, these ‘mysteries’ come in two types: conventional software and machine learning. Conventional software may be complex, but it is (in theory) always possible to understand how it works.

Machine learning, by contrast, is just another way of saying ‘black box’. It is inherently impossible to understand a machine-learning algorithm, as the software is written by the machine itself and will evolve over time. Some of my favourite science fiction books revolve around computers that have evolved in ways not envisaged by their designers. Sometimes this ends well; more often it doesn’t.

All this came to mind recently when I was listening to a presentation by one of the UK’s most senior accountants. He was talking about the use of artificial intelligence (AI) in auditing. In his telling, AI is an exciting new technology that will improve quality and reduce costs. In my mind, AI is a new word for an old concept: black box.

Stable and predictable

Some years ago, I was talking to the head of one of the world’s largest makers of medical imaging equipment. We were discussing the use of AI to analyse scans and find things that a human doctor might miss. To my surprise, he immediately raised a major regulatory hurdle: it is impossible to get approval for a medical device that relies on machine learning. The approval process requires that the machine behaves in a way that is stable and predictable. Similar rules apply to other fields, such as avionics.

When you think about it, this is entirely understandable. I do not want to fly in a plane that relies on a computer that is constantly rewriting its software. I do not want the crash investigator to be unable to establish why the plane flew into a mountainside.

Accounting may not be a matter of life and death like medicine or flying, but there are still parallels. The whole concept of auditing relies on impartial, evidence-based analysis of factual information. There will be subjective judgments, but these too will be based on assumptions that can be analysed and tested. It must be possible for a new pair of eyes to understand and replicate the steps taken by another auditor.

I do expect many professions, including accounting, to make more and more use of analytical software, but I think the potential for AI is massively over-hyped. ‘The AI gave us this answer, but we have no idea how it reached that conclusion’ will never satisfy a regulator. I hope.

No one is accountable

There is another problem that the imaging CEO raised: whom do you sue when AI goes wrong? Do you sue the company that made the scanner? Do you sue the company that wrote the original program? Do you sue the clinic? The only thing you can say for sure is that you can’t sue the AI itself.

AI is already having a major impact on unregulated activities, but I think its effect on auditing will evolve much more slowly. The issue with AI is that it leaves behind no audit trail.
