What do 19 October 1987 and 29 August 1997 have in common? Each is significant in the story of artificial intelligence (AI), and in the reality and the myth of what it means for all of us.

More mature members with long memories might recognise the first date as Black Monday. Although AI is thought of as a recent phenomenon, computer-based models were among the main players in that traumatic and tragic episode of 36 years ago.

Although over-valued stocks and the US trade deficit were in part to blame (and some sources cite military movements in the Middle East and losses on the UK insurance markets), it was the program trading that really sent the world economy into a spin.

An unforeseen coding error in Hong Kong meant that when shares hit a certain price, an automatic and unstoppable ‘sell’ order kicked in across automated trading systems. The surge of auto-selling spread west from Asia, triggering a price landslide that swept through Europe and swallowed up Wall Street. Trillions of dollars of value were wiped out – bankrupting businesses, destroying individuals and hollowing out pension funds.


Joseph Owolabi is ACCA president

It was the greatest failure of ethics, control and systems in the history of business. We weren’t asleep at the wheel. We weren’t even at the wheel. And we didn’t learn.

Enslaved by robots

Sci-fi fans might know the second date. In the Terminator movies, it was on 29 August 1997 (‘judgment day’) that a military AI system called Skynet became self-aware, removed people from control and launched a global nuclear war. It destroyed most of humanity, and survivors were enslaved by their new robot overlords.

That scenario, happily, remains a Hollywood confection. But it shows the powerful hold that the fear of AI exerts over people’s imaginations.

Both these stories, in different ways, remind us that human oversight remains critical in any AI application. Just because we can delegate all our decision-making to bots or automated processes doesn’t mean that we should.

Until now, the regulation of AI has been patchy and inadequate, lagging behind advances in what the technology can do. That is why I am pleased that ACCA is leading on the issue, alongside our colleagues from EY, with a response to a UK government white paper on AI regulation.

Our joint report, Building the foundations for trusted artificial intelligence, is a significant contribution to the rising calls for better governance in this sensitive and important sector.

After all, we can’t wait for Arnie to return from the future to rescue us. We have to do this ourselves.