World chess champion Garry Kasparov is beaten by IBM's Deep Blue computer in 1997
Author

Kiara Taylor, journalist

Today, artificial intelligence (AI) is a ubiquitous part of our daily lives. Online product recommendations, customised music and video suggestions, customer service chatbots and smart digital assistants all rely on the technology, while it is increasingly applied across industries from finance and healthcare to retail and logistics.

But while you might be forgiven for thinking that this is a recent development, AI is far from a new phenomenon. Humans have long been fascinated by the idea of intelligent machines and automated beings; early traces of the concept can be seen in fictional works such as Mary Shelley’s 1818 novel Frankenstein and the Tin Man from the 1939 film The Wizard of Oz (based on L Frank Baum’s 1900 novel, The Wonderful Wizard of Oz).

Turing said that, since humans make decisions based on available information, machines should be able to do the same

The idea only really took off, however, when British polymath Alan Turing suggested in his 1950 paper Computing Machinery and Intelligence that, since humans make decisions based on reason and available information, it was within the realms of possibility for machines to do the same. AI research then gained momentum, with early pioneer John McCarthy developing Lisp, the first AI programming language, in the late 1950s.

Despite the early breakthroughs, though, the development of AI programs was hindered by technological and cost limitations. Scepticism set in, leading to a reduction in investments and interest in the field.

Informed decisions

The 1970s saw a revival of interest, with the development of expert systems: specialised programs designed to emulate the decision-making capabilities of humans. Specific scenarios were programmed into these systems, along with rules for responding to each, enabling them to make informed decisions based on predefined rules and data.

Mycin, for example, a medical diagnosis system developed at Stanford University in California in 1972, was capable of diagnosing infectious diseases as accurately as a human expert.

Increased computational power and the availability of massive datasets pushed the AI agenda forward

Despite their promise, however, these systems failed to achieve widespread adoption due to their rigid nature, which made it difficult to scale them to more complex and dynamic real-world applications.

A key turning point came in the 1990s, when advances in machine learning algorithms and renewed interest in neural networks enabled AI systems to learn from data rather than follow fixed rules. The field’s progress was famously showcased in 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov, demonstrating the potential of AI in strategic decision-making tasks.

Going into the 21st century, increased computational power and the availability of massive datasets pushed the AI agenda forward, enabling breakthroughs in computer vision, speech recognition, natural-language processing (NLP) and intelligent agents.

Game on

Since 2010, advances have accelerated at a remarkable rate, due primarily to the development of deep-learning techniques. NLP in particular has enabled machines to understand and generate human language more effectively.

Developments have captured the wider public imagination through the medium of television. In 2011, IBM again took to a public forum, pitting its Watson computer against two champion contestants on US general knowledge quiz show Jeopardy! The computer trounced the top quizzers.

Five years later, AlphaGo, Google DeepMind’s AI-powered system, defeated the world champion Go player Lee Sedol, showcasing the power of deep learning in complex strategic games.

Just 29% of companies have implemented AI, with cost and security being major concerns

Meanwhile, the introduction of generative pre-trained transformer (GPT) models in 2018 enabled AI systems to generate human-like text, opening up possibilities in content creation and natural language generation. Three years later, OpenAI’s DALL-E demonstrated AI’s ability to create images from textual descriptions, while the release of Stable Diffusion in 2022 marked advancements in image synthesis and editing, showing the potential of AI in creative fields.

What next?

We can expect more robust and efficient systems capable of handling an even wider range of tasks. Deep learning and reinforcement learning will improve decision-making processes and autonomous systems, revolutionising industries such as transportation and logistics.

Despite rapid developments, the use of AI in the finance profession has some way to go. Recent research in the UK shows that just 29% of companies have implemented AI, with cost, time constraints and security concerns being major factors in their reluctance. However, a report by Thomson Reuters found that accounting and legal professionals are optimistic – if still uncertain – about generative AI, with 78% of respondents believing that tools such as ChatGPT can enhance their work.

It’s vital that humans ensure good governance remains at the heart of AI

One of the most captivating and contentious prospects in the future evolution of AI is the achievement of superintelligence: an AI system that surpasses the cognitive capabilities of humans in every aspect, including problem-solving, learning, creativity and social understanding.

The potential for superintelligence highlights the importance of ensuring that humans maintain their central role in AI’s development and that good governance remains at its heart.

This has led the UK government to release a white paper, A pro-innovation approach to AI regulation, which considers how the technology can be embraced while providing a framework that ensures that risks are identified and addressed. In response, ACCA and EY have produced a report emphasising the need for policymakers to act quickly to refine and implement a regulatory framework.

While it’s hard to predict the direction AI will take, there is no doubt that it will revolutionise how companies operate. AI-driven data analytics will play a critical role in driving business intelligence. By processing vast amounts of data, AI will uncover valuable insights, enabling businesses to make data-driven decisions, optimise operations and predict market trends.

Dystopian future

Read the AB article about the potential dangers of AI

See ACCA’s Quick guide to AI
