Earlier this year, a letter signed by hundreds of tech executives, including Elon Musk, owner of X (formerly Twitter), and Apple co-founder Steve Wozniak, proposed a six-month pause on research into artificial intelligence (AI), citing concern over the ‘profound risk to society and humanity’. The letter came as a surprise to many, as it was signed by some of the very people helping to expand the reach of AI; Musk, for example, co-founded OpenAI, the company behind the popular generative AI app ChatGPT.
AI is a double-edged sword. While we can all see the benefits it offers – faster, more accurate solutions, greater efficiency and lower costs – we are also aware of the potentially ‘catastrophic’ effects that could arise from the misuse or abuse of its capabilities.
Like many people, I have pondered the impact AI might have on the order of things in the world, on culture and on how humans relate to one another. Given its potential for both good and ill, I wake up on some days supporting AI research and on others hoping it will be discouraged in the way nuclear weapons are.
We have to find ways to add more value to our work than AI is capable of
Keep in check
Of course, this is not the first time we’ve been faced with such a dilemma. Certain research and practices in medicine and agriculture have elicited similarly polarised arguments: genetic modification technology, for example. In finance, cryptocurrencies allow cheaper and faster money transfers, but are also attractive to criminals.
Yet humans have continued to use these technologies, enjoying the benefits while keeping the potential for misuse in check through regulation. This should be the thinking towards AI.
The case for regulation becomes stronger in light of the immediacy of its effects. ChatGPT is a case in point. While many students were elated at the possibility of getting the app to write their essays, their teachers were not so keen. Cue plagiarism checkers.
Governments need taxes, and people with jobs pay taxes; AI does not
More worryingly, greater use of AI puts jobs at risk: legal assistants and junior lawyers, writers, secretaries and others, including accountants, may find their activities taken over.
As a profession, we have to find ways to add more value to our work, to offer more than AI is capable of. This is why the ACCA syllabus pays more attention to the decision-making aspects of the accountant’s role than to the number-crunching. Fortunately, ACCA also provides several opportunities for members to update their skills.
Regulation must consider what trade-offs would ensure that the human race enjoys the benefits of AI but keeps the jobs that are crucial to maintaining the fabric of society. And governments have an incentive to do this: they need taxes, and people with jobs pay taxes. AI does not.
Mauritius, Egypt and Nigeria have launched national AI strategies
In Africa, governments have made some efforts to explore how AI could benefit the continent. In August 2021, the African Union Development Agency, AUDA-NEPAD, released a report, AI for Africa: Artificial Intelligence for Africa’s Socio-Economic Development, in which it detailed the need for a continent-wide strategy.
Individual African countries are approaching regulation from one or more of three directions: legislation, strategy development and policy formulation. Mauritius, Egypt and Nigeria, for example, have launched national AI strategies; Kenya has created a national task force; and Botswana has a policy to encourage research and build talent in AI. Rwanda, meanwhile, has established a national technology centre with a focus on AI. More such initiatives are expected across the continent.
Regulation always trails innovation, but now is the time to create regulatory structures; AI has run far enough for these to become imperative.