Author

Gavin Hinks, journalist

CPD: 1 unit

Studying this article and answering the related questions can count towards your verifiable CPD if you are following the unit route to CPD, and the content is relevant to your learning and development needs. One hour of learning equates to one unit of CPD.

Artificial intelligence (AI) may have prompted concerns about a Terminator-style apocalypse in which robot overlords take over the world, but in Brussels regulators are focused on the more prosaic worries surrounding issues such as discrimination, privacy, transparency and cybersecurity.

Since 2020 officials have been crafting the EU Artificial Intelligence Act. In December 2023 the European Parliament and Council reached agreement on a text that marks a major turning point in how AI and its use will be regulated and viewed by organisations across the private and public sectors. The legislation was approved on 13 March 2024 and will be fully applicable 24 months after entry into force.

‘There must be a reasonable dose of being careful about what AI can do to harm society or individuals’

Kamal Bechkoum, an AI and cybersecurity professor at Abertay University in Scotland, says AI is even more complicated than any previous leap in software development, and that the EU regulation will form a ‘significant milestone’.

‘AI is becoming part of our daily lives from the moment we open our eyes in the morning to the moment we go to bed, and it affects everyone; no one is immune.’ But, he adds, ‘There must be a reasonable dose of being careful about what AI can do to harm society or individuals.’

Regulation develops

In Ireland, observers have watched as a nascent ecosystem of SMEs and multinationals focused on the new technology has emerged across the country.

In September 2023 OpenAI, the company behind ChatGPT, perhaps the world’s best-known AI tool, announced it would open a base in Dublin: its first within the EU. Indeed, Dublin’s new official tour guide will be powered by the company’s software.

Meanwhile, a recent study by Microsoft and Trinity College Dublin reveals that half of all Irish organisations – both public and private – are already using generative AI in some form, even though the technology remains at an early stage of development.

AI judged to present ‘unacceptable’ risk – posing a threat to fundamental rights – will be banned

A hefty 27% of organisational leaders are using AI informally in their work, while 25% believe that their employees are doing so, too.

Transparency responsibility

For corporate users, or ‘deployers’, as the act calls companies adopting AI tools, there will be responsibilities for transparency around the use of AI, incident reporting, monitoring performance and making sure ‘input data’ is ‘relevant and sufficiently representative’ for the intended purpose.

Non-compliance with the rules can lead to fines of up to €35m or 7% of global turnover for the most serious infringements, down to €7.5m or 1.5% of turnover, depending on the infringement and the size of the company.

But as Brian McElligott, a partner at law firm Mason Hayes & Curran, points out, much depends on how AI is used. For example, if AI is used to index legal documents into different categories, that could qualify as limited risk, while the same AI used to process loan forms and decide who is eligible for a mortgage could be considered high risk. ‘The concept of the law is to regulate the use, not the technology,’ he says.

‘Risk classifications may change over time as an organisation’s use of AI evolves’

Annex III of the act offers a survey of high-risk uses. The list is extensive but includes AI used for biometric identification and tools used by organisations to make employment decisions. It will come as no surprise that uses by law enforcement or border agencies get their own mention in the annex.

Misclassification risk

The greater burden of compliance will fall to developers, or providers. Article 6 asks providers to self-assess whether their AI tools fall into one of four risk categories: unacceptable, high, limited, and minimal or no risk.

As the categories imply, AI judged to present ‘unacceptable’ risk – posing a threat to fundamental rights – will be banned. Limited-, minimal- or no-risk systems will attract light-touch rules, but systems judged to be ‘high risk’ will face strict regulation.

But there is a pitfall here for the unwary developer, according to Keith Power, risk partner for PwC in Ireland. ‘The risk of misclassification is high as risk classifications may change over time as an organisation’s use of AI evolves,’ he says. ‘This necessitates the implementation of appropriate ongoing governance and control procedures to maintain compliance.’

Standards will define the nitty-gritty of how to comply and are not expected until 2025

Another worry is the absence of a full definition of ‘development’ in the act, adds Power. The original developer may remain clear, but without a narrowly defined term it is possible that ‘deployers’ could be caught if they, say, ‘fine-tune’ an algorithm. Becoming a ‘provider’ in this way would attract additional compliance obligations, Power says.

Developers also face an organisational challenge. AI placed into devices used by medical or pharmaceutical companies, or even by engineers such as lift-makers, will be deployed by organisations that operate in regulated industries and already have ready-made compliance expertise and processes.

Developers themselves, by contrast, have to date escaped regulation and will have to build their compliance structures and knowledge from scratch; this, according to McElligott, is a process that may look ‘daunting’ for many providers.

Standard approach

While the legislation sets out principles, detailed though it is in many respects, it is the related standards that will define the nitty-gritty of how to comply. These are not expected until 2025, which, according to McElligott, is ‘a big problem’ if you’re planning to launch a product: ‘That’s a seismic shift; you literally cannot launch until you have the certification done.’

AI organisations might want to consider introducing a ‘responsible’ AI framework

That, McElligott adds, places a premium on starting preparation now. Developers and users should be fully aware of the contents of the AI Act and its territorial scope, and in particular of the Annex III categories. ‘Map your AI roadmap onto them and see which is inside and which is outside the “high risk” category,’ he says.

Power adds that AI organisations might want to go beyond technical compliance with the act and consider introducing a ‘responsible’ AI framework. The benefits, he says, ‘include future-proofing against likely further regulatory changes and facilitating simultaneous compliance with regulations across multiple jurisdictions’.

More information

Read AB’s AI special edition

Take a look at ACCA’s AI resource hub

Read ACCA’s Quick Guide to AI
