
Artificial intelligence (AI) is becoming increasingly embedded in business operations, and with it grows the requirement for auditors to understand and address the regulatory, reputational and business risks that these technologies pose.
Without adequate controls, organisations that adopt AI and generative AI could be exposed to risks such as bias, a lack of transparency, explainability or accountability, and a failure to comply with existing laws.
Pivotal moment
And regulatory scrutiny is intensifying. For example, the European Union’s landmark AI Act began to take effect in February – including bans on AI systems that pose ‘unacceptable risk’ and the introduction of mandatory AI literacy obligations – marking a pivotal moment in regulatory compliance for organisations in member states that use AI technologies.
The Act is the first major piece of legislation to emphasise a risk-based approach to AI, and it creates an opportunity for auditors to help their clients take a responsible approach towards AI development and deployment. Organisations outside the EU would also do well to take note.
Richard Jackson, EY’s global artificial intelligence assurance leader, says that the way auditors inject AI, especially generative AI, into how they deliver and perform their services is ‘fundamentally reshaping the profession’.
Auditors not only need to understand how AI is changing ‘our responsibilities as financial statement auditors’, but also ‘the expansion of demand that clients have around adjacencies such as the responsible AI agenda,’ Jackson says.
Strengthening governance
This is being driven by the changing regulatory landscape. The AI Act introduces a tiered framework that classifies AI systems based on their level of risk – from minimal to unacceptable.
High-risk systems, such as those used in biometric identification, credit scoring and critical infrastructure, will be subject to rigorous obligations around transparency, data governance, human oversight and risk management.
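To make the tiered structure concrete, the sketch below models the Act’s commonly cited risk levels and a handful of illustrative use cases as a simple lookup. The tier names and the example mappings are simplified assumptions for illustration only, not legal classifications under the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified view of the EU AI Act's tiered risk framework (illustrative only)."""
    UNACCEPTABLE = 4   # prohibited practices
    HIGH = 3           # strict obligations: transparency, data governance, human oversight
    LIMITED = 2        # lighter transparency duties
    MINIMAL = 1        # no specific new obligations beyond existing law

# Hypothetical mapping of example use cases to tiers, based on the categories
# named in the article; a real assessment would follow the Act's own criteria.
EXAMPLE_CLASSIFICATIONS = {
    "biometric identification": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "critical infrastructure control": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

if __name__ == "__main__":
    for use_case, tier in EXAMPLE_CLASSIFICATIONS.items():
        print(f"{use_case}: {tier.name}")
```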
Jackson says responsible AI implementation, and compliance with the AI Act, ‘is founded on having a governance framework for AI within any organisation’. He underscores the importance of having an approach and mindset ‘that can be anchored on one of these frameworks before you can think about deploying the technology’.
This includes, for instance, determining how businesses are organised internally in order to know where and how AI is being used, because AI is ‘incredibly democratised across organisations,’ he says.
‘You’ve rapidly moved from a world where a lot of technology innovations typically used to come through a very centralised IT environment, to the opposite extreme, where every employee has the ability to ideate and to innovate.’
Risk assessments
One of the first steps auditors should take is to conduct a thorough assessment of a client’s current AI systems and how they are being used, as one of the ‘foundational pieces’ of the AI Act is the requirement to create an inventory of every use of AI technology.
‘You can’t even begin to get your arms around all those areas of AI if you haven’t understood how that technology has been developed and released into the ecosystem,’ says Jackson. ‘What may seem like the simple task of creating an inventory for many organisations is actually one of the most complex challenges in this environment.’
Auditors must now treat AI systems as systems that need to be audited. This means going beyond basic IT checks to develop a structured, risk-based process for assessing the design, implementation and oversight of AI tools. It will entail classifying each use under a risk framework to determine whether it meets the criteria or categorisations set out in the EU AI Act.
The uses in question can range from software on an individual’s laptop, or someone using OpenAI on a mobile device, all the way through to a centralised technology build at the heart of the IT department.
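One way to picture the inventory Jackson describes is as a structured record per AI use, capturing who owns it, how it reached the organisation (off the shelf, built upon internally, or customer-facing) and a provisional triage outcome. The field names and the classify_usage rule below are hypothetical simplifications for illustration; they are not taken from the Act or from EY’s methodology.

```python
from dataclasses import dataclass
from enum import Enum

class UsageType(Enum):
    OFF_THE_SHELF = "off-the-shelf tool"        # e.g. software on an individual's laptop
    FOUNDATIONAL = "built upon by the business" # internal builds on a foundation model
    CUSTOMER_FACING = "bundled into a product"  # AI shipped to customers in a product or service

@dataclass
class AIInventoryEntry:
    """A single line in an organisation-wide AI inventory (illustrative sketch)."""
    name: str
    business_owner: str
    usage_type: UsageType
    description: str
    provisional_action: str = "unclassified"

def classify_usage(entry: AIInventoryEntry) -> str:
    """Hypothetical first-pass triage rule; a real assessment would apply the Act's criteria."""
    if entry.usage_type is UsageType.CUSTOMER_FACING:
        return "review against high-risk categories"
    if entry.usage_type is UsageType.FOUNDATIONAL:
        return "review data governance and oversight controls"
    return "record and monitor"

# Example: an individual using a generative AI app on a mobile device
entry = AIInventoryEntry(
    name="Generative AI assistant on personal device",
    business_owner="Finance team",
    usage_type=UsageType.OFF_THE_SHELF,
    description="Drafting internal emails and summaries",
)
entry.provisional_action = classify_usage(entry)
print(entry.provisional_action)
```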
Operating high-risk AI systems requires an effective AI risk management system, logging capabilities and human oversight. Proper data governance must be applied to the data used for training, testing and validation, along with controls to ensure the cybersecurity, robustness and fairness of the system.
Jackson says auditors will need to answer some basic questions: is it an off-the-shelf use of the technology, a foundational use that is then built upon by the business function, or something customer-facing that is being bundled into a product or service?
‘All of those then go into that risk categorisation – and, again, it comes back to having that clear, very transparent framework to measure against,’ he adds.
Explainability
The ability to understand how an AI system makes decisions is a crucial aspect of responsible AI. The EU AI Act places a strong emphasis on transparency, enabling users to understand how to use AI systems, and on technical documentation, record keeping and data governance.
According to Jackson, the level of detail required is relative to the usage of the technology. For instance, for traditional AI that is used for a discrete or simple task and has a degree of repeatability to it, explainability and transparency are relatively straightforward to achieve.
‘It probably has a lower risk to it, versus something where you’re trying to give the technology more autonomy,’ Jackson says.
Something like agentic AI, which gives the technology more autonomy to problem-solve on behalf of the human, creates a challenge because ‘that degree of explainability becomes much more opaque’, Jackson explains.
But organisations are making great strides towards being able to ‘provide that breadcrumb trail of explainability’ by creating ‘more transparent logs of all the steps the technology has taken so that the human can start to interact with it more and create that level of confidence around the explainability of what it’s doing’.
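The ‘breadcrumb trail’ Jackson describes can be thought of as an append-only log of each step an agentic system takes: what it decided, which tool it called, and what came back. The sketch below is a generic, hypothetical illustration of such a step log that a human reviewer or auditor could replay; it is not a specific vendor’s logging API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentStep:
    """One recorded step in an agent's run: the decision, the tool used and the result."""
    action: str
    tool: str
    outcome: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class AgentTrail:
    """Append-only log that a human reviewer (or auditor) can replay step by step."""
    steps: list[AgentStep] = field(default_factory=list)

    def record(self, action: str, tool: str, outcome: str) -> None:
        self.steps.append(AgentStep(action, tool, outcome))

    def replay(self) -> str:
        return "\n".join(
            f"{i + 1}. [{s.timestamp:%H:%M:%S}] {s.action} via {s.tool} -> {s.outcome}"
            for i, s in enumerate(self.steps)
        )

# Example run with hypothetical steps
trail = AgentTrail()
trail.record("Look up invoice total", "ERP query", "Total = 12,400")
trail.record("Compare against purchase order", "Document match", "Variance of 400 flagged")
print(trail.replay())
```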
More information
Join ACCA’s half-day webinar ‘Landing the AI opportunity’ live on 15 July or on demand
See ACCA’s AI resources including reports and learning materials