Author: Liz Fisher, journalist


Accountancy professionals play an essential role in creating and maintaining trust in capital markets, and ethics and trust are core priorities for the profession. So what does the growing use of artificial intelligence (AI) tools and processes mean for this imperative?

According to the ACCA report Enabling Trust in an AI-enhanced World, people should be able to trust in the expertise and ethics of a professional so that they don’t need to verify the details themselves.

AI raises significant questions around the use of data, accountability and reliability

‘Just as we trust doctors to take care of our health without understanding the intricacies of medicine,’ says the report, ‘businesses, regulators and capital markets trust accountants and auditors to ensure the integrity of financial statements without scrutinising every figure themselves.’

New dynamics

The report, the first in a series of AI Monitor publications from ACCA that will look at pressing AI challenges from the perspective of the accountancy profession, examines the new dynamics that AI introduces into traditional trust mechanisms, and sets out why AI poses challenges to trust.

While AI has undoubted benefits – through enhanced processes and improved efficiency – there are significant questions around the use of data, accountability and reliability, which could have a profound impact on trust. Common issues include opaque decision-making due to the complexity of AI models (outputs that cannot be adequately explained or justified are less likely to be trusted), as well as bias and errors.

Accountancy professionals should view trust ‘as a socio-technical challenge’

The report points out that the risks surrounding the use of AI in the profession will vary, as will the techniques used to mitigate those risks (and in many cases, it adds, existing practices may provide enough oversight). In analysing these risks, the report looks at two interlinked elements: the operation of specific technologies or tools; and human interaction, including how AI-derived information is employed by the user.

Keeping accountable

On the second point, for example, over-reliance on AI tools and the reduced application of human judgment could blur the lines of accountability.

The report argues that accountancy and finance professionals should view trust ‘as a socio-technical challenge’ that requires a combination of sound governance with internal control frameworks, and technical practices (such as machine-learning operations) in critical uses.

‘Trust is ultimately rooted in how people work together,’ it says, ‘but we build mechanisms to help us sustain trust in complex and uncertain environments.’

The complexities and challenges of AI in the profession will vary according to how and where it is used, and that should influence mitigating actions. The report argues, for example, that where AI is used for compliance, reporting or the distribution of goods and services but could potentially breach regulations, management should prioritise making its use explainable and interpretable.

In other cases, it adds, ‘the speed and impact of AI on decision-making under conditions of uncertainty may be more important than achieving the highest levels of accuracy’.

‘The speed and impact of AI on decision-making may be more important than accuracy’

Appropriate governance

The report goes on to set out the steps that organisations can take to support employees and stakeholders as AI becomes more prevalent.

Governance mechanisms, it says, should reflect the actual use of AI across the organisation, and management should consider governance frameworks (including for the procurement, development and use of AI systems, and a solid framework of data governance) to support implementation.

There also need to be clear policies around validation of AI models (to ensure performance remains reliable over time), and detailed audit trails and decision logs should be kept so that AI outcomes can be monitored and examined. These steps will all help to promote trust.

In addition, the report looks at the role of machine learning operations (MLOps), which can embed governance standards into the implementation and running of AI systems. ‘Essentially,’ it says, ‘MLOps can provide a technical backbone in critical use cases where trust may give way to requirements for additional verification.’

Monitoring dashboards that track AI performance can alert risk management teams to potential issues

In the context of accountancy and finance this may include, for example, maintaining comprehensive data and version histories to enable the results of AI models to be traced and reproduced. This would allow users to trace financial forecasts back to the exact AI model version and training data that generated them.
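The traceability practice described above can be illustrated with a minimal sketch. This is not taken from the report; the function name and record fields are hypothetical, and a real MLOps pipeline would write such entries to an immutable, centrally managed audit store rather than simply returning them.

```python
import hashlib
from datetime import datetime, timezone

def log_forecast(forecast: dict, model_version: str, training_data: bytes) -> dict:
    """Record a forecast alongside the model version and a fingerprint of its
    training data, so the output can later be traced and reproduced."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # A content hash lets auditors confirm exactly which data was used
        # without storing the data itself in the log entry.
        "training_data_sha256": hashlib.sha256(training_data).hexdigest(),
        "forecast": forecast,
    }

# Hypothetical usage: tie a quarterly revenue forecast to its model lineage.
entry = log_forecast({"q3_revenue": 1_250_000}, "forecast-model-v2.1", b"2019-2024 ledger extract")
```

Given such entries, a reviewer can match any published forecast to the exact model version and data fingerprint that produced it.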

Monitoring dashboards that track AI model performance and fairness metrics can also help to alert risk management teams to potential issues. ‘CFOs and senior accountancy and finance professionals may not need to understand all the technical details of MLOps,’ says the report, ‘but they should be aware of the governance and trust implications as well as key metrics for review when required.’
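The alerting behaviour behind such a dashboard can be sketched very simply. This is an illustrative assumption, not the report's method: the metric names and thresholds below are hypothetical, and production systems would feed alerts into proper incident tooling.

```python
def check_metrics(metrics: dict, thresholds: dict) -> list:
    """Return the names of metrics that have breached their alert thresholds,
    so a risk management team can be notified to investigate."""
    alerts = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(name)
    return alerts

# Hypothetical usage: drift has exceeded its limit, error rate has not.
alerts = check_metrics(
    {"prediction_drift": 0.12, "error_rate": 0.03},
    {"prediction_drift": 0.10, "error_rate": 0.05},
)
# flags "prediction_drift" only
```

The point for senior finance professionals is not the code itself but that each dashboard metric has a defined threshold and a named owner who acts when it is breached.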

The role of finance professionals in the age of AI is to focus on the outcomes driven by technology

Where to start

The report recommends initial steps for senior leaders, as AI governance and risk management become more frequent topics for the boardroom.

Ultimately, says the report, the role of finance professionals in the age of AI is to focus on the outcomes driven by technology, rather than solely on the outputs: ‘The true value lies in understanding how these outputs inform decisions and actions that drive business outcomes.’

AI Monitor series

Look out for upcoming editions in the series, which will explore AI issues relating to talent, risk and controls, data strategies and sustainability.
