Digital technology has been a key feature of professional accountancy for some time, but artificial intelligence (AI) is now increasingly being applied alongside it.
Understanding digital tools and AI is one thing; interpreting the insights and indicators they produce is quite another. Decision-makers need to be able to understand what AI tools are prompting them to do.
Leaders might see the systems as offering up best practice in terms of governance, ethics, risk and the impact on human capital. But the algorithms may give rise to other factors that require monitoring and interpreting.
For example, human intervention is needed to check for built-in bias in algorithms; to avoid ‘decision drift’ that may occur from regular use of the systems; to identify the right questions to ask of the results; to understand the work that will flow from the outputs; and to bring professional intuition and scepticism to bear for greater accuracy and predictability.
Who’s in charge?
There is a tension between the implementation of artificial intelligence, robots and algorithms, and the continually evolving decision-making abilities of accountants as they reconsider their roles. Do they allow the tech to take over the mundane, transaction-level activity and transform themselves into business decision-makers? Or is there a more progressive way, where the machines can take some of the decisions? Do the systems work for accountants, or do accountants work for the systems?
This in turn raises the issue of systems ethics vs professional ethics. AI systems cannot get a feel for the business and the market it operates in – the strategic issues, objectives and context. A level of judgment, expertise, influence, fairness, professionalism – in other words, an ethical lens – is required in strategic decision-making that is beyond the abilities of even the most sophisticated AI.
Double-edged sword
But human intuition is a double-edged sword; while it brings perspective, it can also introduce unconscious bias. And this bias can also be programmed unconsciously into systems.
In fact, the idea of building completely non-biased systems seems impossible, as they inevitably incorporate the rules, regulations, laws, and ethical and governance principles and behaviours that are set by humans and that influence decision-making.
It is vital, therefore, that professionals strive to ensure their ‘perspective’ or ‘context’ is honest and unbiased, otherwise the perception will be that AI is more reliable than human judgment.
And as the technology appears more honest, accurate and reliable, there may be a tendency towards decision drift. That is, the more the system is seen to be right, the more humans may be tempted to defer hard choices to it.
Managing bias
AI is the execution of calculative decisions by computers in imitation of human intelligence. Modelling human decisions involves deep learning of prediction models – that is, the ‘neural network’ technology learns from training data fed into the model. The system learns from previous decisions, and this becomes a problem when factors change that the system has not learned about.
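As a rough illustration of that last point, the sketch below (with entirely hypothetical features, figures and data, using Python and scikit-learn) fits a simple model to historical credit decisions and then scores it after the underlying relationship has shifted, showing how a system trained only on the past can quietly lose accuracy.

```python
# A minimal sketch (hypothetical features and numbers) of how a model trained
# on historical decisions degrades when the conditions it learned from change.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, delay_effect):
    """Synthetic credit decisions: default risk linked to payment delay."""
    delay = rng.normal(30, 10, n)         # average payment delay, days
    turnover = rng.normal(100, 25, n)     # customer turnover, £k
    default = (delay_effect * (delay - 30)
               - 0.02 * (turnover - 100)
               + rng.normal(0, 1, n)) > 0
    return np.column_stack([delay, turnover]), default.astype(int)

# The model learns the historical relationship...
X_old, y_old = make_data(5000, delay_effect=0.10)
model = LogisticRegression().fit(X_old, y_old)

# ...then trading conditions change and the old signal weakens or reverses.
X_new, y_new = make_data(5000, delay_effect=-0.05)

print(f"accuracy on the history it learned from: {model.score(X_old, y_old):.2f}")
print(f"accuracy after conditions change:        {model.score(X_new, y_new):.2f}")
```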
Neural networks need access to large volumes of data, and they are often trained on datasets that are inherently prejudiced. When such prejudiced systems are connected, the bias is amplified. How can you be sure, when outsourcing decision-making to an intelligent AI system, that it is bridging significant gaps in a way that professionals could not do better themselves?
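To illustrate the amplification point, the following sketch (the groups, rates and number of stages are invented for the example) chains a few automated screening stages that are each only slightly harsher on one group, and shows the gap widening at every step.

```python
# A minimal sketch (entirely hypothetical numbers) of how bias compounds when
# one system's output feeds another: each stage is only mildly skewed, but
# the disparity between groups grows as the stages are chained.

base_rate = {"a": 0.60, "b": 0.50}   # small initial skew in the data

def screening_stage(pass_rate, skew):
    """Each automated stage passes applicants at a slightly skewed rate."""
    return {g: r * skew[g] for g, r in pass_rate.items()}

skew = {"a": 1.00, "b": 0.85}        # each stage is 15% harsher on group "b"

rate = dict(base_rate)
for stage in range(1, 4):            # three chained systems
    rate = screening_stage(rate, skew)
    print(f"after stage {stage}: a={rate['a']:.2f}, b={rate['b']:.2f}, "
          f"a/b ratio={rate['a'] / rate['b']:.2f}")
```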
Regulatory move
The European Commission has unveiled its first ever AI regulatory framework. It aims to protect the safety and fundamental rights of citizens from decisions made by high-risk AI systems, and to impose requirements on organisations that use such systems. We believe that these organisations should audit their AI systems for bias as part of a data-quality assurance programme, or face sanctions.
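By way of illustration only – not anything prescribed by the Commission's framework – the sketch below shows the kind of simple check such an audit might include, comparing an AI system's approval rates across groups against the commonly cited 'four-fifths' rule of thumb (the decision log and threshold here are hypothetical).

```python
# A minimal sketch (hypothetical decision log and 80% threshold) of a bias
# check an organisation might run over an AI system's outputs as part of a
# data-quality assurance programme.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
    "approved": [1,    1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group, and the ratio of the lowest to the highest.
selection = decisions.groupby("group")["approved"].mean()
impact_ratio = selection.min() / selection.max()

print(selection)
print(f"disparate impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:   # the commonly cited 'four-fifths' rule of thumb
    print("flag for human review: outcomes differ markedly between groups")
```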
AI is growing in popularity, but its true value will only be realised if human contributions to decision-making are also factored in.