Author

James Walker FCCA is a non-executive director, board adviser and internal audit professional, and Charlotte Fallon Smith is founder of Fallon AI

AI governance often gets insufficient board attention, not because boards lack curiosity, but because the complexity and pace of tech evolution make it hard to know where to dig.

There is currently a fundamental expectations gap. Although in theory boards set the governance framework, audit committees oversee the control environment, and internal audit provides independent assurance, a global EY survey of C-suite executives in 2025 found that while 72% had scaled AI, only a third had governance protocols in place. Meanwhile, Gartner’s 2025 global survey found that 80% of non-executive directors believe their current board practices are inadequate to oversee AI.

If you can’t fully account for your AI system, governance has already failed

These are not edge cases. They are the norm. Risks from digital disruption and AI rose from fourth to third place in a 2026 survey of European internal auditors.

With risk rising, and governance failing to keep pace, what should boards be focusing on?

Control principles

Every control framework, regardless of technology, comes back to three fundamentals: completeness, accuracy and timeliness. AI systems operate at speed and scale, but they require the same verification as any operational process. However, there is one critical difference: traditional controls assume human decision-making at key points, but AI automates those decision points. So controls must account for speed, volume and opacity simultaneously.

Across organisations deploying AI systems, the pattern appears to be consistent: successful implementations verify data quality before deployment, while failed ones layer AI on top of bad data and then act surprised when the system produces biased or incorrect outputs.

Boards need to ask: ‘Can you show me the completeness, accuracy and timeliness metrics for this system, and what triggers action when any one of them fails?’ An equally revealing question can be built around when those metrics last showed a problem, and how the organisation found out. If management cannot produce these metrics on request, the controls are not in place yet.
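To make that board question concrete, here is a minimal sketch of what ‘metrics with action triggers’ can reduce to in practice. The metric names, thresholds and escalation step are illustrative assumptions, not a standard or any particular vendor’s tooling: each metric is measured against a floor agreed with the board, and a breach triggers a defined action rather than a dashboard entry nobody reads.

```python
from dataclasses import dataclass

@dataclass
class MetricResult:
    name: str         # e.g. "completeness"
    value: float      # measured value, as a proportion (0.0-1.0)
    threshold: float  # minimum acceptable value agreed with the board

    @property
    def breached(self) -> bool:
        return self.value < self.threshold

def check_cat_metrics(metrics: list[MetricResult]) -> list[str]:
    """Return the escalation actions triggered by any metric breach.

    Hypothetical sketch: a real system would page a named owner,
    halt the pipeline or open an incident, not just return strings.
    """
    actions = []
    for m in metrics:
        if m.breached:
            actions.append(
                f"ESCALATE: {m.name} at {m.value:.1%} "
                f"(threshold {m.threshold:.1%}) - notify system owner"
            )
    return actions

# Example: the timeliness figure has slipped below its agreed floor.
results = [
    MetricResult("completeness", 0.998, 0.995),
    MetricResult("accuracy",     0.991, 0.990),
    MetricResult("timeliness",   0.930, 0.980),  # breach
]
for action in check_cat_metrics(results):
    print(action)
```

The point of the sketch is the shape, not the numbers: if management cannot show the board something with this structure, the ‘what triggers action’ half of the question has no answer.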

Accountability

Strong controls require clear ownership. With AI, accountability gets murky, fast. In larger organisations, IT will say it is a data problem, data will say it is not their problem, and leadership will say it is an IT problem. Nobody owns the end-to-end process – when something goes wrong, everyone points fingers and nobody accepts responsibility.

A 2024 survey of UK financial services firms by the Bank of England and the Financial Conduct Authority found that while 75% are already using AI in some form, 46% only partially understand the systems they have deployed. That partial understanding is an accountability gap as well as a knowledge gap. If the people deploying AI cannot fully account for it, and nobody else owns the question of whether it is working as intended, governance has already failed.

Vendor dependence makes this harder. In a field moving this fast, supplier lock-in creates real limitations, and when an organisation’s AI tools cannot keep pace with what people need, employees find their own solutions outside approved systems. This is a data risk as well as a productivity problem.

Someone senior needs the authority to press the kill-switch

Since humans cannot review every AI output, and AI cannot be trusted with every decision, practitioners apply a ‘human-in-the-loop’ design: defined intervention points where human judgment must be applied before the system proceeds. This should include clear points of handover between AI and human decision-making.

In some cases, organisations document oversight that looks credible on paper but fails completely in operation. Someone senior needs to own AI system performance end to end, with the authority to press the kill-switch.

Boards should also ask about the operating model itself. How many AI decisions are made each day, how many receive human review, and what happens when human intervention capacity is exceeded? If multiple people claim shared accountability, nobody is actually accountable. If human oversight is described as ‘continuous monitoring’ without specific manual intervention triggers, it is not real oversight.
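Here is one way to picture what ‘defined intervention points’ mean in operation. This is a hedged sketch, not any organisation’s actual design, and every name and threshold in it is an assumption: decisions below a confidence floor are routed to a human review queue, and when that queue’s capacity is exceeded the system halts rather than silently auto-approving.

```python
from enum import Enum

class Route(Enum):
    AUTO_APPROVE = "auto_approve"   # AI proceeds without human review
    HUMAN_REVIEW = "human_review"   # defined intervention point
    HALT = "halt"                   # kill-switch: stop processing

# Illustrative parameters a board could ask to see evidence for.
CONFIDENCE_FLOOR = 0.90      # below this, a human must decide
REVIEW_QUEUE_CAPACITY = 200  # maximum items humans can review today

kill_switch_engaged = False  # owned by one named senior individual

def route_decision(confidence: float, queue_depth: int) -> Route:
    """Route a single AI decision to a defined intervention point."""
    if kill_switch_engaged:
        return Route.HALT
    if confidence >= CONFIDENCE_FLOOR:
        return Route.AUTO_APPROVE
    if queue_depth < REVIEW_QUEUE_CAPACITY:
        return Route.HUMAN_REVIEW
    # Review capacity exceeded: fail safe, never silently auto-approve.
    return Route.HALT

print(route_decision(confidence=0.97, queue_depth=150))  # AUTO_APPROVE
print(route_decision(confidence=0.72, queue_depth=150))  # HUMAN_REVIEW
print(route_decision(confidence=0.72, queue_depth=200))  # HALT
```

‘Continuous monitoring’ with no equivalent of these explicit routes and the kill-switch flag is observation, not oversight.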

Board learning

A global Deloitte study found that 66% of board directors have ‘limited to no knowledge’ of AI, with just 5% saying AI is incorporated into their business and operating plans. The organisations that get governance right have leaders who have done the mindset work first. They may not understand how the algorithm works, but they do know what AI decision-making means for accountability, oversight and control.

Boards need to have the AI literacy their strategy requires

On the other hand, sometimes boards ask the right opening questions but then move on. Dedicated learning – structured deep dives with real operational examples, scenario testing and genuine challenge of management’s AI strategy – is what equips a board to know where the conversation should go next.

Boards should also ask whether their composition provides the AI literacy their strategy requires or whether they are governing a technology they do not collectively understand. If their learning on AI is limited to receiving management updates, they are not governing AI but observing it.

The opportunity

AI governance is about applying proven frameworks to systems that are faster, more opaque and harder to audit than what came before. The organisations that get this right build genuine competitive advantage.

The principles of sound governance – clear ownership, verified controls, meaningful oversight – do not become redundant because the technology has changed. They become more important than ever for systems that move faster and explain themselves less readily.

Effective boards will solve AI governance the way they have always solved hard governance problems: by asking better questions, demanding evidence and holding management accountable for the fundamentals.
