Author

Ellis Ng, journalist

Last year was marked by major advances in the development of generative AI (Gen AI), with breakthroughs in large language models and natural language processing. The advent of conversational AI systems like OpenAI’s ChatGPT, Microsoft’s Bing AI and Google’s Bard AI generated worldwide wonder and intrigue, as well as some concern.

But as businesses rush to embrace these technologies for competitive gains, governments are watching warily. Regulators are most concerned about potential risks relating to data privacy, security, accountability and safety issues. Multiple international initiatives have been introduced to address these challenges.

Hard and soft approaches

Asia’s most developed economies have taken drastically different approaches to regulating Gen AI, which range from China’s specific and prescriptive approach to the soft-law approach taken by Japan.

In July 2023, China issued the first set of rules specifically targeting Gen AI technologies. This marked Beijing taking an early regulatory lead on this front.

For its part, South Korea has proposed a comprehensive ‘AI Act’ master bill, not unlike pioneering European Union regulations that would consolidate various AI laws under a unified framework.

Meanwhile, Singapore recently released a model framework for Gen AI governance. Japan has emphasised a voluntary sector-specific, soft-law-based approach to promote AI governance, much like the approaches seen in the UK and US. Similarly, India is adopting a light-touch stance.

Similar principles

Policymakers and regulators across the Asia Pacific region are assessing whether existing AI frameworks remain suitable.

Many existing data privacy and security rules will still apply, even as nations diverge on regulatory philosophies, says Ross O’Brien, a consultant with Delta Analysis who specialises in technology.

‘We’re barely a year into many of these tools,’ O’Brien says. ‘There’s huge enthusiasm to use them to gain competitive advantage and speed up rote processes, but you’re still going to have a fiduciary responsibility to provide clients with accurate and comprehensive data that are thoroughly vetted.’

A Deloitte report, issued this year and titled Generative AI: Application and Regulation in Asia Pacific, outlines four main regulatory approaches observed in the region: AI principles, guidance and tools, legislation, and national strategies.

Territories such as Australia, Hong Kong and Japan have adopted AI principles, mostly in the financial sector. Singapore has issued guidance and works closely with tech firms to develop responsible AI testing tools and help shape international standards. Its new Model AI Governance Framework for Generative AI aims to foster a trusted ecosystem. Other nations, such as the Philippines and Vietnam, have enacted sector-specific AI legislation. Still more are devising national strategies to spur AI development.

New from old

New legislation will likely evolve from existing data privacy, security and management legislation, rather than being written from scratch, O’Brien says, noting that a light regulatory touch is likely to be more effective, regardless of diverging approaches.

‘I don’t think there will be a dominant framework,’ he says. ‘I don’t see attempts to be comprehensive and definitive as being workable or even feasible.’

There has been ongoing debate over data privacy, cybersecurity and content ownership, he adds, and these discussions will heavily influence AI regulation.

‘These are probably going to be the guidance and the operating principles that all firms, including accounting firms, would need to use,’ he says. ‘If you’re going to adopt AI, what data are you going to be using? Who owns that data? You must think about whether customer data is being used properly.’

Eventually, tailored oversight may be needed for how companies use AI in financial and legal advice, O’Brien believes. ‘But I don’t think they’re [going to be] radically different from regulations that already govern the use of private data and other information resources.’

Overreliance on Gen AI introduces sizeable risks. Firms should consider regulation involving intellectual property, copyright and privacy – the risk of falling foul of existing privacy laws is not insignificant.

Off-the-shelf AI systems frequently store user data on external servers, where it is difficult to retrieve or delete entirely.

The International Association of Privacy Professionals outlines 12 risks that AI compliance officers should assess, such as models exposing personal details, distorting a client’s content or improperly sharing private information.

Fine-tuning

Then there is the risk of Gen AI ‘hallucinating’ – fabricating plausible falsehoods that users may mistake for facts.

A January 2024 Stanford study found that popular, off-the-shelf chatbots often give erroneous legal advice, hallucinating in around 75% of responses when analysing a court ruling.

This presents clear problems for accountants relying on AI. ‘In accounting, you’ll find out pretty quickly if these tools are robust enough,’ says O’Brien.

Firms need to put some skin in the AI game, he adds. ‘There’s no magic box on tap that’s infallible. When your clients trust you with expert advice, the AI tools that you use need proper training – ideally using proprietary data insights that you have created and cultivated.’

Companies aiming to adopt AI will need to thoroughly translate all the expertise and insight into their own large language models, he explains. ‘You have to spend time feeding a model with queries, sense-testing the results, selecting among them and adding additional constraints.

‘It’s just like building and fine-tuning any machine.’
