Author

Jacob Solon is a freelance writer and researcher, and Peter McBurney is professor of computer science at King’s College London.

Recent developments in artificial intelligence (AI), especially so-called large language models such as ChatGPT, have received a great deal of public attention. Some governments have responded with new laws and regulations for the development and use of AI systems, while others have published sets of foundational principles and guidelines to advise organisations developing or using AI systems (see below). China and the European Union are among the rule-makers, Singapore and the UK among the guideline-setters.

The broadest and most detailed law is the EU AI Act of 2024, whose provisions have been coming into force in stages since late 2024. The law is relevant even to companies based outside Europe if they have operations or customers in the EU: as with the EU’s General Data Protection Regulation (GDPR), its impact will be felt globally.

Concerned to avoid possible human harms, the EU has taken a risk-based approach

European Union

To guard against possible human harms from AI, the EU AI Act adopts a risk-based approach. Organisations seeking to develop or deploy AI systems are required to assess the risks of any proposed or existing system against various specified assessment criteria.

Certain systems and applications are prohibited outright – for example, those that use subliminal or deceptive techniques, and those that use biometric data to infer people’s beliefs. Systems that are not prohibited but are deemed high risk must be registered with the European Commission. Even systems that are not high risk must be monitored over time in case their risk profile changes.
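
As a rough illustration of this tiered structure – and emphatically not the act’s legal tests – a compliance team’s first-pass screening of an AI inventory might look something like the sketch below, where the tier names and screening questions are simplified assumptions:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # e.g. subliminal manipulation
    HIGH = "high"              # must be registered and assessed
    MINIMAL = "minimal"        # still monitored for profile changes

def triage(uses_subliminal_techniques: bool,
           infers_beliefs_from_biometrics: bool,
           in_high_risk_domain: bool) -> RiskTier:
    """Simplified first-pass screening -- not the act's legal tests."""
    if uses_subliminal_techniques or infers_beliefs_from_biometrics:
        return RiskTier.PROHIBITED
    if in_high_risk_domain:  # e.g. hiring, credit scoring, medical use
        return RiskTier.HIGH
    return RiskTier.MINIMAL

# A CV-screening tool is not prohibited but sits in a high-risk
# domain, so it would need registration and ongoing assessment.
print(triage(False, False, True))  # RiskTier.HIGH
```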

The act provides for heavy fines for non-compliance – up to 7% of annual global revenue for prohibited systems, and up to 3% for high-risk AI systems. Given these new requirements and the size of the fines, the definition of an AI system under the act becomes crucially important.
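
To make the scale of those caps concrete, a back-of-the-envelope calculation helps; the revenue figure below is invented for illustration, and the act’s fine provisions include further detail (such as fixed monetary maximums) not modelled here:

```python
def max_percentage_fine(annual_global_revenue_eur: float, rate: float) -> float:
    """Upper bound implied by a percentage-of-revenue cap."""
    return annual_global_revenue_eur * rate

revenue = 10_000_000_000  # assumed: a firm with EUR 10bn annual global revenue
print(f"Prohibited-system cap: EUR {max_percentage_fine(revenue, 0.07):,.0f}")
print(f"High-risk-system cap:  EUR {max_percentage_fine(revenue, 0.03):,.0f}")
# Prohibited-system cap: EUR 700,000,000
# High-risk-system cap:  EUR 300,000,000
```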

Major companies doing business in the EU will need to exercise AI managerial governance

There is some ambiguity in the act itself, so an early task for the European Commission was to issue guidance on its scope. Draft guidelines on the definition of AI were published in February 2025, and they provide some assistance for organisations wondering whether their AI applications will come within the scope of the new law.

The draft guidelines exclude some systems that many people would not consider to be AI – for instance, those using regression models or mathematical optimisation techniques, and data-processing applications following automated rules, such as database management systems. But expert and knowledge-based systems – systems that encode the expertise of human domain experts, such as doctors making specialist medical diagnoses – appear to be within scope. The guidelines do not, however, draw a crystal-clear boundary between rule-based systems that are within scope and those that are not.

One aspect of the new act is, however, very clear: major companies and organisations doing business in the EU or with EU citizens will need to exercise appropriate managerial governance of their AI applications. This governance will have to be ongoing, especially as the functionality and intelligence of AI systems may grow over time.
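
The act does not spell out what ongoing governance should look like in practice. One minimal sketch – with an assumed, not mandated, review interval – is an internal register that flags AI systems due for re-assessment:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AISystemRecord:
    name: str
    risk_tier: str          # e.g. "high" or "minimal"
    last_assessed: date

def due_for_review(record: AISystemRecord, today: date,
                   interval_days: int = 180) -> bool:
    """Flag systems whose risk profile may have drifted since the
    last assessment. The 180-day interval is an assumption, not a
    requirement of the act."""
    return today - record.last_assessed > timedelta(days=interval_days)

register = [
    AISystemRecord("chat-assistant", "minimal", date(2024, 9, 1)),
    AISystemRecord("cv-screener", "high", date(2025, 1, 15)),
]
overdue = [r.name for r in register if due_for_review(r, date(2025, 6, 1))]
print(overdue)  # ['chat-assistant']
```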

China

Like the EU, China has recently adopted laws and regulations specific to AI technology, although so far only for generative AI. The country’s deep synthesis provisions came into force in January 2023, regulating providers of artificially generated content. Generative AI measures, jointly adopted by seven central government agencies, took effect in August 2023. They apply to generative AI services offered to the public in China, regardless of where the AI service provider is located.

The generative AI measures require, for instance, that providers put a ‘generated by AI’ label on AI-generated content. Organisations engaged in AI development must also establish a review committee if the research is deemed ‘ethically sensitive’.
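
What a compliant label looks like in practice will vary by content type; the snippet below is a hypothetical illustration of the simplest pattern for text output, with the label wording an assumption rather than the regulation’s prescribed text:

```python
AI_LABEL = "[Generated by AI]"  # assumed wording, for illustration only

def label_generated_content(text: str) -> str:
    """Prepend a visible AI-generation label to generated text."""
    return f"{AI_LABEL} {text}"

print(label_generated_content("Quarterly summary: revenues rose 4%..."))
# [Generated by AI] Quarterly summary: revenues rose 4%...
```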

Singapore suggests third-party testing against AI standards to build trust

Singapore

The Singapore government has taken a different approach from that of China and the EU. Rather than enacting new AI-specific laws or regulations, the authorities in Singapore have developed frameworks to help companies better govern and manage AI systems. These include a model AI governance framework (2019, updated 2020) and a model framework for generative AI (2024).

The frameworks set out principles that organisations developing or deploying AI systems are advised to adopt. For instance, they suggest that organisations commission third-party testing against common AI standards to build trust among end-users of these systems.

The UK has focused attention on the major dangers of AI

UK

Unlike the EU or China, the UK has not created regulation specifically for AI. Instead, the UK government has outlined a framework and focused attention on the major dangers of AI.

The AI white paper of March 2023 (updated in August 2023) presented initial proposals for a pro-innovation regulatory framework for AI. This established five principles for existing industry regulators to apply in their respective domains. For instance, the first principle is that ‘AI systems should function in a robust, secure and safe way throughout the AI life cycle, and risks should be continually identified, assessed and managed’.

At a global summit on AI safety organised by the British government in November 2023, several key risks of AI systems were identified: they may generate misinformation; they may be misused by malicious actors; and they may pursue their own goals, possibly contrary to those of humanity.

More information

See more from AB on the EU AI Act and AI regulation in Asia
