Author

Peter McBurney is professor of computer science in the Department of Informatics at King’s College London.

CPD: 1 unit

Studying this article and answering the related questions can count towards your verifiable CPD if you are following the unit route to CPD, and the content is relevant to your learning and development needs. One hour of learning equates to one unit of CPD.

The recent launch of several systems that can engage in intelligent conversation with people, such as OpenAI’s ChatGPT or Microsoft’s Bing Chat (codenamed Sydney), has focused a great deal of attention on artificial intelligence (AI).

There are rising concerns among legislators about the potential risks of AI and of automated decision-making. For example, the European Commission, after wide consultation, has proposed new laws that would require large companies and organisations to assess the risks of any AI system they create or use, and to register systems deemed to be high risk with a new regulatory agency. The proposed EU AI Act is expected to pass the European Parliament in 2023 or 2024.

It is essential to assess the accuracy and ethical consequences of AI systems

As with previous European regulations on data protection, the EU’s AI rules will affect businesses worldwide, not only in Europe. The law will specify the factors that any risk assessment of AI systems must include, and compliance may well be complex.

Regulators in the EU are not the only ones concerned about AI. In January 2023, the Netherlands established a new unit within its national data protection authority (Autoriteit Persoonsgegevens) specifically to regulate algorithms. Dutch government entities are already required to register the algorithms they use in their activities.

So what are the factors that a risk assessment of AI systems will need to consider? 

Accuracy 

The first is accuracy, whether of prediction or classification. Errors or inaccurate predictions may have deleterious consequences, both for the subject of an automated decision and for the company deploying the AI, so it is essential to assess both the accuracy and the ethical consequences of AI systems.

Eliminating bias may reduce the accuracy of the AI – trade-offs might be necessary

Over many years, statisticians have developed a number of different measures of accuracy, but none works for every application. Associated with accuracy are issues of robustness and sensitivity – that is, how stable the outputs of a system are as its inputs change.
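To make this concrete, here is a minimal sketch in Python (using the widely available scikit-learn library and invented labels) showing how different measures can give very different verdicts on the same predictions, which is why no single measure suits every application.

```python
# A minimal sketch: the same predictions scored with different measures.
# The labels below are invented for illustration. On an imbalanced dataset,
# a model can score well on plain accuracy while missing most positives.
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 10 cases, only 2 of which are truly positive (e.g. fraudulent claims)
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
# A lazy model that predicts "negative" almost every time
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]

print("accuracy :", accuracy_score(y_true, y_pred))   # 0.9 - looks good
print("precision:", precision_score(y_true, y_pred))  # 1.0 - no false alarms
print("recall   :", recall_score(y_true, y_pred))     # 0.5 - misses half the positives
```

A fraud or medical screening application would weight recall heavily; a system whose false alarms are costly would weight precision, which is exactly the judgment no single measure can make for you.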

Bias 

A second key factor in risk assessment is the extent to which the system is biased, or not representative of its target population. Bias may enter through the data used to train the system, or through the input data used when the system is run in production. Bias may also arise from the AI algorithms themselves, although algorithmic bias is usually difficult to detect or to fix.

Most countries have laws precluding the use of certain protected personal characteristics (gender, ethnicity, religion, etc) in making decisions, so these variables are typically excluded from any AI system. Paradoxically, this can make bias harder to assess, since disparities cannot be measured on attributes that are not recorded. Eliminating bias may also reduce the accuracy of the AI system, and managers may need to trade off one factor against the other.
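As an illustration, the sketch below (invented records, plain Python) computes one simple and widely used bias check: the difference in favourable-outcome rates between two groups, often called the demographic parity gap. Note that it assumes the group attribute is available for auditing, which is precisely what becomes difficult once protected characteristics are excluded.

```python
# A minimal sketch of one common bias check: comparing the rate of
# favourable outcomes between groups (the demographic parity gap).
# The records below are invented for illustration.
from collections import defaultdict

# (group, decision) pairs: 1 = favourable outcome (e.g. loan approved)
decisions = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0), ("B", 1),
]

totals, favourable = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += outcome

rates = {g: favourable[g] / totals[g] for g in totals}
print("selection rates:", rates)  # A: 0.8, B: 0.4
print("parity gap:", max(rates.values()) - min(rates.values()))  # 0.4
```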

Having outsiders sit on ethical risk assessment panels helps avoid groupthink

Related to bias are ethical concerns over the use of an AI system. For example, by using data from an application on a mobile phone, a motor insurance company may be able to learn where its customers drive to and from. Use of this location data without proper permission may breach privacy regulations and raise ethical concerns.

Transparency 

A third area in risk assessment involves the transparency and explainability of the AI system. Can the AI system explain its decisions or recommendations? Unfortunately, most machine learning systems can’t, at least not in ways that a human can understand.

A new research area in computer science, explainable AI (XAI), has arisen to find ways of automatically generating human-understandable explanations of AI systems alongside the systems themselves.
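One simple and widely used explanation technique is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops, which reveals how heavily the model relies on that feature. Below is a minimal sketch, with an invented model and invented data for illustration.

```python
# A minimal sketch of permutation importance: shuffle each feature in
# turn and measure how far the model's accuracy falls. A large drop
# means the model leans heavily on that feature.
# The model and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # 200 rows, 3 features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)   # feature 0 matters most

model = LogisticRegression().fit(X, y)
baseline = accuracy_score(y, model.predict(X))

for j in range(X.shape[1]):
    X_shuffled = X.copy()
    perm = rng.permutation(len(X_shuffled))
    X_shuffled[:, j] = X_shuffled[perm, j]      # break feature j's link to y
    drop = baseline - accuracy_score(y, model.predict(X_shuffled))
    print(f"feature {j}: importance estimate = {drop:.3f}")
```

Such scores do not fully explain an individual decision, but they give a human reviewer a first, understandable view of what is driving a model.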

Governance

Finally, issues of governance process are of vital importance, particularly for data. In software development, it is standard practice to keep a copy of every version of an application so that previous versions can be recovered if needed. The same practice is becoming commonplace with data: secure copies are kept of all datasets, before and after any preprocessing, whether the data is used to train the AI system or as input data for prototypes or for systems in production.

Keeping these prior copies makes it possible to reconstruct past versions of applications when required. Careful record-keeping of datasets does not usually come naturally to data scientists, so explicit governance is often needed here.
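One lightweight way to support such record-keeping is to fingerprint every dataset version with a cryptographic hash and log it in a registry. The sketch below uses only Python’s standard library; the registry file name and the helper function are hypothetical.

```python
# A minimal sketch of dataset record-keeping: fingerprint each dataset
# file with a cryptographic hash and log it, so past versions can be
# identified and verified later. The registry file name is hypothetical.
import datetime
import hashlib
import json
from pathlib import Path

REGISTRY = Path("dataset_registry.json")

def register_dataset(path: str, note: str) -> str:
    """Hash a dataset file and append an entry to the registry."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    entry = {
        "file": path,
        "sha256": digest,
        "note": note,  # e.g. "training data, after preprocessing"
        "registered": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    log = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else []
    log.append(entry)
    REGISTRY.write_text(json.dumps(log, indent=2))
    return digest

# Usage: register_dataset("claims_2023_clean.csv", "training data, post-cleaning")
```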

New regulations for AI create opportunities for companies able to assist in risk assessment

Expertise and ethics

These various risk factors span a range of disciplines and perspectives: legal and regulatory, technical (both computer science and statistical analysis), and commercial operations. Most organisations will need to create multidisciplinary teams to undertake these assessments. In insurance, pharmaceuticals and some other industries, companies already have statistical and data science expertise, but this is not true of every sector.   

Where ethical issues arise, it is useful to have outsiders sit on risk assessment panels, as this helps reduce the possibility of groupthink. Including outsiders is standard practice for ethics committees in the pharmaceutical industry, for example.

The emergence of new regulations for AI creates opportunities for companies able to assist in risk assessment of these important applications. 

More information

See also Peter McBurney’s AB article ‘Welcome to the metaverse’.
