Jo Riches, journalist



Studying this article and answering the related questions can count towards your verifiable CPD if you are following the unit route to CPD, and the content is relevant to your learning and development needs. One hour of learning equates to one unit of CPD.

Much like buses, a new generation of artificial intelligence tools has been long anticipated – and now three have arrived all at once. More than a million of us tried ChatGPT in the first days of its release, while Microsoft’s Bing and Google’s Bard AI chatbots are similarly poised to revolutionise how information is accessed and deployed – even if doubts persist about their reliability.

Machine learning can work out how to recognise new spam characteristics as they evolve

Clever(ish) search engines are just the start. Advanced AI can now learn from events and does not need to rely on explicit human commands. Trained on vast data troves, it uses algorithms and statistical models to analyse and draw conclusions from the most up-to-date information before performing prescribed actions.

AI has already been adopted for various applications by sectors including government, finance and retail. But there is one challenge that concerns all organisations: can AI protect us against the ever-present risk of cybercrime?

New threats

Hackers are still depicted as solitary figures in hoodies, hunched over a glowing laptop in a dark room. It is an image that does not accurately reflect today’s cybercrime landscape, which is populated by criminals operating as business-like syndicates. Using increasingly sophisticated models of organisation and distribution, hackers no longer need to be coding whizzkids.

Taking a leaf out of the software-as-a-service playbook, ‘ransomware-as-a-service’ platforms provide their malware in return for a subscription fee and a percentage of any profits.

Two-thirds of executives consider cybercrime their biggest threat this year

No surprise, then, that two-thirds of executives consider cybercrime their most significant threat this year, as reported by PwC in its 2023 Global Digital Trust survey. A McKinsey study found 85% of SMEs in the US intend to increase IT security spending. Meanwhile, the acceleration of digital transformation – mobile device use, remote working, internet-enabled appliances and round-the-clock cloud connectivity – means network perimeters have become less clearly defined, and therefore harder for IT departments to defend.


The AI era has brought a raft of supercharged security tools. One of the most significant advances is the use of machine learning algorithms that have been taught to identify patterns within large data sets. Particularly useful in detecting malware, phishing attempts and service disruption attacks, these algorithms also power the continuous analysis of network traffic, identifying anomalies that could be of concern.
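To illustrate the principle, anomaly detection of the kind described above can be sketched in a few lines of Python. This is a deliberately simplified toy – a robust median-based baseline spotting a sudden spike in request volumes – standing in for the far more sophisticated machine-learning models used in production tools:

```python
from statistics import median

def flag_anomalies(samples, threshold=3.5):
    """Flag values that sit far from the median of the series.

    Uses the median absolute deviation (MAD) as the yardstick - a
    robust baseline that a single large spike cannot skew, unlike
    the mean and standard deviation.
    """
    med = median(samples)
    mad = median(abs(x - med) for x in samples)
    if mad == 0:
        mad = 1e-9  # flat baseline: treat any deviation as anomalous
    return [i for i, x in enumerate(samples)
            if abs(x - med) / mad > threshold]

# Mostly steady traffic (requests per minute), with one sudden spike
traffic = [120, 118, 125, 122, 119, 121, 950, 123]
print(flag_anomalies(traffic))  # the spike at index 6 stands out
```

Real network-monitoring tools learn multidimensional baselines per user and per device, but the core idea is the same: model normal behaviour, then alert on what falls outside it.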

Natural language processing also bolsters cyberdefences. Able to understand and analyse text-based data, it can flag up malicious emails, texts or chat messages indicative of a concerted attack.
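The idea behind text-based filtering can likewise be sketched as a toy naive Bayes classifier. The tiny labelled corpus below is invented purely for illustration – real systems train on millions of messages and far richer linguistic features:

```python
import math
from collections import Counter

# Invented training examples: messages labelled phishing vs legitimate
PHISHING = ["verify your account urgently",
            "urgent click here to reset your password",
            "your account is suspended click to verify"]
LEGIT = ["meeting moved to friday afternoon",
         "quarterly report attached for review",
         "are you free for lunch on thursday"]

def word_counts(msgs):
    return Counter(w for m in msgs for w in m.lower().split())

SPAM_COUNTS, HAM_COUNTS = word_counts(PHISHING), word_counts(LEGIT)
VOCAB = set(SPAM_COUNTS) | set(HAM_COUNTS)

def log_likelihood(counts, words):
    total = sum(counts.values())
    # Laplace smoothing so unseen words do not zero out the probability
    return sum(math.log((counts[w] + 1) / (total + len(VOCAB)))
               for w in words)

def looks_like_phishing(message):
    words = message.lower().split()
    return log_likelihood(SPAM_COUNTS, words) > log_likelihood(HAM_COUNTS, words)

print(looks_like_phishing("urgent verify your account"))      # flagged
print(looks_like_phishing("report attached for the meeting")) # not flagged
```

Because the classifier learns word statistics from labelled examples rather than relying on a fixed blocklist, retraining on fresh data lets it keep pace as attackers change their wording.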

Soon we can expect to see increased use of security solutions that integrate these fast-developing technologies:

  • Staffing support. The global cybersecurity workforce has not kept pace with rising threats. It now falls short by 3.4 million workers, according to the latest figures, with 700,000 unfilled security jobs in the US alone. Algorithms that automate routine and time-consuming tasks can boost the efficiency and effectiveness of security operations. ‘While AI cannot create new people to fill these posts, it can speed up their work and act as a force multiplier for the ones we do have,’ says Jeff Crume, IBM engineer and cybersecurity architect.
  • Monitoring and detection. AI is a powerful ally when it comes to identifying unusual use patterns. Red flags might include a user who starts accessing sensitive files, laptops communicating with unusual services, or even an employee who seems to be using their keyboard uncharacteristically. Advances in biometrics, facial and voice recognition are other strings to AI’s bow, beefing up authentication systems and preventing unauthorised access to sensitive data.
  • Phishing/spam filtering. Standard cybersecurity software detects potentially malicious emails by scanning for known suspect words. Machine learning takes this up a gear, learning to recognise new spam characteristics as they evolve and adapting its surveillance as phishing becomes more sophisticated. These technologies also see the big picture, using predictive analytics to identify patterns and issue alerts before breaches occur.
  • Automated reactions. Threat responses can be programmed to trigger automatically. An AI system could quarantine an infected computer, shut down a malicious process or block a suspicious IP address. Since AI analyses security events in real time and has powerful processing capacity, it makes decisions much more quickly than humans can, powering faster incident responses. Security teams gain precious time for damage prevention or limitation as a result.
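As a minimal illustration of such automated reactions, the sketch below scans an authentication log and proposes a ‘block’ action for any source address with repeated failed logins. The threshold, log format and action names are assumptions made for the example, not any vendor’s actual interface:

```python
from collections import Counter

def plan_responses(events, threshold=5):
    """Return automated actions for source IPs with repeated failed logins.

    events: list of (source_ip, outcome) tuples from an auth log.
    Any IP reaching the failure threshold gets a 'block' action; a real
    platform might also quarantine hosts or kill malicious processes.
    """
    failures = Counter(ip for ip, outcome in events if outcome == "failure")
    return {ip: "block" for ip, n in failures.items() if n >= threshold}

# One brute-forcing address, one user who mistyped a password once
log = ([("203.0.113.9", "failure")] * 6 +
       [("198.51.100.4", "success"), ("198.51.100.4", "failure")])
print(plan_responses(log))  # only the brute-forcing address is blocked
```

The value lies in the speed of the loop: the rule fires the instant the threshold is crossed, rather than waiting for an analyst to read the log.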

Chatbots can automate impersonations of legitimate personnel

Adversarial AI

The situation becomes more complex, however, when criminals leverage these same AI capabilities.

Chatbots can be programmed to mimic human conversations, allowing criminals to automate impersonations of legitimate personnel. They can also be used to disseminate malware, capture personal information and execute financial scams.

Phishing attacks are a key use case, with attackers now deploying machine learning to generate convincing emails. These can be highly effective at tricking recipients into releasing sensitive information such as log-in credentials or financial details. AI is also increasingly adept at producing ‘deepfake’ videos, potentially opening up a whole new vector of attack.

Careful evaluation

The prospect of adversarial AI is a significant driver behind the current accelerated investment in defensive AI strategies. The UK’s National Cyber Security Centre suggests that large organisations able to employ dedicated cybersecurity teams are best placed to adopt them. There are benefits for SMEs, but they are advised to consider AI’s value proposition in relation to setup and support costs.

Karen Danesi, deputy director of capability at NCSC, advises careful evaluation to ensure that AI tools will indeed add value. She says: ‘AI has the potential to transform entire industries, and we will see it make significant contributions to many fields, including cybersecurity, in the future.

‘This technology is still developing, however, and we would encourage organisations to familiarise themselves with our bespoke guidance to establish whether AI can offer the most practical or advantageous solution to their cybersecurity needs.’

Cyber insights

Click here for free factsheets, advice, CPD and other ACCA resources relating to cybersecurity