Alan Turing Institute: UK can't handle a fight against AI-enabled crims
UK law enforcement faces a significant gap in AI adoption, and the National Crime Agency says it will closely review the Alan Turing Institute's recommendations on the country's capability to combat AI-enabled crime.

The National Crime Agency's Response to AI-Enabled Crime
The National Crime Agency (NCA) will "closely examine" the recommendations made by the Alan Turing Institute after it claimed the UK was ill-equipped to tackle AI-enabled crime. A report from the institute's Centre for Emerging Technology and Security (CETaS), published this week, offered a few pointers, advising the NCA to start by establishing a dedicated AI crime task force within the next five years.
The Alan Turing Institute reckons that even though AI-enabled crime is still in its infancy, malign forces are upping their skillsets and UK law enforcement needs to adapt in kind. Asked about the recommendations, an NCA spokesperson told The Register: "The National Crime Agency highlighted the growing use of artificial intelligence to commit a range of high-harm crimes – including child sexual abuse, cybercrime, and fraud – in its National Strategic Assessment published in March, and we welcome the Alan Turing Institute bringing further attention to this issue. Their recommendations will be closely examined."
Challenges Highlighted by the CETaS Report
The core finding of CETaS' report was that the UK's police and other law enforcement agencies have been slow to adapt to the emergence of AI. It said the country is aware of the threat but has done little to translate that awareness into defensive capability. Two unnamed academics interviewed as part of the research expressed their concerns, with one saying there is an "enormous gap between the technical capability of law enforcement in the UK and the nature of the problem." Another said they were "very concerned about the police's ability to understand what is out there, deal with it and utilize AI itself."
The institute said AI-specific legislation may mitigate the harm AI enables in the wrong hands over the long term, but in the short term law enforcement must get better at adopting, procuring, and mainstreaming AI in its routine crime-fighting. In essence, it must fight AI with AI.
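To make the "fight AI with AI" idea concrete at the most basic level, here is a minimal Python sketch of a classical text classifier flagging likely phishing messages. It is purely illustrative, not a system described in the CETaS report or used by the NCA, and the tiny labelled corpus is invented for the demo.

```python
# Toy sketch: a classical text classifier that flags likely phishing messages.
# Purely illustrative -- real law-enforcement tooling would need large labelled
# datasets, adversarial testing, and human review of every flagged message.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled corpus (1 = phishing, 0 = legitimate), invented for the demo.
messages = [
    "Your account is locked, verify your password at this link immediately",
    "Urgent: confirm your bank details to avoid suspension",
    "You have won a prize, send a processing fee to claim it",
    "Lunch at 1pm tomorrow? The usual place",
    "Minutes from today's project meeting are attached",
    "Can you review the draft report before Friday?",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features over unigrams and bigrams, fed into logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

suspect = "Verify your account password now to avoid immediate suspension"
print(model.predict_proba([suspect])[0][1])  # probability the message is phishing
```

Even a toy like this shows the trade the report is pointing at: the techniques are commodity, so the gap is in procurement, data, and routine deployment rather than in exotic research.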
Efforts to Combat AI-Enabled Crime
The NCA acknowledged the threat of AI-enabled crime and said it is working to counter it. Alex Murray, the agency's director of threats and the first national lead for policing AI, is exploring the use of AI to empower crime fighters and increase efficiencies. Although AI-enabled crime is still believed to be in the early stages of its evolution, it has already resulted in some highly successful heists.
The abuse of AI extends beyond deepfakes alone. Cybersecurity experts have warned at length about the impact AI is having on phishing. The institute also warned of AI's role in helping scammers of all stripes, including the romance variety, craft messages that build stronger bonds with victims, all while using deepfake tech to pass as celebrities, for example.
Early efforts to use AI to combat scams, whether those scams were AI-enabled or not, include UK telco O2's AI time-wasting granny Daisy, but sophisticated counters to fraud have yet to come to the fore. The current threat is clear, and as AI develops, criminals are expected to gain greater capabilities, such as the automation of attacks that currently require manual control.
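O2 hasn't published Daisy's internals, which combine generative AI with voice synthesis. The following Python sketch only illustrates the underlying idea of a scripted time-wasting persona; the stalling lines and the `respond` helper are hypothetical, invented for this example.

```python
# Minimal sketch of a Daisy-style scam-baiter: a scripted persona that stalls
# a caller with meandering replies. A rule-based loop, not O2's actual system.
import itertools
import time

# Canned stalling lines, cycled forever -- invented for this illustration.
STALLING_LINES = itertools.cycle([
    "Oh, hold on dear, let me find my glasses...",
    "Sorry, could you repeat that? The kettle was whistling.",
    "Now, was that an 'O' or a zero? My eyes aren't what they were.",
    "My grandson usually handles the computer. What was your name again?",
])

def respond(scammer_message: str) -> str:
    """Return the next canned stalling reply, ignoring the message content."""
    time.sleep(0.1)  # a real deployment would pace replies to burn the caller's time
    return next(STALLING_LINES)

for turn in ["Madam, your bank account has been compromised.",
             "I need you to read out the code we just sent you."]:
    print("Scammer:", turn)
    print("Daisy:  ", respond(turn))
```

The point of such a bot is economic rather than forensic: every minute a scammer spends talking to a machine is a minute not spent on a real victim.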
"As AI capabilities continue to advance, fraudsters will likely refine their use of LLMs and deepfake media, blending automated deception with strategic human oversight," the report stated. "Future scams may leverage increasingly convincing real-time AI interactions, reducing the need for direct human involvement. The evolution of AI-driven relationship-building tactics underscores the growing challenge of distinguishing between authentic and manipulated digital identities."