British Tech Firms and Child Protection Officials to Test AI's Capability to Generate Abuse Images

Technology companies and child protection agencies will receive authority to evaluate whether AI tools can generate child abuse material under new UK laws.

Substantial Rise in AI-Generated Harmful Material

The announcement coincided with revelations from a safety monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the changes, the government will allow approved AI companies and child safety organizations to examine AI models – the underlying systems behind chatbots and image generators – to ensure they have adequate safeguards to prevent them from producing images of child sexual abuse.

"Fundamentally about stopping exploitation before it occurs," declared the minister for AI and online safety, noting: "Specialists, under rigorous conditions, can now identify the danger in AI models early."

Tackling Regulatory Obstacles

The changes address a legal obstacle: because it is illegal to create or possess CSAM, AI developers and others could not generate such content even as part of a testing process. Previously, authorities could act only after AI-generated CSAM had been uploaded online.

This legislation is aimed at averting that issue by helping to halt the production of those images at source.

Legal Structure

The amendments are being introduced by the government as revisions to the crime and policing bill, which is also establishing a ban on owning, producing or distributing AI models developed to create exploitative content.

Practical Consequences

This week, the official visited the London headquarters of Childline and listened to a mock-up call to advisors involving a report of AI-based abuse. The interaction depicted an adolescent seeking help after being extorted with a sexualised AI-generated image of themselves.

"When I hear about young people experiencing blackmail online, it is a cause of extreme frustration for me and of justified concern amongst families," he stated.

Alarming Statistics

A leading online safety foundation stated that cases of AI-generated abuse content – such as webpages that may include multiple images – had more than doubled so far this year.

Cases of category A content – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.

  • Girls were predominantly victimized, making up 94% of illegal AI depictions in 2025
  • Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a vital step to ensure AI products are secure before they are launched," stated the chief executive of the online safety organization.

"Artificial intelligence systems have made it possible for survivors to be targeted all over again with just a few simple actions, giving criminals the capability to produce potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she added. "Content which further exploits victims' suffering, and makes children, especially girls, more vulnerable on and offline."

Support Interaction Data

Childline also released details of counselling sessions in which AI was mentioned. AI-related risks raised in the sessions included:

  • Using AI to evaluate weight, physique and appearance
  • AI assistants dissuading children from consulting safe guardians about abuse
  • Being bullied online with AI-generated material
  • Online extortion using AI-manipulated pictures

Between April and September this year, Childline delivered 367 support sessions in which AI, chatbots and associated topics were discussed, significantly more than in the same period last year.

Half of the AI references in the 2025 sessions were connected with mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.

Erica Gonzales