British Tech Firms and Child Protection Officials to Examine AI's Ability to Generate Exploitation Content
Under new British laws, tech firms and child safety organizations will be given authority to evaluate whether artificial intelligence systems can produce child exploitation material.
Substantial Increase in AI-Generated Illegal Material
The announcement came as a safety watchdog revealed that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the changes, the government will allow approved AI developers and child safety groups to examine AI systems – the foundational technology behind chatbots and visual AI tools – and verify that they have adequate safeguards to prevent them from creating images of child sexual abuse.
"Ultimately about preventing exploitation before it occurs," stated the minister for AI and online safety, noting: "Specialists, under strict protocols, can now detect the danger in AI models early."
Tackling Regulatory Challenges
The amendments have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and other parties could not create such images even as part of an evaluation regime. Previously, officials could not act until AI-generated CSAM had been published online.
This legislation is designed to prevent that problem by helping stop the production of such material at its source.
Legislative Structure
The government is introducing the amendments to the crime and policing bill, which also establishes a prohibition on possessing, creating or distributing AI models designed to generate exploitative content.
Practical Impact
This week, the minister toured the London headquarters of Childline and heard a simulated call to advisers featuring an account of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I hear about children facing blackmail online, it is a cause of extreme frustration in me and justified concern amongst parents," he said.
Concerning Statistics
A prominent online safety organization stated that reports of AI-generated abuse material – each of which can be a webpage containing numerous images – had more than doubled so far this year.
Instances of the most severe category of material rose from 2,621 images or videos to 3,086.
- Female children were overwhelmingly targeted, accounting for 94% of illegal AI-generated depictions in 2025
- Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "constitute a vital step to guarantee AI products are secure before they are launched," stated the chief executive of the internet monitoring organization.
"AI tools have enabled so victims can be targeted repeatedly with just a simple actions, providing offenders the capability to create possibly limitless amounts of advanced, photorealistic exploitative content," she continued. "Content which additionally exploits victims' trauma, and makes children, especially girls, less safe both online and offline."
Counselling Session Information
The children's helpline also released details of counselling sessions in which AI was mentioned. AI-related risks raised in the conversations include:
- Using AI to rate body size, physique and appearance
- Chatbots dissuading children from consulting trusted guardians about abuse
- Being bullied online with AI-generated content
- Digital blackmail using AI-manipulated images
Between April and September this year, the helpline conducted 367 support interactions where AI, chatbots and related topics were discussed, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.