British Tech Companies and Child Protection Agencies to Examine AI's Capability to Create Exploitation Content
Tech firms and child safety agencies will be given permission to test whether artificial intelligence systems can generate child exploitation images under newly introduced UK legislation.
Significant Increase in AI-Generated Illegal Material
The announcement came as a safety monitoring body published findings showing that reports of AI-generated child sexual abuse material have risen sharply in the past twelve months, from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the amendments, the authorities will permit designated AI companies and child safety organizations to examine AI models – the underlying technology behind chatbots and image generators – and verify that they have adequate safeguards to prevent them from creating depictions of child sexual abuse.
"Fundamentally about stopping exploitation before it occurs," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now identify the risk in AI systems early."
Addressing Regulatory Challenges
The changes address a legal obstacle: because it is illegal to create and possess child sexual abuse material (CSAM), AI developers and others have been unable to generate such images as part of a testing regime. Until now, officials had to wait until AI-generated CSAM appeared online before addressing it.
This legislation is designed to close that gap by enabling the creation of such images to be stopped at the source.
Legislative Framework
The changes are being introduced by the authorities as amendments to the crime and policing bill, which also bans possessing, creating or distributing AI systems designed to produce child sexual abuse material.
Practical Impact
The minister recently visited Childline's London base and listened to a simulated call to advisers featuring a report of AI-based exploitation. The mock interaction portrayed an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I hear about young people facing blackmail online, it is a cause of extreme frustration in me and rightful concern amongst parents," he stated.
Concerning Statistics
A prominent internet monitoring organization reported that instances of AI-generated abuse material – each report can relate to a web page containing numerous images – had more than doubled so far this year.
Instances of category A material – the most serious form of exploitation – rose from 2,621 images and videos to 3,086.
- Female children were overwhelmingly targeted, accounting for 94% of prohibited AI images in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a crucial step to ensure AI tools are secure before they are launched," said the head of the internet monitoring organization.
"AI tools have made it so survivors can be targeted repeatedly with just a simple actions, providing offenders the capability to make possibly limitless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Material which additionally commodifies victims' trauma, and renders young people, especially female children, less safe on and off line."
Counselling Session Data
The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:
- Using AI to rate body size, physique and appearance
- Chatbots discouraging children from consulting trusted adults about harm
- Being bullied online with AI-generated content
- Digital extortion using AI-manipulated pictures
Between April and September this year, Childline delivered 367 support sessions in which AI, chatbots and related topics were mentioned, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.