British Technology Companies and Child Safety Officials to Examine AI's Capability to Create Exploitation Content

Technology companies and child safety organizations will be granted permission to evaluate whether artificial intelligence tools can produce child abuse images under recently introduced UK laws.

Substantial Increase in AI-Generated Illegal Content

The announcement came as a safety watchdog revealed that reports of AI-generated CSAM have increased dramatically over the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the amendments, the government will permit designated AI companies and child safety organizations to examine AI models – the foundational technology for conversational AI and image generators – and verify they have adequate protective measures to stop them from creating images of child sexual abuse.

The measures are "ultimately about preventing abuse before it happens," stated Kanishka Narayan, adding: "Experts, under strict protocols, can now identify the danger in AI models promptly."

Addressing Regulatory Obstacles

The amendments have been introduced because it is against the law to produce and possess CSAM, meaning that AI creators and others cannot create such content as part of a testing process. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it.

This legislation is aimed at averting that problem by helping to stop the production of those materials at source.

Legislative Structure

The government is introducing the changes as amendments to criminal justice legislation, which also establishes a ban on possessing, producing or distributing AI models designed to generate child sexual abuse material.

Practical Consequences

This week, the minister visited the London headquarters of Childline, where he listened to a simulated counselling call involving a report of AI-based abuse. The call depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of themselves, created using AI.

"When I hear about young people facing extortion online, it is a source of extreme frustration for me and of justified anger among families," he stated.

Alarming Data

A prominent internet monitoring foundation stated that instances of AI-generated exploitation content – such as webpages that may include multiple files – had more than doubled so far this year.

Instances of the most severe material – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly victimized, making up 94% of prohibited AI depictions in 2025
  • Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "represent a crucial step to guarantee AI products are secure before they are released," commented the chief executive of the online safety foundation.

"AI tools have made it so survivors can be victimised repeatedly with just a few simple actions, giving criminals the capability to make potentially limitless quantities of sophisticated, lifelike exploitative content," she added. "Material which further commodifies victims' suffering, and renders children, especially female children, more vulnerable both online and offline."

Support Session Information

The children's helpline also published details of support sessions where AI has been referenced. AI-related risks mentioned in the sessions include:

  • Using AI to evaluate body size and appearance
  • Chatbots dissuading young people from talking to trusted adults about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-manipulated images

Between April and September this year, Childline delivered 367 support sessions in which AI, chatbots and related terms were discussed, significantly more than in the same period last year.

Half of the references to AI in the 2025 interactions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

David Rose

A passionate writer and mindfulness coach dedicated to helping others find peace and purpose through practical advice and shared experiences.