Technology companies and child safety agencies will receive authority to evaluate whether artificial intelligence tools can produce child abuse material under new UK laws.
The announcement coincided with findings from a safety watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the last twelve months, growing from 199 in 2024 to 426 in 2025.
Under the amendments, approved AI companies and child protection organizations will be permitted to inspect AI models – the foundational technology behind chatbots and image-generation tools – to ensure they have adequate safeguards in place to prevent the creation of depictions of child exploitation.
The measure is "fundamentally about preventing exploitation before it occurs," said Kanishka Narayan, who added: "Specialists, under strict conditions, can now detect the danger in AI systems promptly."
The amendments were introduced because producing and possessing CSAM is against the law, meaning that AI developers and others cannot generate such content even as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM had been uploaded online before they could act on it.
The legislation aims to prevent that problem by helping to halt the creation of such images at source.
The government is introducing the changes as amendments to criminal justice legislation, which also establishes a ban on owning, creating or distributing AI models designed to generate exploitative content.
Recently, the minister toured the London base of a children's helpline and listened to a mock-up call to advisers involving a report of AI-based exploitation. The interaction depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.
"When I hear about young people facing blackmail online, it is a source of extreme frustration to me and of rightful concern amongst parents," he said.
A leading online safety organization stated that cases of AI-generated abuse content – each case potentially an online page containing numerous files – had more than doubled so far this year.
Reports of the most severe category of content – depicting the most serious form of abuse – rose from 2,621 visual files to 3,086.
The law change could "represent a vital step to ensure AI tools are secure before they are launched," said the head of the internet monitoring foundation.
"AI tools have made it so survivors can be victimised repeatedly with just a few simple actions, giving offenders the ability to create potentially endless amounts of sophisticated, photorealistic exploitative content," she continued. "Content which further commodifies victims' suffering, and renders children, especially girls, less safe on and offline."
Childline also released details of support interactions in which AI was mentioned.
Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and related terms were mentioned – four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.