New law will allow technology to be examined and ensure tools have safeguards to stop creation of material
Tech companies and child protection agencies will be given the power to test whether artificial intelligence tools can produce child abuse images under a new UK law.
The announcement was made as a safety watchdog revealed that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, from 199 in 2024 to 426 in 2025.
Under the change, the government will give designated AI companies and child safety organisations permission to examine AI models – the underlying technology for chatbots such as ChatGPT and video generators such as Google’s Veo 3 – and ensure they have safeguards to prevent them from creating images of child sexual abuse.
Kanishka Narayan, the minister for AI and online safety, said the move was “ultimately about stopping abuse before it happens”, adding: “Experts, under strict conditions, can now spot the risk in AI models early.”