Internet Watch Foundation verified 8,029 pieces of realistic AI-made content, with 65% of videos in worst category

The amount of AI-generated child sexual abuse material found online rose by 14% last year, with the majority of videos showing the most extreme type of content, according to a safety watchdog.

The Internet Watch Foundation said it identified 8,029 AI-generated images and videos of realistic child sexual abuse material (CSAM) in 2025. It added that the number of such videos had increased more than 260-fold.

The IWF said 65% of the 3,443 videos were classified as category A, the most severe category of material under UK law. The corresponding figure for non-AI videos was 43%, the watchdog said, indicating that the technology is being used to create more extreme content.

Kerry Smith, the chief executive of the IWF, said: “Advances in technology should never come at the expense of a child’s safety and wellbeing. While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child’s life. This material is dangerous.”