An increase in sophisticated AI-generated images of child abuse could result in police and other agencies chasing "fake" rather than genuine abuse, a charity has said.
The Internet Watch Foundation (IWF), based in Histon, near Cambridge, finds, flags, and removes images and videos of child sexual abuse from the web.
However, the growing prevalence of AI-generated images - up 300% in 2024 compared with the previous year - has added another layer of complexity to its work.
Dan Sexton, IWF chief technology officer, said there was now a risk that law enforcement and other agencies could be "trying to rescue children that don't exist or not trying to rescue children because they think they're AI".
"About two years ago we first started seeing this content being circulated, and there were little 'tells' - it looked distinctly different," Mr Sexton said.