They have unleashed irresponsible innovations on the world and their slop generators have flooded academia, says Dr Craig Reeves

I’m not surprised to read that the field of artificial intelligence research is being overwhelmed by the very slop that it has pioneered (Artificial intelligence research has a slop problem, academics say: ‘It’s a mess’, 6 December). But this is a bit like bears getting indignant about all the shit in the woods.

It serves AI researchers right for the irresponsible innovations that they’ve unleashed on the world, without ever bothering to ask the rest of us whether we wanted them.

But what about the rest of us? The problem is not restricted to AI research – their slop generators have flooded other disciplines that bear no blame for this revolution. As a peer reviewer for top ethics journals, I’ve had to point out that submissions are AI-generated slop. But many academic experts are not well-versed enough in slop to spot it so quickly. Nor, understandably, are they inclined to immerse themselves in the “genre” to get up to speed.

This means that weeding out the slop will be slow work that clogs up the system. AI allows the volume of rubbish to be scaled up to unmanageable levels, so that traditional quality‑control mechanisms such as peer review are overwhelmed.