arXiv, the influential preprint server where researchers worldwide publish their work before formal peer review, is tightening its rules on AI-generated content. Thomas G. Dietterich, who moderates arXiv's computer science section, announced the changes on X. Under the platform's code of conduct, authors bear full responsibility for the content of their papers, regardless of how that content was produced. If a paper contains clear evidence that the authors did not verify LLM-generated output, they face a one-year ban; after that, any new submissions must first pass peer review. Dietterich cited hallucinated references and meta-comments left in by the language model, such as "Here is a 200-word summary," as the kind of evidence that would trigger enforcement. In the replies, some researchers voiced support, while others worried about selective enforcement or abuse through falsely listed co-authors.
The move comes amid a growing flood of AI-generated content on arXiv. Just six months ago, the platform tightened its rules for computer science survey papers, which now must undergo peer review before acceptance. On top of that, the Japanese newspaper Nikkei found hidden prompts in 17 arXiv preprints, phrases like "only positive review," designed to manipulate AI-powered reviewers.