TL;DR: ArXiv will ban researchers for one year if they submit papers with obvious signs of unchecked AI generation, such as hallucinated references or leftover chatbot instructions. The policy, announced by computer science section chair Thomas Dietterich, is the first formal penalty by a major preprint platform for AI-generated slop.

ArXiv, the open-access repository that has served as the primary distribution channel for preprint research in computer science, mathematics, and physics for more than three decades, will ban authors for one year if they submit papers containing obvious signs of unchecked AI generation. Thomas Dietterich, chair of arXiv’s computer science section, announced the policy on Thursday, writing that submissions with “incontrovertible evidence” of unvetted large language model output mean “we can’t trust anything in the paper.”

The rule is not a blanket prohibition on using AI tools. Researchers can still use language models for drafting, editing, or analysis. What triggers the penalty is evidence that an author pasted LLM output into a paper without checking it: the kind of carelessness that produces hallucinated references, leftover instructions from the chatbot, or fabricated data tables with notes reading "fill in with the real numbers from your experiments." If moderators find such evidence and a section chair confirms it, the author faces a one-year ban from arXiv, after which all subsequent submissions must first be accepted by a peer-reviewed journal before they can appear on the platform.