Former OpenAI employee Daniel Kokotajlo says progress to AGI is ‘somewhat slower’ than first predicted
A leading artificial intelligence expert has rolled back his timeline for AI doom, saying it will take longer than he initially predicted for AI systems to be able to code autonomously and thus speed their own development toward superintelligence.
Daniel Kokotajlo, a former employee of OpenAI, sparked an energetic debate in April by releasing AI 2027, a scenario that envisions unchecked AI development leading to the creation of a superintelligence, which – after outfoxing world leaders – destroys humanity.
The scenario rapidly won admirers and detractors. The US vice-president, JD Vance, appeared to reference AI 2027 in an interview last May when discussing the US’s artificial intelligence arms race with China. Gary Marcus, an emeritus professor of psychology and neural science at New York University, called the piece a “work of fiction” and some of its conclusions “pure science fiction mumbo jumbo”.
Timelines for transformative artificial intelligence – sometimes called AGI (artificial general intelligence), or AI capable of replacing humans at most cognitive tasks – have become a fixture in communities devoted to AI safety. The release of ChatGPT in 2022 vastly accelerated these timelines, with officials and experts predicting the arrival of AGI within decades or years.