If machines become superintelligent, we’re toast, say Eliezer Yudkowsky and Nate Soares. Should we believe them?

What if I told you I could stop you worrying about climate change, and all you had to do was read one book? Great, you’d say, until I mentioned that the reason you’d stop worrying is that the book says our species only has a few years left before it’s wiped out by superintelligent AI anyway.

We don’t know exactly what form this extinction will take – perhaps an energy-hungry AI will let the millions of fusion power stations it has built run hot, boiling the oceans. Maybe it will want to reconfigure the atoms in our bodies into something more useful. There are many possibilities, almost all of them bad, say Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies, and who knows which will come true. But just as you can predict that an ice cube dropped into hot water will melt without knowing where any of its individual molecules will end up, you can be sure that an AI smarter than a human being will kill us all, somehow.

This level of confidence is typical of Yudkowsky in particular. He has been warning about the existential risks posed by technology for years on the website he helped to create, LessWrong.com, and via the Machine Intelligence Research Institute he founded (Soares is its current president). Despite never graduating from high school or university, Yudkowsky is highly influential in the field, and a celebrity in the world of very bright young men arguing with each other online (as well as the author of a 600,000-word work of fanfic called Harry Potter and the Methods of Rationality). He is colourful, annoying, polarising. “People become clinically depressed reading your crap,” lamented leading researcher Yann LeCun during one online spat. But LeCun is chief AI scientist at Meta – who is he to talk?