Whether you want to free it or regulate it into submission, one thing is clear: this new technology is moving so fast that we can’t fully grasp it.

At an 80th birthday party at the weekend, I met an academic who was evasive about his field. When he finally disclosed “computer science”, I asked him why he hadn’t wanted to say, and he replied: “Because I cannot have one more conversation about AI.” I couldn’t ask him why not, because of stupid manners; that would have been one more conversation about AI. But I don’t want to have another conversation about AI either.

Nobody’s opinion, whether utopian or dystopian, seems to keep up with the thing itself, so everything has the laggy, outdated feeling of a BBC Radio 4 afternoon play about AI. There was one last week, and I listened to it patchily, thinking: “If AI had written this, it would have made a more sophisticated evaluation of the threat posed by itself, and been less hammy, unless the instruction had specifically been, ‘Write some dialogue in the style of a pretend-family on a party political broadcast from the 90s.’”

There are cheerleading bystanders, the people who trust that technological advance is generally productive and good. Rather than engage with any of the crunchy reality of this vast terrain, they’ll tell you instead that every discovery in history was scary at first, and yet was a slab on the crooked, miraculous path to enlightenment. Invariably, there’ll be a bit in the middle where I’ve stopped listening and started thinking about space being pointless, then wham, by the end AI will definitely cure cancer. This is often quite plausible, by the way. There’s a resigned dehumanisation, as if discovery were a thing we consumed, as if we could outsource intellectual adventure and merely reap its rewards, putting no dent in our delight or meaning. But then what do I know? I’ve never discovered anything. And you can hardly say to someone whose cancer is uncured: “But I wanted to have a crack at fixing it myself.”