The technology most people use only as a chatty tool for daily tasks is reportedly aiding US military aggression. And there is not much we can do about it

There are a lot of things that AI can do. It can sort out your shopping list, and it can keep your kids entertained when they’re mutinous by spinning up a tailor-made bedtime story for them. It can make you more efficient at work, and can help our government operate more effectively.

What is written about less, and what we need to shout louder about now, are the risks inherent in the militarisation of AI. In the last three months, Donald Trump’s White House has reportedly used AI twice to effect regime change, or, in the most recent case in Iran, to get as close to doing so as possible, leaving it up to rank-and-file Iranians to finish the job.

First, Anthropic’s Claude AI model – which most people use as a slightly more discerning alternative to ChatGPT – was supposedly used both to plan and to execute the snatching of Nicolás Maduro from his compound in Venezuela, though it remains unclear exactly how the model was deployed. Then this weekend we learned that the tool had been used again, this time to parse intelligence that aided the hugely damaging barrage of missiles that has rained down on Iran, apparently identifying targets and running simulations.