The new war room

Militaries already use AI to detect what humans might miss. Now they also want an advice engine for commanders to consult in battle.

April 21, 2026

Stephanie Arnett/MIT Technology Review | Getty Images, Public Domain

To call the conflict in Iran the first “AI war” would be, in many ways, incorrect. Algorithms that scour hours of surveillance footage and pick out, say, trucks with mounted machine guns date back to the war in Afghanistan. Ukraine has built drones that use AI to navigate autonomously. Israel has used AI systems to identify possible targets from intelligence data.

Here’s what is new, in Iran and elsewhere: conversational AI systems that commanders turn to not just for analysis but for advice. Unlike previous AI technologies, these advice engines are built on large language models. And they’re already reshaping how militaries share intelligence, work with Big Tech, and make life-and-death decisions.

A decade ago, AI tools started to automate the work a junior intelligence analyst might do: picking out an important signal from the noise of a social media or satellite feed. Systems like the US military’s Maven, built mainly on technology from the surveillance giant Palantir, fed that sort of analysis into tools that let commanders select a target to bomb halfway around the world, all through clean interfaces that look more like business software than the machinery of war.

Now large language models are making these systems more interactive and capable of doling out advice. One US defense official told MIT Technology Review that today’s military personnel might give chatbots a list of potential targets to help decide which to strike first. And even though the Pentagon recently labeled Anthropic a supply chain risk, the company’s tool Claude has become so intertwined with military operations that the government says it needs six months to remove it.








