Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition: Trump has an AI data center problem ahead of the midterms…Don’t trust AI to file your taxes…Anthropic’s AI tool Claude is central to US campaign in Iran, amid a bitter feud.

The debate around AI safety often focuses on the technology itself—how powerful models might become, or what risks they might pose. But the conflict this week involving Anthropic, OpenAI, and the Pentagon points to a deeper problem: how much power over the future of AI is concentrated in the hands of a small number of corporate leaders and government officials who decide how these systems are built, deployed, and used.

For years, critics of the industry have warned about the risk of “industrial capture”—a future in which the development of powerful AI systems is concentrated among a handful of companies working closely with governments, leaving the safety of those systems dependent on the incentives and rivalries of the people running them. In 2023, for example, researcher Yoshua Bengio said the potential for the AI sector to be controlled by a few companies was the “number two problem” behind the existential risks posed by the technology.

So it’s not particularly reassuring to read about the disdain Anthropic CEO Dario Amodei expressed toward OpenAI CEO Sam Altman in a leaked memo Amodei wrote to employees on Friday. Amodei’s angry missive, which was apparently sent over Anthropic’s Slack to all employees, came after OpenAI announced a deal to provide AI to the Pentagon and Secretary of War Pete Hegseth said he was declaring Anthropic a “supply chain risk” for failing to reach a similar agreement.