One of the key points of contention was over domestic mass surveillance. Experts have long warned that advanced AI is capable of taking scattered, individually innocuous data—like a person’s location, finances, search history—and assembling it into a comprehensive picture of any person’s life, automatically and at scale. Anthropic CEO Dario Amodei has said that this kind of AI-driven mass surveillance presents serious and novel risks to people’s “fundamental liberties” and that “the law has not yet caught up with the rapidly growing capabilities of AI.”
But while OpenAI said in a blog post that under its deal with the Pentagon its technology would not be used for mass domestic surveillance or direct autonomous weapons systems, the two hard limits that Anthropic had refused to drop, some legal and policy experts have raised questions about a potential gap in the law.
Part of the dispute hinges on a murky area of the law: large-scale analysis of Americans' data can be lawful under current U.S. statutes even when it is, in practice, indistinguishable from mass surveillance.
“Right now, under U.S. law, it’s lawful for government authorities to buy up commercially available information from data brokers and other third parties,” said Samir Jain, the vice president of policy at the Center for Democracy & Technology. “If you buy up massive amounts of data and allow AI to analyze it, you may end up, in effect, engaging in mass surveillance of Americans through that process. It’s not currently restricted by law or prohibited by law.”