Lawyers for the Department of War and Anthropic sparred in a California federal court on Tuesday over Anthropic’s challenge to the Pentagon’s designation of the company as a “supply-chain risk” to national security and its ban on all government contractors using the company’s AI tools. Anthropic is seeking an injunction barring enforcement of that order.

The case marks a historic first: the Department of Defense, informally renamed the Department of War (DOW) by the Trump administration, has labeled a U.S.-based business a supply-chain risk to national security. The dispute is rooted in a contract negotiation that escalated quickly. The DOW wanted to add a blanket “all lawful use” clause to its contracts with the AI firm so the military could use Anthropic’s Claude tool for any legal purpose.

The presiding judge expressed doubts about the sweeping authority the Pentagon had wielded. Federal District Judge Rita Lin said she would issue a ruling on Anthropic’s legal challenge “in the next few days,” and spent Tuesday’s hearing questioning both parties about the dispute.

A heated dispute over how to use AI

During contract negotiations with the Pentagon in February, Anthropic balked at the possibility of the military using Claude for lethal autonomous warfare and mass surveillance of Americans, and sought to include provisions expressly forbidding such uses. Anthropic, led by co-founder and CEO Dario Amodei, said it has not thoroughly tested those uses and does not believe they work safely. The DOW countered that those guardrails were unacceptable and that military commanders need latitude to make decisions on missions.