WASHINGTON/SAN FRANCISCO, Jan 29 (Reuters) - The Pentagon and artificial-intelligence developer Anthropic are at odds over whether to eliminate safeguards that bar the government from using the company's technology for autonomous weapons targeting and U.S. domestic surveillance, three people familiar with the matter told Reuters.

The discussions represent an early test case for whether Silicon Valley – in Washington's good graces after years of tensions – can sway how U.S. military and intelligence personnel deploy increasingly powerful AI on the battlefield. After weeks of talks under a contract worth up to $200 million, the U.S. Department of Defense and Anthropic are at a standstill, the people said, speaking on condition of anonymity. The company's position on how its AI tools can be used has intensified disagreements between it and the Trump administration, details of which have not been previously reported.

In line with a January 9 Defense Department memo on its AI strategy, Pentagon officials have argued that they should be able to deploy commercial AI technology regardless of companies' usage policies, so long as that use complies with U.S. law, the people said.

A spokesperson for the department, which the Trump administration renamed the Department of War, did not immediately respond to requests for comment.