OpenAI CEO Sam Altman and other senior executives took to social media over the weekend to defend their decision, announced on Friday, to strike a deal with the Department of War (DOW) to allow the company’s models to be used in classified military networks. The deal came hours after archrival Anthropic turned down a similar agreement with the Pentagon and the Trump administration said it was labeling Anthropic a “supply-chain risk.”

OpenAI faced vocal backlash for agreeing to the Pentagon deal, particularly because Altman had voiced support earlier in the week for Anthropic’s position: that it would not accept a Pentagon contract without explicit prohibitions on its AI technology being used for mass surveillance of U.S. citizens or being incorporated into autonomous weapons that can decide to strike targets without human oversight.

Some of these critics have even started a campaign to persuade ChatGPT users to abandon the chatbot and switch to Anthropic’s Claude. There was some evidence the campaign was having an effect, too: Claude surged past ChatGPT to become the most downloaded free app in Apple’s App Store. The sidewalk outside OpenAI’s offices in San Francisco was also covered with chalk graffiti attacking its decision to cut a deal with the Pentagon, while graffiti outside Anthropic’s offices largely praised its decision to refuse a contract that did not include prohibitions on the use of its AI models for mass surveillance and autonomous weapons.

Some of Altman’s and OpenAI’s social media push over the weekend seemed aimed at quelling concerns among the company’s own employees over the Pentagon contract. Many rank-and-file OpenAI employees had signed an open letter last week supporting Anthropic’s refusal to accede to the Pentagon’s demands and opposing the administration’s decision to designate Anthropic a supply-chain risk. (Altman also said over the weekend that he disagreed with the supply-chain risk designation.)

At least one OpenAI employee publicly questioned whether the company’s contract with the Pentagon provided robust safeguards. Leo Gao, an OpenAI employee who works on making sure increasingly powerful AI models stay aligned with user intentions and human values, criticized his employer on X for agreeing to let the DOW use its technology for “all lawful purposes” and then engaging in what Gao called “window dressing” to make it seem like there were further restrictions on what the Pentagon could do with OpenAI’s GPT models.

Altman admitted in an “Ask Me Anything” session on the social media platform X on Saturday night that the deal with the Pentagon “was definitely rushed, and the optics don’t look good.” But he insisted that OpenAI moved quickly to make the deal because it wanted to de-escalate the increasingly heated situation between the U.S. military and Anthropic.
The fight threatened to damage the AI industry as a whole, in part by raising the prospect of the U.S. government nationalizing an AI lab, or at least using its power to coerce a private company into delivering technology on its preferred terms.

“If we are right and this does lead to a de-escalation between the DOW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry,” Altman said. “If not, we will continue to be characterized as rushed and uncareful.”