This story has been updated to reflect events that took place after its initial publication.

Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement was emerging with the U.S. Department of War to use the startup’s AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. A few hours later, Altman announced in a post on X that OpenAI and the Pentagon had reached an agreement.

The meeting and the deal came at the end of a week in which a conflict between Secretary of War Pete Hegseth and OpenAI rival Anthropic burst into public acrimony, ending with the apparent cancellation of Anthropic’s contracts with the Pentagon and with the federal government more broadly.

Altman told employees at the all-hands that the government is willing to let OpenAI build its own “safety stack”—that is, a layered system of technical, policy, and human controls that sit between a powerful AI model and real-world use—and that if the model refuses to perform a task, then the government would not force OpenAI to make it do so.

Altman said that OpenAI would retain control over how technical safeguards are implemented and over which models are deployed and where, and would limit deployment to cloud environments rather than “edge systems.” (In a military context, edge systems are a category that could include aircraft and drones.) In what would be a major concession, Altman told employees that the government said it is willing to write OpenAI’s stated “red lines” into the contract, such as not using AI to power autonomous weapons, conduct domestic mass surveillance, or engage in critical decision-making.