OpenAI CEO Sam Altman defends decision to strike Pentagon deal after Anthropic blacklisting, admits ‘optics don’t look good’

OpenAI CEO Sam Altman and other senior executives took to social media over the weekend to defend their decision, announced on Friday, to strike a deal with the Department of War (DOW) to allow the company’s models to be used in classified military networks. The deal came hours after archrival Anthropic turned down a similar agreement with the Pentagon and the Trump administration said it was labeling Anthropic a “supply-chain risk.”
OpenAI faced vocal backlash for agreeing to the Pentagon deal. Earlier in the week, Altman had voiced support for Anthropic’s position that it would not accept a Pentagon contract unless it contained explicit prohibitions on its AI technology being used for mass surveillance of U.S. citizens or being incorporated into autonomous weapons that can decide to strike targets without human oversight.
Some of these critics have even started a campaign to persuade ChatGPT users to abandon the chatbot and switch to Anthropic’s Claude. There was some evidence the campaign was having an effect, too: Claude surged past ChatGPT to become the most downloaded free app in Apple’s App Store. The sidewalk outside OpenAI’s offices in San Francisco was also covered with chalk graffiti attacking its decision to cut a deal with the Pentagon, while graffiti outside Anthropic’s offices largely praised its refusal of a contract that did not include prohibitions on the use of its AI models for mass surveillance and autonomous weapons.
Much of Altman’s and OpenAI’s social media push over the weekend seemed aimed at quelling concerns among the company’s own employees over the Pentagon contract. Many rank-and-file OpenAI employees had signed an open letter last week supporting Anthropic’s refusal to accede to the Pentagon’s demands and opposing the government’s decision to designate Anthropic a supply-chain risk. (Altman also said over the weekend that he disagreed with that designation.)
And at least one OpenAI employee publicly questioned whether the company’s contract with the Pentagon provided robust safeguards. Leo Gao, an OpenAI employee who works on making sure increasingly powerful AI models stay aligned with user intentions and human values, criticized his employer on X for agreeing to let the DOW use its technology for “all lawful purposes” and then engaging in what Gao called “window dressing” to make it seem like there were further restrictions on what the Pentagon could do with OpenAI’s GPT models.
Altman admitted in an “Ask Me Anything” session on the social media platform X on Saturday night that the company’s deal with the Pentagon “was definitely rushed, and the optics don’t look good.”
But he insisted that OpenAI moved quickly to make the deal because it wanted to de-escalate the increasingly heated standoff between the U.S. military and Anthropic. That fight threatened to damage the AI industry as a whole, in part by raising the prospect of the U.S. government nationalizing an AI lab, or at least using its power to coerce a private company into delivering technology on its preferred terms.
“If we are right and this does lead to a de-escalation between the DOW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry,” Altman said. “If not, we will continue to be characterized as rushed and uncareful.”
He added that “a good relationship between the government and the companies developing this technology is critical over the next couple of years.”
And he said he was opposed to Anthropic being labeled a supply-chain risk. “Enforcing the [supply-chain risk] designation on Anthropic would be very bad for our industry and our country,” Altman said. “To say it very clearly: I think this is a very bad decision from the DOW, and I hope they reverse it. If we take heat for strongly criticizing it, so be it.”
OpenAI said that it had found a compromise approach that preserved the same limitations Anthropic had insisted on while also acceding to the military’s wish that the contract place no constraints on how it uses the AI technology it purchases. The company said that limits on how its AI can be used rest on two things: references to existing law that it has written into the DOW contract, and technical limitations on what its AI models will be able to do.
It said the DOW agreed to let it build these technical limitations, which will include classifier systems that screen every prompt DOW users feed OpenAI’s models and refuse any the classifier deems likely to violate OpenAI’s redlines. The limitations may also include fine-tuning of OpenAI’s models so that they won’t easily comply with instructions that violate the two redlines on mass surveillance and autonomous weapons.
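To make the mechanism concrete, the sketch below shows the general shape of such a classifier gate in Python. It is a minimal illustration under stated assumptions, not OpenAI’s implementation: the names (REDLINES, classify_prompt, call_model), the keyword heuristic standing in for a trained classifier, and the confidence threshold are all hypothetical.

from dataclasses import dataclass

# Redline categories named in the article: mass surveillance of U.S.
# citizens, and autonomous weapons that strike without human oversight.
REDLINES = ("mass_domestic_surveillance", "autonomous_weapons_targeting")

@dataclass
class Verdict:
    label: str         # a redline category, or "allowed"
    confidence: float  # the classifier's confidence in that label

def classify_prompt(prompt: str) -> Verdict:
    # Stand-in for a trained classifier; this keyword heuristic exists
    # purely so the example runs. A production system would call a model.
    text = prompt.lower()
    if "track every citizen" in text:
        return Verdict("mass_domestic_surveillance", 0.95)
    if "strike targets without human" in text:
        return Verdict("autonomous_weapons_targeting", 0.95)
    return Verdict("allowed", 0.99)

def call_model(prompt: str) -> str:
    # Hypothetical call to the deployed model.
    return "<model response>"

def gated_completion(prompt: str, threshold: float = 0.9) -> str:
    # Refuse any prompt the classifier flags as a likely redline
    # violation; otherwise forward it to the underlying model.
    verdict = classify_prompt(prompt)
    if verdict.label in REDLINES and verdict.confidence >= threshold:
        return f"Refused: prompt appears to fall under '{verdict.label}'."
    return call_model(prompt)

In a deployment like the one the company describes, the classifier itself would be a model the vendor controls, which is the sense in which this kind of safeguard is technical rather than contractual.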
OpenAI says its contract attempts to bind Pentagon to current law
OpenAI published a portion of its contract with the DOW in which it agreed that its technology could be used “for all lawful purposes,” but which also includes specific references to existing U.S. laws and Department of War policy documents that establish limitations on the surveillance of U.S. citizens and on how autonomous weapons can be deployed.
Katrina Mulligan, OpenAI’s head of national security partnerships and a former chief of staff to the secretary of the Army, said during the Ask Me Anything on X that referencing these existing laws and policies provided more assurance than some critics suggested that the Pentagon would not later violate the company’s redlines. “We accepted the ‘all lawful uses’ language proposed by the department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract,” she said. “And because laws can change, having this codified in the contract protects against changes in law or policy that we can’t anticipate.”
Some legal experts pushed back on Mulligan’s position, at least as far as DOW policies on autonomous weapons are concerned. Charles Bullock, a senior fellow at the Institute for Law & AI, said on X that “DOW can, of course, change its own policies whenever it wants,” and that the contract language OpenAI released does not require the DOW to follow its existing policy in perpetuity. But he said the contract did seem to bind the DOW to current interpretations of the existing laws governing mass surveillance of U.S. citizens.
Bullock also said it was impossible to know how ironclad the limitations contained in OpenAI’s contract are without assessing the entire contract, not just the small section OpenAI made public. OpenAI has said government rules bar it from publishing the entire contract because it is for a classified system.
A debate over the definition of ‘mass surveillance’
Many of those skeptical of OpenAI’s agreement with the Pentagon noted that the term “mass surveillance” is not well defined, and questioned OpenAI executives on what would happen if military intelligence agencies attempted to use its AI models to analyze commercially available data, such as cell phone location data or data from fitness apps, that could be aggregated at scale to surveil U.S. citizens in America. The Defense Intelligence Agency is believed to have purchased such data, and its use remains a legal gray area. According to a story in The Atlantic, Anthropic was particularly concerned about the Pentagon using its technology for this kind of analysis, and its insistence on curtailing that use case was one of the major stumbling blocks in its deadlocked negotiations with the DOW.
“We can’t protect against a government agency buying commercially available datasets, but our contract incorporates a prohibition on mass domestic surveillance as a binding condition of use,” Mulligan said during the AMA.
She also said that OpenAI’s decision to rely on a multipronged approach that included technical systems to limit what the Pentagon could do provided a more robust solution than simply relying on contractual language, which she said seemed to be Anthropic’s primary approach. She said Anthropic had not been able to lean on this technical solution because it was already providing versions of its AI models to the military that had some of the usual safeguards removed.
“Anthropic has primarily been concerned with usage policies, which is because their existing classified deployments involve reduced or removed safety guardrails (making usage policies the primary safeguards in national security deployments),” she said. “Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. That’s what we pursued in our negotiations, and that’s why we think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.”
Another OpenAI executive, Boaz Barak, who works on AI alignment and safety, also represented the company in the AMA and criticized Anthropic for fixating on contractual language rather than other kinds of safeguards. “I get the impression that folks at Anthropic had unrealistic expectations for the contract stuff,” he said in response to a question from former OpenAI policy chief Miles Brundage, noting that tech companies were always going to be somewhat at the mercy of how the DOW interpreted the terms of the contract.
Who should decide how AI is used?
Altman said that many of the questions in the AMA session touched on the issue of whether AI efforts should be nationalized. The OpenAI CEO noted, “It has seemed to me for a long time it might be better if building AGI [artificial general intelligence] were a government project.” But he added, “It doesn’t seem super likely on current trajectory.”
Altman also said he was surprised by how many of OpenAI’s critics seemed to have more faith in unelected tech executives making decisions about the appropriate use of AI than in government officials who were, at least in theory, accountable to Congress and ultimately to voters.
“I very deeply believe in the democratic process, and that our elected leaders have the power, and that we all have to uphold the Constitution. I am terrified of a world where AI companies act like they have more power than the government,” Altman said on X. “I would also be terrified of a world where our government decided mass domestic surveillance was okay.”
This story was originally featured on Fortune.com