
(NEW YORK) — OpenAI CEO Sam Altman told employees at an all-hands meeting that the company doesn’t “get to make operational decisions” about how its artificial intelligence technology is used by the Pentagon, according to a source familiar with the meeting.
“So maybe you think the Iran strike was good and the Venezuela invasion was bad,” Altman said in Tuesday’s meeting, according to the source. “You don’t get to weigh in on that.”
The comments came days after OpenAI announced it had reached an agreement with the Pentagon to deploy its models on the department’s classified network, an announcement that came hours after the deal between Anthropic and the Pentagon fell apart.
OpenAI is best known as the company behind the generative AI chatbot ChatGPT, while Anthropic makes the chatbot Claude.
At the center of the fight between Anthropic and the Department of Defense is the question of who gets to control how AI is used by the military: the companies that make the technology or the government that deploys it?
Anthropic was the first AI company whose models were deployed on classified networks, and its technology is widely considered the most advanced. The talks fell apart over Anthropic’s red lines: the company objected to its models being used for fully autonomous weapons or for mass surveillance of Americans. The Pentagon argued it needed the technology for all lawful use cases.
The department, which was informally renamed as the Department of War via executive order last year, addressed the red lines in a social media post last week.
“The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement,” spokesperson Sean Parnell wrote. “Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes. This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk.”
The Pentagon set a deadline of 5 p.m. last Friday for Anthropic to acquiesce to its demands or be essentially blacklisted. With negotiations at an impasse, President Donald Trump ordered the government to stop using the company’s products, and Defense Secretary Pete Hegseth declared Anthropic would be designated a “supply chain risk,” essentially cutting the American company off from government work.
According to a source, Anthropic still has not received a notification from the government about being designated a supply chain risk, outside of Hegseth’s tweet announcing it.
The breakdown in talks came hours before the U.S. launched strikes in Iran. According to multiple reports, Anthropic’s AI models were used for the U.S. operation in Iran.
Anthropic declined to comment on those reports. A Pentagon spokesperson told ABC: “The Department declines to comment citing operational security.”
When OpenAI announced its deal with the Pentagon, Altman said the company shared the same red lines as Anthropic.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” he said in a statement. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
Days later, amid an onslaught of criticism, Altman said in a post this week that the company “shouldn’t have rushed” its deal with the Pentagon, saying that “it just looked opportunistic and sloppy.”
Altman unveiled an adjusted agreement with the Pentagon that he says provides stronger guarantees that the military won’t use OpenAI’s systems for domestic surveillance.
“We are going to amend our deal to add this language, in addition to everything else: ‘Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals,'” he wrote in a statement.
“There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety. We will work through these, slowly, with the DoW, with technical safeguards and other methods,” he added.
OpenAI says it believes its contract has even “better guarantees” than the agreement Anthropic had originally signed with the Pentagon.
“This language makes explicit that our tools will not be used to conduct domestic surveillance of U.S. persons, including through the procurement or use of commercially acquired personal or identifiable information,” the company wrote in a statement. “The Department also affirmed that our services will not be used by Department of War intelligence agencies like the NSA. Any services to those agencies would require a new agreement.”
Copyright © 2026, ABC Audio. All rights reserved.


