
Anthropic’s $200 million contract with the Department of Defense is up in the air after the company reportedly raised concerns about the Pentagon’s use of its Claude AI model during the Nicolas Maduro raid in January.
“The Department of War’s relationship with Anthropic is being reviewed,” Chief Pentagon Spokesman Sean Parnell said in a statement to Fortune. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”
Tensions have escalated in recent weeks after a top Anthropic official reportedly reached out to a senior Palantir executive to question how Claude was used in the raid, per The Hill. The Palantir executive interpreted the outreach as disapproval of the model’s use in the raid and forwarded details of the exchange to the Pentagon. (President Trump said the military used a “discombobulator” weapon during the raid that made enemy equipment “not work.”)
“Anthropic has not discussed the use of Claude for specific operations with the Department of War,” an Anthropic spokesperson said in a statement to Fortune. “We have also not discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters.”
At the center of this dispute are the contractual guardrails dictating how AI models can be used in defense operations. Anthropic CEO Dario Amodei has consistently advocated for strict limits on AI use and for regulation, even acknowledging how difficult it is to balance safety with profits. For months, the company and the DOD have held contentious negotiations over how Claude can be used in military operations.
Under the Defense Department contract, Anthropic won’t allow the Pentagon to use its AI models for mass surveillance of Americans or in fully autonomous weapons. The company has also banned the use of its technology in “lethal” or “kinetic” military applications. Any direct involvement in active gunfire during the Maduro raid would likely violate those terms.
Among the AI companies contracting with the government, including OpenAI, Google, and xAI, Anthropic holds a uniquely lucrative position: Claude is the only large language model authorized on the Pentagon’s classified networks.
Anthropic highlighted this position in a statement to Fortune: “Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy.”
The company “is committed to using frontier AI in support of US national security,” the statement read. “We are having productive conversations, in good faith, with DoW on how to continue that work and get these complex issues right.”
Palantir, OpenAI, Google and xAI didn’t immediately respond to a request for comment.
AI goes to war
Although the DOD has accelerated efforts to integrate AI into its operations, only xAI has granted the department use of its models for “all lawful purposes,” while the others maintain usage restrictions.
Amodei has been sounding the alarm for months on user protections, positioning Anthropic as a safety-first alternative to OpenAI and Google in the absence of government regulation. “I’m deeply uncomfortable with these decisions being made by a few companies,” he said back in November. Although it was rumored that Anthropic was planning to ease its restrictions, the company now faces the possibility of being cut out of the defense industry altogether.
A senior Pentagon official told Axios that Defense Secretary Pete Hegseth is “close” to removing Anthropic from the military supply chain, a move that would force anyone who wishes to do business with the military to also cut ties with the company.
“It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this,” the senior official told the outlet.
Being deemed a military supply-chain risk is a designation usually reserved for foreign adversaries; the closest precedent is the government’s 2019 ban on Huawei over national security concerns. In Anthropic’s case, sources told Axios that defense officials have been looking to pick a fight with the San Francisco–based company for some time.
The Pentagon’s comments are the latest in a public dispute coming to a boil. The government argues that letting companies set ethical limits on their own models would be unnecessarily restrictive, and that the sheer number of gray areas would render the technology useless. As the Pentagon continues to negotiate with its AI subcontractors to expand usage, the public spat has become a proxy fight over who will dictate how AI is used.