Anthropic isn’t holding back in its challenge against the US Department of Defense (DoD), as tensions grow over whether AI companies should be compelled to support military uses of their technology – particularly autonomous weapons and large-scale surveillance systems.
The dispute is centred on a controversial move by the Pentagon to label Anthropic a “supply chain risk”, a designation that could restrict how government contractors work with the company. Anthropic is now challenging that decision in court, arguing that the label is unjustified and could have wider implications for the AI industry.
The case is quickly becoming one of the most significant clashes yet between Silicon Valley’s growing AI sector and government defence priorities. And, notably, it’s happening while the US is knee-deep in a new wave of conflict in the Middle East.
Why Did The Pentagon Label Anthropic A Risk?
The Pentagon’s designation stems from supposed concerns that Anthropic’s policies limiting how its AI models can be used might make it difficult for defence agencies and contractors to rely on the company’s technology.
Anthropic, however, has taken a clear stance against allowing its models to power fully autonomous weapons or large-scale surveillance systems targeting civilians. While the company has said it’s open to working with governments in certain contexts, it’s drawn firm ethical boundaries around these uses, and Dario Amodei has been quite clear about where the company stands.
The Department of Defense appears to view those restrictions as a potential operational risk, especially for projects where flexible deployment of AI systems could be required.
If the label stands, it could discourage defence contractors from using Anthropic’s technology, even outside direct military applications, due to procurement rules and compliance risks.
This Legal Battle Has Potentially Far Wider Implications
Legal experts say Anthropic faces an uphill battle in court. National security decisions often receive significant deference from judges, particularly when the executive branch argues that procurement decisions are tied to defence strategy.
Mike Litvinenko, CEO and Founder of Eximion, believes that the most realistic outcome may not be a complete reversal of the Pentagon’s decision but rather a narrowing of its scope.
“As a Vertical AI founder at Eximion, I don’t expect Anthropic to fully overturn the Pentagon’s supply chain risk label because the DoD can frame it as a national security procurement judgment, and judges tend to defer to that,” he says.
However, he suggests Anthropic could still achieve a partial victory if courts limit how broadly the designation can be applied.
“I think the more realistic win for Anthropic is to limit how far that label travels,” Litvinenko explains, noting that the company has argued the designation should apply only to direct defence use rather than the wider contractor ecosystem.
Is This Setting A Precedent For The AI Industry?
Beyond the legal specifics, the dispute highlights a growing tension between AI companies setting ethical limits and governments seeking greater access to powerful emerging technologies.
Kenneth Eade, AMZ Sellers Attorney, says the Pentagon’s move could raise serious legal questions. The law used to designate Anthropic as a supply chain risk has historically been applied to foreign entities rather than domestic companies.
“Under §3252 the DOD must show the risk involves an adversary attempting to sabotage or spy on systems,” Eade explains. “Because Anthropic is a domestic partner that was engaged in negotiations with the government, it is not likely they will satisfy this burden.”
Eade also warns the case could create a concerning precedent if governments are able to pressure AI companies into abandoning ethical restrictions.
“I am afraid it will force them to bend to the government’s will rather than acting responsibly,” he says, adding that weapons systems “must have a human eye on them at all times and must not work autonomously.”
A Bigger Debate About AI Power
The dispute also raises broader questions about how AI infrastructure is controlled.
David Sherman, Head of Brand Strategy at io.net, argues that the situation highlights the risks of centralised AI infrastructure – where governments can exert pressure on companies that control key computing resources.
“Whether Anthropic wins this case or not, the real story is the same: when a few big companies control AI infrastructure, they become easy targets for pressure, and everyone else gets caught in the fallout,” he says.
According to Sherman, decentralised AI infrastructure could reduce this leverage by distributing computing power across multiple providers.
The Future Of AI And Defence
Regardless of the court’s ruling, the case signals a turning point in the relationship between AI companies and defence agencies.
As governments increasingly look to AI for military applications, tech firms will face growing pressure to decide where they draw ethical lines and whether they are willing to defend them in court.
Anthropic’s challenge may not only shape the company’s future but also help determine how much control governments can exert over the rapidly evolving AI industry.
Our Experts:
- Mike Litvinenko: CEO & Founder, Eximion
- David Sherman: Head of Brand Strategy at io.net
- Kenneth Eade: AMZ Sellers Attorney
- Edward Tian: CEO at GPTZero
- Andrew Gamino-Cheong: CTO and Co-Founder at Trustible
Mike Litvinenko, CEO and Founder, Eximion
“As a Vertical AI founder at Eximion, I don’t expect Anthropic to fully overturn the Pentagon’s supply chain risk label because the DoD can frame it as a national security procurement judgment, and judges tend to defer to that. The Pentagon also has a clean argument that Anthropic’s use restrictions create operational risk for defense work that needs flexible deployment.
“I think the more realistic win for Anthropic is to limit how far that label travels. Amodei has argued the designation should apply narrowly to direct Defense use, not spill over into the wider contractor ecosystem. If a court finds the criteria were applied too broadly or without a clear, consistent record, it could require the Pentagon to justify the designation more tightly or limit how it is used. That would not erase the label, but it would reduce the chilling effect on partners and customers.”
David Sherman, Head of Brand Strategy at io.net
“Whether Anthropic wins this case or not, the real story is the same: when a few big companies control AI infrastructure, they become easy targets for pressure – and everyone else gets caught in the fallout. If Anthropic wins, they’ve still spent months fighting a battle that only exists because the Pentagon can use their centralised compute as a leverage point. If they lose, we’ve just watched the US government label an AI company a ‘supply-chain risk’ to force their hand – and every other provider is now on notice.
“Either way, the problem isn’t the court decision. It’s that AI today runs on centralised infrastructure, which means developers and businesses are dependent on whoever controls the servers – and vulnerable to exactly this kind of pressure. This is why the conversation around decentralised AI infrastructure is so important. Distributed compute removes the chokepoint. No single entity – government or corporation – gets to use infrastructure access as a bargaining chip. The more the industry moves in that direction, the less these power struggles dictate AI’s future.”
Kenneth Eade, AMZ Sellers Attorney
“What outcome is most likely in this legal dispute?
Courts are often divided on matters involving national security decisions by the executive branch, but this appears to be a misuse of 10 USC §3252 (designation as a supply chain risk), which has historically been applied only against foreign entities. This is the first time it has been used against a domestic entity. Secondly, it looks as if the president is trying to preclude a company from exercising ethical responsibility because it did so to defy him. This is a dangerous precedent to set for the exercise of presidential power.
Could Anthropic successfully challenge the Pentagon’s designation?
They could. Under §3252 the DOD must show the risk involves an adversary attempting to sabotage or spy on systems. Because Anthropic is a domestic partner that was engaged in negotiations with the government, it is not likely they will satisfy this burden.
What might this case mean for the future relationship between AI companies and defense agencies?
I am afraid it will force them to bend to the government’s will rather than acting responsibly. Weapons systems must have a human eye on them at all times and must not work autonomously. This is what saved the world from nuclear holocaust in the 1960s: one Soviet officer who refused to follow a launch order.
How could the ruling shape regulation and ethical norms around AI in warfare?
There are no ethical norms in this area. It is under development, which makes the government’s attitude so scary. There should not be autonomous weapons under any circumstances.”
Edward Tian, CEO at GPTZero
“I ultimately expect that DoD procurement decisions when acquiring technology will be made based on practical operational certainty rather than on legal opinions. Defense procurement processes are structured mainly around supply chain assurance and deployability. If a vendor’s restrictions create ambiguity about how AI systems will operate within classified environments, agencies will typically deem it unreasonable to implement that vendor’s AI systems and will look towards alternative vendors that provide unambiguous operational rights.
“Anthropic may pursue a legal challenge to this designation, but the much larger issue is the precedent this sets for government AI vendors. In future, it will create increased restrictions or requirements for AI vendors to provide a certain level of oversight, contractual availability, and flexibility in deploying their AI systems within a Defense procurement context.
“For the overall AI industry, the Anthropic case exemplifies the gap that may emerge between the AI governance assurances vendors are willing to agree to contractually and the contractual requirements a vendor must satisfy in order to perform within a Defense procurement process.”
Andrew Gamino-Cheong, CTO and Co-Founder at Trustible
“According to some reports, the formal ‘supply chain risk’ notice was narrow in scope, and will only prohibit Anthropic from being used in direct support of DoD offerings. Microsoft put out a statement backing that up, and Microsoft’s support of Anthropic in this will matter.
“Anthropic will have a decent chance at appealing the designation, partly because the DoD didn’t go through its normal process for assessing the risk this time, and it would have to admit its own procurement due diligence was flawed for having purchased Claude in the first place.
“Many startups and AI companies are going to hesitate to do business with the federal government as a result of this. There have already been challenges with uneven government funding and a loss of procurement and AI expertise in government, and now there is a large political factor as well. The private sector AI market is large enough that most AI companies will now see the public sector as a bigger risk than the private sector.”