<div class="media-with-label__label">
Credit: Frontpage / Shutterstock </div>
</figure>
</div>
</div>
</div>
</div>
Anthropic is seeking to revise its AI contract with the U.S. Department of Defense (DoD). According to the Financial Times, CEO Dario Amodei has held discussions with Emil Michael, the U.S. under secretary of defense for research and engineering, to resolve the contractual dispute that led the DoD to classify Anthropic as a supply-chain risk.
The dispute centers on clauses the DoD sought to include that would permit the use of Anthropic’s systems for broad domestic surveillance and the development of autonomous weapons, two ethical lines the company has consistently refused to cross.
According to sources familiar with the matter cited by Reuters, the renewed push for renegotiation follows Amodei’s consultations with key investors and backers, including Amazon, Nvidia, Lightspeed, and Iconiq, about repairing the strained relationship with the DoD.
Reuters further reported that several of Anthropic’s investors have contacted their connections in Washington to advocate on the AI model provider’s behalf.
In a related development, the Information Technology Industry Council (ITI), a prominent industry association whose members include major tech firms such as Amazon, Nvidia, Apple, and OpenAI, sent a letter to U.S. Defense Secretary Pete Hegseth expressing concern that the department would designate an American company a supply-chain risk over what is essentially a procurement disagreement.
Reuters indicated that the letter went further, warning that such a measure could hinder the government’s access to leading technologies from the American companies that supply various federal agencies.
Instead, the Council proposed that the department keep negotiating to resolve the matter or opt for an alternative vendor, emphasizing that the “supply-chain risk” designation is typically reserved for entities identified as foreign adversaries.
Contractual and technical compromises could save the deal
Despite the heightened tensions, analysts and legal specialists suggest that Amodei’s renewed effort to renegotiate the contract could open the door to a mutually acceptable agreement.
“A viable compromise would be based on data origin and use, permitting bulk analysis of foreign signals intelligence while contractually prohibiting Anthropic’s systems from processing commercially obtained data of U.S. citizens without an Article III warrant, all supported by explicit covenants and robust oversight mechanisms,” said Anandaday Misshra, managing partner at Amlegals, a law firm focused on AI regulatory intelligence and data protection.
Misshra pointed out that this approach could be effective, given that the central point of contention revolves around the acquisition of “bulk” data.
Misshra further elaborated that the DoD wants an “all lawful purposes” standard covering the “analysis of bulk acquired data,” which in practice includes vast quantities of commercially available information (CAI), such as U.S. citizens’ location data bought without warrants under the Third-Party Doctrine. Anthropic, he noted, justifiably regards model-driven processing of such data as, in effect, domestic surveillance.
Echoing Misshra, Greyhound Research chief analyst Sanchit Vir Gogia suggested that the contract’s wording could incorporate governance clauses ensuring measurable oversight through immutable audit logs of prompts and outputs, alongside regular compliance reviews of how the models are used within operational systems.
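As a rough illustration of the kind of audit trail Gogia describes, the sketch below implements a hash-chained, append-only log of prompts and outputs in Python. It is a minimal sketch under assumed requirements; the class and field names are hypothetical and do not correspond to any published DoD or Anthropic interface.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained log of model prompts and outputs.

    Each entry embeds the hash of the previous entry, so tampering with
    any record invalidates every hash that follows it. (Hypothetical
    sketch; not an actual DoD or Anthropic system.)
    """

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, user_id: str, role: str, prompt: str, output: str) -> dict:
        """Append one prompt/output pair, chained to the previous entry."""
        entry = {
            "timestamp": time.time(),
            "user_id": user_id,
            "role": role,
            "prompt": prompt,
            "output": output,
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        self._last_hash = entry["hash"]
        return entry

    def verify(self) -> bool:
        """Recompute the whole chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

The design choice matters for compliance reviews: because each record carries the hash of its predecessor, a reviewer can later call verify() over the stored chain and detect any retroactive edit without having to trust the operator of the log.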
Analysts also see a potential resolution in how Anthropic’s models would ultimately be deployed within the DoD.
Gogia indicated that a viable compromise might involve deploying customized versions of frontier models within strictly controlled environments, tailored for specific national security operations.
Gogia further proposed additional controls, such as enforcing policy at a gateway layer, where requests would undergo identity verification, role-based permission checks, and predetermined rules before reaching the model, as sketched below. This approach would let Anthropic uphold its safeguards while the government retains operational oversight.
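A minimal sketch of that gateway pattern follows, assuming a simple role-to-task permission map and a static blocklist. Every identifier here (the roles, the task labels, the model_fn callable) is an illustrative stand-in; no actual gateway specification has been published.

```python
from dataclasses import dataclass

# Illustrative role-to-task permissions; real policies would be far richer.
ROLE_PERMISSIONS = {
    "analyst": {"foreign_signals_analysis"},
    "auditor": {"compliance_review"},
}

# Predetermined rules: task categories that must never reach the model.
BLOCKED_TASKS = {"domestic_surveillance", "autonomous_targeting"}

@dataclass
class Request:
    user_id: str
    role: str
    task: str
    prompt: str

class PolicyGateway:
    """Sits in front of the model and rejects requests that fail any check."""

    def __init__(self, model_fn, known_users):
        self.model_fn = model_fn        # callable that actually invokes the model
        self.known_users = known_users  # stand-in for real identity verification

    def handle(self, req: Request) -> str:
        if req.user_id not in self.known_users:        # identity verification
            raise PermissionError("unknown user")
        if req.task in BLOCKED_TASKS:                  # predetermined rules
            raise PermissionError(f"task '{req.task}' is prohibited")
        if req.task not in ROLE_PERMISSIONS.get(req.role, set()):  # RBAC
            raise PermissionError(f"role '{req.role}' may not run '{req.task}'")
        return self.model_fn(req.prompt)               # forward to the model

# Usage: an analyst may run foreign-signals analysis; blocked tasks never
# reach the model regardless of role.
gateway = PolicyGateway(model_fn=lambda p: f"[model output for: {p}]",
                        known_users={"a-123"})
print(gateway.handle(Request("a-123", "analyst",
                             "foreign_signals_analysis", "summarize intercepts")))
```

The appeal of this layering is that the policy logic lives outside the model itself, so it can be audited and updated by the government without Anthropic exposing or modifying model weights.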
Similarly, Pareekh Jain, principal analyst at Pareekh Consulting, said the technical deployment architecture could incorporate third-party red teams, which would conduct periodic assessments to verify that the implemented policies and safeguards remain effective as the models evolve.
OpenAI is renegotiating as well
OpenAI, which swiftly secured a contract with the DoD after Anthropic’s effective exclusion last week, is likewise seeking to renegotiate the terms of its agreement.
OpenAI CEO Sam Altman said in a post on X that the agreement had been “rushed” and needed revision, a reversal prompted by online backlash and reports of users uninstalling ChatGPT.
OpenAI had previously published a blog post asserting that its arrangement with the DoD included contractual stipulations prohibiting the use of its models for weapons systems or broad domestic surveillance within the U.S., positioning its agreement as stricter than the one in dispute with Anthropic.
As it works to manage public perception of the situation, OpenAI has sought to emphasize that its internal safeguards are consistent with those upheld by Anthropic.
Reuters reported that Connie LaRossa, OpenAI’s national policy executive, informed conference attendees in California on Wednesday that her company upholds the same ethical red lines as Anthropic and is actively supporting efforts to rescind Anthropic’s supply-chain risk designation.
Advantage Anthropic?
Still, should Anthropic and the DoD fail to finalize an agreement, the legal leverage could, according to Misshra, rest firmly with Anthropic.
“Anthropic possesses significant legal leverage. An engagement of this magnitude, valued at $200 million, is likely structured as an Other Transaction Authority (OTA) agreement, specifically designed to safeguard commercial terms, including Terms of Service and Acceptable Use Policies,” Misshra said.
“The government cannot simply introduce Federal Acquisition Regulation ‘Changes’ mechanisms to unilaterally alter those terms without risking a material breach,” Misshra remarked, further suggesting that such an action could be construed as statutory overreach.
Misshra clarified, “Under the Administrative Procedure Act (APA), the government bears the burden of proving that Anthropic genuinely presents a national security risk. Merely declining a contract clause related to domestic surveillance does not satisfy that threshold.”
He added that Anthropic’s board, operating as a Delaware Public Benefit Corporation, has a statutory obligation to promote its declared public benefit of AI safety: “Authorizing essentially unrestricted military application, particularly for the surveillance of U.S. citizens, would be challenging to reconcile with that specific duty.”
Capitulation could set a risky precedent for AI vendors
Analysts and experts contend that if Anthropic were to accede to the DoD’s demands, particularly after Amodei’s consistent public stance against violating his company’s ethical boundaries, it could establish a hazardous precedent for both Anthropic and its industry counterparts.
Misshra stated, “Should Anthropic capitulate, it would establish a precedent implying that commercial Acceptable Use Policies are, in essence, negotiable under governmental duress. The DoD could then be perceived as free to initially agree to ethical safeguards to gain access to cutting-edge capabilities, only to later employ mechanisms like supply‑chain risk designations to eliminate those very limitations.”
Misshra further commented, “Such a dynamic would foster a race to the bottom, favoring contractors prepared to disregard their internal safety and human rights policies. For an organization explicitly founded on the principles of responsible AI, contributing to the establishment of such a precedent carries both significant strategic and legal risks.”
Jain suggested that yielding to the U.S. government’s demands could also jeopardize Anthropic’s brand reputation and diminish its credibility among independent users and developers, many of whom recently switched from ChatGPT partly due to their confidence in Anthropic’s declared values.
Commercially, Jain continued, granting full concessions to the U.S. DoD without robust governance provisions could damage Anthropic’s standing in the enterprise market, particularly with European clients who exhibit growing sensitivity towards military AI involvement.
Indeed, the European Policy Centre (EPC), an independent think tank, has already voiced concerns about the potential consequences for European citizens as artificial intelligence becomes more deeply integrated into surveillance frameworks and military applications.
In a blog post directed at European Union policymakers, the think tank highlighted a recent resolution passed by the United Nations General Assembly, which urges states to guarantee human oversight and accountability in the creation and deployment of military AI systems.
The resolution calls on governments to establish safeguards ensuring that AI-powered systems used in defense or security settings comply with international law, including humanitarian and human rights obligations.
Jitse Goutbeek, an AI fellow with the Europe’s Political Economy team at the EPC, said such international obligations take on particular significance as governments begin integrating advanced AI models into intelligence, surveillance, and strategic defense planning.
Goutbeek further contended that procurement choices and defense collaborations should increasingly reflect these commitments, suggesting that European governments may demand more explicit guarantees from technology providers and defense agencies that human oversight and operational safeguards will be preserved when AI systems are deployed in critical national security contexts.