Reports indicate Defense Secretary Hegseth has threatened Anthropic, demanding the military be granted unrestricted access to its AI.
A widening disagreement between the U.S. Department of Defense (DoD) and Anthropic regarding military AI usage has resulted in a stark ultimatum from Defense Secretary Pete Hegseth: collaborate on the DoD’s terms or face exclusion from Pentagon initiatives.
According to the news outlet Axios, Hegseth set a deadline of Friday, February 27, for Anthropic to agree to the DoD’s conditions during a tense encounter this week. Should no consensus be reached, the company risks being classified as a “supply chain risk.” Hegseth even threatened to invoke the Defense Production Act, a Cold War-era measure, to force the company’s cooperation, the report detailed.
The DoD maintains that it should be free to deploy Anthropic’s AI for “all lawful purposes,” irrespective of any ethical constraints the company itself might impose. Anthropic, conversely, seeks to establish more stringent safeguards.
“The Department of War’s [DoD’s] relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people,” Chief Pentagon Spokesman Sean Parnell informed Semafor last week.
This extraordinary impasse appears to stem from a series of discussions between Anthropic and DoD officials that have steadily grown more tense. One account holds that Anthropic CEO Dario Amodei insisted the DoD adhere to specific limitations the company had imposed on its AI’s use in particular military scenarios.
The situation escalated in early January after the U.S. military employed Anthropic’s Claude LLM, combined with Palantir’s technology, to assist in planning and executing the operation to apprehend former Venezuelan president Nicolás Maduro.
Anthropic personnel reportedly raised internal concerns about whether an operation resulting in dozens of fatalities aligned with the safeguards established for Claude as part of its recently revamped safety and ethics Constitution.
Even so, the precise details of Anthropic’s restrictions are often vague. Its September 2025 Acceptable Use Policy (AUP), for instance, outlines certain prohibitions, such as barring its use for mass domestic surveillance, compromising critical infrastructure, or designing or developing weaponry.
Beyond this, Amodei himself has referenced limitations in various statements and writings, or through reported dialogues with officials exploring hypothetical scenarios. This includes his recent advocacy for greater AI regulation: “I think I’m deeply uncomfortable with these decisions [on AI] being made by a few companies, by a few people,” Amodei commented to the CBS News TV newsmagazine 60 Minutes in November. “And this is one reason why I’ve always advocated for responsible and thoughtful regulation of the technology.”
Supply chain implications
Should Hegseth follow through on his threat to ban Anthropic, it would have significant repercussions for the DoD and its extensive supply chain. Theoretically, companies within the wider Defense Industrial Base (DIB) would be required to cease using Anthropic’s AI platform in all its forms, including, presumably, the Claude Code Security cyber system introduced just this week.
Such an outcome appears highly improbable. Banning an American company would set an unprecedented example; previous actions of this nature have been reserved for a limited number of foreign entities. Furthermore, Anthropic’s Claude is currently one of only two advanced AI models to have attained Impact Level 6 (IL6) certification for deployment on classified networks, with xAI’s Grok having just joined it this week.
Completely removing Anthropic is inconceivable, especially given its deep integration with Palantir’s systems, which are themselves vital to the DoD. More plausibly, the DoD will simply pressure Anthropic into making concessions by threatening to invoke the Defense Production Act, though this approach might undermine long-term cooperation.
Alternatively, Anthropic may continue to insist on certain restrictions, such as those concerning the use of Claude for autonomous weapons, while the DoD might proceed as if these limitations will eventually be eased, leading to a temporary, uneasy understanding.
The conflict between the DoD and Anthropic mirrors the dispute between the FBI and Apple over a decade ago regarding access to iPhones following the 2015 San Bernardino mass shooting. In that instance, Apple refused to yield, resulting in a protracted legal battle. The current U.S. administration, however, appears to have less patience for such resistance.
This article was originally published on CIO.com.