Anthropic Tells the Pentagon to Take a Hike

Steven Vaughan-Nichols

AI leader Anthropic stands firm on moral boundaries it won’t allow its technology to breach, a principled stance seemingly not shared by rivals like OpenAI.

Sign post reading, “Danger: Slippery Slope.” Warning sign near seafront on overcast day. Credit: P.Cartwright / Shutterstock

Recently, major AI firm Anthropic has been engaged in a significant dispute with the Trump administration’s Department of Defense (DoD). The Pentagon sought to enforce new standard contractual terms on AI providers, with Defense Secretary Pete Hegseth pushing for language that would grant the military “any lawful use” of Anthropic’s AI models. This demand aimed to override the company’s existing restrictions on specific military and domestic deployments of its technology.

Hegseth’s interpretation of “lawful” suggested the DoD would have carte blanche to use the AI for nearly any purpose, including extensive domestic surveillance and autonomous weaponry.

Such a scenario might bring to mind the opening act of a war between machines and humanity, and that concern is widely shared. Caution, however, appears absent from Hegseth’s vocabulary. Anthropic CEO Dario Amodei, by contrast, is acutely aware of the tangible dangers AI poses — dangers that go well beyond sci-fi tropes.

Despite this, Hegseth summoned Amodei to a meeting and insisted that Anthropic’s AI be deployed precisely as the DoD wished. He threatened to revoke the company’s current $200 million contract and bar it from future AI agreements if it refused. Anthropic was given a deadline of 5 p.m. yesterday to comply.

Amodei, however, remained resolute.

He made a public declaration that the company would rather forfeit its collaboration with the DoD than relinquish the contractual protections designed to prevent its AI from facilitating widespread domestic surveillance or serving as fully autonomous weapons.

Amodei does not oppose leveraging AI for national defense; in fact, he supports it. However, he emphasized that “using these systems for mass domestic surveillance is incompatible with democratic values,” adding that “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties.”

Furthermore, he stated, “frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of [Defense] on R&D to improve the reliability of these systems, but they have not accepted this offer.”

Amodei also pointed out that these specific use cases “have never been included in our contracts with the Department of [Defense], and we believe they should not be included now.”

The Pentagon maintained its forceful approach, characterizing its demands as an ultimatum and instructing Anthropic to present its “final offer” yesterday. Nevertheless, Anthropic declined the DoD’s proposition, stating it “cannot, in good conscience,” accept such expansive terms.

It should be noted that Anthropic is not a “woke, liberal company,” despite how it’s being portrayed by certain pro-Trump factions. Quite the contrary! The National Review highlighted that “Amodei is just about the opposite of a dove” concerning AI’s military uses. For instance, the Trump administration leveraged Anthropic’s Claude in January to apprehend former Venezuelan President Nicolás Maduro.

Anthropic’s opposition to employing AI for internal surveillance and autonomous weapons stems less from political alignment and more from a pragmatic understanding of the inherent risks associated with relying on nascent, unrestricted AI.

Advocacy organizations for civil liberties, such as the Electronic Frontier Foundation (EFF), have encouraged Anthropic to stand firm. They perceive the Pentagon’s insistence as an effort to coerce technology companies into developing instruments for extensive surveillance and automated combat. Internally, Anthropic’s workforce has publicly supported their leadership’s position, viewing the confrontation as a clear demonstration of the company’s core pledge to prevent advanced AI from being used for highly destabilizing military applications.

Anthropic’s position garnered broader support. Employees at Alphabet, Amazon, and Microsoft declared their solidarity with the company. Concurrently, hundreds of staff at Google and OpenAI signed a joint open letter urging their own firms to uphold the same principles against mass surveillance and fully autonomous weapons, and calling on their leadership to unite in rejecting the Pentagon’s demands.

In contrast, Donald Trump issued a furious statement late yesterday: “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY. Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.”

Federal agencies have been granted a six-month period to switch to alternative technological solutions.

Not all political conservatives opposed Anthropic’s stance. For example, Retired General Jack Shanahan, who previously navigated a military-AI dispute involving Project Maven and Google, sided against Trump. He commented: “Despite the hype, frontier models are not ready for prime time in national security settings. Over-reliance on them at this stage is a recipe for catastrophe. Mass surveillance of US citizens? No thanks.”

Despite these developments, other AI companies continued to engage with the Defense Department. In an internal memorandum, OpenAI CEO Sam Altman articulated: “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”

Altman further indicated that OpenAI was receptive to striking “a deal with the DoW” to permit the deployment of their models in classified settings. This struck me as typical equivocation, and indeed, by yesterday evening, OpenAI had reached an agreement with the Defense Department.

It’s evident that OpenAI’s insatiable demand for revenue, necessary to fund its vast capital expenditures, drove its executives to make what some might call a Faustian bargain. (Yes, I’m aware Altman mentioned safeguards and protections, but consider one crucial aspect: hallucinations.)

Regrettably, had OpenAI not entered this agreement, another company would undoubtedly have stepped in. Consequently, should AI-powered autonomous drones deploy bombs on the residences of suspected undocumented individuals in Minneapolis or any other global location by 2028, the responsibility will be clear – though such clarity may offer little solace at that point.

The uncontrolled embrace of AI for military applications must be halted immediately to prevent the dystopian vision of “Terminator wars” from transitioning from science fiction to grim reality.
