The Pentagon’s actions could jeopardize the firm’s standing within the US military-industrial complex.
A growing dispute between the US Department of Defense and AI firm Anthropic could spell trouble for the company. According to the political news outlet The Hill, the DoD is currently scrutinizing the terms of its engagement with Anthropic.
The root of the disagreement lies in Anthropic’s ethical stance: it insists that its Claude AI model not be used to develop autonomous weapons systems (those that fire without human control) or to conduct widespread surveillance of US citizens.
This principled position, however, has not been well received by the Pentagon. Chief Pentagon spokesperson Sean Parnell stated, “The Department of War’s relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”
The situation could escalate further. As reported by Axios, the DoD is considering labeling Anthropic a “supply chain risk.” Such a designation could compel other companies to avoid partnering with Anthropic or risk forfeiting their own defense contracts, a move that would significantly hinder the company’s growth strategy.