Legal experts are raising concerns that OpenAI’s agreement with the U.S. Department of Defense imposes minimal restrictions beyond “all lawful use,” casting doubt on the true enforceability of its AI safety measures.
OpenAI announced a new agreement to provide AI services to the U.S. government just hours after U.S. President Donald Trump’s Friday decision to bar its competitor, Anthropic, from all federal government contracts.
On Sunday, OpenAI CEO Sam Altman commented on the quick negotiations in a post on X, writing, “It was definitely rushed, and the optics don’t look good.”
Anthropic’s ban stemmed from its refusal to allow its technology to be used for mass surveillance of U.S. citizens or in fully autonomous weapons. OpenAI claimed its own deal includes the same restrictions, raising questions about how it secured those concessions so quickly, or whether the safeguards were genuinely put in place.
The U.S. government had initially sought Anthropic’s agreement to use its AI for all lawful purposes. Anthropic agreed, carving out only those two areas, and its refusal to budge led to Friday’s ban.
In a blog post on Saturday, OpenAI urged the government to extend similar contractual terms to other AI developers, asserting, “We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.”
Core Restrictions
The company outlined, “We operate with three primary ‘red lines’ guiding our engagement with the DoW, shared broadly across several leading AI research laboratories,” referring to the Department of Defense by its revived historical name, the Department of War.
These crucial boundaries, as specified, prohibit the use of its technology for mass domestic surveillance, for directing autonomous weapons systems, and for making critical automated decisions such as those in “social credit” programs.
This raises the question: Did OpenAI truly succeed where Anthropic faltered? And if so, what was its method?
OpenAI affirmed that its “red lines” are upheld through “a more expansive, multi-layered approach,” as detailed in its Saturday blog post. It noted, “We maintain full autonomy over our safety infrastructure, implement deployment through cloud services, ensure cleared OpenAI personnel are actively involved, and benefit from robust contractual safeguards. These measures supplement the strong protections already enshrined in U.S. law.”
The company highlighted a section of the contract outlining these protections: “The Department of War is authorized to utilize the AI System for all legitimate purposes, adhering to applicable law, operational necessities, and established safety and oversight protocols. The AI System shall not be employed to independently command autonomous weapons where human control is mandated by law, regulation, or Department policy, nor will it undertake other critical decisions requiring human approval under the same legal frameworks.”
This phrasing, however, restricts the use of OpenAI’s technology in autonomous weapons only where law, regulation, or department policy already mandates human control, offering no safeguard where such rules are absent.
“It doesn’t surprise me that OpenAI acceded to the DoW’s demands. What is somewhat surprising is how many observers seem to believe that the contract excerpt published by OpenAI offers protections significantly different from ‘all lawful use,’” remarked Charlie Bullock, a senior research fellow at the independent Institute for Law and AI think tank, in a recent post on X.
Regarding mass domestic surveillance, OpenAI cited contract provisions stating, “For intelligence operations, all handling of private data must conform to the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and relevant DoD directives mandating a clear foreign intelligence objective. The AI System is prohibited from unconstrained monitoring of U.S. persons’ private information, in accordance with these authorities. Furthermore, the system shall not be utilized for domestic law-enforcement activities except as sanctioned by the Posse Comitatus Act and other applicable statutes.”
Doubts About OpenAI’s AI Guardrails
Nonetheless, legal professionals remain unconvinced that the contractual wording OpenAI quoted in its blog post is sufficient to prevent the DoD from engaging in mass domestic surveillance.
According to Pranesh Prakash, principal consultant at the law and policy advisory firm Anekaanta, the contract language released by OpenAI permits agencies such as the NSA, which operates under the DoD, to conduct bulk domestic metadata collection using existing legal authorities.
Bullock noted that he is unaware of a precise legal definition for mass domestic surveillance, suggesting that a sufficiently advanced AI system could perform actions that are legal yet still fall under reasonable interpretations of mass domestic surveillance.
Bullock also pointed out additional ambiguities, noting that the legal picture remains unclear, largely because OpenAI has not released the complete contract.
Anandaday Misshra, founder of Amlegals, a law firm specializing in AI regulatory intelligence and data protection, said that if the contract explicitly restricts surveillance- or weapons-related deployments, “OpenAI can invoke standard contract law principles to enforce those limitations, at least between the contracting parties.”
However, Misshra cautioned that once the U.S. government becomes involved, especially through an entity like the DoD, OpenAI’s capacity to prohibit certain uses diminishes significantly. He explained, “Provisions for national security, classified operations, and doctrines of sovereign immunity substantially weaken any efforts to challenge governmental use solely on ethical grounds. Courts have historically shown deference when national security is invoked, even when private contractors voice objections.”
He added that there have been “comparable tensions” in past conflicts involving telecommunications surveillance and defense contracting, where companies relied on contractual terms and internal oversight but ultimately “had limited influence once federal authorities exercised their statutory powers.”
Working against OpenAI, Misshra observed, is the absence of any clear precedent in which a technology provider successfully prevented the federal government from using a tool for security or defense purposes once contractual access was granted. He further noted that in any direct conflict, national security exceptions would likely override softer commitments regarding ethical AI use.
Furthermore, current U.S. data protection regulations do not support OpenAI’s stance, at least in their present form, according to Misshra.
“Unlike the data protection and AI frameworks in Europe, the United States currently lacks a binding federal AI law that would explicitly prohibit military or intelligence applications. This means OpenAI is largely guided by self-imposed policies rather than statutory protections,” Misshra explained.
Cloud Deployment Offers Potential Safeguards
OpenAI indicated that some technical safeguards are in place. “This is exclusively a cloud-based deployment, featuring a safety stack that we manage, incorporating these principles and others. We are not supplying the DoW with ‘guardrails off’ or unsafely trained models, nor are we deploying our models on edge devices,” the company stated in its Saturday blog post. “Our deployment architecture will allow us to independently verify adherence to these red lines, including the operation and updating of classifiers.”
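To make that architecture concrete, here is a minimal, purely illustrative sketch in Python of how a provider-controlled safety stack in a cloud deployment can gate requests: a policy classifier the provider operates, and can update independently of the customer, runs before any model inference. Every name and rule below is a hypothetical simplification; production systems would use learned classifiers rather than keyword checks, and nothing here reflects OpenAI’s actual implementation.

```python
# Hypothetical illustration only: a provider-side policy gate in front of
# model inference. Not OpenAI's actual safety stack.
from dataclasses import dataclass


@dataclass
class ClassifierResult:
    allowed: bool
    reason: str


def policy_classifier(prompt: str) -> ClassifierResult:
    """Stand-in for a learned policy classifier; a real system would score
    requests with a model rather than match keywords."""
    blocked_topics = ("mass domestic surveillance", "autonomous weapons targeting")
    lowered = prompt.lower()
    for topic in blocked_topics:
        if topic in lowered:
            return ClassifierResult(False, f"policy violation: {topic}")
    return ClassifierResult(True, "ok")


def run_model(prompt: str) -> str:
    # Placeholder for the hosted model call.
    return f"[model output for: {prompt!r}]"


def handle_request(prompt: str) -> str:
    # Because the gate runs in the provider's cloud, the provider can
    # verify, log, and update it without the customer's cooperation;
    # that is the point of a cloud-only, no-edge-devices deployment.
    verdict = policy_classifier(prompt)
    if not verdict.allowed:
        return f"Request refused ({verdict.reason})."
    return run_model(prompt)


if __name__ == "__main__":
    print(handle_request("Summarize this logistics report."))
    print(handle_request("Plan mass domestic surveillance of commuters."))
```

The design choice the sketch highlights is placement: because the classifier sits in infrastructure the provider controls, enforcement does not depend on the customer honoring contract terms, which is why OpenAI frames cloud-only deployment as a safeguard distinct from the contractual language itself.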
Bullock commented that, given OpenAI’s emphasis on technical and operational safeguards, “It makes sense to form your judgment on their decision based on your trust in the company, its technical defenses, and the involvement of its personnel, rather than solely on the two contract paragraphs they chose to release.”
Despite the assurances of architectural control, Misshra suggested that ongoing disagreements between AI firms and the U.S. government might establish a precedent not for direct opposition, but for how technology companies negotiate safety measures.
Misshra concluded, “Future agreements will likely feature more explicit terms concerning permitted usage, auditing rights, model versioning, and liability distribution. Ethical commitments will increasingly be integrated as contractual risk management tools, moving beyond mere moral objections.”
This article originally appeared on CIO.