Caitlin Kalinowski stated that safeguards for domestic surveillance and lethal autonomous systems required more thorough discussion than they received.
Caitlin Kalinowski, OpenAI’s robotics chief, has resigned due to the company’s contract with the US Department of War. She cited inadequate review of crucial safeguards concerning domestic surveillance and autonomous weaponry prior to the agreement’s signing.
In a LinkedIn post, Kalinowski wrote, “Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are boundaries that warranted more careful consideration than they received.” She said her decision was “about principle, not people,” expressing continued “deep respect” for CEO Sam Altman and her team.
Kalinowski’s departure marks the highest-profile public dissent from an OpenAI employee regarding the Pentagon contract. This follows an open letter signed by hundreds of OpenAI and Google staff, urging their companies to uphold restrictions on AI applications for mass surveillance and autonomous weapons.
Key Implications
According to Sanchit Vir Gogia, chief analyst at Greyhound Research, this resignation indicates a fundamental governance process issue for enterprise buyers, one that simple contract revisions cannot fix.
Gogia explained, “When a senior executive departs, citing inadequate discussion on surveillance protections or lethal autonomous systems, the focus promptly moves from individuals to the procedural aspects.” He added, “In mature governance structures, such agreements typically undergo several review stages before finalization. If these discussions persist post-contract announcement, businesses often perceive it as an indication that governance frameworks are still underdeveloped.”
Gogia also observed that this incident highlights a wider change in how companies assess AI vendors. He stated, “Organizations are now evaluating multiple AI model providers instead of committing to a single one, and they are including stricter governance documentation requirements in vendor contracts before authorizing major deployments.”
Abhishek Sengupta, vice president at Everest Group, believes that the public backlash, which saw Anthropic’s Claude climb Apple’s App Store rankings as ChatGPT users uninstalled the app, won’t necessarily translate into enterprise customer attrition. “Public sentiment tends to be reactive and not lasting,” he said. Sengupta added, “Enterprise decision-makers now face an additional risk factor when considering their AI infrastructure. National security directives are likely to increasingly influence AI procurement decisions, particularly for economies with geopolitical significance.”
The Agreement and Opposition
On February 27, OpenAI formalized its agreement with the Pentagon, just hours after the Department of War classified competitor Anthropic as a supply-chain risk over its policy against using its models for domestic mass surveillance or fully autonomous weapons. That evening, Altman announced the deal, asserting that the Department of War had accepted OpenAI’s strict limits on surveillance and autonomous armaments. By Monday, however, he admitted the rollout was poorly handled, commenting on X that it “looked opportunistic and sloppy.”
Following public scrutiny, OpenAI amended the contract to explicitly ban domestic surveillance of US citizens and broadened this prohibition to encompass commercially obtained data, such as geolocation and browsing history. An updated statement on OpenAI’s website confirmed that intelligence bodies like the NSA were not covered by the agreement.
Legal specialists remained skeptical. The Electronic Frontier Foundation (EFF), a digital rights organization, cautioned that “secret agreements and technical assurances have never been sufficient to control surveillance agencies.”
Gogia noted, “When a technology provider modifies a government contract due to public criticism, enterprise clients seldom view the amendment itself as adequate reassurance.” He added, “Risk assessment teams then begin scrutinizing how these commitments are enacted, who is responsible for enforcement, and the implications if interpretations evolve over time.”
Anthropic Resumes Discussions
The wider disagreement between the Pentagon and the AI sector persists. Anthropic CEO Dario Amodei has re-engaged in discussions with the Pentagon. The Information Technology Industry Council, whose members include Apple, Google, Nvidia, and OpenAI, has written to Secretary Hegseth of the Department of War. The letter contends that the supply-chain-risk designation is typically reserved for foreign adversaries, and that applying it to Anthropic could limit government access to prominent American technology.
Altman has publicly supported Anthropic’s bid for reinstatement, posting on X that Anthropic “should not be designated as a SCR” and that he hopes the DoW offers the company the same terms OpenAI agreed to.