A Principled Departure
Caitlin Kalinowski, the head of OpenAI's hardware and robotics division, resigned over the weekend in protest of the company's recently finalized agreement with the U.S. Department of Defense. In a statement posted on X, Kalinowski said the deal crossed ethical lines that deserved far more deliberation than they received.
"I resigned from OpenAI. I care deeply about the robotics team and the work we built together. This wasn't an easy call," Kalinowski wrote. "AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."
Kalinowski's departure marks one of the highest-profile resignations in the AI industry since the Pentagon began aggressively courting Silicon Valley firms for military applications. Her role at OpenAI involved building the hardware that would bring AI models into physical robotic systems, a field that intersects directly with defense applications.
The Pentagon-OpenAI Deal
The agreement that prompted Kalinowski's resignation embeds OpenAI's generative AI systems inside secure Defense Department computing environments. CEO Sam Altman finalized the deal in early March; its contract language permits the Pentagon to use OpenAI's models for "all lawful purposes," a phrase critics argue is deliberately vague enough to encompass domestic mass surveillance.
The backlash was swift. In the days following the announcement, ChatGPT uninstalls surged by 295 percent, according to TechCrunch data. Protesters chalked appeals on sidewalks around OpenAI's San Francisco headquarters, urging employees to consider the implications of the technology they were building.
OpenAI has maintained that it supports responsible use of AI in national security contexts and that all applications must comply with existing law. The company has not directly addressed concerns about domestic surveillance or lethal autonomous systems raised by Kalinowski and other departing staff.
Industry Fallout and the Anthropic Contrast
Kalinowski's resignation comes amid a broader reckoning in the AI industry over military partnerships. Anthropic, maker of the Claude AI system, recently refused a similar Pentagon deal specifically because it would not agree to its model being used for mass surveillance of Americans or autonomous weapons systems. The Pentagon responded by designating Anthropic a "supply chain risk," a classification normally reserved for foreign adversaries.
The divergence between the two companies illustrates a deepening fault line in Silicon Valley. OpenAI and Google employees have publicly supported Anthropic's stance, with hundreds signing open letters calling for clearer ethical guardrails around military AI applications. Some have gone further, leaving their positions to join Anthropic or other companies with more restrictive military policies.
The debate has also drawn attention from Congress, where bipartisan legislation requiring judicial oversight for AI-powered domestic surveillance is being drafted but faces significant opposition from defense hawks who argue it would handicap national security operations.
What Comes Next for OpenAI Robotics
Kalinowski's exit leaves a significant leadership vacuum in OpenAI's robotics division, which she had been building out since joining the company from Meta's Reality Labs in late 2024. The team was working on integrating multimodal AI models with physical robotic systems, an effort that had shown promising early results in manipulation and navigation tasks.
Several members of Kalinowski's team are reportedly considering their own positions in light of the Pentagon deal. The robotics division, which represents OpenAI's push beyond software into physical AI, may face recruitment challenges if the company's military partnerships continue to expand without clearer ethical boundaries.
Industry analysts note that the talent exodus could benefit competitors. Robotics researchers with concerns about military applications now have alternatives at companies like Anthropic, Google DeepMind, and a growing number of startups that have explicitly ruled out defense contracts.
The Bigger Picture
The OpenAI resignations reflect a fundamental tension in the AI industry that is unlikely to resolve soon. As AI systems become more capable, their potential military applications grow correspondingly — from intelligence analysis and logistics optimization to surveillance and autonomous weapons. The question of where to draw ethical lines is becoming the defining issue for the industry's leading researchers and engineers.
For now, the departures serve as a reminder that the people who build these systems still hold significant leverage. In an industry where talent is scarce and highly mobile, principled resignations carry real cost — both for the companies that lose key leaders and for the broader trajectory of AI development.
This article is based on reporting by The Robot Report.