A Fault Line in the AI Industry
OpenAI announced on February 28 that it had reached a deal allowing the U.S. military to use its artificial intelligence technologies in classified operations. The agreement came after weeks of tense negotiations that saw Anthropic — OpenAI's chief rival and the maker of Claude — refuse the Pentagon's terms and walk away from the table entirely.
The diverging paths of the two companies represent the most consequential split in the AI industry's relationship with military power. Where OpenAI found compromise, Anthropic found red lines it would not cross, setting the stage for a confrontation that extends well beyond business strategy into fundamental questions about how artificial intelligence should be governed.
What Anthropic Refused
The core dispute centered on the Pentagon's desire to use AI systems for analyzing bulk data collected from American citizens and for applications that could lead to lethal autonomous weapons. Anthropic maintained that these uses violated its core principles and posed unacceptable risks.
Reports indicate the Pentagon wanted broad latitude to deploy Claude for essentially any legal military purpose, a scope that Anthropic viewed as dangerously vague. The company's leadership argued that permitting mass surveillance applications and refusing to draw firm lines around autonomous lethal systems would undermine the safety commitments that define Anthropic's identity.
Negotiations between Anthropic and the Department of Defense reportedly made virtually no progress on these fundamental issues. The gap between what the Pentagon demanded and what Anthropic would accept proved unbridgeable, with each side treating its position as non-negotiable.
OpenAI Steps In
OpenAI's willingness to reach an accommodation where Anthropic would not reflects its evolving stance on military applications. The company, which began as a nonprofit research lab committed to beneficial AI, has progressively expanded its commercial ambitions and its willingness to engage with government and military customers.
The specific terms of OpenAI's deal with the Pentagon have not been fully disclosed, but the agreement reportedly permits use of the company's technologies in classified settings — a significant expansion beyond the limited government applications OpenAI had previously supported. The deal positions OpenAI as the primary AI provider for a military establishment eager to integrate large language models into intelligence analysis, operational planning, and other sensitive functions.
Anthropic Pushes Back Hard
Far from quietly accepting its exclusion, Anthropic has vowed to challenge in court what it describes as a retaliatory designation as a security risk, a label that could effectively bar the company from future government contracts. The company's leadership views this designation as punishment for refusing to acquiesce to military demands it considers ethically unacceptable.
The dispute has generated unexpected public sympathy for Anthropic. Downloads of its Claude application surged following reports of the Pentagon standoff, with the app climbing to the number two position in the App Store. The consumer response suggests that a segment of the public values AI companies that draw ethical boundaries, even at significant commercial cost.
The Broader Stakes
The OpenAI-Anthropic split on military AI use illuminates a tension that has simmered since the earliest days of the modern AI boom. Technology companies developing the most powerful AI systems in history must decide what constraints, if any, they will place on how those systems are used — and by whom.
This decision carries weight because the capabilities of large language models extend far beyond simple text generation. These systems can analyze vast datasets, identify patterns in intelligence reports, assist in targeting decisions, and potentially operate with decreasing human oversight. The question of who controls these capabilities, and under what rules, is arguably the most important governance challenge the AI industry faces.
The Pentagon's position is straightforward: national security demands access to the best available technology, and companies that develop transformative AI capabilities have an obligation — or at least a strong incentive — to support defense applications. Military leaders argue that adversaries including China are integrating AI into their military systems without ethical hand-wringing, and that American restraint amounts to unilateral disarmament.
What Happens Now
The immediate consequences are clear. OpenAI gains a lucrative and strategically significant customer, deepening its relationship with the U.S. government. Anthropic faces potential exclusion from government contracts and must fight a legal battle to preserve its standing, all while making the case that its ethical stance is commercially viable in the long run.
The longer-term implications are less certain. If Anthropic's principled refusal resonates with consumers, enterprise customers, and allied governments that share its concerns about military AI governance, the company's stand could prove strategically sound. If the Pentagon's designation effectively brands the company as unreliable, Anthropic may find itself increasingly marginalized in the most consequential market for AI development.
For the AI industry as a whole, the episode establishes that the relationship between frontier AI companies and military power is not theoretical — it is immediate, consequential, and deeply divisive.
This article is based on reporting by MIT Technology Review.