A Fault Line in the AI Industry

OpenAI announced on February 28 that it had reached a deal allowing the U.S. military to use its artificial intelligence technologies in classified operations. The agreement came after weeks of tense negotiations that saw Anthropic — OpenAI's chief rival and the maker of Claude — refuse the Pentagon's terms and walk away from the table entirely.

The diverging paths of the two companies represent the most consequential split in the AI industry's relationship with military power. Where OpenAI found room to compromise, Anthropic drew lines it would not cross, setting the stage for a confrontation that extends well beyond business strategy into fundamental questions about how artificial intelligence should be governed.

What Anthropic Refused

The core dispute centered on the Pentagon's desire to use AI systems for analyzing bulk data collected on American citizens and for applications that could lead to lethal autonomous weapons. Anthropic maintained that these uses violated its core principles and posed unacceptable risks.

Reports indicate the Pentagon wanted broad latitude to deploy Claude for essentially any lawful military purpose, a scope Anthropic viewed as dangerously vague. The company's leadership argued that permitting mass surveillance applications, and accepting terms that drew no firm lines around lethal autonomous weapons, would undermine the safety commitments that define Anthropic's identity.

Negotiations between Anthropic and the Department of Defense reportedly made virtually no progress on these fundamental issues. The gap between what the Pentagon demanded and what Anthropic would accept proved unbridgeable, and neither side was willing to yield on terms it regarded as non-negotiable.