A Cold War Tool for a New Technology Era

The relationship between the U.S. Department of Defense and one of the world's leading artificial intelligence companies has reached a breaking point. Defense Secretary Pete Hegseth has issued a stark ultimatum to Anthropic: agree to allow unrestricted military use of its AI technology by Friday, or face expulsion from the Pentagon's supply chain entirely.

The threat came during a tense meeting in Washington on Tuesday, where Hegseth summoned Anthropic CEO Dario Amodei for what sources described as a confrontational discussion about the company's refusal to grant the military unfettered access to its AI models for classified operations. Among the most contentious demands were provisions for domestic surveillance capabilities and lethal autonomous missions without direct human oversight.

Perhaps most striking was Hegseth's invocation of the Defense Production Act, a Cold War-era statute that grants the president sweeping authority to compel domestic industries to serve national defense priorities. Originally designed to ensure factories could pivot to wartime production, the law has never been used to force an AI company to hand over its technology, making the threat unprecedented in American technology policy.

Anthropic's Safety-First Stance Under Pressure

Anthropic has long distinguished itself in the AI industry through its emphasis on safety research and responsible deployment. The company, founded by former OpenAI researchers Dario and Daniela Amodei, has built its brand around Constitutional AI, an approach that trains models against an explicit set of written principles intended to serve as guardrails against misuse.

That safety-first philosophy has now placed the company on a collision course with the Pentagon's expanding appetite for AI integration across military operations. While Anthropic has not opposed all defense contracts, it has drawn firm lines around certain applications, particularly those involving autonomous lethal force without meaningful human control and mass surveillance programs targeting domestic populations.

The company's position reflects a broader debate within the AI industry about where to draw ethical boundaries. Other major AI firms, including OpenAI and Google, have also grappled with military contracts, though most have been more willing to negotiate terms of engagement with defense agencies. Anthropic's harder line has made it a lightning rod in Washington's increasingly aggressive push to weaponize artificial intelligence.