A Cold War Tool for a New Technology Era
The relationship between the U.S. Department of Defense and one of the world's leading artificial intelligence companies has reached a breaking point. Defense Secretary Pete Hegseth has issued a stark ultimatum to Anthropic: agree to allow unrestricted military use of its AI technology by Friday, or face expulsion from the Pentagon's supply chain entirely.
The threat came during a tense meeting in Washington on Tuesday, where Hegseth summoned Anthropic CEO Dario Amodei for what sources described as a confrontational discussion about the company's refusal to grant the military unfettered access to its AI models for classified operations. Among the most contentious demands were provisions for domestic surveillance capabilities and lethal autonomous missions without direct human oversight.
Perhaps most striking was Hegseth's invocation of the Defense Production Act, a Cold War-era statute that grants the president sweeping authority to compel domestic industries to serve national defense priorities. Originally designed to ensure factories could pivot to wartime production, the law has never been used to force an AI company to hand over its technology — making this threat unprecedented in the history of American technology policy.
Anthropic's Safety-First Stance Under Pressure
Anthropic has long distinguished itself in the AI industry through its emphasis on safety research and responsible deployment. The company, founded by former OpenAI researchers Dario and Daniela Amodei, has built its brand around Constitutional AI — an approach in which systems are designed with built-in ethical guardrails meant to prevent misuse.
That safety-first philosophy has now placed the company on a collision course with the Pentagon's expanding appetite for AI integration across military operations. While Anthropic has not opposed all defense contracts, it has drawn firm lines around certain applications, particularly those involving autonomous lethal force without meaningful human control and mass surveillance programs targeting domestic populations.
The company's position reflects a broader debate within the AI industry about where to draw ethical boundaries. Other major AI firms, including OpenAI and Google, have also grappled with military contracts, though most have been more willing to negotiate terms of engagement with defense agencies. Anthropic's harder line has made it a lightning rod in Washington's increasingly aggressive push to weaponize artificial intelligence.
The Defense Production Act: An Unusual Weapon
The Defense Production Act was signed into law in 1950, during the early stages of the Korean War. Its original purpose was straightforward: ensure that American industry could rapidly shift production to support military needs. Over the decades, it has been invoked for everything from semiconductor manufacturing to pandemic medical supply chains.
But applying the DPA to compel an AI company to grant access to its models represents a fundamentally different kind of intervention. Unlike physical goods, AI models are intellectual property whose capabilities and risks are deeply intertwined. Forcing a company to remove safety guardrails from its technology raises questions that go far beyond traditional procurement disputes.
Legal experts have noted that such a use of the DPA would likely face immediate court challenges. The law was designed for production and supply chain priorities, not for overriding a company's internal safety policies on how its technology is deployed. Any attempt to invoke it could set a precedent with far-reaching implications for the entire technology sector.
Industry Reactions and Broader Implications
The standoff has sent shockwaves through Silicon Valley. Other AI companies are watching closely, aware that the outcome could establish new norms for how the government interacts with the private AI sector. Several industry leaders have privately expressed concern that acquiescing to the Pentagon's demands could undermine the safety research that many consider essential to preventing catastrophic AI misuse.
Congressional responses have been mixed. Hawks on the Armed Services committees have backed Hegseth's position, arguing that national security must take precedence over corporate safety preferences. Others, particularly members of the Senate Commerce Committee, have cautioned that strong-arming AI companies could drive talent and innovation overseas, ultimately weakening America's competitive position.
The European Union has also taken notice. EU officials have pointed to the confrontation as evidence supporting their own more stringent regulatory approach to AI governance, with one senior diplomat noting that the episode underscores the risks of leaving AI safety decisions to the whims of political appointees.
What Happens Next
The Friday deadline looms large. If Anthropic refuses to comply, Hegseth could follow through on his threat to remove the company from defense procurement channels, cutting off a significant revenue stream and sending a message to other AI firms. The Defense Production Act option remains on the table but would represent a far more dramatic escalation with uncertain legal outcomes.
For Anthropic, the calculus is existential. Capitulating could undermine the safety principles that define its corporate identity and erode trust with employees who joined specifically because of the company's ethical commitments. Resisting could cost the company not just government contracts but also political goodwill at a time when AI regulation is being actively shaped in Washington.
Whatever the outcome, the confrontation has made one thing clear: the honeymoon between AI companies and government is over. The era of polite collaboration on AI policy is giving way to hardball negotiations where the stakes are measured not in quarterly earnings but in the fundamental questions of how the most powerful technology in human history will be governed.
This article is based on reporting by Ars Technica.