NSA Access Raises Policy Questions

The US National Security Agency is reportedly using Anthropic's most powerful AI model, Mythos Preview, according to Axios reporting, summarized by The Decoder, that cites two sources.

The reported deployment is notable because the NSA sits under the Pentagon's authority, yet the Defense Department has classified Anthropic as a security risk and has tried since February to block the company as a vendor.

The situation reflects a growing tension in government AI adoption: agencies want access to advanced models, but those same systems can trigger security, procurement, surveillance, and weapons-policy concerns.

A Restricted Model

Anthropic has limited access to Mythos to about 40 organizations under an effort called Project Glasswing. The company has argued that the model’s offensive cyber capabilities are too dangerous for broad release.

That restriction places Mythos in a different category from general-purpose commercial chatbots. The model is being treated as a sensitive capability, not simply as another productivity tool.

The Decoder notes that Anthropic CEO Dario Amodei met with White House officials last week to discuss deploying Mythos across government agencies. The UK’s intelligence services also reportedly have access to the model through the country’s AI Security Institute.

Dispute Over Acceptable Use

The Pentagon has demanded that Anthropic make Claude available for all legal purposes. Anthropic refused, drawing limits around mass surveillance and autonomous weapons.

That disagreement gets to the core of the emerging AI procurement debate. A model provider may want to define red lines for how its systems are used, while government buyers may argue that legal authority should determine permitted uses.

For intelligence and defense agencies, frontier AI models may be useful for cyber analysis, language processing, information triage, and other high-volume analytical work. But the same capabilities can raise concerns when applied to surveillance, offensive cyber operations, or systems that operate with reduced human control.

Why It Matters

The reported NSA use of Mythos suggests that powerful AI systems are moving into sensitive government environments even before stable policy norms have formed around vendor restrictions, national security exclusions, and acceptable-use boundaries.

The conflict also shows that AI safety commitments are no longer only a product-design issue. They are becoming procurement terms, legal disputes, and national security questions.

If agencies adopt restricted AI models while other parts of government challenge the vendor’s role, the result may be a fragmented approach: one office treats a model as strategically necessary, while another treats the provider as a security concern.

The Mythos case will likely be watched closely because it involves several of the most consequential AI governance questions at once: who gets access to frontier models, how cyber-capable systems are controlled, and whether a private AI company can refuse certain government uses while still serving public-sector customers.

This article is based on reporting by The Decoder.

Originally published on the-decoder.com