A restricted AI system appears to have reached one of the most sensitive users in government
The National Security Agency is reportedly using Mythos Preview, Anthropic’s newly announced AI model for cybersecurity work, even though the company has kept the system out of public release. According to the report, Anthropic said earlier this month that Mythos was too capable at offensive cyber operations to be released broadly, and that access was limited to roughly 40 organizations. The NSA appears to be one of the undisclosed recipients.
If accurate, the arrangement captures the complicated phase now unfolding in frontier AI policy. Governments want advanced models for defensive and operational tasks, especially in cybersecurity. At the same time, those same institutions are confronting the risks that come with deploying tools that can also be used for offensive purposes. The reported NSA use of Mythos brings that tension into unusually sharp focus.
What the report says the NSA is doing with Mythos
The article says the NSA is using Mythos primarily to scan environments for exploitable vulnerabilities. That is a narrower and more concrete description than the broad marketing language that often surrounds AI deployments. It suggests a practical use case: pointing a powerful model at digital infrastructure to surface weaknesses before adversaries do.
That matters because vulnerability discovery sits at the boundary between defense and offense. A system that can help defenders identify flaws can, by nature, expose paths an attacker might exploit. Anthropic’s own stance, as described in the report, appears to recognize that dual-use problem. The company announced Mythos as a frontier model built for cybersecurity tasks, but withheld it from public release because of concern over offensive capability.
That framing makes the NSA’s reported access especially notable. Rather than a consumer launch or an enterprise beta, this appears to be a controlled deployment to a small set of hand-picked organizations. The U.K.’s AI Security Institute has also confirmed access, according to the report. Together, those details point to a pattern in which the most capable systems may be shared first with state or state-adjacent institutions rather than entering the open commercial market.
A contradiction inside the U.S. government
The report’s most consequential point is not simply that the NSA may be using Anthropic technology. It is that the deployment is said to be happening while Anthropic remains in conflict with the Defense Department. Weeks earlier, according to the report, the Pentagon labeled Anthropic a “supply-chain risk” after the company refused to allow unrestricted access to a model’s full capabilities.
That creates a striking split-screen view of the federal government. One part of the national security apparatus is reportedly drawing value from Anthropic’s restricted cyber model. Another has treated the company as a risk in a broader procurement and control dispute. For anyone tracking the federal AI market, that is a meaningful signal. Washington is not acting as a single buyer with a single position. Agencies appear to be weighing advanced AI systems differently based on mission, access demands, and institutional priorities.
The report adds another politically sensitive detail: the Pentagon dispute originated when Anthropic refused to make Claude available for mass domestic surveillance and autonomous weapons development. Those are among the hardest lines in the current debate over military and intelligence uses of generative AI. Even without confirmation beyond the original reporting, the implication is clear: access negotiations are no longer just about technical integration or pricing. They are becoming arguments about where providers will and will not let their models be used.
Why this matters beyond one company or one model
The reported NSA use of Mythos signals how the next stage of the AI industry may develop. The biggest strategic question is no longer whether governments will use frontier models. It is how access will be segmented, who gets privileged use, and what conditions will govern those deployments.
Anthropic’s approach, as described here, appears to rely on selective distribution rather than broad release. That would give the company more control over how a highly capable system is tested and who can operate it. It may also reduce public scrutiny, because the most consequential deployments happen inside a small circle of approved institutions.
For the U.S. government, the episode underscores a structural problem. Agencies want advanced AI for cybersecurity and national security work, but model providers may insist on usage limits that do not align with every agency’s ambitions. That gap could produce a patchwork system in which some agencies secure access through narrower, mission-specific arrangements, while others remain locked in disputes over control, transparency, or permissible use.
It also suggests that cyber defense may become one of the first domains where highly restricted frontier AI models gain real operational traction. The use case described in the report is concrete, urgent, and legible to policymakers: scanning for exploitable vulnerabilities is easier to justify than broader ambitions around autonomy or generalized intelligence support. That makes cybersecurity an attractive proving ground for systems that vendors consider too risky to release widely.
What remains unconfirmed
Important parts of the story remain reported rather than publicly documented. TechCrunch attributes the NSA detail to reporting by Axios. TechCrunch also says it reached out to the NSA for comment and that Anthropic declined to comment. That means there is, so far, no direct official confirmation from either organization about the arrangement, its terms, or its scope.
There are also unanswered questions about oversight. The reporting does not establish how Mythos access is audited, what technical safeguards govern its use, or whether the NSA’s use is limited to testing, internal analysis, or operational deployment. Those unknowns matter because they determine whether this is an exploratory pilot or part of a more durable procurement path.
The broader signal
Even with those caveats, this is not a trivial industry tidbit. It is an indicator of how fast the line between frontier AI research and national-security application is moving. A model described as too capable for public release may already be in the hands of one of the most sophisticated intelligence agencies in the world. Meanwhile, the same company is reportedly resisting other forms of government access on civil-liberties and weapons-related grounds.
That is the real story. The AI policy debate is no longer just about whether advanced systems are powerful or risky. It is about who gets access first, who gets denied, and which institutions can shape the rules through procurement pressure. Mythos, if the report holds, is an early example of that new order taking shape.
Key points
- TechCrunch, citing Axios, reports that the NSA is using Anthropic’s restricted Mythos Preview model.
- The report says Mythos was limited to about 40 organizations because Anthropic considered it too capable for public release.
- The NSA is reportedly using the model mainly to scan environments for exploitable vulnerabilities.
- The deployment appears to sit alongside an ongoing Pentagon dispute with Anthropic over access and permitted use.
This article is based on reporting by TechCrunch.
Originally published on techcrunch.com








