A reported deployment cuts against the public dispute
The National Security Agency is reportedly using Anthropic’s new Mythos Preview model, according to Engadget’s account of reporting from Axios, which cited two sources said to have knowledge of the matter. If accurate, the development is notable not only because Mythos is a newly announced system, but because it arrives in the middle of a months-long confrontation between Anthropic and the US government over how the company’s models can be used in military settings.
Anthropic introduced Mythos Preview at the beginning of April and described it as a general-purpose language model with unusually strong performance on computer security tasks. That positioning matters in a national security context. A model presented as especially capable in cybersecurity can have obvious appeal to agencies focused on defensive operations, vulnerability analysis, and broader digital security work. Even without any additional detail about how the NSA is using the system, the mere fact of access suggests government interest in those capabilities.
The report is difficult to separate from the political backdrop. In February, President Donald Trump reportedly ordered government agencies to stop using Anthropic’s services after the company refused to change certain safeguards during military contract talks. That move created the impression of a hard break between the administration and the company. The new report complicates that picture by suggesting that at least one of the government’s most important intelligence agencies has access to the company’s latest model anyway.
Why Mythos matters
Anthropic’s public description of Mythos Preview emphasized computer security work rather than a narrow consumer feature set. That framing distinguishes it from the broader marketing language often used around new AI systems. It also helps explain why national security and defense institutions would be interested in testing it early. Security-focused AI tools can be useful for code analysis, system review, incident response support, and other technically demanding tasks where speed and pattern recognition matter.
According to the report summarized by Engadget, the NSA is one of roughly 40 organizations to which Anthropic has granted access to Mythos Preview. One source also said the model is being used more widely within the department. The article does not specify which department that refers to beyond the quoted source description, and it does not provide operational detail. Even so, the claim signals that Mythos may already be moving beyond a narrow pilot phase inside parts of government.
That would be significant for another reason: access can create practical momentum even while legal or policy disputes remain unresolved. In the AI market, especially for frontier systems, limited deployments often become the bridge between research positioning and institutional dependence. Once technical teams begin testing a model against real workloads, procurement and policy conversations can shift from abstract compliance concerns to concrete questions about performance and mission value.
A White House meeting raises the stakes
The timing is also important. Engadget reports that Anthropic CEO Dario Amodei met with White House chief of staff Susie Wiles and other officials on Friday to discuss Mythos. The White House later described the meeting as productive and constructive. Reuters reported that Trump, when asked, told reporters he had no idea about the meeting. Those details point to a situation that is still fluid, with multiple centers of decision-making and no simple public line.
For Anthropic, the meeting appears to be part of a broader effort to keep channels open with government officials even while the company remains in litigation. For the administration, it suggests the door is not fully closed to a company it had previously pushed back against. For outside observers, the juxtaposition is striking: a company described as restricted in one context is simultaneously meeting senior officials and reportedly seeing one of its newest models used inside the national security apparatus.
The contradiction may be more apparent than real. Governments do not move as a single actor, and policy, procurement, legal review, and technical evaluation often proceed on separate tracks. Still, the visible mismatch between a February order to stop using Anthropic’s services and an April report of NSA usage highlights how difficult it can be to draw clear boundaries around advanced AI adoption in government.
The legal fight is far from over
The reported NSA use does not mean Anthropic’s dispute with the federal government has been settled. The company reportedly sued the Department of Defense in two courts in March after the Trump administration labeled Anthropic a supply chain risk. The Pentagon responded soon after. One court granted Anthropic a preliminary injunction temporarily blocking the designation, while judges in the other case denied the company’s motion to lift the label.
Those mixed outcomes underscore the unresolved status of the conflict. A preliminary injunction is not a final vindication, and a denied motion in a separate case keeps material pressure on the company. The result is a messy operating environment in which Anthropic can point to some legal traction while still confronting meaningful institutional resistance. The reported NSA access to Mythos therefore does not cancel the dispute; it makes the dispute more consequential.
It also sharpens a broader policy question. If a government views a supplier as a potential risk, what level of access remains acceptable for evaluation, pilot deployments, or mission-specific use? The source material does not answer that question, but it makes clear that the practical relationship between Anthropic and the US national security establishment is more complicated than the public feud alone would suggest.
What this says about the AI-government relationship
The deeper significance of the Mythos report may be that advanced AI providers and state institutions are now too intertwined for public disagreements to produce clean separations. Frontier model companies want major government contracts and influence over policy. Governments want access to systems that may offer strategic advantages in cybersecurity and other technical domains. That creates a relationship defined less by simple alignment than by negotiation, leverage, and selective cooperation.
Mythos Preview appears to sit directly at that intersection. It is new, security-oriented, and apparently attractive enough to have reached dozens of organizations quickly. At the same time, the company behind it is still contesting how the US government has classified and constrained it as a supplier. The result is a revealing snapshot of this phase of the AI industry: adoption can advance even when trust, governance, and procurement remain unsettled.
For now, the most defensible conclusion is a narrow one. Based on the supplied reporting, Anthropic’s newest model is reportedly in use at the NSA despite an ongoing conflict between the company and parts of the US government. That is not proof of a settled partnership. It is evidence that capability, politics, and legal risk are now colliding in real time around frontier AI systems.
This article is based on reporting by Engadget.