A new AI security model is already colliding with policy, finance, and government power

Anthropic’s newly announced Mythos model is drawing attention well beyond the AI industry. According to Bloomberg, as summarized by TechCrunch, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell recently met with bank executives and encouraged them to use Mythos to detect vulnerabilities.

The report suggests the testing pool may already be broader than Anthropic’s initial public rollout implied. TechCrunch notes that JPMorgan Chase was named as an initial partner organization with access to the model, while Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are also reportedly testing it.

Why Mythos is unusual

Anthropic said it would limit access to Mythos for now. The reason, according to the report, is that the model is highly capable at finding security vulnerabilities despite not being trained specifically for cybersecurity. That combination has made the launch notable: Mythos is being framed not simply as another enterprise AI model, but as a system with unusually strong vulnerability-discovery capabilities.

That claim has not gone uncontested. TechCrunch notes that some observers have argued the company’s positioning may reflect hype or an aggressive enterprise sales strategy. Even so, the market response appears serious enough that banks and regulators are now discussing the implications.

A sharp contradiction inside Washington

The most striking element is the apparent disconnect inside the US government. TechCrunch reports that Anthropic is currently battling the Trump administration in court over the Department of Defense’s designation of the company as a supply-chain risk. That designation, the report says, followed failed negotiations over Anthropic’s attempt to limit how its AI systems could be used by the government.

If Treasury and the Federal Reserve are indeed urging major banks to test Mythos while the Defense Department treats Anthropic as a supply-chain concern, then the company sits at the center of a very public policy contradiction. One part of the government appears interested in the model’s utility; another has formally raised national-security alarms about the vendor itself.

Why banks would care

For large financial institutions, vulnerability detection is not a niche capability. Banks operate complex technology stacks, manage sensitive data, and face constant pressure to harden systems against intrusion and failure. A model that can identify weaknesses more effectively than existing tools would be strategically important, even if access is tightly controlled.

That is what makes the reported meeting consequential. It would suggest that the discussion is no longer only about whether a frontier AI company can build a powerful security model. It is about whether regulated financial institutions should begin integrating such a model into real risk-management workflows while the policy debate around it is still unresolved.

Regulators are watching beyond the United States

The concern is not limited to Washington. TechCrunch also cites the Financial Times in reporting that UK financial regulators are discussing the risks posed by Mythos. That expands the story from a US industry-government development into a cross-border regulatory issue.

When regulators on both sides of the Atlantic start examining a single model’s risk profile, it usually signals a question broader than product-launch excitement. In this case, that question is whether AI systems that are highly effective at surfacing vulnerabilities create more defensive advantage than offensive risk.

What the story says about the current AI moment

Mythos sits at the intersection of three tensions now defining advanced AI deployment. The first is capability versus control: a model may be useful precisely because it can do something that also needs careful restriction. The second is adoption versus governance: large institutions may want early access before policy has settled. The third is inconsistency within the state itself: different agencies can treat the same company as strategically valuable and strategically risky at the same time.

Because TechCrunch’s report is built around other reporting, the most careful reading is also the most accurate one: senior US officials may be encouraging banks to test Mythos, and major banks are reportedly doing so. Even at that level of caution, the development is significant. A narrowly released model has already become a test case for how governments, regulators, and systemically important firms respond when an AI tool appears unusually strong at exposing vulnerabilities.

If the reports hold, Mythos is no longer just an Anthropic product launch. It is now a live policy story about who gets access to powerful security capabilities, under what supervision, and with what level of regulatory alignment.

This article is based on reporting by TechCrunch.