A competitive industry finds common cause
OpenAI, Anthropic, and Google have reportedly begun working together to counter unauthorized copying of their AI models, a sign that one of the industry’s most intense competitive pressures is now being treated as a shared security problem. According to reporting from The Decoder, the companies are exchanging information through the Frontier Model Forum, an industry body founded in 2023.
The immediate concern is so-called adversarial distillation. In distillation, outputs from a stronger existing model are used to train a cheaper copycat system. The report says the practice has evolved from an early proof of concept into a significant commercial problem for U.S. AI companies, citing Bloomberg for the claim that U.S. authorities estimate adversarial distillation is costing American AI labs billions of dollars in lost revenue each year.
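To make the mechanics concrete: a classic soft-label distillation step, in the style popularized by Hinton et al., looks roughly like the PyTorch sketch below. This is a generic illustration of the technique, not any pipeline the report attributes to a specific actor; in the adversarial case, the "teacher" signal would come from a commercial model's public outputs rather than in-house logits.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Generic soft-label distillation loss: nudge the student's output
    distribution toward the teacher's, with temperature smoothing."""
    t = temperature
    soft_targets = F.softmax(teacher_logits / t, dim=-1)        # teacher's "soft labels"
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL divergence between the two distributions, scaled by t^2
    # so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * (t * t)
```

Against an API, an imitator typically sees only sampled text rather than logits, so in practice the copying step is closer to ordinary supervised fine-tuning on harvested prompt-completion pairs.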
The shift from proof of concept to commercial threat matters because it changes the framing around model competition. Instead of treating imitation solely as a market reality, leading labs appear to be positioning some forms of copying as an attack pattern that should be monitored, documented, and mitigated collectively. The report compares the arrangement to the cybersecurity industry, where companies routinely share attack data even while competing in the same market.
Why distillation is now central to the AI business model
Distillation is not a new technical idea. The report points to Stanford’s Alpaca, a model fine-tuned on outputs from an OpenAI system, as one of the early demonstrations that an advanced model’s responses could be used to help create a cheaper alternative. What has changed is the scale of economic incentive. Frontier AI systems require vast spending on compute, research talent, and infrastructure. If competitors can cheaply approximate performance by harvesting outputs, the return profile on those investments changes dramatically.
That is why the issue now extends well beyond academic debate. Labs building large models are trying to defend both technical advantage and revenue. The report says OpenAI warned Congress in February that DeepSeek was using increasingly sophisticated methods to extract data from U.S. models, and that Anthropic identified DeepSeek, Moonshot, and MiniMax as actors involved in the practice.
Whether this cooperation leads to broader enforcement is still unclear from the report. But the coordination itself is notable. For companies that normally battle over benchmarks, customers, and talent, information sharing suggests they see model extraction as a category-level risk rather than a routine competitive nuisance.
A geopolitical layer to AI model defense
The report specifically frames the concern around copying by Chinese competitors, giving the story a geopolitical dimension as well as a commercial one. AI competition between the United States and China already spans chips, cloud infrastructure, export controls, and access to top engineering talent. Unauthorized model copying adds another layer: preserving the value of frontier systems after they are deployed.
This matters because a model can be protected during training but become vulnerable once customers and developers start querying it at scale. If defensive measures are weak, the public interface of a model can become a channel for extracting enough outputs to recreate parts of its behavior. That would make deployment itself a security boundary, not just a product milestone.
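To see why, consider how little machinery harvesting requires. The sketch below is purely conceptual; query_model and the prompt list are hypothetical stand-ins, not any real vendor's API:

```python
# Conceptual sketch of output harvesting; query_model() is a
# hypothetical stand-in for calls to a deployed model's public API.

def harvest_training_pairs(prompts, query_model):
    """Collect (prompt, completion) pairs from ordinary-looking queries.

    Each record is indistinguishable from normal usage in isolation;
    at scale, the accumulated pairs become fine-tuning data for a copy.
    """
    dataset = []
    for prompt in prompts:
        completion = query_model(prompt)   # one routine API call
        dataset.append({"prompt": prompt, "completion": completion})
    return dataset
```

Because each individual request looks legitimate, defenses have to operate on aggregate patterns rather than single calls, which is where shared threat intelligence becomes useful.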
The reported collaboration also hints at a broader transition in the AI sector. Frontier labs increasingly resemble operators of critical digital infrastructure rather than ordinary software vendors. They are managing large models whose misuse, replication, or degradation has strategic consequences for business, policy, and national competitiveness.
What comes next
The report does not describe specific technical countermeasures, but the fact pattern suggests the labs are moving toward more structured detection and response. That could include monitoring usage patterns, comparing suspicious outputs, or sharing signatures of known extraction attempts. The cybersecurity comparison implies a more systematic exchange of threat intelligence may be emerging.
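As a rough guess at the shape such monitoring might take, a volume-and-diversity heuristic over request logs could look like the sketch below. Every field name and threshold here is an assumption for illustration, not a description of any lab's actual system:

```python
from collections import defaultdict

def flag_possible_extraction(request_log, volume_threshold=10_000,
                             diversity_threshold=0.9):
    """Hypothetical heuristic: flag accounts whose traffic looks more
    like systematic dataset harvesting than interactive use.

    request_log is assumed to be an iterable of dicts with
    'account_id' and 'prompt' keys; thresholds are illustrative.
    """
    prompts_by_account = defaultdict(list)
    for record in request_log:
        prompts_by_account[record["account_id"]].append(record["prompt"])

    flagged = []
    for account, prompts in prompts_by_account.items():
        volume = len(prompts)
        # A ratio near 1.0 means almost no repeated prompts, which is
        # more consistent with sweeping a dataset than with normal use.
        diversity = len(set(prompts)) / volume
        if volume > volume_threshold and diversity > diversity_threshold:
            flagged.append(account)
    return flagged
```

Real systems would presumably combine many weaker signals, such as output similarity, request timing, or watermark checks, but the report gives no detail on this.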
For policymakers, this story sharpens a difficult question: where to draw the line between legitimate model evaluation, ordinary competitive pressure, and improper extraction. For AI companies, the problem is more immediate. If the economics of frontier model development are to hold, labs need ways to defend their systems after launch, not just before release.
For Developments Today, the broader signal is that AI competition is becoming institutionalized at the security layer. OpenAI, Anthropic, and Google are still rivals. But on this issue, they appear to agree that the cost of going it alone may now be too high.
- Companies named in the reported collaboration: OpenAI, Anthropic, and Google
- Forum named in the report: Frontier Model Forum
- Core issue: adversarial distillation of existing AI model outputs
This article is based on reporting by The Decoder.