The latest AI conflict is about copying models without taking the weights

The US technology dispute with China is entering a more specific and more difficult phase. According to Ars Technica's reporting, US officials are preparing to respond to what they describe as "industrial-scale" theft of American AI labs' intellectual property, with model distillation at the center of the argument. That matters because distillation sits in a gray zone between normal model interaction and strategic extraction.

Traditional intellectual-property disputes revolve around obvious things: source code, chip designs, trade secrets, leaked documents. Distillation changes the picture. A rival can query a frontier model at scale and use its responses to train a cheaper system that captures part of the original model's value without ever obtaining the model weights.
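To make the mechanism concrete, here is a minimal toy sketch of the distillation idea, in which an imitator fits a "student" model purely from the outputs of a black-box "teacher" it repeatedly queries. The teacher's weights, the query budget, and the learning rate are all illustrative assumptions, not details from the reporting; real model distillation involves neural networks and far larger scale, but the principle is the same.

```python
import random

# Illustrative "teacher": a proprietary scoring function whose weights
# the imitator never sees. Only its outputs are observable via queries.
TEACHER_W = [0.8, -0.5, 0.3]

def teacher(x):
    # Black-box query: returns a score, never the weights themselves.
    return sum(w * xi for w, xi in zip(TEACHER_W, x))

def distill(n_queries=2000, lr=0.05, seed=0):
    """Fit a 'student' model using nothing but teacher outputs."""
    rng = random.Random(seed)
    student = [0.0, 0.0, 0.0]
    for _ in range(n_queries):
        x = [rng.uniform(-1, 1) for _ in range(3)]
        err = sum(w * xi for w, xi in zip(student, x)) - teacher(x)
        # Gradient step on the squared difference between student
        # and teacher scores for this query.
        for i in range(3):
            student[i] -= lr * err * x[i]
    return student

student_w = distill()
print([round(w, 2) for w in student_w])  # approaches TEACHER_W
```

The point of the sketch is that the student recovers the teacher's behavior to high accuracy without any access to its internals, which is why distillation is hard to police with the legal tools built for stolen code or leaked documents.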

Why Washington is taking the issue seriously

According to the report, US officials believe foreign entities based principally in China have used tens of thousands of proxy accounts and jailbreaking techniques to probe frontier systems, expose proprietary information, and extract value from them. Several AI companies are also cited as having made related allegations involving cloned or copied chatbot behavior.

From Washington’s perspective, this is not just a terms-of-service problem. It is a competitiveness problem. If distillation allows rivals to compress years of expensive model development into a far cheaper imitation cycle, then leading labs lose some of the protection that scale, compute, and capital were supposed to provide.

The policy response could redefine model IP

The report notes that Congress is being pushed to treat model extraction as a form of industrial espionage and to consider stronger penalties. That is significant because current law was not written with frontier model behavior in mind. A government decision to classify large-scale distillation as something closer to espionage than misuse would redraw the legal boundary around AI systems.

That boundary is not trivial. AI models are designed to answer questions. The more capable they become, the harder it may be to distinguish legitimate use, competitive benchmarking, red-teaming, and deliberate extraction. Policymakers are now being asked to define where that line sits.

The broader geopolitical message

This dispute also reveals how AI competition is maturing. The first phase was about chips, talent, and model launches. The next phase is about control of outputs, defenses against imitation, and the enforceability of model-based intellectual property. In other words, the strategic contest is moving up the stack.

That could affect more than US-China relations. If governments start treating model extraction as a national-security issue, AI firms may receive more official threat intelligence, build stricter account controls, and lobby for laws that criminalize new forms of scraping and imitation. The result would be a tighter, more security-oriented AI industry.

A difficult problem with real consequences

The challenge is that the core technique at issue is conceptually close to learning from observation, something that has always been part of competition. The difference, US officials argue, lies in scale, automation, deception, and intent. When extraction runs through massive proxy networks and systematic evasion of safeguards, they contend, the behavior stops looking like normal market competition and starts looking like organized appropriation.

That framing may soon drive sanctions, new legislation, or tougher enforcement. Whether those measures arrive quickly or not, one point is already clear from the reporting: the AI race is no longer just about building the best systems first. It is also about preventing rivals from reproducing those systems' value fast enough to erase the lead.

This article is based on reporting by Ars Technica.