OpenAI, Google, Anthropic Team Up Against Chinese AI Startups to Curb Model Distillation

OpenAI confirmed that it is sharing information related to such activity with peers through the Forum.


OpenAI, Anthropic, and Google are coordinating efforts to detect and limit attempts at model distillation by Chinese AI firms, according to a Bloomberg report.

The collaboration is taking place through the Frontier Model Forum, an industry body founded in 2023 by OpenAI, Google, Anthropic, and Microsoft—now being used as a channel for intelligence sharing around model misuse.

Distillation is a technique used to replicate the behaviour of advanced AI systems by systematically querying them and using the outputs to train competing models.
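At its simplest, the process works by treating the target model as a black box: query it, record its answers, and train a cheaper model on those answers. The toy sketch below illustrates the idea with a stand-in `teacher` function in place of a real API; all names and the linear model are illustrative assumptions, not any company's actual method.

```python
# Toy sketch of model distillation (hypothetical names; no real API involved).
# A "teacher" model is queried systematically, and its outputs become the
# training labels for a cheaper "student" model that mimics its behaviour.

def teacher(x: float) -> float:
    # Stand-in for an expensive proprietary model served behind an API.
    return 3.0 * x + 1.0

# Step 1: systematically query the teacher and record its outputs.
queries = [i / 10 for i in range(100)]
dataset = [(x, teacher(x)) for x in queries]

# Step 2: fit a student model on the teacher's outputs rather than on any
# original training data (here, ordinary least squares for y = a*x + b).
n = len(dataset)
sx = sum(x for x, _ in dataset)
sy = sum(y for _, y in dataset)
sxx = sum(x * x for x, _ in dataset)
sxy = sum(x * y for x, y in dataset)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def student(x: float) -> float:
    # The student now reproduces the teacher's behaviour without ever
    # seeing its weights or training data.
    return a * x + b
```

In practice the teacher is a frontier model queried over an API and the student is a large neural network trained on the collected prompt-response pairs, which is why labs monitor for high-volume, systematic querying patterns.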

OpenAI confirmed that it is sharing information related to such activity with peers through the Forum, the report added.

This follows a February 2026 escalation reported by Reuters, when OpenAI formally warned US lawmakers that DeepSeek was attempting to replicate American AI systems using increasingly sophisticated distillation techniques.

In a memo to the House Select Committee, the company alleged “ongoing efforts to free-ride” on US AI capabilities and described attempts to bypass safeguards in order to extract model outputs.

“We have observed accounts associated with DeepSeek employees developing methods to circumvent OpenAI’s access restrictions and access models through obfuscated third-party routers and other ways that mask their source,” the company said in the memo cited by Reuters.

Even before that warning, the issue had already entered formal policy scrutiny. According to Reuters, a 2025 report by the US House Select Committee on China said it was “highly likely” that DeepSeek used model distillation techniques to replicate leading American AI systems, framing it as potential intellectual property extraction through API access.

However, distillation concerns date back further to January 2025, around the launch of DeepSeek’s R1 model, when US officials and industry players first raised alarms about whether the company may have benefited from distillation.

At the time, Reuters reported that policymakers and technologists were already examining whether Chinese models were “piggybacking” on US systems, with OpenAI reviewing possible misuse of its models for training.


Staff Writer
The AI & Data Insider team works with a staff of in-house writers and industry experts.
