The Chinese government is tightening its grip on artificial intelligence. The Cyberspace Administration of China (CAC), the country’s internet regulator, has mandated that all content generated by AI must be clearly labeled, informing the audience of its origin. This new policy, which takes effect on September 1, 2025, aims to increase transparency and curb the spread of misinformation.
The CAC recently published a detailed FAQ on the regulation, formally titled the “Measures for the Identification of Artificial Intelligence Generated and Synthetic Content,” outlining its specifics. Reports of this policy change first surfaced last September with the release of the CAC’s draft plans.
The new rules apply to all forms of digital content, including text, images, videos, audio, and even virtual reality scenes. The regulation not only targets AI developers but also mandates that app stores ensure the apps they host comply with the labeling rule. AI service providers, such as those operating large language models (LLMs), are required to embed visible or audible labels, as well as metadata that identifies the content as AI-generated.
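The two labeling layers the rules describe (a notice visible to the audience, plus machine-readable metadata) can be sketched roughly as follows. This is an illustrative mock-up: the function and field names (`label_ai_content`, `ai_generated`, `provider`, `model`) are assumptions for the example, not the CAC's actual schema.

```python
import json

def label_ai_content(text: str, provider: str, model: str) -> dict:
    """Attach a visible label and provenance metadata to AI-generated text.

    Hypothetical sketch; the real regulation defines its own label wording
    and metadata format.
    """
    visible_label = "[AI-generated content]"  # the audience-facing layer
    metadata = {
        "ai_generated": True,  # explicit flag for downstream platforms
        "provider": provider,  # service that produced the content
        "model": model,        # model used for generation
    }
    return {
        "display_text": f"{visible_label} {text}",
        "metadata": metadata,
    }

labeled = label_ai_content("Sample output.", provider="ExampleAI", model="example-llm-1")
print(labeled["display_text"])          # visible layer shown to the audience
print(json.dumps(labeled["metadata"]))  # embedded machine-readable layer
```

In practice, the metadata layer would ride inside the file format itself (for example, image or audio container metadata) rather than alongside the text, so that it survives redistribution.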
While users can request unlabeled AI-generated content for certain purposes, such as addressing “social concerns and industrial needs,” the generating app must remind the user of the labeling requirement. The app must also log the information to aid in traceability.
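The traceability requirement above could take a form like the following log entry. Again, this is a hedged sketch: the field names (`user_id`, `purpose`, `content_hash`, `reminder_shown`) are assumptions for illustration, not a format the CAC has published.

```python
import datetime
import json

def log_unlabeled_request(user_id: str, purpose: str, content_hash: str) -> str:
    """Record that a user requested unlabeled AI-generated content.

    Illustrative only; the regulation requires logging for traceability
    but this schema is a guess at what such a record might contain.
    """
    entry = {
        # UTC timestamp of the request
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,            # who asked for unlabeled output
        "purpose": purpose,            # stated justification for the request
        "content_hash": content_hash,  # fingerprint linking back to the content
        "reminder_shown": True,        # the app must remind the user of the rule
    }
    return json.dumps(entry)

record = log_unlabeled_request("user-42", "industrial needs", "sha256:abc123")
print(record)
```

A hash of the generated content lets regulators tie a later-discovered unlabeled item back to the request that produced it.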
The CAC’s directives also prohibit the malicious removal, modification, forgery, or concealment of AI-generated content labels. This includes creating or distributing tools designed to circumvent these labeling requirements. Adding AI identifiers to human-created content is also forbidden.
The primary goal of this legislation, according to the CAC, is to combat disinformation and prevent confusion among internet users. At present, the specific penalties for violations have not been announced, but the threat of legal action from the Chinese government always looms.
China’s move mirrors a global trend towards regulating AI content. The European Union enacted its Artificial Intelligence Act in 2024, attempting to set similar standards. Analysts anticipate mixed reactions to the CAC’s new rules, given the agency’s history of internet control, including its administration of the Great Firewall of China. However, the measures align with global efforts to reduce the spread of misinformation by differentiating AI-generated content from original human work. Clear labeling of AI-created media should make it easier for the public to distinguish between authentic events and machine-generated content.