Back in April, the Cyberspace Administration of China (the CAC) published the draft Measures on Managing Generative AI Services for public consultation (the Draft Measures). On 10 July 2023, the CAC and six other PRC authorities jointly issued the Interim Measures on Managing Generative AI Services (the Interim Measures). The Interim Measures will come into effect on 15 August 2023.
The Interim Measures aim to establish a comprehensive regulatory framework for services using generative AI technology in the PRC. Generative AI technology is defined in the Interim Measures as “models and relevant technology that are capable of generating text, images, audio and videos”.
What are the key takeaways from the Interim Measures?
It is apparent that the Interim Measures reflect the PRC government’s long-standing dual approach of fostering innovation while safeguarding national security and social order. Below we highlight a few important takeaways from the Interim Measures.
- Subject of the Interim Measures. The CAC has limited the scope of the Interim Measures to those providing generative AI services to the public within the territory of the PRC. The Interim Measures explicitly indicate that they do not apply to the research and internal use of generative AI technology by industry groups, enterprises, and education and research organizations, nor do they apply to those who provide generative AI services outside the territory of the PRC.
- Regulation of content. The Interim Measures stipulate that AI-generated content must not relate to the subversion of state power, the overthrow of the socialist system, incitement to split the country, the endangerment of national security and interests, harm to the nation’s image, or the promotion of terrorism, extremism, ethnic hatred and discrimination, violence or pornography, and must not contain fake and harmful information.
- Governance of the development process. The Interim Measures require providers of generative AI services (Service Providers) to obtain their training data and models from legitimate sources without infringing the intellectual property rights of others and, where personal information is involved, to obtain the consent of the subject of the personal information in compliance with applicable laws and administrative regulations. Service Providers must also institute effective means to improve the authenticity and accuracy of training data, as well as the accuracy of data labelling. Further, the Interim Measures require effective means to prevent discrimination and bias throughout the research, development and application of generative AI services. Compared to the Draft Measures, which required Service Providers to “be capable of ensuring” the accuracy of the training data and of the generated content (i.e., no hallucination), the language in the Interim Measures implies an acknowledgment of the current limitations of generative AI services and the difficulties that Service Providers face in ‘ensuring’ the accuracy of the generated output.
- Paramount need for user protection. The Interim Measures require Service Providers to fulfill their confidentiality obligations regarding information inputted by users and users’ usage records in accordance with PRC law, and also prohibit Service Providers from collecting “unnecessary personal information” and from storing any inputs and usage records from which the user can be identified. Service Providers are also required to handle users’ requests concerning their personal information lawfully and promptly, as well as implement measures to prevent minors from becoming excessively reliant on, or addicted to, generative AI services.
- Requirements for transparency. The Interim Measures require Service Providers to improve the transparency of their services. This general rule breaks down into four disclosure and reporting requirements:
(a) when a Service Provider identifies illegal content generated by its services, it is required to suspend the generation, rectify the issue (including by improving the model), and report it to the relevant authorities;
(b) when a Service Provider identifies a user conducting illegal activities through the services, the Service Provider is obliged to provide warnings, discontinue services, store relevant records, and report to relevant authorities;
(c) when requested by authorities, Service Providers must cooperate in accordance with the law and provide an explanation of the sources, scale and types of the training data, the data labelling rules, and the principles and mechanisms of the algorithms; and
(d) for services having the properties of “public opinions” or “the capacity for social mobilisation”, Service Providers must conduct a security assessment and complete the algorithm filing, amendment and cancellation procedures in accordance with the Regulations on the Management of Algorithmic Recommendations in Internet Information Services.
Where to from here?
Compared to the Draft Measures, the Interim Measures appear more practical, balanced and encouraging towards generative AI service providers and developers, and demonstrate the PRC’s acknowledgment of the current limitations of generative AI services. In the newly added articles, the CAC also reveals its intention to further support and encourage the development of generative AI technologies. Yet, with the expected rapid development of AI technology, further legislation and regulations are likely on the horizon. The Interim Measures hint at this by indicating that relevant authorities will seek to further regulate generative AI services through classified and graded rules. While the Interim Measures provide no specifics, providers and developers of generative AI services may look to the European Union’s Artificial Intelligence Act (on which the European Parliament adopted its negotiating position on 14 June 2023) for an indication of what a classified and graded regulatory framework for AI technology may look like. Given the significant investment in generative AI development, we expect that activities in mainland China will continue to flourish and evolve within the emerging regulatory frameworks.
More information is available from our Artificial Intelligence Practice.