ForkLog

China tightens rules for training AI models
China has unveiled a draft framework for firms providing generative AI services, including limits on the data sources used to train models, Cointelegraph reports.

The document was developed by the National Committee for Information Security Standardisation, which includes representatives of the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology, and law enforcement agencies.

The committee recommends conducting a safety assessment of the content used to train publicly available generative AI models. Data sources containing more than “5% in the form of illegal and harmful information” will be blacklisted.

This category covers materials that promote terrorism or violence, undermine the socialist system, damage the country’s reputation, or weaken national unity and social stability.

The draft requires organisations to obtain explicit consent from individuals whose personal data, including biometric information, is used for training.

The guiding principles also contain detailed instructions for preventing intellectual property infringement.

The draft rules arrived a month after regulators allowed tech giants such as Baidu to open their chatbots to the general public.

In April the CAC notified companies of the need to provide safety assessments before offering AI-based products.

In July the agency published a set of guidelines regulating these services that proved less burdensome than the initial draft.

In January, China enacted a law banning the creation of deepfakes without the subject’s consent or to the detriment of national interests.