Can China’s targeted AI approach hit the bullseye?

Beijing has quietly removed plans for a comprehensive legal framework to prioritize pilots and standards

Sometimes, the fastest way to govern a moving target is to stop aiming for a bullseye. China has applied this wisdom to artificial intelligence (AI), quietly removing plans for a single comprehensive legal framework from the 2025 legislative schedule.

Beijing will now prioritize pilots, standards and targeted measures, seizing the opportunity to learn from international experience before enacting an overarching statute. Yet the removal of the comprehensive framework proposal surprised many observers.

The delay preserves regulatory flexibility for a technology still in its early stages, but at the cost of relying on the existing patchwork of fragmented AI rules. Still, Chinese officials and state media seem to be signaling that high-level legislation is imminent.

In May, the state-owned newspaper Legal Daily argued that such legislation remains a critical part of “pushing for the healthy development of AI.”

Privacy law

The National People’s Congress republished another Legal Daily commentary in June, which argued that existing technology and privacy laws do not cover AI-specific risks such as algorithmic bias and discrimination.

Observers have debated the motivation behind the quiet removal of the AI legislation. While some critics see it as a needless delay, others view it as a deliberate pause to give space for the technology to mature. 

Bu Shou, a Shanghai Jiao Tong University professor, has argued that updating existing statutes and issuing targeted rules is all that is needed to mitigate the risks arising from AI development in the country.

At the moment, China relies on existing statutes, industry standards and sector-specific measures to govern artificial intelligence use. But as Florence G’sell, of the Stanford Cyber Policy Center, has noted, government regulation tends to outperform industry self-regulation.

The European Union has a tiered AI Act. Image: File

Such industry-led standards and internal governance programs commonly prioritize performance over risk mitigation and accountability.

Companies using AI face higher compliance costs when fragmented frameworks clash and there is no high-level statute to guide them. Problems also arise from inconsistencies between emerging AI regulations and existing statutes. 

Shanghai’s Regulation on Promoting the Development of the Artificial Intelligence Industry expands access to public data for AI development. But it is unclear whether the initiative relies on the consent requirements set out in the Personal Information Protection Law, or PIPL.

Algorithmic transparency is another area affecting AI development in China. Some rules require firms to explain how their systems work, while trade-secret and security rules limit what they can disclose.

Compliance costs

Contradictory regulations increase the cost of doing business, especially for small- and medium-sized enterprises without large compliance teams. Without comprehensive AI legislation, these tensions will only grow. 

The official state newspaper People’s Daily has stressed the need to coordinate development and security in AI legislation. Such coordination is the key lever for lowering compliance costs for companies deploying artificial intelligence.

A high-level statute could provide a forum to resolve conflicts and set uniform baselines for safety testing, bias evaluation and incident reporting.

Other countries and regions offer different solutions. The European Union’s tiered AI Act offers strong safeguards and legal certainty but demands heavy compliance. That is much easier for large firms to absorb than it is for small- and medium-sized enterprises. 

Setting the legal framework for AI. Image: File

Japan’s lighter, principles-first approach sits closer to China’s pilots-and-standards path but offers weaker leverage. South Korea’s law focuses on both promotion and regulation, providing an example of how to balance innovation with safeguards.

Despite hopes that China would introduce a comprehensive legal framework to resolve the problems with the regulatory regime, for now, it has adopted an incremental approach.

Regulators will keep issuing targeted measures, refining security assessments and expanding pilots in areas such as healthcare and smart cities.

Standard-setting bodies will shape technical requirements for evaluation, watermarking, data governance and cybersecurity testing. Major tech hubs such as Shanghai, Beijing and Shenzhen will serve as testbeds.

Public opinion

China may still enact a comprehensive law in the coming years. Just as a 2016 fraud case accelerated the passage of the PIPL, AI incidents arising from unmitigated risks could expose the limitations of existing regulations.

In turn, that could swing public opinion toward a comprehensive AI law. Meanwhile, the world will watch closely as Chinese regulations are released and as testing, alignment and deployment get underway.

Ben Hu is a Technology Consultant with experience in cybersecurity risk, public affairs and privacy compliance. Adam Au is a General Counsel at a multinational corporation and Lecturer at the University of Hong Kong.

This edited article is republished from East Asia Forum under a Creative Commons license. Read the original article here.

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy of China Factor.