In April 2021, the EU Commission proposed the first-ever legal framework for AI. But some Asian countries are following suit, not only to lead in the development of AI systems but also to create a positive environment for their use. Leading the way is China. Will a global set of rules from China soon follow?
Artificial intelligence is becoming a critical competitive factor. Economic markets are increasingly being led by companies where artificial intelligence (AI) is calling the shots. But the race for competitive advantage is not just the domain of companies and organizations. Countries are also vying with each other for AI supremacy to strengthen their industries, protect national security, or solve societal challenges.
In addition to the United States, the world leaders in AI adoption, research, and development include Asian countries such as China, Singapore, and South Korea. Asia-Pacific is also expected to become a major (if not the major) market for AI initiatives, as most countries in the region have launched, or are expected to launch, their own AI strategies in the coming years.
However, there is a widespread public perception that the EU is the world's technological guardian, while Asia represents the Wild West, where individuals' data rights and privacy do not matter. Is this true?
AI regulation: Where does the EU stand today?
That’s right: in April 2021, the EU Commission proposed the first-ever regulatory framework for AI (the “Artificial Intelligence Act”), which addresses the risks of AI and could enable Europe to play a leading role globally.
The proposal focuses on the risks posed by AI, dividing AI applications into four risk categories (from minimal risk to unacceptable risk). For example, companies will be required to register standalone AI systems with high risks, such as remote biometric recognition systems, in an EU database.
However, significant improvements are needed for the EU proposal to be effective. For example, some requirements for high-risk applications are not yet technically feasible, which critics argue would make operating AI systems in Europe impractical under this regulation. If an agreement is reached, the AI Regulation could become binding for companies as early as 2024. Failure to comply could result in fines of between 2% and 6% of a company’s annual turnover.
On the other hand, it is not true that Asia’s hotbeds of AI research and development lack serious discussions on AI legislation.
Regulating Asia’s AI ecosystems
South Korea is one of the countries best prepared for AI. As early as 2019, South Korea developed an AI strategy, and since 2020, with the Digital New Deal, it has been accelerating digital transformation through a solid digital ecosystem for data, an AI hub, and a 9 trillion won ($7 billion) investment in new tech industries for 2022. South Korea is creating a robust technical infrastructure and fostering a policy environment that enables AI. In 2020, it amended its three main privacy laws to promote data use and enacted a framework law on smart informatization to create a positive environment for AI use. This is part of Korea’s roadmap to revise laws, systems, regulations, and access policies for AI.
The Singapore government has also formulated a vision for Singapore to be a leader in AI by 2030. In addition to a national AI strategy, several measures have been taken to promote the development of a sustainable AI ecosystem. In terms of AI governance and regulation, companies using AI technology must comply with Singapore’s applicable laws on security, privacy, and fair competition. In January 2019, the Personal Data Protection Commission Singapore (PDPC) released the first version of a Model AI Governance Framework, which includes easy-to-implement guidelines for organizations deploying ethical AI solutions. On May 25, 2022, Singapore’s Minister for Communications and Information released AI Verify, a voluntary testing framework and toolkit for AI governance that verifies the performance of an AI system based on information provided by the developer, taking into account internationally recognized ethical principles for AI.
Singapore’s approach aims to strike a balance between promoting innovation and growth, ensuring responsible AI development, and maintaining Singapore’s position as a global and regional technology center.
In addition to these examples, however, one country is particularly far along. China, of all nations, a country often criticized for its handling of personal data, has taken the lead in moving AI regulation beyond the proposal stage.
A global role model from China?
Over the past decade, China has built a solid foundation to support its AI economy and has made significant contributions to AI worldwide. In research, China produced about one-third of the world’s AI journal articles and citations in 2021. China also strongly promotes the application of AI technology in the real economy, above all in finance, retail, and high-tech.
But that’s not all: in March 2022, China passed an AI regulation that governs companies’ use of algorithms in online recommendation systems. Its scope is thus much narrower than that of the European Artificial Intelligence Act, but it is already in force. The regulation mandates that such AI services be moral, ethical, accountable, transparent, and “spread positive energy.” Companies must inform users when an algorithm is being used to show them certain information, and users must be able to opt out of being targeted. In addition, the regulation prohibits algorithms that use personal data to offer different prices to consumers.
China can be expected to keep up this regulatory pace. On September 22, 2022, Shanghai passed China’s first provincial-level law addressing AI development, the “Shanghai Regulations on Promoting the Development of the AI Industry.” The Shanghai AI Regulations also aim to pave the way for the sound and sustainable development of AI technology through graded management and “sandbox” supervision, giving enterprises sufficient space to explore and test their technologies.
These rules may well serve as a model for, and feed into, other AI regulations around the world. It is also remarkable how quickly the regulation was implemented in China compared to the timeframes in which other countries normally adopt regulations.
Which approach will prevail?
Whether the European approach, the Chinese approach, or a completely different one will serve as the global model remains to be seen. The two regulations differ fundamentally: while the Artificial Intelligence Act attempts to bring all AI systems comprehensively under one regulatory umbrella, and is consequently far more burdensome for companies, the Chinese approach prescribes detailed rules for algorithmic recommendation systems, which could curb the influence of technology companies in particular.
For the many companies investing significant resources in AI, understanding the current and proposed regulatory frameworks is critical. Especially for global companies, it is not easy to keep track of the differing standards, legal frameworks, and recommendations.
For Europe, the main challenge will be not only to launch a practical AI legal framework, but also to strike a balance between fostering innovation and growth and ensuring responsible AI development in the process. If this can be achieved, Europe could become a role model for regulating AI and set a global regulatory framework in the future.