The fight for “digital sovereignty” begins

On September 29, 2021, the then newly created Trade and Technology Council (TTC) of the United States and the European Union held its first summit. It took place in the former industrial city of Pittsburgh, Pennsylvania, under the direction of Margrethe Vestager, executive vice-president of the European Commission, and Antony Blinken, US secretary of state. After the meeting, the United States and the European Union declared their opposition to artificial intelligence (AI) that does not respect human rights and singled out systems that violate them, such as social scoring systems. As the TTC stated during the meeting: “The United States and the European Union are very concerned that some authoritarian governments are testing social scoring systems with the aim of implementing large-scale social control. These systems pose a threat to fundamental freedoms and the rule of law, including muzzling freedom of expression, punishing peaceful assembly and other expressive activities, and reinforcing arbitrary or illegal surveillance systems.”

The implicit target of the criticism was China's “social credit” system, a big-data system that draws on a wide variety of data to assign each person a social credit score; that score in turn determines the permissions the person enjoys in society, such as the ability to buy a plane or train ticket. The TTC's criticism indicates that the United States and the European Union disagree with China's vision of how authorities should manage the use of AI and data in society.

The TTC can therefore be seen as one of the initial steps towards building an alliance around a human rights-oriented approach to the development of AI in democratic countries, in contrast to authoritarian countries such as Russia and China. However, these differing approaches can lead to technological decoupling, understood as the strategic, nation-by-nation decoupling of otherwise interconnected technologies such as 5G, hardware such as computer chips, and software such as operating systems. Historically, the advent of the web created an opportunity for the world to become interconnected and form a global digital ecosystem. Growing mistrust between countries, however, has fuelled a push for digital sovereignty; that is, a country's ability to control its digital destiny, which can include control over the entire AI supply chain, from data to hardware and software.

One consequence of this trend towards greater digital sovereignty (and one that in turn reinforces it) is the growing fear of being cut off from critical digital components, such as microprocessors, and of losing control over international flows of citizens' data. These developments threaten existing forms of interconnectivity and lead to the fragmentation of high-tech markets and, to varying degrees, a retreat towards the nation state.

To understand the extent to which we are moving towards various forms of technological decoupling, this article describes the distinct positions of the European Union, the United States and China on data regulation and AI governance. It then examines what those different approaches imply for technological decoupling, and for specific AI policies such as the US Algorithmic Accountability Act, the European Union's Artificial Intelligence Act, and China's regulation of recommendation engines.

The European Union has, in many ways, been a pioneer in data regulation and AI governance. The General Data Protection Regulation (GDPR), which came into force in 2018, set a precedent, as can be seen in the laws it has inspired, for example, the California Consumer Privacy Act and China's Personal Information Protection Law. The European Union's Artificial Intelligence Act (AI Act), which could come into force in 2024, likewise constitutes a new and innovative risk-based regulation of AI and, together with the Digital Markets Act and the Digital Services Act, creates a holistic approach to how authorities try to regulate the use of AI and information technology in society.

The AI Act establishes a horizontal set of rules for developing and using AI-based products, services and systems within the European Union. It takes a risk-based approach that ranges from unacceptable risks (for example, social credit scoring and the use of facial recognition technologies for real-time surveillance of public spaces) to high risks (for example, AI systems used in recruiting and credit applications), limited risks (for example, a chatbot) and little or no risk (for example, video games or AI-based spam filters). While AI systems that pose unacceptable risks are banned outright, high-risk systems will undergo compliance assessments, including independent audits and new forms of monitoring and control. Limited-risk systems will be subject to transparency obligations, such as informing users when they are interacting with a chatbot. Low or no-risk systems, by contrast, will not be affected by the AI Act.
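To make the tiered logic concrete, here is a minimal, purely illustrative sketch in Python of how the four risk tiers map to the obligations just described; the tier names and example systems follow this article's summary, not the legal text of the AI Act.

```python
from enum import Enum

# Illustrative only: a simplified mapping of the AI Act's four risk tiers to
# the obligations summarized above, as described in this article.
class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "compliance assessment, independent audits, ongoing monitoring"
    LIMITED = "transparency obligations (e.g. disclose that a chatbot is a bot)"
    MINIMAL = "no new obligations"

# Hypothetical examples per tier, taken from the article's own examples.
EXAMPLES = {
    RiskTier.UNACCEPTABLE: ["social credit scoring", "real-time facial recognition in public spaces"],
    RiskTier.HIGH: ["recruiting systems", "credit application scoring"],
    RiskTier.LIMITED: ["chatbots"],
    RiskTier.MINIMAL: ["video games", "spam filters"],
}

for tier in RiskTier:
    print(f"{tier.name}: {tier.value} -- e.g. {', '.join(EXAMPLES[tier])}")
```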

The European Union's Digital Markets Act (DMA) seeks, among other things, to ensure that digital platforms that act as “gatekeepers”, with great power over access to and control of large swathes of consumer data, do not exploit their monopolies over that data to create unequal market conditions. The implicit objective is to boost (European) innovation, growth and competitiveness.

Similarly, the EU's Digital Services Act (DSA) aims to give consumers more control over what they see online. That means, for example, better information about why specific content is recommended by recommendation engines and the ability to opt out of recommendation-based profiling. The new rules aim to protect users from illegal content and curb harmful content, such as political or health-related misinformation. To that end, large platforms and search engines are being assigned new responsibilities to engage in certain forms of content moderation. This means that gatekeeper platforms are considered responsible for mitigating risks such as disinformation or electoral manipulation, within the limits set by freedom of expression, and that they are subject to independent audits.

Objectives of European laws

The objective of these new laws is not only to guarantee respect for the rights of EU citizens in the digital space, but also to ensure that European companies have more scope to compete with the large American technology companies. One way to do this is to impose interoperability requirements on digital products and services. Such requirements have already forced Apple to change its charger standard from 2024 and could also mandate greater interoperability between messaging services such as Apple's iMessage, Meta's WhatsApp and Facebook Messenger, Google Chat and Microsoft Teams. While greater interoperability may increase the vulnerability and complexity of security issues, the introduction of such changes will undoubtedly make it more difficult for companies to lock in market share and perpetuate their network-based forms of dominance.

At the same time, the European Union is trying to strengthen ties with American technology companies by opening an office in the heart of Silicon Valley headed by Gerard de Graaf, director for the digital economy at the European Commission, who is expected to establish closer contact with companies like Apple, Google and Meta. This strategic move will also serve as a mechanism to ensure that American technology companies comply with the new European regulations, such as the AI Act, the DMA and the DSA.

On semiconductors, the president of the European Commission, Ursula von der Leyen, announced the European Chips Act in February 2022, which aims to place the European Union at the forefront of semiconductor manufacturing. By 2030, Europe's share of global semiconductor production is expected to more than double, from 9% to 20%. The European Chips Act is a response to the US CHIPS and Science Act and to Chinese ambitions to achieve digital sovereignty through the development of semiconductors. Semiconductors are the cornerstone of all computers and are therefore essential to the development of AI. Strategic policies such as this European law suggest that control over the computing part of the AI value chain, and the politicization of high-tech development, will continue to gain importance in the coming years.

The largest technology companies (Apple, Amazon, Google, Microsoft, Alibaba, Baidu, Tencent and others) are mainly located in the United States and China, not in Europe. To address this imbalance, the European Union intends to set the regulatory agenda for public governance of the digital space. The new regulations aim to ensure that international companies comply with European standards, while reinforcing the bloc's determination to achieve digital sovereignty.

The US approach to AI is characterized by the idea that companies, in general, should continue to control industrial development and the associated governance criteria. Until now, the US federal government has taken a hands-off approach to AI governance in order to create an environment free of burdensome regulation. The government has repeatedly stated that “onerous” rules and state regulations are often seen as “barriers to innovation” that need to be reduced, for example in areas such as autonomous vehicles.

The United States also takes a different approach from those of the European Union and China in the area of data regulation. It has not yet developed a national data protection policy, as exists in the European Union, where the General Data Protection Regulation (GDPR) introduced a harmonized set of rules across the bloc in 2018. By contrast, only five of the fifty American states (California, Colorado, Connecticut, Utah and Virginia) have passed comprehensive data legislation. Consequently, the California Consumer Privacy Act (CCPA), in effect since 2020, has to some extent become the de facto data regulation in the United States. The GDPR has in many ways served as a model for the CCPA, which requires companies to grant consumers greater privacy rights, including the right to access and delete their personal data, the right to prevent their data from being sold, and the right not to suffer discrimination online.

Section 230 of the Communications Decency Act exempts platforms from liability for the content published on them. Under current law, responsibility for content falls on the users who publish it. Partly because of this emphasis on users rather than platforms, there is little oversight in the United States of the recommendation engines that rank, organize and determine the visibility of information on search engines and social media platforms.

Content moderation, however, is a thorny issue. On the one hand, there are arguments in favor of platforms moderating content to prevent highly discriminatory and harmful online behavior. On the other hand, some states, such as Texas and Florida, are passing laws that prohibit technology companies from censoring users and that aim to protect citizens' right to freedom of expression. The counterargument put forward by the platforms is that their content moderation decisions, as well as their use of recommendation engines, constitute a form of expression that should be protected by the First Amendment, which protects American citizens and companies from government restrictions on freedom of expression.

Industrial policy initiatives

While the United States takes a laissez-faire approach to AI regulation that leaves rule-making fragmented at the state level, new industrial policy initiatives are explicitly aimed at strengthening certain parts of the AI supply chain. An example is the CHIPS and Science Act, in which Democrats and Republicans joined together to create new incentives for the production of semiconductors on US soil. Grounded in the idea of digital sovereignty, the law represents a change in US industrial policy aimed at responding to renewed concerns about maintaining US technological leadership in the face of growing competition from China.

When it comes to using AI in the public sector, the United States has experienced significant opposition from civil society, especially over the use of facial recognition technologies (FRT) by law enforcement, which the American Civil Liberties Union (ACLU), among others, has opposed. Again, the American approach has been piecemeal. Several cities (including Boston, Minneapolis, San Francisco, Oakland and Portland) have banned public agencies, including the police, from using FRT. “It does not work. African Americans are 5 to 10 times more likely to be misidentified,” said Alameda councilman John Knox White, who helped ban facial recognition in Oakland in 2019.

In the United States, a March 2021 report from the National Security Commission on Artificial Intelligence (NSCAI) framed the “AI race” between China and the United States as a values-based competition in which China must be seen as a direct competitor. The report went further and recommended the creation of “choke points” that would limit Chinese access to American semiconductors with the aim of slowing technological progress in certain areas.

Some of those choke points became visible in August 2022, when the US Department of Commerce banned Nvidia from selling its A100, A100X and H100 graphics processing units (GPUs) to customers in China, a move aimed at slowing Chinese progress in semiconductor development and preventing advanced chips from being used for military applications. The Commerce Department justified the measure by saying it was intended to “keep advanced technologies out of the wrong hands”; Nvidia, for its part, has indicated that the measure will have serious consequences for its global semiconductor sales.

Over the years, however, many Chinese researchers have contributed to important advances in AI-related research in the United States. Some US corporate labs, such as Microsoft's Beijing-based Microsoft Research Asia (MSRA), have also played a crucial role in training Chinese AI talent; several former MSRA researchers have gone on to lead China's technological development at top companies such as Baidu. In a context of growing distrust between the United States and China, these forms of cooperation are suffering, prompting a rethinking of existing ties in areas of technological collaboration.

Consequences for the future

In the long term, the current technological decoupling could contribute to a bifurcation of digital ecosystems. Arguably, the Entity List maintained by the Bureau of Industry and Security (BIS) contributes to this evolution, since it amounts to a blacklist of entities that cannot do business with US companies. In software, that evolution is already happening. Google, for example, stopped providing access to its Android operating system to Huawei after the company was placed on the Entity List. The listing caused sales of Huawei phones to plummet in international markets, because the sudden loss of access to the Android operating system and its app store broke interoperability between Huawei's hardware and the apps and services users relied on. The decision has led Huawei to develop its own operating system, HarmonyOS, which it now uses across its products.

Regarding AI-specific regulation, the US Algorithmic Accountability Act was reintroduced in 2022, having first been presented in 2019, but it has been approved by neither the Senate nor the House of Representatives. If passed, the law would require companies that develop, sell and use automated systems to comply with new rules on transparency and on when and how AI systems are used. In the absence of national legislation, some states and cities have begun to implement their own rules, such as New York City's Automated Employment Decision Tools law. That law requires any automated hiring system used after January 1, 2023 to undergo a bias audit: an impartial evaluation by an independent auditor that includes testing for possible disparate impact on certain groups.
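As an illustration of what such disparate-impact testing can involve, here is a minimal Python sketch that computes each group's selection rate relative to the most-selected group; the data and the 0.8 flag threshold (borrowed from the EEOC's four-fifths rule of thumb) are illustrative assumptions, not requirements taken from the New York law.

```python
from collections import Counter

def impact_ratios(outcomes):
    """Compute each group's selection rate divided by the highest group
    selection rate. `outcomes` is an iterable of (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical screening outcomes for two demographic groups.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60 +
            [("B", True)] * 25 + [("B", False)] * 75)

for group, ratio in impact_ratios(outcomes).items():
    flag = "possible disparate impact" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
# group A: impact ratio 1.00 (ok)
# group B: impact ratio 0.62 (possible disparate impact)
```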

China's approach to AI legislation is evolving rapidly and relies heavily on central government guidance. The implementation of the national AI strategy in 2017 was a crucial step in the country's move from a lax governance regime towards stricter enforcement mechanisms built on oversight of data and algorithms. In 2021, China implemented the Personal Information Protection Law (PIPL), a national data regulation inspired by the European GDPR. The PIPL requires companies operating in China to classify their data and store it locally in the country, a fundamental element in establishing digital sovereignty. Under the law, companies that process data classified as “sensitive personal information” must request specific authorization from individuals, indicate why they process the data, and explain any effects of decision-making based on it. Like the GDPR, the PIPL gives more rights to Chinese consumers, while companies are subject to stricter national oversight and data-related controls, increasing trust in the digital economy.

In terms of AI regulation, China supervises recommendation engines through the Algorithmic Recommendation Management Provisions for Internet Information Services, which came into effect in March 2022 and constitute the first regulation of its kind anywhere in the world. The law grants new rights to users, including the ability to opt out of recommendation algorithms and to delete their data. It also increases transparency about where and how recommendation engines are used.

However, the regulations go further with their provisions on content moderation, which oblige private companies to actively promote “positive” information that follows the official line of the Communist Party. This includes promoting patriotic and family-friendly content and focusing on positive stories aligned with the party's core values. Extravagance, excessive consumption, antisocial behavior, excessive interest in celebrities and political activism are subject to stricter control: platforms are expected to actively intervene and regulate these behaviors. Chinese regulation of recommendation algorithms thus reaches far beyond the digital space, dictating what types of social behavior China's central government considers favorable or unfavorable.

Unlike in the United States, Chinese regulations give private companies the responsibility of moderating, prohibiting or promoting certain types of content. However, China's regulation of recommendation engines is not without complexity, for companies and regulators alike, because the law can often be interpreted arbitrarily. The regulation may further accelerate the decoupling between the practices of companies operating in China and those operating in international markets.

The central role of the State

On innovation, the Chinese government has strengthened partnerships with the country's main technology companies. Several private companies, including Baidu, Alibaba, Huawei and SenseTime, have been elevated to the status of “national champions” or, informally, members of the “national AI team”, responsible for strengthening the Chinese AI ecosystem.

The result is that tech giants like Baidu and Alibaba have moved into the upper echelons of China's centrally planned economy. And precisely because of these companies' importance to the country's social and economic development, the government is drawing them closer to the Communist Party's long-term strategic objectives. Among these measures is experimentation with mixed forms of ownership: government agencies acquire minority stakes in private companies through state venture capital funds and then fill board seats with Communist Party members. Other measures include banning sectors that do not serve the party's long-term priorities. One such sector was for-profit educational technology, banned in 2021 because the party wanted to curb inequality in education.

In China, the state plays a central and growing role in the adoption of facial recognition technologies to surveil public spaces. According to the government's own estimates, by 2020 up to 626 million facial recognition cameras had been installed in the country. Not surprisingly, huge demand from the public sector has helped China lead the global development of facial recognition AI. Meanwhile, civil society pressure continues to play a marginal role compared with the United States, making it harder for the public to question the government's use of AI in society.

Industrial policy and social values

In contrast to the United States and the European Union, which have only recently launched new industrial initiatives and policies explicitly targeting semiconductors, China has long been nurturing its microprocessor industry. In 2014, for example, it created the National Integrated Circuit Industry Investment Fund with the goal of making China a world leader in every segment of the chip supply chain by 2030. Although the country remains far behind the United States in semiconductor development, this is an area of the AI value chain that receives continued attention from the central government, as it is fundamental to the country's ambition of achieving leadership in AI by 2030.

At the intersection of AI and social values, China's latest five-year plan states that technological development should promote social stability. AI is thus to be seen as a tool of social control in “the great transformation of the Chinese nation”, which means maintaining a balance between social control and innovation.

These ideological differences between the three great powers could have broader geopolitical consequences for the management of AI and information technology in the coming years. Control of strategic resources, such as data, software and hardware, has become a paramount issue for policymakers in the United States, the European Union and China, giving rise to a neo-mercantilist approach to the governance of the digital space. This resurgence of neo-mercantilist ideas is clearly visible in the way trade in semiconductors is being restricted, but it is also evident in debates about international data transfers, cloud computing resources, the use of open-source software, and so on. This evolution seems set to increase fragmentation, mistrust and geopolitical competition, as we have seen in the case of communication technologies such as 5G: the United States, Canada, the United Kingdom, Australia and several European countries have excluded Chinese 5G providers, such as Huawei and ZTE, owing to growing distrust over data security and fear of citizen surveillance by the Chinese central government.

As technological decoupling deepens, China will pursue its goal of achieving self-sufficiency and technical independence, especially from US high-tech products. In May 2022, the Chinese government gave central government agencies and state-subsidized companies two years to replace computers from foreign manufacturers. That includes phasing out the Windows operating system, to be replaced by the Kylin operating system developed by China's National University of Defense Technology.

As for open-source repositories such as GitHub (owned by Microsoft), China has also indicated that it intends to reduce its dependence on open-source software developed abroad. In 2020, for example, the Ministry of Industry and Information Technology publicly endorsed Gitee as the Chinese national alternative to GitHub. While the leading open-source deep learning frameworks, such as TensorFlow (Google) and PyTorch (Meta), continue to be developed by American tech companies, Chinese alternatives built by national champions, such as PaddlePaddle (Baidu) and MindSpore (Huawei), continue to grow in scope and importance within China. Such advances illustrate that self-sufficiency in open-source software development (such as deep learning frameworks) is on the Chinese government's policy agenda and feeds its long-term desire to achieve digital sovereignty.

Although the United States and the European Union diverge on AI regulation, with one favoring self-regulation and the other comprehensive regulation of the digital space, both continue to share a fundamental approach to AI based on respect for human rights. That approach is gradually being put into practice to condemn the use of AI for surveillance and social control, as seen in China, Russia and other authoritarian countries. To some extent, American and European values are evolving into an ideological mechanism intended to ensure a human rights-centered approach to the role and use of AI. In other words, an alliance is forming today around a human rights-oriented vision of socio-technical governance, adopted and promoted by like-minded democratic countries. This point of view largely determines how public sector authorities are expected to relate to and manage the use of AI and information technology in society.

On May 15, 2022, the US-EU TTC held its second summit, this time in Saclay, a commune near Paris and one of France's main business and research hubs. Secretary of State Antony Blinken and Vice-President Margrethe Vestager met again to promote transatlantic cooperation and democratic approaches to trade, technology and security. The meeting served to reinforce the transatlantic strategic relationship in several specific areas, among them the exchange of detailed information on exports of critical technology to authoritarian regimes such as Russia. The United States and the European Union also committed to greater coordination in developing evaluation and measurement tools for trustworthy AI, risk management, and privacy-enhancing technologies. In addition, a strategic standardization information mechanism will be created to enable greater exchange of information on international technology standards, an area in which China is expanding its influence. An early warning system is also under discussion to better predict and address potential disruptions in the semiconductor supply chain, a debate that includes developing a transatlantic approach to continued investment in long-term security of supply for the EU and US markets.

As the TTC slowly cements the importance of the US-EU democratic transatlantic alliance on AI, the gap between the United States and China appears to be widening. The world is thus gradually moving away from a liberal order based on global interoperability, and technological development is increasingly caught up in competition between the governments of the United States and China. Such developments reduce the prospects of finding international forms of cooperation on AI governance and could contribute to a balkanization of technological ecosystems.

The result, already partially underway, will be the emergence of three digital ecosystems: a Chinese one, an American one and a European one, each with its own governing rules and peculiarities. In the long term, that may make it much harder to agree on how more complex forms of AI should be regulated and governed. Currently, the European Union and China appear to agree on taking a more active approach to regulating AI and digital ecosystems than the United States. However, the situation could change if the United States passes the Algorithmic Accountability Act. Like the European Union's AI Act, the Algorithmic Accountability Act would require organizations to conduct impact assessments of their AI systems before and after deployment, including more detailed descriptions of data, algorithmic behavior and forms of oversight.

If the United States were to pass such a law, the regulatory approaches of the European Union and the United States would become more closely aligned. Yet even if regulatory regimes eventually converge, the current trajectory of digital fragmentation between the European Union and the United States, on one side, and China, on the other, will continue in the present political climate.

There is no doubt that AI will continue to revolutionize society in the coming decades. However, it remains uncertain whether the countries of the world will be able to agree on how to apply technology to obtain the greatest possible social benefit. With increasingly powerful forms of AI emerging in a broader range of use cases, ensuring international alignment of AI could be one of the most important challenges of the 21st century.

Benjamin Cedric Larsen is head of the AI and Machine Learning Project at the World Economic Forum.