In the AI revolution, are companies or governments in charge?

In early November, the UK hosted a high-level international summit on artificial intelligence (AI) governance. The summit was a positive response to the rapid and dramatic advances in AI, which present unprecedented opportunities and challenges for governments.

World leaders are keen not to miss out on a technological revolution that, ideally, could help them expand their economies and address global challenges. There is no doubt that AI has the potential to improve individual productivity and drive social progress. It could lead to important advances in education, medicine, agriculture and many other fields fundamental to human development. It will also be a source of geopolitical and military power, conferring a significant strategic advantage on the countries that achieve primacy in its development.

But AI also poses social challenges and risks, hence the growing calls for government intervention and regulation. Among other things, AI is expected to transform labor markets, making many workers redundant while making others far more productive, thereby widening existing inequalities and eroding social cohesion. It will also be used as a weapon by malicious actors to commit fraud, deceive people and spread disinformation.

Used in electoral contexts, AI could compromise the political autonomy of citizens and erode democracy. And, as a powerful instrument for surveillance purposes, it threatens to undermine the fundamental rights and civil liberties of individuals.

All of these risks will almost certainly materialize, and there are also more speculative but potentially catastrophic ones. In particular, some analysts warn that AI could get out of control and pose an existential threat to humanity.

To take advantage of the unprecedented opportunities offered by AI while managing potentially serious risks, divergent approaches are emerging to regulate the sector. Reluctant to interfere in the development of a disruptive technology that is central to its economic, geopolitical and military competition with China, the US has traditionally relied on the voluntary guidance and self-regulation of technology companies.

The EU, by contrast, insists that AI governance should not be left to companies, but that digital regulation should be based on the rule of law and subject to democratic oversight. To the existing set of digital regulations, the EU is about to add a comprehensive and binding regulation on AI that focuses on the protection of individuals' fundamental rights, including their right to privacy and non-discrimination.

China is also seeking ambitious AI regulation, but with authoritarian characteristics. The authorities aim to support the development of AI without weakening censorship or jeopardizing the Chinese Communist Party's (CCP) monopoly on political power. That implies a sacrifice, because to maintain social stability, the CCP must restrict content needed to train the large language models that underpin generative AI.

So the US, EU and China offer competing models of AI regulation. As the world's main technological, economic and regulatory powers, they are digital empires: not only do they each regulate their national markets, but they also export a regulatory model and aspire to shape the global digital order in their own interest. Some governments may align their regulatory stance with the US market-led approach and opt for light regulation; others may align with the EU's rights-led approach and seek binding legislation that places restrictions on AI development; and some authoritarian countries will look to China and emulate its state-centered regulatory model.

However, most countries are likely to fall somewhere among the three approaches and selectively adopt elements of each. That means no single model for AI governance will emerge around the world.

Although regulatory divergence seems inevitable, there is a clear need for international coordination because AI poses challenges that no government can manage alone. Greater harmonization of regulatory approaches will help all governments maximize the potential benefits of technology and minimize the risks.

If each government develops its own regulatory framework, the resulting fragmentation will ultimately hinder the development of AI. Navigating conflicting regulatory regimes increases business costs, creates uncertainty, and undermines the anticipated benefits. Consistent and predictable standards across markets will foster innovation, reward AI developers and benefit consumers.

Furthermore, an international agreement could help distribute the expected benefits more equitably among countries. AI development today is concentrated in a handful of (mostly) developed economies that are positioned to emerge as clear winners in the global AI race. At the same time, the ability of most other countries to take advantage of AI is limited. International cooperation is necessary to democratize access and mitigate fears that AI will benefit only a subset of wealthy countries and leave the global south even further behind.

International coordination could also help governments manage cross-border risks and avoid unbridled competition. Without such coordination, some actors will exploit regulatory loopholes in certain markets, offsetting the benefits of well-designed guardrails elsewhere. To avoid regulatory arbitrage, countries with better regulatory capabilities would have to offer technical assistance to countries that lack it. In practice, this would mean pooling resources to identify and assess AI-related risks, disseminating technical knowledge about those risks, and helping countries develop regulatory responses to them.

Perhaps most importantly, international cooperation could contain the costly and dangerous AI arms race before it destabilizes the world order or triggers military conflict. In the absence of a joint agreement that establishes rules governing dual-use AI (civil and military), no country will be able to risk slowing down its own military development, lest it cede a strategic advantage to adversaries.

Given the obvious benefits of international coordination, several attempts are already underway to develop global norms or methods of cooperation within institutions such as the OECD, the G-20, the G-7, the Council of Europe and the United Nations. However, it is reasonable to fear that such efforts will have only a limited effect. Given the differences in values, interests and capabilities between states, it will be difficult to reach a meaningful consensus. By the same token, the UK summit was never expected to produce legally binding commitments but, as predicted, it endorsed general principles and committed the parties to continued dialogue.

Not everyone wants governments to succeed in their regulatory efforts. Some observers oppose governments even attempting to regulate a rapidly developing technology.

These critics usually make two arguments. The first is that AI is too complex and evolving too quickly for policymakers to understand and keep pace with. The second is that even if policymakers were competent to regulate AI, they would likely err on the side of caution (over-intervention), stifling innovation and undermining the benefits of AI. If these critics are right, either fear would justify governments following the principle of primum non nocere (first, do no harm), exercising restraint, and letting the AI revolution take its course.

The argument that legislators are incapable of understanding such a complex, multifaceted and fast-moving technology is easy to make, but it remains unconvincing. Legislators regulate many areas of economic activity without being experts in them. Few regulators know how to build airplanes, yet they exercise undisputed authority over aviation safety. Governments also regulate drugs and vaccines, although few, if any, policymakers are biotechnology experts. If only experts had the power to regulate, each sector would regulate itself.

Likewise, while the challenge of AI governance is partly about technology, it is also about understanding how that technology affects fundamental rights and democracy. That is hardly an area in which technology companies can claim expertise. Consider a company like Meta (Facebook). Its track record on content moderation and data privacy suggests it is one of the least qualified entities in the world to protect democracy or fundamental rights, as is also the case with most major technology companies. Given what is at stake, it is governments, not developers, that must take the lead in AI governance.

That does not mean that governments will always get regulation right, nor that regulation will not force companies to divert resources from research and development towards compliance with standards. However, applied correctly, regulation can encourage companies to invest in more ethical and less error-prone applications, thereby guiding the industry towards more robust AI systems. This would increase consumer confidence in the technology and expand (rather than reduce) market opportunities for AI companies.

Governments have every incentive not to give up the benefits associated with AI. They urgently need new sources of economic growth and innovations that help them achieve better outcomes at lower costs (such as improved education and healthcare). In reality, they will most likely do too little for fear of squandering a strategic advantage and losing potential benefits.

The key to regulating any multifaceted and rapidly evolving technology is to work closely with AI developers to ensure that potential benefits are preserved and that regulators remain agile. However, it is one thing to consult closely with technology companies and quite another to hand over governance to the private sector.

Some analysts are not especially worried that governments fail to understand AI, or that they will regulate it badly, because they doubt that government action matters much. The technodeterminist camp argues that governments ultimately have only limited ability to regulate technology companies. Since the real power lies in Silicon Valley and other technology centers where AI is developed, it makes no sense for governments to engage in a fight they will lose. High-level meetings and summits are destined to be sideshows that ultimately do nothing more than allow governments to pretend they are still in charge.

Some analysts even maintain, not unconvincingly, that technology companies are "new rulers" that "exercise a form of sovereignty," ushering in a world that will be neither unipolar, bipolar nor multipolar, but "technopolar." In fact, large technology companies wield economic and political influence greater than that of most states. The technology sector also has almost unlimited resources with which to lobby to shape regulations and defend itself in legal battles against governments.

However, all this does not mean that governments lack power in this area. The State remains the fundamental unit around which societies are built. As political scientist Stephen M. Walt recently said: “What does one expect to still be around a hundred years from now? Facebook or France?” Despite all the influence amassed by technology companies, governments still have the ultimate authority to exercise coercive force.

That authority can be deployed (and often has been) to change the way businesses operate. Terms of use, community guidelines, and any other rules written by Big Tech are still subject to laws written by governments, which have the authority to enforce them. Companies cannot separate themselves from governments. Although they may try to resist and shape government regulations, they must ultimately obey them. They cannot push through mergers over the objections of antitrust authorities, refuse to pay digital taxes that governments decree, or offer digital services that violate the laws of a jurisdiction. If governments ban certain AI systems or applications, technology companies will have no choice but to comply with the ban or stay out of that market.

This is not a mere hypothesis. Earlier this year, Sam Altman of OpenAI (the developer of ChatGPT) warned that his company might not offer its products in the EU because of regulatory restrictions. A few days later, however, he backed down. OpenAI's sovereignty amounts to the freedom not to do business in the EU or any other jurisdiction whose regulations it opposes. It is free to exercise that option, but it is a costly one.

The question, therefore, is not whether governments can govern the digital economy, but whether they have the political will to do so. Since the commercialization of the Internet in the 1990s, the US government has chosen to delegate important governance functions to the private sector. That techno-libertarian approach is famously manifested in Section 230 of the Communications Decency Act of 1996, which exempts online platforms from liability for third-party content they host. Still, even within that framework, the US government is not powerless. Although it gave platform companies free rein with Section 230, it retains the authority to repeal or modify the law.

There may not have been much political will to do so in the past, but momentum for regulation is growing as confidence in the tech sector declines. In recent years, US lawmakers have proposed bills not only to rewrite Section 230, but also to revive antitrust laws and establish a federal privacy law. And some lawmakers are now determined to regulate AI. They are holding hearings and already proposing legislation to address recent advances in generative AI algorithms and large language models.

Now, although Democratic and Republican members of Congress increasingly agree that technology companies have become too powerful and must be regulated, they are deeply divided over how to do so. Some worry that AI regulation could undermine American technological progress and innovation at a time of intensifying competition between the US and China. And of course, companies continue to lobby aggressively and effectively, suggesting that even a bipartisan anti-tech crusade may change little in the end. As great as the discontent with technology companies is, the political dysfunction in the US Congress could be greater.

Again, that doesn't mean governments aren't in charge. In the absence of legislative action from Congress, the White House recently issued an executive order on the safe and trustworthy use of AI. That indicates that the Biden administration is willing to move the United States toward greater regulation, even without full support from Congress.

The EU, for its part, is not hampered by the same political dysfunction, and its recent legislative record is impressive. After adopting the General Data Protection Regulation (GDPR) in 2016, it went on to regulate online platforms with its landmark laws of 2022: the Digital Services Act and the Digital Markets Act, which establish clear rules on content moderation and market competition, respectively. And an ambitious Artificial Intelligence Act is expected this year.

However, despite the EU's legislative success, the implementation of its digital regulations has often failed to achieve the measures' stated objectives. The implementation of the GDPR, in particular, has attracted much criticism; and all the hefty antitrust fines imposed by the EU on Google have done little to reduce its dominant position. Such failures have led some to argue that tech companies are already too big to regulate, and that AI will further entrench their market power and further strip the EU of its ability to enforce its laws.

The Chinese government, of course, does not face that problem. Without needing to adjust to a democratic process, it was able to take drastic and sudden measures against the country's technology sector starting in 2020, to which the companies promptly capitulated. That relative success in holding tech companies accountable offers a stark contrast to the experience of European and US regulators. In both jurisdictions, regulators must fight long legal battles against companies that are dedicated to challenging, rather than complying with, any regulatory action taken.

The same pattern could repeat itself with AI regulation. The US Congress is likely to remain deadlocked, with heated debates but no real action. It is also unclear what will happen to the executive order after the next US presidential election, as a new president could revoke it; and the EU will legislate, but continued uncertainty about the effectiveness of its regulation could lead to a result similar to the American one. In that case, technology companies, and not democratically elected governments, will be free to shape the AI revolution as they see fit.

These scenarios raise a disturbing possibility: that only authoritarian regimes are capable of effectively governing AI. To refute that claim, the United States, the EU and other like-minded governments will have to demonstrate that democratic governance of AI is feasible and effective. They will have to assert their role as the ultimate rule-makers.

The British summit can be seen as a positive step towards international collaboration on AI governance. Even so, it is clear that truly global standards will not be reached soon. The disagreements are still too deep for countries (especially the so-called techno-democracies and techno-autocracies) to act in unison. Perhaps the summit's most valuable contribution, however, was to signal to technology companies that they remain subject to governments, and not the other way around.

While working closely with companies to foster innovation in AI and maximize its benefits, democratic governments must also ensure the protection of citizens, values and institutions. Without this kind of double commitment, the AI revolution is much more likely to live up to its dangers, not its promises.

Anu Bradford is Professor of Law and International Organization at Columbia University Law School. She is the author of 'Digital Empires: The Global Battle to Regulate Technology' (Oxford University Press, 2023).