2024, year of artificial intelligence

Oliver Thansan
Friday, 29 December 2023, 03:25

2024 will be the year of artificial intelligence (AI), above all after 2023 confirmed that the technology is entering a critical phase driven by extraordinary advances in its development. It is a fascinating moment, but one that exposes us to risks and uncertainties and demands that we establish a minimum of governance as soon as possible, so that we can face the future with relative confidence; guaranteeing it fully will be impossible.

The reason is that two factors bear down on AI and prevent that confidence from being attainable. On the one hand, the geopolitical contest between the US and China over expanding their AI capabilities. This rivalry escalates as both powers internalize that the technological and planetary hegemony they are fighting for will belong to whoever leads AI research: having more competitive companies, more lethal weapons and governments more effective at social control will depend on it. On the other hand, the utopian gene that beats in AI's synthetic DNA. This factor is more decisive than we assume, since it has operated since the field's birth seventy years ago and makes AI more than an enabling technology. It is a finalist technology that wants to turn something into someone, and to that end it endows that something with an ever more powerful statistical intelligence that aspires to achieve mental states similar to those of its creator.

The sum of these two factors produces an increasingly disturbing picture, as the year now ending confirms. Prominent initiatives have proliferated that insist on the existence of ever more serious risks in AI research. The surprising thing is that the risks are never specified in detail; only their possible sources are pointed out. Such is the case of the manifesto signed by more than a thousand AI scientists on March 29. It states that “advanced AI can represent a profound change in the history of life on Earth and should be carefully planned and managed.” This is not happening, owing to the aggressive competition among companies and countries, to the point that it resembles an “out-of-control race” that may give rise to “increasingly powerful digital minds that no one, not even their creators, can reliably understand, predict or control.”

Some will think this exaggerated, especially since the text concluded by requesting a moratorium on research that would allow us to take stock of where it stood and where it was heading. Yet if we analyze what the technology corporations leading AI development in the US themselves say, we see the concern they too express about their research. They do so without the dystopian overtones of the scientists, but with evident worry, as they insist in the declaration they signed on July 21: at the very least, self-regulation that puts limits on AI's development is urgent, among other things because “innovation cannot come at the expense of the rights and security of Americans.” Hence the signatories (Google, Meta, Microsoft, OpenAI, Amazon, Anthropic and Inflection AI) commit to being transparent when testing the security of their AI systems, to making the results of those control tests public, and to avoiding biases that produce discrimination and violations of privacy and intimacy.

These theses are also in line with the US presidential executive order of October 31 of this year. Besides turning the tenant of the White House into an “AI commander in chief” who supervises and coordinates public and private research in the field, it insists that AI must be used responsibly if it is to become a promise for the whole of humanity. Otherwise, it could lead us to “exacerbate social harms such as fraud, discrimination, prejudice and misinformation, marginalize and disempower workers, stifle competition and pose risks to national security.”

Even more forceful are the explanatory statements that accompany both the Bletchley Declaration of November 2 and the text of the common proposal on AI regulation agreed by the European Commission, Council and Parliament on December 7. All of them insist on the imperative need for governance that neutralizes the extraordinary risks that uncontrolled AI research can entail, risks described as potentially “catastrophic” for humanity.

I will not analyze the text of the final proposal for the European regulation in detail; that remains for the next installment. I will note, however, that Europe has let itself be carried away by the anxiety of competing with China and the US, developing its own AI guided by a burst of geopolitical realism and forgetting that if Europe wants to be a global player, it must be one on behalf of all those excluded by the competitive logic that has led the Chinese and the Americans to discard ethics as an obstacle to research.

All in all, what is worrying about these initiatives is that they approach the design of governance for safe AI as if we were dealing with just another enabling technology, when it is not. In doing so, we fall victim to a scientistic mentality incapable of appreciating that we face a bastion of utopian research that wants to imitate the human brain in order to perfect it artificially, regardless of the consequences. Here lies the problem: in the Faustian mentality that accompanies an AI for which no governance will be possible if it is designed to be a will to power in itself. A nihilistic AI for a world dominated by nihilism.