Europe: a failed regulation on AI

Europe has given up designing humanistic artificial intelligence (AI).

Oliver Thansan
26 January 2024 Friday 03:24

Europe has given up designing humanistic artificial intelligence (AI). This is the main conclusion to be drawn from the drafting of the AI Regulation which, pending final approval by the European Parliament, will formalize the agreement reached between its representatives, the European Commission and the Council. This outcome does not negate the progress the regulation represents in an area where no legislation previously existed. It introduces a unique regulatory precedent at the global level, and even opens the possibility of being replicated abroad, as happened with European standards on privacy and data protection. Without a doubt, this is excellent news, and it deserves praise given the geopolitical context of visceral rivalry between the United States and China for global hegemony. That rivalry raises the geopolitical temperature of the planet every day, and AI is a fundamental tool for handing one or the other the leadership both seek.

Applauding this regulatory novelty does not preclude disappointment at a lost opportunity to contribute to an objectively ethical AI: an AI that builds into its synthetic DNA an ethical purpose of service, and that subordinates its developments to a meaning that contributes to the moral well-being of humanity, without exclusions. An AI that is not what it is today in the exclusive hands of technologists: a will to power with capabilities for action whose ethical purpose we do not know.

The regulation shows that Europe has renounced this. Perhaps it was never quite on its agenda, but it was expected to be, because more than ever we need ethically purposeful AI: an AI that collaborates with humans to make them better, not merely more capable of doing more things more efficiently. More things for what? And more efficiently to what end? By not asking, we have given up on truly human-centric AI.

An AI that, besides not being the measure of all things, does not alter the cultural foundations that have sustained what Hannah Arendt defined as the human condition: the experience of a life grounded in the biographical, cognitive and emotional limits that belong to our nature as a species and that we all share.

Here lies the disappointment with a regulation that embraces the nihilistic design underlying the research programmes of the Chinese and American proposals. It is true that it does not replicate the cultural bias that accompanies both, which aims to increase AI's capabilities without limit. It is not like China, which seeks the Confucian maximization of control over human beings as if they were merely subjects of a State. Nor is it like the US, which maximizes, in neoliberal fashion, the manipulation of human desire by conceiving people as selfish consumers of content.

However, it definitively adopts the mistaken belief that AI can be given a safety design that avoids and controls risks, as if it were just another enabling technology, when it is not. I will not expand on this here because it is the crux of my next essay, Artificial Civilization: Wisdom or Substitution as AI's Dilemma, to be published in a few weeks by Arpa. I will avoid spoilers, though I will discuss this philosophical question in depth in coming installments. It is a capital issue because it is now evident in the uncontrollable risks of AI, something that began with the utopian, deterministic drive that led Turing to imitate the human brain in order to replicate it without imperfections.

The AI that Europe conceives and recreates through its regulation is far from what was expected, essentially because the regulated model remains faithful to the original perfectionist endeavor at the origin of AI. It is a proposal that, over time, as our knowledge of the human brain it imitates has grown, has become the source of the problems we now face in 2024: among others, because it encourages the massive substitution of professionals' intellectual work.

I know this critical reflection will be controversial because it is unprecedented. Yet I believe it is the most important and interesting intellectual battle to be fought going forward: not to ban AI or hinder it, but to guide it better, toward ends that also help to improve humanity without abandoning the empirical foundations that give moral support to human beings.

Therefore, the regulation is flawed, because it could have set the regulatory standard we need for humanistic AI: a normative reference for that something which is AI and which we want to turn into someone, not only becoming aware of itself but acquiring an awareness that would give it full ethical autonomy. Therefore, it is a failure, because at this point in the century, and under the geopolitical conditions we are living through, only Europe could do something like this, and it has not. Hence the disappointment and concern. The regulation has not reversed its original regulatory aspiration, but it has given way to geopolitical realism. It admits that the foundational models of generative systems can be developed while it looks the other way: a bet that, like the Chinese and American ones, imitates the conscious capacity of the human being and all his creative power.

In doing so, we have orphaned humanity of a universalizable model with an objectively ethical purpose in which we could all recognize ourselves. Now AI will be able to maximize its will to power and carry it to its conclusion: a will indebted to Hobbes, who, in legitimizing scientific modernity, proclaimed that knowledge was power rather than the basis of Aristotelian prudence. Then, the European north prevailed over the south. Now, perhaps, we will see the definitive consequences.