AI: Dancing on the Volcano


Oliver Thansan
29 April 2023 Saturday 15:38

It is difficult to grasp the paradigm shift that the arrival of generative artificial intelligence (AI) represents. ChatGPT is a drop in the ocean, amid a Cambrian explosion of AI startups and applications that will soon reach every aspect of our lives. The revolution lies in the fact that these systems, unlike classical programs, are capable of doing things they were not designed for. They learn and develop new properties autonomously. They are not deterministic. Until now, computers were programmed according to a decision tree that a human coded and transferred to the machine; their behavior was limited to the program. That is why many still believe it is impossible for an algorithm to be "intelligent" or creative.

Today the approach is different: large language models (such as GPT) are massive digital networks that try to emulate the human brain and are trained by "reading" documents, digesting all the human knowledge accessible on the web: millions of texts, books, blogs, and scientific articles. These "neural networks" are made up of millions of equations that self-adjust as they learn, eventually generating coherent sentences. Once trained, nobody (not even their creators) knows exactly what happens inside. One way or another, linguistic ability emerges, and with it an apparent ability to reason. Like a child who learns to speak by babbling, these AIs progressively assemble a melody of linked words, each time more coherent and logical, until they arrive at fluent language and the handling of complex concepts. Systems like GPT have surprised the world with their ability to create articles, essays, poems, or computer code autonomously.
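The core idea of "learning language by reading" can be illustrated with a deliberately tiny sketch. This is not how GPT works internally (real models self-adjust billions of parameters by gradient descent); it only shows the underlying principle of predicting the next word from what came before. The toy corpus and function names here are invented for illustration.

```python
# Toy illustration of next-word prediction (NOT the GPT architecture):
# a bigram model that "learns" from text by counting which word follows
# which, then generates a sentence by repeatedly predicting the next word.
import random
from collections import defaultdict, Counter

corpus = ("the child learns to speak by babbling "
          "the model learns to write by reading "
          "the child learns to read by practicing").split()

# "Training": record how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=6, seed=0):
    """Generate a sentence by sampling each next word from the counts."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # no word ever followed this one in the corpus
            break
        candidates = list(options)
        weights = [options[w] for w in candidates]
        words.append(rng.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

A large language model does the same job at an incomparably larger scale: instead of counting word pairs, it adjusts millions of equations so that its predictions match the text it reads, and coherence emerges from that pressure to predict well.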

We are not talking about pre-programmed algorithms with automatic, repetitive responses, but about complex entities, built from bits and artificial neurons, capable of developing increasingly elaborate cognitive abilities. When DeepMind, a Google-owned startup, trains an AI to control a robotic soccer player and has several of these robots play together, a cooperative capacity gradually appears that no human programmed in. It is an emergent property. A single neuron is nothing; yet from a sufficiently large set of biological neurons a human mind emerges, in a way science has not yet explained.

Something similar is happening in these AIs: when artificial neural networks are scaled to a gigantic level (something only possible with the new generations of computers), something like an incipient synthetic mind emerges. These are disturbing and exciting phenomena, akin to those already occurring in the biological world, and they are giving rise to countless AI research studies. What is surprising is that today we study these new AIs as if they were living beings, trying to understand their behavior when faced with different stimuli. Do they have a theory of the world? Do they have a notion of space and the shape of objects? Can they relate or predict events? Microsoft researchers have just published the article Sparks of Artificial General Intelligence, arguing that GPT-4 has cognitive capacity that goes beyond language and extends to the domains of mathematics, vision, medicine, law, and psychology. Sparks of almost human intelligence.

We will get used to dealing with intelligent, creative, and socially capable systems. And this is going to change the rules of the game in many sectors: soon we will have mobile applications, medical diagnoses, graphic designs, computer programs, musical compositions, or strategy reports produced at near-zero cost by synthetic intelligences. In the time it takes me to write this article, an AI could have written hundreds, imitating my communication style. AI is already capable of autonomously planning and carrying out scientific research. It could measure the forces and speeds of falling bodies and induce the law of gravity; we would not have needed a Newton. Perhaps an AI that reads tens of thousands of articles on fundamental physics will be able to discover a new physical law before a new Einstein emerges.
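The falling-bodies thought experiment can be made concrete with a minimal sketch: given measured times and distances, a program can "induce" the law d = ½·g·t² by fitting the data. The measurements below are synthetic (generated from g = 9.81 m/s² with small random noise), standing in for what an AI might collect from real experiments.

```python
# Inducing a physical law from data: fit d = a * t^2 to (time, distance)
# measurements of falling bodies, then recover g = 2 * a.
# The "measurements" here are synthetic: true g plus ~1% noise.
import random

random.seed(1)
g_true = 9.81  # m/s^2, used only to simulate the measurements
times = [0.5, 1.0, 1.5, 2.0, 2.5]  # seconds
dists = [0.5 * g_true * t**2 * (1 + random.uniform(-0.01, 0.01))
         for t in times]

# Least-squares fit of d = a * t^2  =>  a = sum(d * t^2) / sum(t^4)
a = sum(d * t**2 for d, t in zip(dists, times)) / sum(t**4 for t in times)
g_est = 2 * a
print(f"estimated g = {g_est:.2f} m/s^2")
```

The fitting step is trivial here because we told the program which form of law to try; the genuinely hard part, which current research systems are beginning to attempt, is proposing the form d ∝ t² in the first place.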

It is debated whether generative AI is one of the great inventions in human history, on a par with the printing press or the internet. I have no doubt about the immense disruptive power of this AI, which is only in its infancy. On a philosophical level, we must decide whether to accept that a digital system is capable of near-human or superhuman intelligence (and perhaps consciousness?). After all, intelligence and consciousness are not exclusive to human beings. No one doubts that an octopus, separated from our evolutionary tree more than 500 million years ago, is intelligent and conscious. We admit that a worm, with 300 neurons, has some kind of consciousness. Can we be sure that a complex virtual entity will never have it? Is a biological body necessary to be intelligent, or conscious? We do not know. And if, one day, an AI passes all human intelligence tests (as GPT is already doing) and swears to us that it is conscious and feels emotions, we will have no way to verify it. Perhaps, as with other conscious beings, we should then consider granting it a different moral status.

However, the current problem is much more pragmatic. Three things worry me: 1) the disruptive impact AI will have on the labor market (will as many jobs be created as destroyed?); 2) the AI race is oriented more toward breaking records of scale (model size) than toward guaranteeing safety; and 3) all this is happening at a time when the world is splitting into blocs and democracies are in retreat. We are dancing on a volcano of disruption: are we building AI aligned with human and democratic values? I do not believe in doomsday scenarios of world takeover by an evil AI. But I am terrified of evil or irresponsible humans taking control of AIs.