Jordi Torres: “AI still has no conscience or common sense”

Oliver Thansan
25 December 2023 Monday 09:26

Siri, Alexa, Google, Twitter and Amazon, and now also ChatGPT and Gemini: these are the names of the artificial intelligence in our phones and homes. For more than a decade we have lived surrounded by machines with 'brains' that silently assist us in our daily lives, but recently we have begun to hear fears about the awakening of these systems. Can machines become conscious? Can they displace humans? In what areas do they help improve humanity? What are the main risks and challenges?

Jordi Torres, professor at the Polytechnic University of Catalonia (UPC), member of the founding team of the Barcelona Supercomputing Center (BSC) and a renowned researcher in its Department of Computer Science, has published the book Artificial Intelligence Explained to Humans (Plataforma, 2023) to answer these and other questions.

Let's start at the beginning: what is artificial intelligence?

As a concept it has existed since the middle of the last century: Alan Turing laid the foundations of AI in the mid-20th century, although what is considered the first algorithm was written by Ada Lovelace in the mid-19th century, which is why she is regarded as the first programmer. After decades of research and development, in 1997 an AI managed to beat the best chess player in the world at the time. AI rests on three pillars: data, algorithms and machines. It is, at bottom, the same computing we have always had.

You explain in your book that human-machine interaction has been with us for a long time. Why is there so much talk about AI now?

In November 2022 the company OpenAI decided to release ChatGPT for free, the now well-known conversational bot that emulates the logic of human thought. The fascination it aroused was such that in just two months it already had 100 million users. But the technology behind ChatGPT existed before; what did not exist were machines powerful enough to develop it, algorithms with billions of parameters, or the huge amounts of data needed to train it.

Sam Altman's dismissal and subsequent reinstatement at OpenAI has been attributed to the possibility that the company may have developed technology that could threaten humanity. Can AI have consciousness?

Absolutely not. Current AI has no consciousness, nor even common sense, which is what allows humans to make decisions in uncertain environments. Nor does it understand cause and effect, which is basic to reasoning. That is, it does not know whether the rooster crows because the sun rises or the sun rises because the rooster crows. It knows what is happening, but not why it is happening.

How does the machine learn?

Current AIs can only solve specific problems, such as writing text or creating an image. This is what the literature calls narrow or weak AI. But it is no small thing: for these specific problems, these AIs are as good as or better than a human. It is weak learning based on statistics. ChatGPT knows that after the word 'Barack' comes 'Obama' because that is what has happened most of the time it has seen that word. It is similar to WhatsApp's autocorrect.
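The next-word statistics Torres describes can be sketched as a toy bigram model. This is a deliberate simplification for illustration only: real systems like ChatGPT use neural networks with billions of parameters, not raw word counts, and the corpus and function names here are invented.

```python
from collections import Counter, defaultdict

# Tiny invented corpus: the model only "knows" what it has seen.
corpus = "Barack Obama was president . Barack Obama spoke . the sun rises".split()

# Count which word follows which (bigram statistics).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word seen most often after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("Barack"))  # -> Obama, the most frequent continuation
```

Because 'Obama' follows 'Barack' every time in the corpus, it is the prediction; the model has no idea who Barack Obama is, only that those words co-occur. That is the sense in which this learning is statistical rather than understanding.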

The prestigious mathematician Marcus du Sautoy recently claimed that AI could one day be conscious, and that if this were to happen it should be treated as a species and have rights.

The dream of creating a human-level artificial general intelligence (AGI) is the Holy Grail of scientists, equivalent to asking about the origin of life. 'It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers.' The phrase is from Alan Turing himself, who said it in 1951. But, as Stuart Russell, one of the most renowned researchers in the field, says, we will need several Einsteins for this to happen.

Let's focus, then, on today's AI.

AI has been with us for a long time and we interact with it daily through our mobile phones: when we look up information on social networks, search for entertainment on streaming platforms or use GPS. AI is already deciding for us at many moments. We have to be aware of this and ask ourselves whether or not we want to heed it.

In what areas does AI already represent palpable improvements for citizens?

In medicine, for example, where it is already used to design new drugs that help cure diseases. It also improves diagnostic capabilities.

What are the biggest dangers?

One of the most worrying aspects is the lack of veracity: generative AI is capable of presenting false information as if it were true, either because its training data is out of date or because information was lost in the encoding process. Therefore, any work performed by a machine has to be supervised by a human. Other dangers are biometric surveillance, which should be limited, and autonomous weapons, which should be banned outright.

Until a few years ago we did not believe that AI would affect creative jobs any time soon. It seemed that "only" routine jobs were in check. How will the labor market adapt to these changes?

The wave of generative AI has only just begun and will soon be present in all aspects of our lives. An unprecedented change in job profiles is expected that will affect all sectors, including complex cognitive tasks. AI has the potential to turn the current job market on its head, but we will get through it. The important thing is to understand it and decide where we want to go.

Europe has just reached an agreement to regulate AI. In the United States, they plan to do something similar. Are you in favor of regulation?

Current AIs are considered black boxes because we do not know how they reach some of their decisions. Ethics must be incorporated into the technology. Companies would have to be more transparent in this regard, and openly explain where they extract their data from. Anything generated by a machine would have to be labeled as such and supervised by a human. Europe, in addition to regulating, should develop its own technology and have its own machines so as not to be left behind.