“I am more concerned about human stupidity than artificial intelligence”

Oliver Thansan
09 September 2023 Saturday 10:23
When La Vanguardia first interviewed Ramón López de Mántaras, around the time the Deep Blue computer first defeated the then world chess champion, Garry Kasparov, he spoke with enthusiasm and hope about the future of artificial intelligence. In his new book, 100 things you need to know about artificial intelligence (Ed. Cossetània), published this week, he conveys concern.

Why are you more pessimistic now?

The situation was very different then. There were far fewer of us working in the field, and there was a great deal to do. Our attitude was hopeful and excited. That enthusiasm has not been lost, but we now view some of the social and geopolitical impacts of artificial intelligence with concern.

Let's talk about what worries citizens the most. Will a large part of the population be left without work?

Nobody has the answer. People often talk about jobs, and I think that is a mistake. We have to talk about tasks within a job. Some tasks will be replaced by machines; others will not. For example, detecting patterns on an X-ray is something a machine can do well, but a radiologist does much more than detect patterns.

Does artificial intelligence threaten the survival of humanity?

For me this is a completely mistaken opinion. It assumes there will be a superintelligence that surpasses us in everything. Why would it decide to extinguish humanity or enslave us? Artificial intelligence does not have objectives of its own; we are the ones who give it objectives. It is true that we can lead it to make counterproductive decisions. But of all the possible causes of humanity's extinction, for me artificial intelligence is near the bottom of the list. I am much more concerned about climate change and armed conflict.

So what are the risks of AI that worry you?

We already have enough risks on the table. For example, the impact of large language models [like ChatGPT] that generate a multitude of falsehoods. Images, videos and texts can be generated maliciously to create conflict, stir social tension or influence democratic elections. For me this is a very serious risk, and it has no technical solution.

Any other major risks?

That the ability to do all this is in the hands of a few large technology companies, five or six of them, which have more power than many states and which are setting the course for where artificial intelligence is going. What is also contradictory is that some of them present themselves as possible saviors from these very problems.

Who are you thinking of?

Of Elon Musk. Or Sam Altman of OpenAI [the company behind ChatGPT], who says: “be careful, all this is very dangerous, but not me, I'm a good person, trust me so that I can avert these dangers”. This is very perverse, because what interests him is his company's profit. What he is doing is pushing for regulation in order to clip the wings of open source. Open-source alternatives to the big language models, developed collaboratively, have already emerged, and I think they are afraid of losing market share.

What worries you more, artificial intelligence or human stupidity?

Much more human stupidity. The problem is not Frankenstein's monster, the problem is Dr. Frankenstein who creates the monster.

Does this mean that AI can be our Frankenstein, the humanized monster that escapes the control of its creators?

In certain cases, you could say so. But if it escapes control, it will not be because artificial intelligence has objectives, desires and beliefs of its own. It will be because there are people who cause it, who have an interest in making it happen. A machine has no moral agency; people are ultimately responsible for everything.

Do you take any particular precautions when interacting with artificial intelligence as a citizen?

I try to follow what I call a digital diet. I think we are digitizing badly, expelling many people from the system, not only for reasons of age but also for economic reasons. And the public administration itself is doing it very badly, offering practically no non-digital alternatives for many procedures.

What does your digital diet consist of?

Spending many more hours in the analog world: talking face to face with people, few social networks, using machines with common sense... If I have to look up information and Wikipedia can be useful, I use it. But I am not going to look for everything digitally. I also try to protect my privacy as much as possible: disabling cookies, denying permissions that track what I am doing and build a profile of me...

Do you prefer to browse in incognito mode?

Yes, absolutely. If I am just looking at Wikipedia, it makes no difference to me. But I often use incognito mode.

Do you cover the camera to prevent facial recognition?

Generally, yes. In a video conference meeting I usually don't cover it, because there is a certain level of trust. Otherwise, I do cover it.

Any particular precautions when shopping?

I make very few purchases online. Partly for privacy, because what you buy is important information about your preferences, but also because it is part of the digital diet. I prefer going to a bookstore to buy a book and interacting with people to buying it online. That element of socialization is important to me.

Do you enable geolocation?

Generally not, for privacy. Obviously, if I think it will help me reach my destination sooner, or to see where there is a parking lot or a gas station, I enable location. But only in very specific cases.

What solutions do you propose to protect the positive applications of AI and limit the negative ones?

Wherever possible, rely on education and information. Future engineers must at least be able to ask themselves: “this is technologically feasible, but should I do it?”

Don't you think Musk and Zuckerberg have already considered this?

I don't believe it. They follow another maxim: move fast, even if you break things. The problem is that the things you break can be lives.

Any other solution if education is not enough?

The other is well-made regulation that comes from the bottom up, in which all interested parties are consulted, not regulation that comes from half a dozen large technology platforms. The model the European Union is creating with the AI Act is on the right track, because a huge number of people who may be affected by applications of artificial intelligence are being consulted. I do not believe this regulation will delay technological progress in Europe, as some say. On the contrary, it can be added value, because you can say "my product is more reliable". It is like buying a medicine that has passed all the controls. The fact that an artificial intelligence product is regulated and certified can be an advantage in the market.