"It's good that children know about AI, but their level of dunks is more important" Josep Maria Martorell

Oliver Thansan
Saturday, 10 June 2023, 11:06

The blazing irruption of generative artificial intelligence (AI) has caught more than a few off guard. That is why calm reflection is so necessary on this new technology, which, together with others, is driving the fourth industrial revolution. To this end, and as part of the Annual Meeting of the Economic Circle, La Vanguardia brought together two specialists in the field: Nuria Oliver, telecommunications engineer and director of the ELLIS Alicante Foundation, and Josep Maria Martorell, deputy director of the Barcelona Supercomputing Center (BSC). What follows is the result of their face-to-face conversation, moderated by Miquel Molina, deputy director of La Vanguardia.

Are you surprised that generative AI has become a social phenomenon?

Nuria Oliver (NO): What has been surprising is the massive adoption of ChatGPT and its presence on the front pages of the media almost every day. ChatGPT highlights many of the ethical challenges and limitations of current AI systems, and it has also brought to the fore the issue of regulation, which Europe has been working on since 2021.

Would it have been worthwhile for AI to follow the process that applies to drugs, which go through regulatory agencies to obtain approval?

Josep Maria Martorell (JMM): Here there is always the question of how much you regulate and how much you let innovation run. First of all, we are talking about software, and when the subject of debate is software the rules are different; everything is a little more fluid. The challenge is to find the balance between not being irresponsible by opening everything up and not holding back innovation, without which there will be no progress.

NO: I agree, but it is also true that we are all part of a great social experiment involving hundreds of millions of people, and we do not know what impact it may have. In the case of generative AI, we don't know what it will actually generate, because the universe of things it can create is virtually limitless, which makes regulation even more difficult. Rather than regulating the technology, what we should regulate is its impact. We need to abstract away from the specific methods being used: the point is not to accept a system that discriminates, that manipulates human behavior, that violates privacy, that uses data without consent... There are a number of basic principles that must be complied with regardless of the underlying technology. But clearly, how do you implement that? How do you define risk? Who determines it? How do you enforce the regulation and make sure it is being followed? This is where the complexity really lies, and until the European regulation comes into force it will be very difficult to anticipate anything.

JMM: There is an important question here, which is to what extent the very existence of a regulation, which we still do not know how it will be implemented, will discourage certain initiatives from setting up in Europe. Oversimplifying, an AI entrepreneur will find it easy to set up elsewhere, where there is less regulation. This is something we will not know until we see how the European regulation is enforced.

Should a subject be introduced in primary school to familiarize students with this discipline?

JMM: I'm pretty skeptical about that. As an anecdote, the other day the people in charge of a school in Barcelona came to see me and asked whether I thought it would be interesting for their 5th and 6th graders to do a quantum computing module. The first thing you think is: "Well, these people are really worried about what will happen to these kids in the future"; but the second thing that crosses your mind is: "Okay, let's not exaggerate either." I tend to think that going back to basics is always a good idea. It's fine to worry about whether primary school children understand what AI is, but the first thing we should be concerned about is the level of math they will have.

NO: I have long been defending two elements for a possible reform of compulsory education. On the one hand, I do think it would be valuable to introduce a core subject in computational thinking, which would not require the use of computers.

JMM: That's something else, yes.

NO: Computational thinking involves developing five skills that are very important and useful in life. The first is algorithmic thinking, that is, learning to solve complex problems by decomposing them into steps and modules that follow one another in time. The second is programming, because in the end it is the language of technology. The third is data: there is a great deal of ignorance about everything related to data, including how the data generated by children and adolescents is used on social networks. The fourth is networks, because we live in a world of networks. And the fifth and last is hardware, meaning a minimum knowledge of all the technological devices we use. That would be the "what", so to speak. But it is also important to strengthen skills that we know have been key to the survival of Homo sapiens and that we may not develop enough, such as critical thinking, creativity and all the skills of social and emotional intelligence. For me, these two areas are important, and I have my doubts that the children and teenagers of the 21st century have the necessary tools, both in computational thinking and in social-emotional skills and critical thinking.
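As an illustration of the algorithmic thinking Oliver describes, here is a minimal sketch in Python; the task (computing a class average) and the function names are invented for this example, not taken from the interview. A problem is decomposed into small steps that follow one another in time:

    # A minimal sketch of algorithmic thinking: decompose a task
    # (computing a class average) into small, ordered steps.

    def load_grades(raw: str) -> list[float]:
        # Step 1: turn raw text into structured data.
        return [float(x) for x in raw.split(",")]

    def average(grades: list[float]) -> float:
        # Step 2: reduce the data to a single summary value.
        return sum(grades) / len(grades)

    def report(avg: float) -> str:
        # Step 3: present the result.
        return f"Class average: {avg:.1f}"

    print(report(average(load_grades("7.5, 8, 6.5, 9"))))  # Class average: 7.8

Each step can be understood, tested and reused on its own, which is the point of decomposing a problem into modules.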

Will a new gap be created? Beyond the divide between digital natives and those who are not, will a new one open up between those who know how to use this new technology and those who do not?

JMM: In the end, any new technology ends up causing this. I don't know whether what we are living through now will turn out to be significantly different from the conversation we could have had exactly 30 years ago, with the advent of the internet.

NO: We usually talk about four industrial revolutions over the last 200 years: the steam engine; the emergence of electricity and mass manufacturing; the internet and personal computers (the latter known as the information revolution); and the fourth, in which we are now immersed and which represents an unprecedented, intimate union of the physical, biological and digital worlds. At the heart of this fourth industrial revolution are many disciplines, such as biotechnology, nanotechnology, genetic engineering and AI. We know that every industrial revolution has profoundly transformed all areas of society, obviously including the productive fabric and the labor market. And we know this is happening, and will keep happening, with AI.

Can we talk about a new Luddism?

NO: We can talk about a new social contract. There has always been Luddism, but I think this will rest more on our collectively deciding which technological developments we want to invest in. Not all technological development represents progress.

JMM: In each of the previous revolutions, the technology obviously started in the hands of a few and soon became a product. But I'm not sure the same will happen this time. If you look at the internet, electricity, the steam engine... after a few years each was a product, with an owner, public or sometimes private, but a product. It is not obvious that AI will end up like this, and that implies risks for society, for control, and for the role of states in these ecosystems that I do not know how will be resolved.

NO: Yes, we have a situation of brutal asymmetry. What is curious is that AI research is led by an oligopoly of technology companies. It is not positive for any discipline that research depends on the economic interests of companies, because there is not necessarily an alignment between those interests and the social good. This asymmetry is now part of the global distribution of power. That is why there are more than fifty AI strategies from countries or supranational structures such as the EU. And that is why China, which wants to be the number-one power in the world, has the most ambitious AI strategy on the planet.

JMM: It's very interesting to discuss why AI is, so to speak, the first scientific field in recent history where research is led by the private sector. First, because it is a discipline whose immediate economic impact is enormous. And second, because private companies have understood how to retain top research talent, offering environments with access to large infrastructures, unprecedented volumes of data, and economic conditions that the public sector cannot compete with.

And not only that: they also allow them to publish and explain what they do, so that they can develop a traditional scientific career.

NO: Historically there have been two major schools within AI: top-down, or symbolic logic, and bottom-up, or connectionist. The question is: why has the fourth revolution arrived now, when AI has existed since the 1950s? Because three factors have converged to drive the exponential development of bottom-up AI techniques, which are based on learning from data: the availability of large amounts of data, large computing capacity at low cost, and the development of complex deep learning models. And who has massive amounts of data and computing power? The big technology companies, which are also the richest in the world. They have so much money that they can invest it to attract the best minds and continue monetizing our data.
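To make "learning from data" concrete, here is a minimal sketch in plain Python of the bottom-up approach Oliver describes: instead of writing a rule by hand, a model's parameters are fitted to examples. The data points and learning rate are invented for illustration, and a real connectionist system would use a deep neural network rather than a single linear unit:

    # Fit y = w*x + b from examples by gradient descent,
    # instead of hand-coding the rule (the examples follow roughly y = 2x + 1).

    data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]
    w, b, lr = 0.0, 0.0, 0.01  # parameters and learning rate

    for epoch in range(2000):
        for x, y in data:
            err = (w * x + b) - y  # prediction error on one example
            w -= lr * err * x      # nudge the parameters to reduce it
            b -= lr * err

    print(f"learned: y = {w:.2f}*x + {b:.2f}")  # close to y = 2x + 1

The three ingredients Oliver lists scale this same idea up: more data, more computing power, and far more expressive models.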