"It's fine for children to know about AI, but their level of math matters more"


Oliver Thansan
10 June 2023 Saturday 10:23

The dazzling eruption of artificial intelligence (AI) has caught more than a few off guard. Hence the need for calm reflection on this new technology, which is, along with others, a protagonist of the fourth industrial revolution. To that end, within the framework of the Annual Meeting of the Cercle d'Economia, La Vanguardia brought together two specialists in the field: Nuria Oliver, a telecommunications engineer and director of the Ellis Alicante Foundation, and Josep Maria Martorell, deputy director of the Barcelona Supercomputing Center (BSC). What follows is the result of their face-to-face conversation, moderated by Miquel Molina, deputy director of La Vanguardia.

Have you been surprised that generative AI has become a social phenomenon?

Nuria Oliver (NO): What has been surprising is the massive adoption of ChatGPT and its presence on the front pages of the media almost every day. ChatGPT highlights many of the ethical challenges and limitations of current AI systems, and it has also brought to the fore the issue of regulation, which has been underway in Europe since 2021.

Would it have been worthwhile to put AI through the process applied to drugs, which go through regulatory agencies and obtain approval?

JMM: There is always the question of how much you regulate and how much you allow innovation. First of all, we are talking about software, and when the object of debate is software, the rules are different; it is something much more fluid. The challenge is to find the balance between not being irresponsible by opening everything up and not stopping innovation, without which there will be no progress.

NO: I agree, but it is also true that we are all part of a great social experiment involving hundreds of millions of people, without knowing what impact it may have. In the case of generative AI, we don't know what it will actually generate, because the universe of things it can create is virtually limitless, which makes regulation even more difficult. Rather than regulating the technology, what we should regulate is its impact. We should abstract away from the specific methods being used, but not accept a system that discriminates, that manipulates human behavior, that violates privacy, that uses data without consent... There is a series of basic principles that must be met regardless of the underlying technology. But of course, how do you implement that? How do you define risk? Who determines it? How do you implement the regulation and verify that it is being complied with? That is where the complexity really lies, and until the European regulation comes into force, it will be very difficult to anticipate anything.

JMM: There is an important question, which is to what extent the very existence of a regulation, when we are still not clear how it will be implemented, might discourage certain initiatives from setting up in Europe. How easy will it be, to overstate and oversimplify, for an AI entrepreneur to set up shop somewhere else with less regulation? This is what we won't know until we see how the European regulation is enforced.

Should a subject be introduced in primary school to teach this discipline to students?

JMM: On that I am quite skeptical. As an anecdote, the other day the heads of a school in Barcelona came to see me and asked whether I thought it would be worthwhile to give 5th and 6th graders a quantum computing module. The first thing you think is, "well, those in charge are very worried about what is going to happen to these kids in the future"; but the second thing that comes to mind is, "well, let's not exaggerate either." I tend to think that a return to basics is always a good idea. It's fine to want primary school children to understand what AI is, but the first thing that should worry us is the level of mathematics they are going to have.

NO: I have long advocated two elements regarding a possible reform of compulsory education. On the one hand, I do think it would be valuable to introduce a core subject in computational thinking, for which it is not even necessary to use computers.

JMM: That's something else, yes.

NO: Computational thinking involves developing five skills that are very important and useful in life. The first is algorithmic thinking, that is, learning to solve complex problems by breaking them down into steps and modules that follow one another over time. The second is programming, because in the end it is the language of technology. The third is data: there is great ignorance about everything related to data, including how the data that children and adolescents generate when they use the networks are being used. The fourth is networks, because we live in a world of networks. And the fifth and last is hardware, a minimum knowledge of all the technological devices we use. That would be, roughly, the "what." But it is also important to reinforce skills that we know have been key to the survival of Homo sapiens and that perhaps we are not developing enough, such as critical thinking, creativity and the skills of social and emotional intelligence. For me, these two areas are important, and I doubt that the children and adolescents of the 21st century have the necessary tools, both in computational thinking and in the more social, emotional and critical-thinking domains.
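The first skill Oliver describes, algorithmic thinking, can be made concrete with a small sketch: a complex task is broken into ordered steps, each handled by its own module. The scenario below (summarizing a week of temperature readings) is a hypothetical example chosen for illustration, not something from the interview.

```python
# Illustrative sketch of algorithmic decomposition: one task, three steps,
# each step a small module (function) that does exactly one thing.

def clean(readings):
    """Step 1: discard invalid (None) measurements."""
    return [r for r in readings if r is not None]

def average(values):
    """Step 2: compute the mean of the valid measurements."""
    return sum(values) / len(values)

def describe(mean):
    """Step 3: turn the number into a human-readable summary."""
    return f"Average temperature: {mean:.1f} °C"

def weekly_report(readings):
    """Compose the steps in sequence to solve the full problem."""
    return describe(average(clean(readings)))

print(weekly_report([20.0, None, 22.0, 21.0]))
```

The point is not the code itself but the habit it trains: each function can be reasoned about, tested and reused independently, which is exactly the decomposition skill the proposed subject would teach.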

Will a new gap be created? Beyond the one between digital natives and everyone else, will there be a new gap between those who know how to use this technology and those who do not?

JMM: In the end, any new technology ends up causing that. I don't know whether what we are experiencing now will turn out to be meaningfully different from the conversation we could have had exactly 30 years ago, with the advent of the internet.

NO: We normally speak of four industrial revolutions in the last 200 years: the first, the steam engine; the second, the advent of electricity and mass manufacturing; the third, the internet and personal computers, known as the information revolution; and the fourth and last, the one in which we are now immersed, which represents an unprecedented, intimate union of the physical, biological and digital worlds. At the heart of this fourth industrial revolution are many disciplines, such as biotechnology, nanotechnology, genetic engineering and AI. We know that every industrial revolution has profoundly transformed all areas of society, obviously including the productive fabric and the labor market. And so we know that the same is happening, and will happen, with AI.

Can we talk about a new Luddism?

NO: We can talk about a new social contract. Luddism has always existed, but I think this will be more about collectively deciding which technological developments we want to invest in. Not all technological development represents progress.

JMM: In each of the previous revolutions, the technology obviously started in the hands of a few and soon became a product. But I'm not sure the same thing will end up happening in this case. If you look at the internet, electricity, the steam engine... after a few years each was a product, with an owner, sometimes public, sometimes private, but a product. It is not obvious that AI will end up like this, and that implies risks at the social level, in terms of control and of the role of states in these ecosystems, that I do not know how we are going to resolve.

NO: Yes, we have a situation of brutal asymmetry. The curious thing is that AI research is led by an oligopoly of technology companies. It is not positive for any discipline that its research depends on the economic interests of companies, because those interests are not necessarily aligned with the social good. That asymmetry is right now part of the distribution of world power. That is why there are more than fifty AI strategies across countries and supranational structures such as the EU. And that is why China, which wants to be the number one power in the world, has the most ambitious AI strategy on the planet.

JMM: The discussion of why AI is, in quotes, the first scientific field in recent history where research leadership lies with the private sector is very interesting. First, because it is a discipline where the immediate economic impact is enormous. And second, because private companies have understood how to retain top research talent, offering environments with access to large infrastructures, unprecedented volumes of data, and economic conditions that the public sector cannot compete with. And not only that: at the same time, they let researchers publish and explain what they do, so they can still develop a traditional scientific career.

NO: Historically, within AI there have been two great schools: the top-down, or symbolic-logic, school and the bottom-up, or connectionist, school. The question is: why has the fourth revolution come now, when AI has existed since the 1950s? Because three factors have come together to drive the exponential development of bottom-up AI techniques, which are based on learning from data: the availability of large amounts of data, large computing capacity at low cost, and the development of complex deep learning models. So who has massive amounts of data and computing? The big technology companies, which are also the richest in the world. They have so much money that they can invest in attracting the best minds to continue monetizing our data.
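The bottom-up, learning-from-data idea Oliver contrasts with hand-coded symbolic rules can be shown in miniature. This is a toy sketch, not any system discussed in the interview: a single parameter is adjusted from example data by gradient descent, the basic mechanism underlying the deep learning models she mentions (real models simply do this with billions of parameters).

```python
# Toy "connectionist" sketch: instead of hand-coding the rule y = 2x,
# the model starts knowing nothing and learns a weight w from examples.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x

w = 0.0                  # initial guess: no knowledge at all
learning_rate = 0.05

for _ in range(200):     # repeatedly nudge w to shrink the error
    for x, y in data:
        error = w * x - y                # how wrong the model is on this sample
        w -= learning_rate * error * x   # gradient step on the squared error

print(round(w, 2))       # learned weight, close to the true value 2.0
```

The three factors she lists map directly onto this sketch: `data` (here three pairs, in practice billions of examples), the compute to run the loop at scale, and richer model architectures in place of the single weight `w`.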