"Artificial intelligence could be a risk to the existence of humanity"

Oliver Thansan
Saturday, 15 July 2023, 10:22

The philosopher Nick Bostrom (Helsingborg, Sweden, 1973) has made looking to the future his profession. Founder and director of the Future of Humanity Institute at the University of Oxford, Bostrom is one of the leading exponents of transhumanism, a school of thought which holds that the application of technology will give rise to post-human beings that will have overcome many of our limitations, perhaps even death. That does not make him an optimist: he has spent years warning of the dangers of technological advances, starting with artificial intelligence (AI), which "could even be a risk to the existence of humanity." In any case, in his opinion, "AI will force us to revise ideas such as identity, time, democracy or death." That is, if all goes well. And it had better go well.

Bostrom likes to illustrate his ideas about the risks of technology with the metaphor of the white and black balls. The history of humanity has consisted of constantly drawing balls from an urn, each one a technological advance: many are white, representing an extraordinary benefit, and others gray, producing an ambivalent result. But no black ball has yet emerged, that is, a technology that would imply the end of the species. For example, "with nuclear weapons we have been lucky," he explains, "because raw materials are needed that are not easy to obtain; but if a thermonuclear detonation had been possible using, say, a microwave oven, then it would perhaps have brought about the end of civilization."

Do you think artificial intelligence could be one of those black balls?

Potentially, yes. However, it differs from many others in that, at the same time as being dangerous, it could also be a solution to problems caused by other technologies. It is true that AI itself could be a great risk, even, I think, an existential risk (I am referring specifically to human-level intelligence or superintelligence), but if we manage to control it and survive the transition to the age of machine superintelligence, then it can become a tool to ward off other dangers to human existence. We could then face the risks of highly advanced synthetic biology or nanotechnology, or who knows what else. But if it is the other way around, if we develop those technologies first and only then AI to its full potential, the danger may arise from the sum of risks.

Are you pessimistic or optimistic?

Both. In the case of AI, both the advantages and the disadvantages are very great. I'm moderately fatalistic in the sense that I think there's a level of difficulty within this technology that we don't quite understand yet. AI might be so difficult to control that we would fail no matter how hard we tried, or it might instead be relatively easy to steer, or it might fall somewhere in the middle. We still don't know where we stand.

Is there still time to think about where we are going with AI, or is its development so fast that it is already too late for such reflection?

We have time, but not as much as if we had started five years ago. There is now something of a race between safety research and the development of ever more capable systems. And with the pace of progress so rapid, there's a sense of urgency about figuring out how to achieve AI alignment, that is, how to build or train these systems so that they actually do what their creators intend, in accordance with their values and goals.

Will human intelligence and human-level artificial intelligence coexist in the future? Will there be coexistence or competition?

If things go well, in the future there will be both biological and artificial intelligences, but I believe that the period during which they will be more or less comparable will be short, and that it will not be long before superintelligent AIs appear that radically surpass us in every way. Many people have a hard time imagining that a machine could do more or less the same tasks as us, and just as well, and so they don't take the obvious next step: if we reach that level, what will happen six months later, or a year later, or two years later? The development of AI will not stop there.

At that point the same thing will happen as happened with computers and chess: there was a time when computers were more or less as good as Garry Kasparov and each side could win a few games. But now computers simply dominate any human chess player completely.

Now everything around AI is accelerating.

At least it's moving fast. It's a bit hard to gauge if it's accelerating or just continuing to move quickly.

But perhaps with ChatGPT or other similar systems what has accelerated has been the social and media debate on artificial intelligence.

Yes, without a doubt the debate has reached the general public, because many people have tried ChatGPT and have a more immediate idea of it than if they had simply read an article. If you can actually interact with these systems yourself, your perception is different, more visceral. And the perception, logically, is that this ability to hold intelligent conversations with an AI is new.

Some wonder whether these chatbots have something akin to consciousness.

We really don't know. Firstly, because the criteria for a system to be conscious are not clear; although philosophers have debated the question for a long time, they have not yet reached a consensus. On the other hand, we also do not understand very well what exactly happens inside these systems in computational terms. We know the training algorithm used to fit hundreds of billions of parameters; as a result, very complex internal processes develop that account for the good performance of these machines we are currently seeing, but it is not yet known exactly what those processes are. There's a lot of uncertainty about a hypothetical consciousness of the machines, but I think that even if they are not conscious, we are probably on a path where this becomes an increasingly strong possibility.

It's amazing that humans create technology but in the end can't understand how it works.

Yes, it's comparable to how we know how to make babies but don't understand much of the biological process. Until a few hundred years ago we had zero understanding of it; now we understand things like DNA and biochemistry, but our understanding is still very limited.

When there is an AI comparable to our intelligence, how will our values and priorities change?

In a future society in which there are not only humans and non-rational animals but also digital minds, some of which may be conscious or have moral status, many of the moral intuitions we currently use to regulate our society, norms, and laws will have to change to accommodate the quite different nature of artificial intelligences. For example, humans consider reproductive freedom an important value: people must decide for themselves whether they want to have children and how many. But digital minds will be able to copy themselves very easily and quickly. If one day a digital mind has the same status as a human and can have, say, a million children or replicas in 20 minutes, that will have to be regulated, because then, among many other things, the meaning of 'one person, one vote' will change. In a democracy that's an important principle, but if you could make 1,000 copies of yourself the day before an election and then merge them all the day after, it would no longer make sense for each of those copies to have a vote: that would incentivize people to try to cheat the system, or it would give power in proportion to one's ability to buy computing power.

But many other things would have to be reconsidered, such as death, a concept that for a human is permanent, all or nothing. Maybe, if you're religious, you think there's an afterlife, but at least as far as your physical presence on Earth is concerned, you die and that's it. A digital mind can be paused and restarted later, slowed down or sped up depending on how fast its processors are. An hour will be, even more than today, something experienced subjectively. And will a faster mind have the same moral value as a slow one, or will it be different?

AI may therefore force us to review concepts such as identity, democracy, death or time, among many others.

AI is developing too quickly for the speed at which humans can assimilate it. Will this be a problem?

The speed makes it difficult for us to keep up with this evolution, of course. I think eventually we will have to rely more and more on AI to help us stay aware of our surroundings and to act on our behalf, just as children need parents or guardians who defend their interests and look after them, or as older people suffering from dementia need others to look after their interests. As we move into this era, I believe we will all rely more and more on artificial advisors, guardians, or AI agents to watch over us. If there is a broad ecology of different AIs, some of them serving other, perhaps adverse, interests, we will need our own to mediate with the others.

How do you imagine our society a few decades from now in relation to artificial intelligence?

We will reach a point where many of our basic values will have to be reconsidered, as the limitations that we currently have, and that have shaped our societies, are removed thanks to much more advanced technology. It may be that this technology brings us great economic abundance and that robots do much of the work we do today. Then we will have to see how to change our culture, the educational system and, in general, the foundation on which we build our sense of self-esteem and dignity. And I think even that is just the first step.

On another level, there are even deeper questions that arise as those limitations you spoke of are removed. We will ask ourselves, for example, what it is that ultimately gives value to life, and we will focus on those things instead of the ones that currently fill our time and that we do simply for instrumental reasons. In other words, you have to earn a living, you have to brush your teeth so they don't deteriorate... Many of these things will disappear and we will return to the origin, to reflect on the nature of our deepest values.