"Artificial intelligence could be a risk to the existence of humanity"

Oliver Thansan
Saturday, 15 July 2023, 11:03

The philosopher Nick Bostrom (Helsingborg, Sweden, 1973) has made looking into the future his profession. Founder and director of the Future of Humanity Institute at the University of Oxford, Bostrom is one of the leading exponents of transhumanism, a current of thought which holds that technology will give rise to posthuman beings who have overcome many of our limitations, perhaps even death. But this does not make him naively optimistic: for years he has warned of the dangers posed by technological advances, starting with artificial intelligence (AI), which "could be a risk even to the existence of humanity". In any case, in his opinion, "AI will force us to review ideas such as identity, time, democracy or death". That is, if everything goes well. And it is in our interest that everything goes well.

Bostrom often illustrates his ideas about the risks of technology with the metaphor of white and black balls. The history of humanity has consisted of constantly drawing balls from an urn, each one a technological advance: many are white (they bring extraordinary benefits), and others are gray (their results are ambivalent). But no black ball has come out so far, that is, a technology that would inevitably mean the end of the human species.

Could artificial intelligence be a black ball?

Potentially, yes. However, it differs from many other technologies in that, while it carries danger, it could also be a solution to problems caused by other technologies. It is true that AI itself could be a great risk, even an existential one (I am referring specifically to human-level intelligence or superintelligence), but if we control it and survive the transition to the era of machine superintelligence, it can then become a tool to ward off other dangers to human existence. We could face the danger of very advanced synthetic biology or nanotechnology, or who knows what else. But if it happens the other way around, if we first develop these other technologies and only then bring AI to its full potential, the danger may arise from the sum of all the other risks.

Are you pessimistic or optimistic?

Both. In the case of AI, both the advantages and the disadvantages are very large. I'm moderately fatalistic, because I think there is a level of difficulty within this technology that we don't quite understand yet. AI could be so difficult to control that we would fail no matter how hard we tried, or it could instead be relatively easy to steer, or it could fall somewhere in between. We still don't know where we stand.

Will human intelligence and artificial intelligence coexist at the same level in the future? Will they coexist or compete?

If things go well, in the future there will be both biological and artificial intelligences, but I think the period during which they will be more or less comparable will be short, and it won't be long before superintelligent AIs appear that surpass us radically in every sense. Many people find it hard to imagine that a machine could do more or less the same tasks as us just as well, so they don't take the next obvious step: if we get to that level, what happens six months, a year, two years later? The development of AI will not stop there.

With ChatGPT and similar systems, the public and media debate on artificial intelligence has accelerated.

Yes, the debate has certainly reached the general public, because many people have tried ChatGPT and have a more immediate idea of it than if they had only read an article about it. If you can interact with these systems for yourself, your perception is different, more visceral.

Some wonder whether these chatbots have something like consciousness.

We really don't know. First, because the criteria for a system to be conscious are not clear; philosophers have debated it for a long time without reaching a consensus. Second, because we do not understand very well what exactly happens inside these systems in computational terms. We know the training algorithm used to adjust hundreds of billions of parameters. As a result, very complex internal processes develop that produce the impressive results we see today, but exactly what those internal processes are is not yet known.
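To make that contrast concrete, here is a minimal sketch in Python (a toy logistic model, purely illustrative, not any system mentioned in the interview): the training algorithm itself is a short, fully transparent update rule, while what the adjusted parameters end up computing is the part that, at the scale of hundreds of billions of parameters, remains opaque.

```python
# Minimal sketch: the training algorithm is known and simple;
# the behaviour encoded in the trained parameters is what becomes opaque at scale.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))             # toy inputs
y = (X.sum(axis=1) > 0).astype(float)    # toy targets

W = rng.normal(scale=0.1, size=8)        # parameters (billions, in a real model)
lr = 0.1                                 # learning rate

for step in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ W))     # forward pass (logistic unit)
    grad = X.T @ (p - y) / len(y)        # gradient of the cross-entropy loss
    W -= lr * grad                       # the fully known update rule

# Every number in W can be inspected, yet at real-world scale the question
# "what computation do these weights implement?" is the part not yet understood.
print(W)
```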

There is great uncertainty about a hypothetical machine consciousness, but I think that even if they don't have it, we're likely on a path where this will become an increasingly solid possibility.

When there is an AI comparable to our intelligence, how will our values and priorities change?

In a future society in which there are not only humans and non-rational animals but also digital minds, many of the moral intuitions we currently use to regulate our society, its rules and laws, will have to change to adapt to the rather different nature of artificial intelligences.

For example, we humans consider reproductive freedom an important value: people should decide for themselves whether they want to have children and how many. But digital minds will be able to copy themselves very quickly. If one day a digital mind has the same status as a human and can have, for example, a million children or replicas in 20 minutes, that will need to be regulated. Among many other things, the meaning of 'one person, one vote' will change. This principle is important in a democracy; if you could make a thousand copies of yourself the day before an election and merge them all back the day after, it would no longer make sense for each of those copies to have a vote.

But many other things would have to be reconsidered, such as death, which for a human is permanent, all or nothing. Maybe if you're religious you think there's life after death, but at least as far as your physical presence on Earth is concerned, you die and that's it. A digital mind can be interrupted and restarted later, slowed down or sped up depending on how fast its processors are. An hour will be, even more than today, something experienced subjectively.

AI may therefore force us to review concepts such as identity, democracy, death or time, among many others.

AI is developing very quickly relative to the speed at which humans can assimilate it. Will this be a problem?

The speed certainly makes this evolution difficult for us to face. I think that in the end we will have to rely more and more on artificial intelligence to help us stay aware of our environment and to act on our behalf, just as children need parents or guardians to advocate for their interests and look out for them, or as elderly people suffering from dementia need others to look after theirs.

How do you imagine our society a few decades from now in relation to artificial intelligence?

We will reach a point where many of our core values will have to be rethought, as the limitations that have shaped our societies until now are removed by much more advanced technology. This technology may bring us great economic abundance, with robots doing much of the work we do today. We will then have to see how to change our culture and our educational system and, in general, the basis on which we ground our sense of self-esteem and dignity.