Should we be afraid of AI? The risks ahead

Fear is an emotion that arises when we are in danger, or think we might be. It has clear survival value, since it puts us on alert – although it can sometimes paralyse us – and prepares us to act, either by fleeing or by confronting the cause of our fear. What distinguishes us from other living beings, which can also feel fear, is that ours can have an imagined cause, whether well founded or not.

“Should we be afraid of artificial intelligence?” I have been asked. There are those who say they feel that fear, and it is evident that there are those who want to instill it in us. However, I will ask myself a different question, better suited to the answers I want to give and, I think, more useful to the reader: what are the risks of AI? I do not intend to give an exhaustive answer, which would in any case be impossible, but I do hope to give you an idea of what we are facing.

Since fear and the perception of risk are very personal, I asked my students what they think we should worry about in the face of the AI boom. I teach them Artificial Intelligence in the Computer Engineering degree program, so information technologies, and intelligent technologies in particular, are part of their daily lives and will soon be their main tools, or the very object of their profession.

The answers they gave me do not differ from those that abound in the media, even in specialized media. What seems most worrying about AI are biases, the possible loss of privacy, the errors machines make, sometimes at scale, security problems and a few other things. These issues are, in general, not specific to AI, although AI may accentuate them. In any case, the way to prevent and mitigate them is to get the design and use of AI-based systems right.

Biases are one of the main concerns people raise when we talk about AI. Machines' biases are generally inherited from our own, through the data with which we feed machine learning algorithms. Facial recognition systems that work worse on dark-skinned individuals and recruitment programs that discriminate against women are two examples of clearly harmful biases. Recently, Bloomberg found that images created by a generative AI application associated most professions with men and the best-paid ones with people with light skin tones.
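To make the inheritance mechanism concrete, here is a minimal sketch in Python with entirely invented data: a classifier is trained on synthetic "historical hiring" decisions in which one group was penalised, and it then reproduces that penalty for equally qualified candidates. Every name and number here is hypothetical, chosen only for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Feature 1: a qualification score; feature 2: group membership (0 or 1).
qual = rng.normal(size=n)
group = rng.integers(0, 2, size=n)
# Synthetic "historical" labels: candidates were hired when qualified,
# but members of group 1 were systematically penalised.
hired = (qual - 0.8 * group + rng.normal(scale=0.3, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([qual, group]), hired)

# Two equally qualified candidates who differ only in group membership:
probs = model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1]
print(probs)  # the group-1 candidate gets a markedly lower hiring probability
```

Nothing in the learning step is "prejudiced"; the model simply fits the regularities in the data it was given, which is precisely why biased data yields biased systems.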

Safeguarding the privacy of our personal data seems ever more difficult. The more AI advances, the greater its capacity to be omnipresent and to monitor everything, like Big Brother in George Orwell's 1984. Privacy, or the lack of it, comes up in almost every conversation about the main risks of AI. After all, surveilling us is very easy with the technology already available. What's more, it is we who constantly give away our data, whether we do so consciously or not.

There are those who think that a good part of the solution to these and other potential AI problems lies in adopting solid ethical principles around the development, commercialization and use of AI, but this is not enough. Having an ethical framework for AI is of interest and can be very useful, of course, but above all as an anticipation of, guide to and complement for legislative development, never as a substitute. If something is truly important, it should be regulated, not left to each person's voluntary compliance.

Consider the dilemmas posed by autonomous driving. Apologizing in advance for the cruelty of the example, imagine the following choice in the face of an imminent accident: a) the car continues straight ahead and runs over a person who suddenly crosses the road in front of it; b) it swerves left and collides with an oncoming car; or c) it swerves right and crashes into a wall. Making a decision in these circumstances is not something a driver can do consciously in tenths of a second, but an autonomous car can. In fact, it can do so in milliseconds and with far more information than we have. Should the decision be left to the ethics of the manufacturer, or of the insurance company? Perhaps to the discretion of each driver, set when purchasing the car, with the option of changing one's mind later? Should chance decide in dilemmas of this kind? It will undoubtedly be very difficult and controversial to regulate these things. However, although laws can always be improved, not having them is usually one of the worst decisions, especially when human lives can be lost or ruined.

I will stay with the example of self-driving cars to discuss two other issues that concern us: the infallibility and the security of AI-based systems. Let us take them one at a time. Intelligence and infallibility are not compatible. We can guarantee a correct result when performing operations on matrices, however large or complex they may be, but we cannot guarantee that an autonomous car will never have an accident, or even that it will never cause one through errors in its decision-making. The increase in machine intelligence needed to solve ever more complex problems is incompatible with infallibility, just as it is in people. Surely for this reason, every time an autonomous car has a serious accident, particularly when people die or are badly injured, news stories appear around the world not only calling for extreme precaution in manufacturing and using these vehicles, which is sensible, but also questioning the very development of this kind of technology. That stance is paradoxical when more than a million people die every year worldwide in traffic accidents, the vast majority caused by human error. We should keep in mind that autonomous vehicles are not simply a way to save costs, but a way to save lives.
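The contrast between exact computation and learned decision-making can be made concrete with a small sketch, again using invented data: a matrix identity can be verified to machine precision, while even a carefully trained classifier keeps a residual error rate on new cases.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Exact computation: a matrix identity we can verify to machine precision.
A = np.random.default_rng(0).normal(size=(50, 50))
assert np.allclose(A @ np.linalg.inv(A), np.eye(50))  # always holds

# Learned decision-making: synthetic data with 5% label noise, standing in
# (very roughly) for the irreducible uncertainty of real-world perception.
X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.05,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.1%}")  # high, but below 100%
```

No amount of extra training removes that residual error entirely; the question is whether the system's error rate is acceptably below the human one, not whether it is zero.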

Nor can we guarantee the complete security of what we place in the hands of AI. Not only because nothing in our lives is certain, except that, for now and at some point, we will all lose it, but because AI's own capabilities amplify the fragility and vulnerability of what we used to take for granted. A car connected to the internet can be hacked and its controls taken over. Medical images can be altered, generating false positives or false negatives that are difficult for medical specialists to detect. But the possibility of failures and attacks should not be cause for alarm, although we must of course take care to prevent them.

What surprised me most is that my students barely mentioned the issues that seem most worrying to me, such as socioeconomic inequality and technological unemployment, both amplified by the growing presence of AI in our lives and our work. Perhaps they do not perceive the problem of technological unemployment because few of them are working yet, or because they assume that, professionally, they can be active, if involuntary, agents of its acceleration, but never among those who suffer its consequences. Big mistake. No occupation, not even theirs, will be in a few years what it is today, let alone over a whole working life. Few occupations will disappear, contrary to what is often said, but almost all will be transformed to a greater or lesser extent, and many of their associated tasks will be automated. This will mean the loss of many jobs, and although more will probably be created than destroyed, as almost all studies and reports tell us, the new jobs will not be filled by those who lose theirs to automation, since the two demand very different professional profiles.

My students also did not mention the inequalities that AI is causing, which will grow ever larger if nothing is done to prevent it. In fact, the rise in socioeconomic inequality in the most developed countries now seems to have receded not just from action, where effort was always lacking, but from discourse itself. Thomas Piketty's book Capital in the Twenty-First Century landed like a thunderclap when it was published, but it seems that, once the surprise of the moment passed, the waters of hypercapitalism, to borrow an expression Piketty himself uses in another of his books, Capitalism and Ideology, have returned to their course.

Technologies have always been, are and will be spontaneous amplifiers of social and economic inequality. Those who control them, by creating them and/or using them to their advantage, increase their privileges over those who do not own them or who merely use them under the conditions the owners impose. This happens with weapons, of course, but also with the production and distribution of energy and other resources essential to the survival and development of countries and people. Also with the media, with education and health, with the production and marketing of goods and services... The more powerful and versatile a technology (or set of technologies) is, the more it tends to amplify the differences between its owners or producers and its consumers, and intelligent technologies are the most transformative, even disruptive, we have created so far. Therefore, more than ever, my concern is not that we will be subjugated by intelligent machines, but by those who own them.

There are people, however, who are already deeply worried about a world in which robots, or machines in general, dominate us and subject us to their whims, or even eliminate us. I suppose it would be something like Planet of the Apes, with the gorillas and chimpanzees swapped for robots. Although I am among those who think a time may come when we will also have to worry about such things, we have far more serious, real and present problems now. All of the UN Sustainable Development Goals (SDGs), for example, are first-order problems, so let us concentrate our collective efforts there, and let some people reflect on or investigate how to maintain control when machines have general-purpose AI similar or even superior to ours. AI, by the way, can have a positive impact on all the SDGs, as several reports have shown, so let's get to work.

A very real problem, on the other hand, is falling into over-reliance on AI, which could leave our society vulnerable to systemic failures. There is also evidence that information technologies, and AI in particular, are changing how we relate to the world and to each other, and increasing our dependence on machines as we delegate more and more responsibilities to them. This is weakening our own capabilities. It already happened with part of our physical and sensorimotor skills, as we built machines to do much of the physical work once done by people. But now it is also happening with work that requires cognitive abilities, even higher-level ones. So, just as we worry about the proven loss of competence among aircraft pilots in handling critical situations, we should worry that our memory and attention span may be weakening, along with reading comprehension and correctness and fluency in writing. Even creativity can be impaired by the intensive and sometimes abusive use of machines, all the more now that generative AI is itself creative. In fact, I have a conjecture, which I cannot yet state as a principle: the total intelligence in the world remains more or less constant, but the artificial share increases while ours decreases. I do not believe this is the future we want, but it is in our hands, especially those who design education policy and those of us who are educators, to ensure that both increase.

I do not want to close without mentioning the need to advance a legislative framework that regulates the uses of AI, protecting people above all and keeping the common good in mind. To those who fear that laws could put a brake on AI advances and the innovation built on them, I would ask whether they would take a medicine that had not undergone exhaustive controls from its design to its sale in the pharmacy. The impact AI can have on our lives, and is in fact already having, can in some cases be as great as that of medicines. Furthermore, just as in exceptional circumstances, as happened with the Covid vaccines, we can quicken the pace and make procedures affecting our health more flexible, we can do the same with AI when the case requires it. For example, facial recognition will be banned in the European Union once the AI regulation, the world's first general law regulating AI, is approved, as it soon will be. That prohibition, however, may have exceptions when circumstances justify it and judges authorize it. This AI regulation is welcome, since with it our fears will be fewer and less well founded.

Regulating, and even prohibiting certain uses in certain circumstances, is the best way, perhaps the only way, to combat the unwanted use of any technology, including intelligent technologies. Clearly, passing laws that regulate the use of AI does not by itself guarantee compliance, but that is how we operate in democratic countries: we do not cut off the hands of those who break the law, but we do try to clip their wings. Given the irregularities, the non-compliance, the legal loopholes... and the pace of progress in developing and applying intelligent technologies, we must ask ourselves every day what more we should and can do, and then do it.

I would not like this article to be merely descriptive, although description is essential; I also want it to be prescriptive. So let me conclude by insisting on three keys to not having to fear AI: 1) train well the professionals who must design and apply intelligent technologies, and inform and educate society as a whole; 2) implement public policies to prevent and address inequality and technological unemployment; and 3) legislate the uses of AI, seeking the common good. If we should fear anything, it is failing to do things well.

Senén Barro Ameneiro is director of the CiTIUS (Singular Research Center for Intelligent Technologies) at the University of Santiago de Compostela.