Risk of abrupt collapse of reality

A few weeks ago, Sam Altman, creator of ChatGPT, asked the US Senate to regulate the use of artificial intelligence and to require prior authorization for the development of large language models (LLMs).

Oliver Thansan
23 June 2023 Friday 04:22

These pro-regulatory statements, unexpected coming from the private sector, have generated great controversy among investors and technologists. While many oppose regulation, many experts (Elon Musk and Steve Wozniak among them) believe it is urgent to put a stop to AI. More than 50% of researchers in this technology believe that the impossibility of controlling it carries a 10% or greater chance of ending our civilization. And although there are millions of potential applications that will allow us to advance in efficiency, knowledge and predictability, a few thousand could pose a real risk to our survival.

The opportunities that AI brings allow us to look optimistically at the fight against disease, climate change and poverty; to better understand nature and the cosmos, physics and chemistry; and to solve many major social and economic challenges. We are facing one of the greatest inventions in history. The challenges, however, are also formidable. They range from worrying disruptions in the labor market and automated surveillance of the population (this already happens in totalitarian countries) to the malicious use of AI to create diseases, lethal chemicals or nuclear weapons. The ultimate threat is artificial general intelligence (AGI), capable of performing any intellectual task autonomously, of learning rapidly and, therefore, of supplanting us as the superior intelligence.

Beyond the apocalyptic threats, one of the nearest risks is the impact on our perception of reality. This is not fake news, which has always existed (before, it was created by unethical media; today, with social networks, anyone can spread it). What is coming now is different. AI can copy identities or create false ones (fake people), and generate messages, sounds, images and perceptions as real as those of any living being or natural phenomenon. Voices identical to those of our loved ones will call us to ask for money or information; millions of fake videos of politicians or celebrities committing illegal or degrading acts will appear on the networks, influencing electoral processes or enabling blackmail. The evidence needed to administer justice will become impossible to establish.

AI will create mental frameworks that entrench prejudices, phobias and enthusiasms among the population, making it possible to persuade, deceive and defame. Without human control, and with access to all citizens' information, it will have the capacity to found cultural movements, even religions. Polarization in democracies will rise without limit. We will talk to machines. And everything will be exponential, because LLMs learn. Knowing what is true and what is not will be one of the great challenges. The European Commission's recommendation that large technology groups commit to labeling AI-generated content is a good step, but an insufficient one. Political decision-makers must join with technology leaders to establish strict authorization and control processes to prevent the negative consequences of this impressive technological advance.