Risk of an abrupt collapse of reality

Oliver Thansan
23 June 2023 Friday 04:46

A few weeks ago Sam Altman, creator of ChatGPT, asked the US Senate to regulate the uses of artificial intelligence and to require prior authorization to develop large language models (LLMs). These pro-regulation statements, unexpected coming from the private sector, have generated great controversy among investors and technologists. Against the many who oppose any kind of regulation, numerous experts (Elon Musk and Steve Wozniak, among them) believe it is urgent to put limits on AI. More than half of the researchers working on this technology believe that the inability to control it carries a 10% or greater chance of ending our civilization. And while there are millions of potential applications that will advance efficiency, knowledge and predictability, a few thousand may pose a real risk to our survival.

The opportunities brought by AI make it possible to look with optimism at the fight against disease, climate change and poverty; to deepen our knowledge of nature and the cosmos, of physics and chemistry; and to tackle many of the great social and economic challenges effectively. We are facing one of the most important inventions in history. The challenges, however, are also formidable. They range from worrying disruptions in the labor market and the automated surveillance of the population (which already occurs in totalitarian countries) to the malicious use of AI to create diseases, lethal chemicals or nuclear weapons. The pinnacle of these threats is artificial general intelligence (AGI), capable of autonomously performing any intellectual task and learning at an accelerated pace, and therefore capable of supplanting us as the superior intelligence.

Beyond apocalyptic threats, one of the nearest risks is the impact on our perception of reality. This is not about fake news, which has always existed (in the past it was created by unethical media; today, with social networks, anyone can spread it). What comes now is different. AI can copy identities or create false ones (fake people), and generate messages, sounds, images and perceptions as real as those of any living being or natural phenomenon. Loved ones will call us in perfectly cloned voices to ask for money or personal data; millions of fake videos of politicians or celebrities committing illegal or degrading acts will appear on social networks, swaying electoral processes or enabling blackmail. Establishing the evidence needed to administer justice will become impossible.

AI will create mental frameworks to instill prejudices, phobias and enthusiasms in the population, making it possible to persuade, deceive and defame. Without human oversight, and armed with all of citizens' personal data, it will have the capacity to found cultural movements, even religions. Polarization in democracies will rise without limit. We will talk to machines. And everything will be exponential, because LLMs learn, developing ever greater intelligence without human intervention. Knowing what is true and what is not will be one of the greatest challenges. The European Commission's recommendation that large technology groups commit to labeling AI-generated content is a good step, but not enough. Political decision-makers must join the technologists and establish strict authorization and control processes to prevent the negative consequences of this impressive technological advance.