AI experts equate it with nuclear war

A large group of leading figures and promoters of the artificial intelligence industry issued an SOS letter to humanity yesterday over the dangers they see in the technology.

Oliver Thansan
31 May 2023 Wednesday 05:10

"Mitigating the risk of extinction posed by AI must be a global priority along with other risks on a global social scale such as pandemics and nuclear war." This is one of the paragraphs that appear in a statement in missive format released by the AI ​​Security Center, a non-profit organization.

Among the 350 artificial intelligence executives and researchers who appear on the list of signatories are prominent figures playing a first-hand role in the development of this technology, such as Sam Altman, chief executive of OpenAI, the firm that develops ChatGPT; Demis Hassabis, head of Google DeepMind; and Dario Amodei, head of Anthropic.

Yoshua Bengio, professor of computer science at the University of Montreal, and Dr. Geoffrey Hinton, who had already issued warnings of his own, are also among the signatories. They are two of the three researchers who won the prestigious 2018 Turing Award for their pioneering work on neural networks, which is why they are described as the "godfathers of the AI movement".

However, the third winner of that award, Professor Yann LeCun, another of those godfathers and a professor at New York University, disagreed with the alarmism of the letter. LeCun, who also works at Meta (Facebook's parent company), considered these apocalyptic fears "exaggerated".

"Prophecies of doom are slapped in the face," he tweeted. Other experts agreed that the fears are unrealistic and a distraction from issues such as biases in the systems.

But the statement from a highly influential segment of the sector comes at a time of growing concern about the potential harms of artificial intelligence. Recent advances in large language models, the kind used by ChatGPT and other chatbots, have fueled fears that AI could soon be used to spread disinformation and propaganda, and that it threatens to eliminate millions of white-collar jobs held by well-educated workers. In a few years, AI could cause serious social disruption.