AI experts and executives call for a pause in AI development over the dangers it could pose

Oliver Thansan
29 March 2023 Wednesday 01:24

Artificial intelligence experts and industry executives, including Elon Musk, have called for an "immediate" six-month pause on the development of AI systems more powerful than GPT-4, citing possible risks to society and humanity, according to an open letter they have signed. Among the harmful effects cited are the spread of misinformation, the destruction of jobs and even the risk of a "loss of control of civilization."

"Powerful artificial intelligence systems should only be developed once we are sure that their effects will be positive and their risks are manageable," the letter reads. It details potential risks to society and civilization from AI systems, such as economic and political disruption, and asks developers to work with lawmakers and regulators on governance.

"AI systems with intelligence that competes with humans can pose profound risks, as shown by extensive research and recognized by leading AI labs," it said. Their impact is neither monitored nor managed correctly, it is warned. "Decisions cannot be delegated to technology managers that we have not chosen."

The letter, issued by the non-profit Future of Life Institute, which works to steer transformative technologies away from harm, calls for a pause on advanced AI development until independent experts develop, implement and audit shared safety protocols for such systems. It also calls for regulatory bodies, audits, certification systems and liability for damages caused by AI, and even opens the door to government intervention to halt development if no common ground is reached.

The more than 1,000 signatories include Elon Musk; Emad Mostaque, CEO of Stability AI; researchers from Alphabet-owned DeepMind; Apple co-founder Steve Wozniak; Professor Yuval Noah Harari; and leading AI researchers such as Yoshua Bengio and Stuart Russell.

Since its launch last year, Microsoft-backed OpenAI's ChatGPT has prompted rivals to accelerate the development of similar language models and companies to integrate generative AI models into their products. Sam Altman, the head of OpenAI, is not among the signatories.

"They can do serious harm...Big players are becoming increasingly secretive about what they're doing, making it harder for society to defend against any harm that may materialize," said Gary Marcus, a professor emeritus at the University of New York and signatory in statements to Reuters. It is a "closed and out of control" race to drive artificial intelligences that "no one, not even their creators, can reliably understand, predict or control," the letter states. "The letter is not perfect, but the spirit is right: we need to slow down until we better understand the ramifications," added Marcus.

The statement comes days after Europol raised ethical and legal concerns about advanced AIs like ChatGPT, warning that the system could be misused for phishing attempts, disinformation and cybercrime. Musk, one of the world's wealthiest people, whose automaker Tesla uses AI in its Autopilot system, has himself raised concerns about the technology. "This does not mean a pause on development in general, just a step back from the perilous race towards (tools) with ever-greater and unpredictable emergent capabilities," the letter states.