The OpenAI manual to avoid the apocalypse of humanity at the hands of AI

Despite being the main drivers of the rapid development of artificial intelligence in the last year, OpenAI wants to ensure that this technology does not become a danger to humanity.

Oliver Thansan
20 December 2023 Wednesday 09:22

To that end, the company has created a team specifically tasked with keeping a hypothetical 'machine rebellion' under control. The first step of this prevention strategy is already public: a document called the 'Preparedness Framework', which explains how the company intends to avert a hypothetical machine-driven end of the world, in the purest 'Terminator' style.

The report runs to 27 pages, in which the company behind ChatGPT catalogues the potential threats its products may pose. These range from cybersecurity risks to far more serious dangers, such as the possibility that its models could be used to help create nuclear or biological weapons.

The document states that “the central thesis underlying the Preparedness Framework is that a robust approach to security against catastrophic AI risks requires proactive, science-based determinations of when and how it is safe to proceed with development and deployment.”

OpenAI has created a system to measure and track the dangerousness of its models across a set of risk categories: cybersecurity; chemical, biological, radiological and nuclear (CBRN) threats; persuasion; and model autonomy. Each category is rated as low, medium, high or critical risk.
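As a rough illustration only, the scorecard described above can be pictured as a mapping from category to rating, with the model's overall risk taken as the worst rating in any category. The category names, the aggregation rule, and the code itself are assumptions for the sketch, not OpenAI's actual implementation.

```python
# Hypothetical sketch of a per-category risk scorecard.
# Ratings are ordered from least to most severe.
RISK_LEVELS = ["low", "medium", "high", "critical"]

def overall_risk(scorecard: dict) -> str:
    """Assumed aggregation rule: the overall risk is the
    highest (worst) rating found in any single category."""
    return max(scorecard.values(), key=RISK_LEVELS.index)

# Illustrative ratings, not real assessments of any model.
scorecard = {
    "cybersecurity": "medium",
    "cbrn": "low",
    "persuasion": "medium",
    "model_autonomy": "high",
}

print(overall_risk(scorecard))  # prints "high", the worst category here
```

The `max` call with `key=RISK_LEVELS.index` simply compares ratings by their position in the ordered list, so one "high" category dominates several "low" ones.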

With this framework, OpenAI is also forming a new team of experts dedicated exclusively to preventing a possible robot uprising and ensuring its products are deployed responsibly, under the supervision of Aleksander Madry, an AI researcher at MIT.

The document against the robot apocalypse comes after the recent upheaval at OpenAI involving co-founder Sam Altman, who briefly left the company amid concerns about the destructive potential of AI and is now back. Many experts therefore see the framework as a response to those concerns.