A group of engineers warns that OpenAI has developed destructive technology

Sam Altman's return to management silenced the warnings of the artificial intelligence (AI) doomsayers plotting within.

Oliver Thansan
Thursday, 23 November 2023, 10:34
Sam Altman's return to management silenced the warnings of the artificial intelligence (AI) doomsayers plotting within. It did, however, summon the fire brigade: the party for the admired boss's return featured a smoke machine that set off the fire alarms at the San Francisco headquarters.

Thus concluded the crisis at OpenAI and the four-day "exile" of Sam Altman, whose dismissal last Friday caught the world of artificial intelligence (AI) completely off guard, including the man himself, who was in Las Vegas enjoying the Formula 1 weekend.

It was there that he received the call informing him of his dismissal. Within the company, however, there had long been an open struggle, and several researchers, led by Helen Toner, one of the members of the governing board (since dissolved), had sent a letter directly challenging Altman's work on the development of a powerful tool, which they called a "superintelligence". They believed the company was one step away from achieving something that put humanity at risk of extinction.

The project, first reported by Reuters, was dubbed Q*, and at OpenAI there were those who thought the discovery could be a major breakthrough in the company's research into what is known as artificial general intelligence (AGI): autonomous systems that could even surpass humans.

The creation of such a tool was, in essence, a goal laid out in 2015, when the company was founded. It is this program that has caused friction between those who want to pick up speed in pursuit of commercial profits, after the success of the ChatGPT chatbot launched a year ago, and those who warned of its potential danger.

There was a division of opinion. Some OpenAI workers believed that Q* (Q-Star) could be a breakthrough in AGI research, the pursuit of systems that equal or exceed average human intelligence in economically valuable tasks. But the dissenters, in their letter, pointed to Q*'s prowess and potential danger: in an extreme case, it might decide that the destruction of humanity is in its own interest. On the board of directors, members such as Toner favored halting development until the possible dangers were understood.

Generally speaking, researchers consider mathematics a frontier of AI development. Current generative AI (like ChatGPT) is good at writing and at translating languages, fields in which the answers to the same question can vary widely. But mastering mathematics, where there is only one right answer, would imply that the AI has greater reasoning abilities, resembling human intelligence.

Unlike a calculator, which can perform only a limited set of operations, such an artificial intelligence could generalize, learn and understand. This could be applied, for example, to innovative scientific research. In fact, an OpenAI team is also working on optimizing AI to do scientific work.

In the weeks before the earthquake, Altman met with Toner and reprimanded her for a paper she had co-written for Georgetown University's Center for Security and Emerging Technology. He told her that the report, in which she stressed the dangers posed by the company's research, was highly critical of OpenAI's safety efforts.

Toner defended the paper as an academic analysis of the challenges citizens face and an attempt to understand the intentions of companies and countries developing AI. "We are not on the same page about the danger of all this. Any dissent from a board member carries a lot of weight," Altman replied. She insisted that they had to guarantee they were working for the benefit of humanity, and that otherwise the company should cease to exist.

Then came the warning letter, the earthquake, the dismissal and Altman's return, spurred by another letter, this one from employees (including some who had signed the previous one), demanding his reinstatement. Behind them was pressure from Microsoft, the main investor with $13 billion committed and the party most interested in keeping OpenAI as it is.

In the end, the doomsayers who opened Pandora's box have lost, and the money has won.