A group of researchers warns that OpenAI has an AI that could threaten humanity

The crisis at OpenAI and "Sam Altman's four-day exile" took the world of artificial intelligence by surprise last Friday.

Oliver Thansan
22 November 2023 Wednesday 15:22

The news caught even Altman himself by surprise: he was in Las Vegas enjoying the weekend at the Formula 1 Grand Prix when he received the call about his dismissal. Within the company, however, a struggle had been brewing for some time, and several researchers, led by Helen Toner, one of the members of the governing board, sent a letter of open confrontation over Altman's work on a powerful tool they called superintelligence, because they believed the company was one step away from achieving something that would endanger humanity.

The project, first reported by Reuters, had been christened Q*, and there were those at OpenAI who thought the discovery and its algorithm could be a major advance in the company's pursuit of what is known as artificial general intelligence (AGI): autonomous systems that would outperform humans even at the most valuable tasks.

The idea behind such a tool goes back to 2015, when the company was founded: a system that could do anything the human brain can. It is this program that has ultimately caused friction between those who, after the success of the ChatGPT chatbot launched a year ago, want to move faster in pursuit of commercial benefits and revenue, and those who warn of the looming danger.

Some OpenAI employees believe that Q* (Q-Star) could be a breakthrough in the search for AGI, a system that matches or exceeds average human intelligence and its performance on economically valuable tasks. The letter points to Q*'s capability and potential danger: in an extreme case, it could decide that the destruction of humanity is in its interest. The now-dismissed board of directors included several members committed to pausing development until its possible dangers were understood.

Among other feats, Q* is reportedly able to solve math problems at an elementary-school level effortlessly. These are not complex operations, but the AI shows encouraging performance on them.

In general terms, researchers consider mathematics a frontier of AI development. Current generative AI (like ChatGPT) is good at writing and language translation, fields in which answers to the same question can vary widely. But mastering mathematics, where there is only one correct answer, implies that an AI has stronger reasoning capabilities resembling human intelligence. Unlike a calculator, which can solve only a limited set of operations, an artificial general intelligence could generalize, learn and understand.

This could apply to novel scientific research, for example. In fact, an OpenAI team is also working on optimizing AI to perform scientific work.

In the weeks before the earthquake, Altman met with Toner and criticized a report she had written for Georgetown University's Center for Security and Emerging Technology. He told her that the report, which examined the dangers posed by the company's technology, was critical of OpenAI's efforts to keep its advances safe.

Toner defended the academic work, in which she analyzed the challenges citizens face and tried to understand the intentions of companies and countries developing AI. "We are not on the same page about the danger of all this. Any dissent from a board member carries a lot of weight," Altman replied. She insisted that they must ensure they develop something that benefits humanity and that, if they do not, the company should disappear.

In the midst of this dispute, and days before Altman's sudden dismissal, a group of engineers, encouraged by Toner's report, wrote a letter to the company's governing board (remember that OpenAI was founded as a non-profit with altruistic aims) warning of the risk of continuing to develop that superintelligent tool.

The warning would have joined other factors behind the dismissal, such as the commercialization of advances without evaluating the risks of their use. According to the specialized outlet The Verge, however, the letter never reached the board, and the company's research progress played no role in the dismissal.

All of this, clearly, has been buried by the earthquake. Almost all of the company's 770 employees signed another letter in support of Altman, the public face of AI, demanding his return to command and threatening to leave for Microsoft if his hiring there was confirmed. Ilya Sutskever, a renowned AI pioneer, had led the CEO's dismissal on the grounds that Altman disregarded the warnings and refused to slow down. Sutskever came to regret it and ended up signing the letter calling for his return. He has left the board, as has Toner, while Altman comes back strengthened, his thesis imposed. Alongside Altman, Microsoft, which has invested $13 billion in OpenAI, emerges as the big winner of the affair, reinforcing its position as a key player in AI.

The engineers critical of Q*, who have since backed down, warned of the risk of commercializing a tool without clear knowledge of its consequences. The bottom line: make money.