Q* threatens the future (or not)

Oliver Thansan
23 November 2023 Thursday 09:22

This text belongs to 'Artificial', the weekly newsletter about artificial intelligence news. If you want to receive it, sign up here.

The stormiest week in the history of artificial intelligence has starred Sam Altman, fired and rehired as CEO of OpenAI in just four days, along with the cast of characters orbiting those decisions, including luxury cameos such as Microsoft CEO Satya Nadella. The key to the chain of events is called Q* (pronounced Q-Star), an AI with the ability to "threaten the future of humanity," according to a letter that a group of company researchers sent to OpenAI's board of directors and that triggered Altman's dismissal, as revealed yesterday by the Reuters agency.

The researchers who sent the letter to the board (since dismissed) warned that OpenAI has in Q* a major breakthrough that could allow it to develop something no one has yet achieved: an artificial general intelligence (known by its English acronym, AGI). Such an algorithm could carry out any intellectual task as well as or better than a human being. An achievement like this would open new debates and, possibly, problems, such as that of consciousness. Can a machine be self-aware like a human being? And if so, could it make decisions in its own interest, as many scientists have already warned?

The answers are up in the air, but many of us are beginning to wonder whether, once again, reality might surpass fiction. The plot of the Terminator film series rests on an artificial intelligence called Skynet becoming self-aware and deciding to fight humanity with all the means at its disposal, which are many. HAL, the computer in 2001: A Space Odyssey, also rebels against humans and decides to annihilate them. It is all entertainment, but there is no doubt that this technology will mark our century and will demand answers that force us into a certain introspection.

Another lesson from OpenAI's week, with the world absorbed in a spectacle that updated not just daily but hourly, is the debate between those who favor putting brakes on certain lines of artificial intelligence research and those who want to give companies free rein, confident that they are capable of self-regulation. An article from The Economist published by La Vanguardia puts these two currents within OpenAI face to face and gives full meaning to the events of the last six days.

The next question is: what now? Altman, Nadella and those committed to pushing the limits of AI as far as possible represent not only a bet on these advances but also economic interests. A significant part of the industry advocates participating, alongside national regulators, in the governance of AI. It sounds a lot like the self-regulation the large North American banks boasted about before the 2008 crisis. Many believed they were too big to fail, but time has shown that a market without rules of control is not good. Pay attention to Q*.