Altman, OpenAI and the will to power

AI is more than just an acronym.

Oliver Thansan
24 November 2023 Friday 03:22

AI is more than just an acronym. It sums up in two letters the tension of our time and contains all the factors that are leading us towards one of humanity's decisive moments. Such a moment demands a precise diagnosis of what lies beneath the surface of the news about AI, since identifying its key elements can help us anticipate the trends that innovation is inevitably unleashing.

The battle fought within OpenAI around the figure of Sam Altman is a reflection of this. So is the desperate reaction of many technologists, who in March promoted a manifesto calling for a moratorium on generative AI research. And what can we say about the presidential executive order issued by the White House a few weeks ago, justified by the extraordinary urgency of governing both the development and the use of AI according to criteria of safety and responsibility? These are terms that invite alarm, and they are also endorsed, with less discursive gravity, by the Bletchley Declaration, approved by the British Government at the beginning of the month.

This declaration, signed even by China, first highlights once again the enormous opportunities AI holds for well-being, peace and prosperity, and then invokes the urgency of global public-private supervision of AI research, with particular emphasis on "frontier" AI, the kind at the forefront of generative innovation. The same debate shakes Europe as the final stretch of negotiations plays out in the trilogues preceding the imminent approval of the European AI regulation.

What is happening behind this news? AI is entering a point of no return in the capabilities it is acquiring, which could make definitive progress towards a strong or general AI viable much sooner than expected: that is, an AI with cognitive capabilities similar to our common sense and our consciousness, though backed by a statistical intelligence infinitely superior to ours. It was once speculated that this would be achieved by 2050, but that horizon may have been brought forward by two decades.

A phenomenon produced by the pressure of global geopolitical warming, fueled by the fierce competition between China and the United States for technological hegemony. The Chinese pursue it through vertical state planning of AI research, in which the State controls the entire process. The Americans pursue it through horizontal competition among the famous GAFAM (Google, Amazon, Facebook-Meta, Apple and Microsoft), as reflected in the change of direction Microsoft has imposed on OpenAI.

Let us remember that the American design of technological innovation rests on the winner-takes-all principle: a neoliberal model that has worked successfully since the birth of the digital market in the US and has allowed efficient competition between monopolies, which is what the GAFAM are. The problem is that this competition can break down if a single company hegemonizes the disruptive change AI is undergoing, which is what may happen with Microsoft's definitive control over OpenAI.

This company was born as a nonprofit with an open, collaborative and ethical approach. It wanted to develop a generative AI governed by those principles, yet driven by the same utopian logic that, since Alan Turing, has propelled research in this field: to reproduce a human brain without the defects that so often make it fail. The advances OpenAI achieved in a short time with its applied deep-learning models produced a prototype, ChatGPT, that has begun to change things, to the point of coming very close to offering generative AI at an access price so low it could drive any competitor out of the market for analogous services.

A commercial initiative that could generate a monopoly capable of projecting itself over the entire digital market. Something glimpsed on November 6, when Altman announced the launch of a multi-sided AI design with GPT-4 Turbo at the front and an app store behind it to monetize the cross-service offering. This decision sparked the battle within OpenAI. On one side stood those who wanted to keep the company true to its original approach, halting research so it could be ethically supervised in the face of the growing risks associated with progress in the generative capabilities of the AI systems being developed. On the other stood those, with Altman at the helm, who wanted to take the research to its ultimate consequences, which would allow OpenAI to achieve the breakthrough that produces a strong AI and hands its main shareholder, Microsoft, a monopoly over the digital ecosystem. A battle that Altman and Microsoft have won.

An outcome that, curiously, comes a month after the approval of the presidential order that makes the occupant of the White House a kind of AI commander-in-chief, and that must be read in relation to the supervisory control over AI innovation established in 2022 by the CHIPS and Science Act. This being the case, it is not surprising that the presence of Larry Summers, former Treasury Secretary and former president of Harvard, on the OpenAI board helps us understand what happened in the aforementioned geopolitical key, a key that is already decisive. It reflects a will to power built around an "innovative-industrial complex" in AI that is fighting determinedly for the United States to be the first to achieve strong AI, by 2030.