The fight between 'doomers' and 'boomers' for control of AI: what the dismissal of Sam Altman reveals

Oliver Thansan
21 November 2023

Even by the standards of the tech world, the events of the weekend of November 17 were unprecedented. On Friday, Sam Altman, co-founder and head of OpenAI, the company at the forefront of the artificial intelligence (AI) revolution, was suddenly fired. Why the company's board of directors stopped trusting Altman is unclear. Rumors point to concerns about his side projects and fears that he was moving too quickly to expand OpenAI's commercial offering without weighing the safety implications, at a company that has also committed to developing technology for the "maximum benefit of humanity." Over the following two days, investors and many of OpenAI's employees pushed for Altman's reinstatement.

However, the board of directors stuck to its guns. Late on November 19, it named Emmett Shear, former CEO of Twitch, a video-streaming service, as interim chief executive. Even more extraordinary, the next day Satya Nadella, CEO of Microsoft, one of OpenAI's investors, posted on X (formerly Twitter) that Altman and a group of OpenAI employees would join the software giant to lead a "new advanced AI research team".

What happened at OpenAI is the most dramatic manifestation of a broader divide running through Silicon Valley. On one side are the doomers, catastrophists convinced that uncontrolled AI poses an existential risk to humanity, who therefore advocate stricter regulation. Opposite them are the boomers, who downplay fears of an AI-driven apocalypse and stress its potential to accelerate progress. Whichever side proves more influential will encourage or block stricter regulation, which in turn will determine who profits most from AI in the future.

OpenAI's corporate structure straddles the divide. Founded as a nonprofit in 2015, the company created a for-profit subsidiary three years later to fund the expensive computing capacity and brainpower needed to advance the technology. Meeting the competing goals of doomers and boomers was never going to be easy.

In part, the split reflects philosophical differences. Many in the doomer camp are influenced by "effective altruism", a movement concerned that AI might wipe out all of humanity. Among those who worry about that eventuality is Dario Amodei, who left OpenAI to found Anthropic, another model maker. Other large technology companies, including Microsoft and Amazon, have likewise voiced concern about AI safety.

Boomers advocate a worldview called "effective accelerationism", which holds that the unimpeded development of AI should not only be allowed but accelerated. At the forefront of that position is Marc Andreessen, co-founder of Andreessen Horowitz, a venture-capital firm. Other top AI specialists seem to sympathize with the cause: Yann LeCun, Meta's chief AI scientist, Andrew Ng and many startups (including Hugging Face and Mistral AI) have advocated for less restrictive regulation.

Altman seemed to enjoy the sympathies of both groups: he has publicly called for "guardrails" to make AI safe, while also pushing OpenAI to develop more powerful models and launch new tools, such as an app store where users can build their own chatbots. OpenAI's largest investor, Microsoft, which has poured more than $10 billion into the company in exchange for a 49% stake without receiving a seat on the parent company's board of directors, is reportedly unhappy with events, since it learned of the dismissal only minutes before Altman himself. That is why it has offered shelter to Altman and his colleagues.

However, not everything is philosophical abstraction. The two groups are also divided along more commercial lines. Doomers are the pioneers of the AI race, have deeper pockets and advocate proprietary models. Boomers, on the other hand, tend to be companies that are somewhat further behind, smaller and with a preference for open-source software.

Let's start with the early winners. OpenAI's ChatGPT added 100 million users in just two months after its launch, closely followed by Anthropic, founded by OpenAI defectors and now valued at $25 billion. Google researchers wrote the original paper on large language models, the software that is trained on vast amounts of data and powers chatbots, including ChatGPT. The company has kept producing larger and smarter models, as well as a chatbot called Bard.

Microsoft's advantage, for its part, rests largely on its big bet on OpenAI. Amazon plans to invest up to $4 billion in Anthropic. In the tech world, however, moving first does not always guarantee success. In a market where both technology and demand are advancing rapidly, new entrants have ample opportunity to disrupt the position of incumbents.

That may strengthen the doomers' push for stricter rules. In a May appearance before the US Congress, Altman expressed fears that the industry could "cause significant harm to the world" and urged policymakers to enact rules specific to AI. That same month, a group of 350 scientists and executives from companies in the sector (OpenAI, Anthropic and Google, among others) signed a one-sentence statement warning of a "risk of extinction" posed by AI comparable to that of nuclear war and pandemics. As frightening as those prospects are, none of the companies that endorsed the statement paused their own work on building more powerful AI models.

Politicians have been quick to show that they take the risks seriously. In July, President Joe Biden's administration urged seven major model makers (including Microsoft, OpenAI, Meta and Google) to make "voluntary commitments" to have their AI products inspected by experts before release. On November 1, the British government got a similar group to sign another non-binding agreement allowing regulators to test the reliability of AI products and their harmful capabilities (such as endangering national security). A few days earlier, Biden had issued a much more forceful executive order. It requires any AI company building models above a certain size (defined by the computing power the software needs) to notify the government and share the results of its safety tests.

Another fault line between the two groups is the future of open-source AI. Large language models have been either proprietary (such as those from OpenAI, Anthropic and Google) or open source. The February release of Llama, a model created by Meta, spurred activity in open-source AI. Supporters argue that open-source models are safer because they are open to scrutiny. Critics worry that making these powerful models public will allow bad actors to use them for malicious purposes.

However, the dispute over open source may also reflect commercial motives. Venture capitalists, for example, are big fans of it, perhaps because they see it as a way for the startups they back to catch up with the front-runners, or to get free access to models. Incumbents may fear the competitive threat. A memo leaked in May, reportedly written inside Google, concedes that open-source models achieve results comparable to their proprietary cousins on some tasks and cost far less to build. The memo concludes that neither Google nor OpenAI has any defensive "moat" against open-source competitors.

So far, regulators appear to have been receptive to the doomers' argument. Biden's executive order could curb open-source AI. The order's broad definition of "dual-use" models (those with both military and civilian applications) imposes complex reporting requirements on the makers of such models, and these could eventually cover open-source models as well. How far these rules can be enforced today is unclear, but they could gain teeth over time, for example if new laws are passed.

Not all big tech companies fall clearly on one side or the other of the divide. Meta's decision to open source its AI models has made it an unexpected champion of startups, giving them access to a powerful model on which to build innovative products. Meta is betting that a surge of innovation driven by open-source tools will ultimately help it generate new forms of content that keep users hooked and advertisers happy. Apple is another outlier. The world's largest technology company is notably silent on AI. At the launch of a new iPhone model in September, it presented numerous AI-based features without ever mentioning the term. When pressed, its executives prefer to praise "machine learning", another term for AI.

It seems a smart stance. The OpenAI crisis highlights how damaging the culture wars around AI can be. It is these wars that will now determine how the technology progresses, how it is regulated... and who gets the spoils.

© 2023 The Economist Newspaper Limited. All rights reserved

Translation: Juan Gabriel López Guix