The promise of a better past

In just a few years, AI has gone from being a field of computing and cognition research to being a marketing tag.

Oliver Thansan
08 April 2023 Saturday 15:41

Just when AI had been reduced to little more than a line in television advertising, in a matter of months we have watched it mutate into a pop phenomenon. The merit, or the blame if you prefer, lies with OpenAI, its huge GPT language model and its ubiquitous ChatGPT. Created in 2015, the company aims to "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." The NGO of AI. Bravo!

Just six months ago, OpenAI looked like the Google of 1998: the epitome of a start-up that wanted to change the world by doing things differently; a likeable company. But in 2001 Google realized that if it could establish a causal relationship between what users searched for and what was happening in the world, it would be in a position to predict the future. Google began using the data from our activity to train its AI in order to predict what our next click would be. That was the birth of "surveillance capitalism".

The other big moment came when Facebook realized, in 2012, that it could influence people's moods. In an infamous social experiment, the company showed one group of users predominantly negative news and another group mostly positive news, and studied their behavior. The study concluded that emotional contagion also operates online and influences our decisions. Trump and Brexit are among its consequences.

Back to the likeable OpenAI. Since the laws of physics are the same for everyone, and using everything on the internet to train a deep neural network with billions of parameters costs a great deal of money, the company hit a wall. The costs of such an undertaking are within reach only of the large technology corporations. This explains why Meta, Google, Amazon, NVIDIA and the Chinese Baidu and Huawei have huge language models similar to GPT, while the likes of La Jijonenca do not (Apple's silence makes everyone suspect that it is working on one too). To those costs must be added that of keeping a chat service free for more than one hundred million users, estimated at $100,000 per day. Enter Microsoft.

Microsoft very discreetly invested $1 billion in 2019 – Bill Gates had known the founders since 2016 – and has since confirmed an investment of another $10 billion, in addition to making its cloud available to OpenAI. In exchange, it gets exclusive use of the GPT language model for its Bing search engine (if you download Microsoft's Edge browser you can try it for free). Moreover, to cover its high operating costs, OpenAI has put GPT-4 behind a $20-a-month subscription (version 3.5 remains free). Suddenly the likeable company that everyone equated with a non-profit research center became very much for-profit, stopped publishing the details of its research, and turned into one of the movie's bad guys. It has taken OpenAI only a few months to take the role of Skynet away from Google.

Sam Altman, the young chief executive of OpenAI, tweeted in early December, in the wake of the release of ChatGPT, that it is "incredibly limited, but good enough at some things to create a misleading impression of greatness." Lately he has said he is "a little scared" of the technology his company is putting in everyone's hands. Take it with a pinch of skepticism: there is reality in these statements – the system is very advanced, capable of passing the exams required to practice law – but there is also marketing of the apocalypse. OpenAI's CTO had already voiced the need for AI regulation long before the open letter signed by industry leaders calling for a moratorium on training systems like GPT.

There is a lot of confusion in all of this: about the capabilities of ChatGPT – it is credited with an intelligence it does not have –, about OpenAI's own demand for regulation – which would serve to shore up its dominant position – and about its marketing of the apocalypse – focusing on the long term lets you avoid talking about today's ethical problems. There is also a lot of confusion in the open letter that scientists, engineers and business leaders have signed. It does not help that among the signatories are bombastic characters like the renowned apostle of the apocalypse Yuval Harari or the renowned Twitter troll Elon Musk. The latter has had a score to settle with Sam Altman ever since he left OpenAI after failing to take control of it. The absence of Chinese scientists, engineers and business leaders is also very striking. Nor does it help that the document talks about the "end of civilization" and that some passages read like the prologue to Terminator.

Don't get me wrong. Although I don't share the form, I do share the substance, and I strongly agree with Sam Altman when he says that ChatGPT can create "a misleading impression of greatness". Since everything it generates is highly plausible, too often we take it as true, and this has implications for the future… but also for the past. My colleague and friend Albert Cuesta entered the first four paragraphs of an article of his, published in the newspaper Ara, into the Bing search engine – open to everyone and powered by GPT-4. The search engine replied that the author was Javier Marías and that the article had been published in El País! "So much for the experiment," said Cuesta on Twitter.

It may seem like an anecdote, but it is not. If we are not careful, we will be rewriting our past without even realizing it, in a way analogous to Orwell's Ministry of Truth. ChatGPT has been banned in China because what it answers about Tiananmen, Tibet or the Uyghurs does not match the official version. It seems that in this new iteration surveillance capitalism not only promises us a better future; it also promises us a better past. And if that does not come to pass, ChatGPT will surely be able to change the conclusion of this article in a few years.