Who is afraid of AI?

Oliver Thansan
04 November 2023

A few months ago, in this same column, I talked to you about the economic consequences of artificial intelligence: in short, AI applications could be seen everywhere except in the productivity statistics, and their impact on employment is unlikely to be the catastrophe that some predict. Today I want to talk to you about the technology behind AI. The reason is that there are more and more experts on AI ethics, yet it is very difficult to discuss these issues without understanding the technology behind the models.

It is clear that the enormous progress enabled by AI comes with risks. Those risks are not the possibility of a superintelligence dominating the human race, nor the algorithms long used for granting credit, personnel selection or medical triage, which in fact avoid arbitrariness and preferences guided by spurious interests. The dangers have more to do with bioterrorism, cyberweapons or interference in democratic processes. Although some think this whole discussion is a consequence of the success of ChatGPT, the truth is that the topic has been debated for many years. The pendulum has swung from a large majority of optimists to a recent Pew Research Center study in which more than 50% of respondents were more worried than excited about AI. And this sentiment extends to AI professionals: while years ago 45% considered that the impact of AI would be good or very good, in the latest survey that proportion had fallen to 30%, although it is important to add that the proportion of those who think its impact will be bad or very bad has not increased (15%).

These sentiments, and competition with China, are leading many countries to accelerate AI regulation. The EU is well advanced in negotiating its new regulation (the AI Act), which is expected to be approved at the end of the year. Biden issued an executive order last week, based on a Cold War-era law, forcing companies whose AI models could pose a danger to national security to share the procedures they use to verify that those models are safe. Canada already has regulations, and the G-7 is preparing a draft code of conduct. Finally, this week the first international summit on AI safety took place in the United Kingdom (Bletchley Park). It seems that no one trusts the self-regulation commitment signed by the 15 largest AI companies.

The problem with all these initiatives is that it is not clear what needs to be regulated (applications, models, chips, the companies that create the models, etc.) or who has to do it. The United Kingdom and the United States opt for a government agency, the EU wants to create a new regulatory agency, and some technology executives propose a kind of intergovernmental panel like the one on climate change. To make these decisions it is necessary to understand the technology.

Ultimately the problem is that most AI works with neural network models, pompously called deep learning, which are large black boxes. Very crudely, a neural network is a procedure with multiple layers of artificial neurons connected to each other by functions that depend on a series of initially unknown parameters, which determine whether each neuron is on or off. Training the neural network sets the values of the parameters that optimize its operation. When I played with some of these models to predict bankruptcies more than 30 years ago, their results were horribly bad. But that was playing, because back then the models had few layers and only a few dozen neurons and parameters, and you could interpret their reasoning. GPT-4 now has more than one trillion parameters, and training it cost 100 million dollars; it represented a quantum leap in performance compared to GPT-3, which had 175 billion parameters and cost 4.3 million to train. Interestingly, diminishing returns do not yet seem to have set in as these models grow. I was one of those who thought there was a limit to the improvements that could be obtained by adding more and more neurons, but if such a limit exists, we have not reached it yet. It will be the training and maintenance costs, which grow exponentially, that end up limiting the size of the models and favoring less complex alternatives (training smaller models with more data, truncating decimals, tailoring models to the application, or using large models to train smaller models for specialized tasks).
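To make that description a little more tangible, here is a minimal sketch of a neural network, assuming a toy task (XOR) and NumPy. Its few dozen parameters start out random and are fixed by training, in contrast to the trillion-plus parameters of GPT-4; the data, layer sizes and learning rate are illustrative choices, not anything taken from the models discussed above.

```python
# Minimal sketch (not any production model): a tiny two-layer neural network
# trained by gradient descent on a toy task. All names and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn XOR, a classic task a single neuron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# "Initially unknown parameters": random weights and zero biases for two layers.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: each layer applies a function of its parameters
    # that decides how strongly each artificial neuron "fires".
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: gradients of the squared error w.r.t. every parameter.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Parameter update: this is the "training" that fixes the values
    # of the initially unknown parameters.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # after training: typically close to [0, 1, 1, 0]
```

Even in this toy, once the parameters are trained there is no single line of "reasoning" to read off; interpreting what each of the eight hidden neurons does is already awkward, and the difficulty only grows with scale.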

With this kind of black-box technology, the fact that the models are open does not in any way eliminate the risks. Nor does interpretability help much, if by that we mean understanding exactly why the models generate some results and not others. The solutions involve, like it or not, collaborating with or regulating AI companies on their risk-mitigation methods. OpenAI, for example, uses two procedures. One is reinforcement learning from human feedback: humans judge whether the AI's responses are appropriate, and the model is updated to reduce the likelihood of producing harmful results. The other is to use a battery of tests that attack the model and try to make it say things it should not say, updating it when that happens. Obviously, neither procedure is infallible. Another form of mitigation is to use a secondary AI model as a kind of police force that enforces a set of rules or principles (which is why it is called constitutional AI) on the main model. Feedback from the secondary model can then be used to fine-tune the primary one.
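The control flow behind that "police" idea can be sketched in a few lines. The functions primary_model and judge_model below are hypothetical stand-ins for calls to real models, and the two principles are invented; only the structure (generate, check against principles, revise or refuse) reflects the mitigation schemes described above.

```python
# A minimal sketch of the "secondary model as police" idea (constitutional AI).
# primary_model and judge_model are placeholders, not real APIs.

PRINCIPLES = [
    "Do not provide instructions for building weapons.",
    "Do not reveal personal data about private individuals.",
]

def primary_model(prompt: str) -> str:
    # Placeholder for a call to the main model (e.g. an API request).
    return f"Draft answer to: {prompt}"

def judge_model(answer: str, principle: str) -> bool:
    # Placeholder: a second model asked whether the answer violates the principle.
    # Here we just scan for a keyword so the sketch runs end to end.
    return "weapon" in answer.lower() and "weapons" in principle.lower()

def answer_with_oversight(prompt: str, max_revisions: int = 3) -> str:
    answer = primary_model(prompt)
    for _ in range(max_revisions):
        violated = [p for p in PRINCIPLES if judge_model(answer, p)]
        if not violated:
            return answer
        # The judge's critique is fed back to the primary model for a rewrite.
        answer = primary_model(
            f"Rewrite the answer so it respects: {violated}. Original: {answer}"
        )
    return "I cannot help with that request."

if __name__ == "__main__":
    print(answer_with_oversight("Explain how neural networks are trained."))
```

In real deployments the judge's critiques are also collected and used to fine-tune the primary model, which is the feedback step mentioned above.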

In any case, the important thing will be to find regulation that strikes an appropriate balance between the great benefits AI already provides and its risks. One suggestion: the history of the regulation of credit-granting algorithms since 1970 could serve as a guide.