Artificial intelligence goes to the psychoanalyst

Oliver Thansan
Monday, 22 May 2023, 13:22

In the 1950s, the behaviorist paradigm in psychology – which studied only observable behavior – was replaced by the “cognitivist paradigm”, interested in the mental processes taking place inside the “black box” of our brain. Computer technology was booming, and the desire to simulate cognitive processes with a machine forced psychologists to describe those processes far more precisely. The computer was a dumb machine that merely followed the instructions its programmer gave it, which meant the programmer could not skip a single step, however trivial it seemed.

Current advances in artificial intelligence (AI), and ChatGPT in particular, are also giving us a better understanding of how the human brain works, which is itself a kind of GPT (Generative Pre-trained Transformer) system. Like the human brain, this technology generates texts from gigantic banks of data and from syntactic procedures for handling them. The machine does not understand what it is doing, and the same is true of our brain. The mechanisms that prepare a sentence – or any mental phenomenon – are not conscious. The brain does not know the sentence it is forming; we become aware of it only at the moment of utterance. That is why E. M. Forster was right when he had one of his characters say: “How can I tell what I think till I see what I say?”
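As a toy illustration of that blind, statistical kind of text generation, here is a sketch, in Python, of a bigram model: it produces sentences purely from counts of which word has followed which in earlier text, with no understanding of what it says. The tiny corpus and the names are invented for the example; a real GPT replaces the counts with a transformer trained on billions of documents, but the point at issue – generating word by word without comprehension – is the same.

```python
import random
from collections import defaultdict

# Toy stand-in for a language model: it only counts which word tends to
# follow which, then samples the next word from those counts. It produces
# text by a purely mechanical procedure, with no grasp of meaning.

corpus = (
    "the brain does not know the sentence it is forming "
    "the machine does not understand what it is doing"
).split()

# Count bigrams: for each word, record the words that have followed it.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Generate text word by word; each choice depends only on the last word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```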

The GPT-3 and GPT-4 models behind ChatGPT are trained, in part, by reinforcement learning, just as we are. It is the procedure studied by B. F. Skinner, for many experts the most influential psychologist of the 20th century, ahead of Piaget and Freud. The most radical change in artificial intelligence came when programmers, instead of giving the machine instructions, offered it rewards (reinforcements) and let it work out how to obtain them.

The experts in this difficult field make finer distinctions, separating rewards from values. Both act as goals – albeit in the form of a number in an equation – and, so far, that goal is supplied by the programmer. By themselves, programs do not have goals. I find the book Reinforcement Learning: An Introduction, by Richard S. Sutton (a DeepMind researcher) and Andrew G. Barto (of the Autonomous Learning Laboratory), a good introduction to the topic.
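To make the contrast between instructions and rewards concrete, here is a minimal sketch, in the spirit of Sutton and Barto's book, of a program that learns by reinforcement. Everything in it – the two-armed “bandit”, its payoff probabilities, the exploration rate – is invented for illustration. Note where the goal lives: the reward is just a number chosen by the programmer, and the “values” the program learns are its own running estimates of that reward.

```python
import random

# A two-armed bandit: the program is never told which arm is better.
# It only receives a numerical reward after each choice and gradually
# learns a value estimate for each action. The reward function below is
# the programmer-supplied "goal" discussed above.

def pull(arm: int) -> float:
    """Hypothetical environment: arm 1 pays off more often than arm 0."""
    return 1.0 if random.random() < (0.3 if arm == 0 else 0.7) else 0.0

values = [0.0, 0.0]   # estimated value of each action
counts = [0, 0]
epsilon = 0.1          # how often to explore instead of exploiting

for step in range(1000):
    if random.random() < epsilon:
        arm = random.randrange(2)                      # explore
    else:
        arm = max(range(2), key=lambda a: values[a])   # exploit best estimate
    reward = pull(arm)
    counts[arm] += 1
    # Incremental average: nudge the value estimate toward the observed reward.
    values[arm] += (reward - values[arm]) / counts[arm]

print("learned values:", values)  # should end up near [0.3, 0.7]
```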

When a program does not follow instructions but seeks to earn rewards, we may no longer know what the machine is doing, what data it has used, what patterns it has found, what transformations and extrapolations it has made. When you ask AI experts for “algorithmic transparency”, you are making a request that is impossible to satisfy. The OpenAI researchers who designed ChatGPT have admitted that they do not know how the program makes its decisions. It is inevitably opaque. This is what has given many people a shudder of panic, and I want to explain why it has not given me one: because our brains do exactly the same thing.

We do not know how we make decisions, why some things occur to us rather than others, where our preferences and desires – for example, our sexual orientation or identity – come from. Sigmund Freud wrote: “All my life I have tried to be honest. I do not know why.” To find out, he invented psychoanalysis. He wanted to discover the unconscious loom on which our ideas and feelings are woven.

Current theories of intelligence accept the existence of an unconscious, although not a Freudian one. If you are interested in this “new unconscious”, you can read the work of John Bargh. The expression designates the set of operations through which the brain captures and handles information. Little by little we are discovering how it does this, but it is as opaque as the GPT programs. The difference is that the human brain has invented a security system that spares it from having to trust blindly in the prodigious work of our “cognitive unconscious”. Part of its results passes into a conscious state, and from there we can subject them to a reliability test. Where did the data come from? How do we know whether the process is trustworthy? Can we reproduce it? Artificial intelligence lacks that top layer, and we must supply it ourselves. Daniel Kahneman, winner of the Nobel Prize in Economics, called these two levels of human intelligence System 1 (non-conscious, automatic, fast, efficient, but unreliable) and System 2 (reflective, rational, slow, reliable). In my books I defend a similar theory of intelligence, but I call these levels “generative or computational intelligence” and “executive intelligence”.

I will give an example of this dual model of intelligence. Henri Poincaré, considered the greatest mathematician of his time, recounted that, fed up with wrestling with a complicated problem, he decided to set the work aside and distract himself with a trip. At one point during the excursion, when he was not thinking about the equations, the solution appeared in his consciousness. That spontaneous appearance intrigued him: if he had not been consciously thinking about the problem, who had solved it? His conclusion was that it had been his tireless “cognitive unconscious”, which he regarded from then on as the source of mathematical creation. There was, however, a problem: those creations could be wrong. He had to subject them to conscious criticism before accepting them as true.

This is our situation with artificial intelligence. If we want to be rigorous, we will have to corroborate, somehow, the reliability of its results. That requires strengthening critical thinking. The more powerful the mechanisms of AI become, the more powerful the critical thinking that evaluates them will have to be. Just as Freud put his patients on the couch to try to discover the origin of their dreams and their ideas, we will have to take AI to the psychoanalyst.

The great danger is the atrophy of human intelligence. The feats of AI are so prodigious that we may end up delegating essential functions to it. Harari and Fukuyama fear an intoxication with the “easy”. We are in fact witnessing, in the intellectual world, a weakening of critical thinking, which makes us more dependent on machines. We make a naive mistake if we imagine this situation as a science-fiction movie in which humans end up as the servants of machines. No. Humans can only be the servants of other humans who use machines.

Despite the progress of “autonomous systems”, their autonomy is limited, not only because of the energy supply they need, but because their systems of preferences have to be designed by humans, as I mentioned before. What we call artificial intelligence is actually a hybrid of human and machine components. We must not fall for the scam of the autonomy of artificial intelligence, because if we end up convincing ourselves that machines are autonomous and all-powerful, we will bring about a prophecy that is fulfilled by the very fact of stating it. We will tremble before the machines instead of trembling before the people who use them. In other words, we will leave the field free to them.