Artificial intelligence goes to the psychoanalyst

Oliver Thansan
22 May 2023 Monday 14:18

In the 1950s, the behaviorist paradigm in psychology, which studied only observable behavior, was replaced by the "cognitivist paradigm", interested in the mental processes taking place in the "black box" of our brain. Computing technology was booming, and the desire to simulate cognitive processes on a machine forced researchers to describe them more precisely. The computer was a stupid machine that merely followed the instructions the programmer gave it, which obliged the programmer not to skip any step, however trivial it seemed.

Current advances in artificial intelligence (AI), especially ChatGPT, a GPT (Generative Pre-trained Transformer) system, are also allowing a better understanding of how the human brain works. Like the human brain, this technology generates texts from gigantic data banks and syntactic procedures for manipulating them. The machine does not understand what it is doing, and the same is true of our brain. The mechanisms that prepare a sentence, or any mental phenomenon, are not conscious. The brain does not know the sentence it is forming; we become aware of it only at the moment of utterance. That is why E.M. Forster was right when he made one of his characters say: "How will I know what I think, if I haven't said it yet?"
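To make the idea concrete, here is a toy sketch, in no way the actual transformer architecture behind ChatGPT, of what "generating text from a data bank plus syntactic procedures, without understanding" means. The model simply counts which word follows which in a corpus and then mechanically emits the most frequent successor; every name in the code is invented for illustration.

```python
from collections import defaultdict

def build_bigrams(corpus):
    """Count which word follows which: the 'data bank' of the toy model."""
    follows = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, length=5):
    """Emit a continuation one word at a time.

    The procedure is purely statistical: at each step we take the most
    frequent successor seen in the corpus. No meaning is involved anywhere.
    """
    out = [start]
    for _ in range(length):
        successors = follows.get(out[-1])
        if not successors:
            break  # the word never appeared with a successor: stop
        # deterministic 'greedy decoding': pick the most frequent next word
        out.append(max(successors, key=successors.get))
    return " ".join(out)

follows = build_bigrams("the cat sat on the mat the cat ran")
print(generate(follows, "the", 3))  # → "the cat sat on"
```

The toy illustrates the article's point: the program manipulates symbols by frequency alone, and its fluency says nothing about comprehension.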

The ChatGPT 3 and 4 programs work by reinforcement learning, just like us. It is the procedure studied by B. F. Skinner, for many experts the most influential psychologist of the 20th century, ahead of Piaget and Freud. The most radical change in artificial intelligence occurred when programmers, instead of giving the machine instructions, offered it rewards (reinforcements) so that it contrived to obtain them.
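The contrast between "following instructions" and "seeking rewards" can be sketched in a few lines. What follows is a minimal, assumption-laden illustration (a so-called multi-armed bandit with epsilon-greedy exploration, a textbook reinforcement-learning exercise, not the training procedure of any real ChatGPT model): the program is never told which option is best; it is only paid a reward, and it works out the rest by itself.

```python
import random

def train_bandit(arm_rewards, steps=2000, epsilon=0.1, seed=0):
    """Learn which 'arm' pays best purely from rewards, with no instructions.

    arm_rewards: the (hidden, from the agent's point of view) payoff of
    each choice. The agent only ever sees the number it receives.
    """
    rng = random.Random(seed)
    n = len(arm_rewards)
    estimates = [0.0] * n  # the agent's running estimate of each arm's value
    counts = [0] * n
    for _ in range(steps):
        # occasionally explore at random; otherwise exploit the current best
        if rng.random() < epsilon:
            arm = rng.randrange(n)
        else:
            arm = max(range(n), key=lambda a: estimates[a])
        reward = arm_rewards[arm]  # deterministic payoff, for clarity
        counts[arm] += 1
        # incremental average: nudge the estimate toward the observed reward
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

est = train_bandit([0.2, 0.8, 0.5])
# the agent ends up valuing arm 1 highest, though nobody told it to
```

Notice what the article observes: the programmer supplies only the reward numbers; how the program arrives at its preferences is buried in its trial-and-error history.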

Experts in this difficult subject draw fine distinctions between rewards and values. Both have the character of a purpose, albeit in the form of a number in an equation, and, until now, that purpose is supplied by the programmer. By themselves, programs have no goals. I think the book Reinforcement Learning: An Introduction, by Richard S. Sutton (a researcher at DeepMind) and Andrew Barto (of the Autonomous Learning Laboratory), is a good introduction to the topic.

When a program does not follow instructions but seeks rewards, we can no longer know what the machine is doing: what data it has handled, what patterns it has found, what transformations and extrapolations it has made. When AI experts are asked for “algorithmic transparency”, they are being asked for something impossible. The OpenAI researchers who designed ChatGPT have just admitted that they do not know how the program makes its decisions. It is inevitably opaque. This is what has given many people a shudder of panic, and I want to explain why it did not happen to me: because our brains do exactly the same thing.

We don't know how we make decisions, why some things come to mind instead of others, or where our preferences and desires, for example sexual orientation or identity, come from. Sigmund Freud wrote: “All my life I have tried to be honest. I don't know why.” To find out, psychoanalysis was invented. It sought to discover the unconscious loom on which our ideas and feelings are woven.

Current theories of intelligence admit a non-Freudian unconscious. If you are interested in this "new unconscious", you can read the works of John Bargh. The expression designates the set of operations through which the brain captures and manages information. We are slowly figuring out how it does this, but it is as opaque as the GPT programs. The difference is that the human brain has developed a safeguard that allows it not to take this portentous work of our "cognitive unconscious" on trust. Part of its results pass into a conscious state, and from there we can subject them to a reliability test. Where did the data come from? How do we know whether the process is reliable? Can we reproduce it? Artificial intelligence does not have this top layer, and we have to provide it. Daniel Kahneman, Nobel laureate in economics, has called these two levels of human intelligence System 1 (non-conscious, automatic, fast, effective, but unreliable) and System 2 (reflective, rational, slow, reliable). In my books I defend a similar theory of intelligence, but I call these levels “generative or computational intelligence” and “executive intelligence”.

I will give an example of this dual model of intelligence. Henri Poincaré, considered the greatest mathematician of his time, explained that, tired of trying to solve a complicated problem, he decided to set the work aside and distract himself with a trip. At one point during the excursion, when he was not thinking about the equations, the solution appeared in his consciousness. That spontaneous appearance intrigued him. If he had not been consciously thinking about the problem, who had solved it? His conclusion was that it had been his tireless "cognitive unconscious", which he considered, from then on, the source of mathematical creation. There was, however, a problem: these creations could be wrong. It was necessary to subject them to conscious criticism before accepting them as true.

This is our situation in the face of artificial intelligence. If we want to be rigorous, we will have to corroborate in some way the reliability of its results. This requires strengthening critical thinking. The more powerful the mechanisms of AI, the more powerful the critical thinking that evaluates them must be. Just as Freud brought his patients to the couch to try to discover the origin of their dreams and ideas, we will have to bring AI to the psychoanalyst.

The great danger is the laziness of human intelligence. AI's feats are so prodigious that we are tempted to delegate essential functions to it. Harari and Fukuyama fear an intoxication of "ease". In the intellectual world we are, in fact, witnessing a weakening of critical thinking, which makes us more dependent on machines. But we fall into a naive error if we imagine this situation as a science fiction movie in which humans end up as serfs of the machines. No. Humans can only be serfs to other human beings who use the machines.

Despite the progress of “autonomous systems”, their autonomy is limited, not only because of the energy supply they need, but because their preference systems must be designed by humans, as I mentioned before. What we call artificial intelligence is actually a hybrid with a machine component and a human component. We must not fall for the scam of the autonomy of artificial intelligence, because if we end up convincing ourselves that machines are autonomous and all-powerful, we will provoke a self-fulfilling prophecy. We will tremble before the machines instead of trembling before the people who use them. In other words, we will leave the field free for those people.