Artificial intelligence becomes more human: it now relates ideas the way people do

Artificial intelligence can associate ideas in a similar way to the human mind.

Oliver Thansan
Wednesday, 25 October 2023, 11:20

Artificial intelligence can associate ideas in a similar way to the human mind. Two scientists, one of them from the Universitat Pompeu Fabra (UPF) in Barcelona, have managed to build a neural network capable of combining new concepts with known ones, a challenge considered impossible for the last 35 years. The advance, published on Wednesday in the journal Nature, opens up the possibility of training AI models in a more economical and accessible way.

The artificial intelligences developed so far learn by brute force. During training they are fed a huge amount of data so that they can understand and execute each instruction they receive. However, this kind of training limits the systems to repeating what they have learned and denies them the flexibility to link new concepts with ones they already know.

If, for example, we teach an AI to "step twice" and to "jump", the machine will be unable to understand the instruction "jump twice". In 1988 it was proposed that an artificial intelligence would never acquire this ability to apply known instructions to new concepts, an inherently human skill known as systematic generalization.
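The idea is easier to see in code. Below is a minimal Python sketch, an illustration of the concept rather than the authors' model: because "twice" is represented as a rule that applies to any action, the novel combination "jump twice" is handled even though that exact phrase never appeared in training.

```python
# Minimal illustration of systematic generalization (not the
# authors' code): "twice" is a rule over actions, so it transfers
# to any verb, including combinations never seen before.

PRIMITIVES = {"step": "STEP", "jump": "JUMP"}  # word -> action

def interpret(command):
    """Compositionally interpret commands like 'jump twice'."""
    words = command.split()
    actions = [PRIMITIVES[words[0]]]
    if len(words) > 1 and words[1] == "twice":
        actions = actions * 2  # the modifier applies to any action
    return actions

print(interpret("step twice"))  # ['STEP', 'STEP'] -- seen in training
print(interpret("jump twice"))  # ['JUMP', 'JUMP'] -- novel combination
```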

The challenge survived 35 years of advances, until the article published in Nature dared to take it on. The two scientists behind the paper, who work in the United States and in Barcelona respectively, say their solution does not involve developing a radically new model, but rather changing the way the machine learns. In fact, they used an architecture identical to the one behind ChatGPT, but on a smaller, simpler scale.

"The network doesn't learn by repeating examples and remembering, but learns to generalize", says Marco Baroni, researcher at the Department of Translation and Language Sciences at the UPF and co-author of the study. Learning focuses on teaching him to organize concepts in a logical order.

In other words, instead of teaching the machine the meanings of "step twice" and "jump twice" separately, they showed it what "step", "step twice" and "jump" mean. They then asked it to "jump twice" and compared the action it produced with what they expected to see. By repeating this over many different exercises, the AI comes to understand that whenever a known instruction appears after any word, familiar or not, it must apply that instruction to the word. It can even combine instructions it has never seen together.
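A schematic sketch of this episode-based training, in Python, might look like the following. Only the episode construction is concrete here; the network, loss and optimizer (`model`, `loss_fn`, `update`) are placeholders, since the article does not detail them.

```python
import random

# Schematic sketch of episode-based training; `model`, `loss_fn`
# and `update` are placeholder names, not the paper's API.

PRIMITIVES = {"step": "STEP", "jump": "JUMP", "turn": "TURN"}

def make_episode():
    """One episode: study examples plus a query forcing generalization."""
    shown, held_out = random.sample(sorted(PRIMITIVES), 2)
    study = [(word, [act]) for word, act in PRIMITIVES.items()]
    # "twice" is demonstrated with one word only...
    study.append((f"{shown} twice", [PRIMITIVES[shown]] * 2))
    # ...while the query applies it to a word it was never shown with.
    query = (f"{held_out} twice", [PRIMITIVES[held_out]] * 2)
    return study, query

study, (command, target) = make_episode()
print(command, "->", target)  # e.g. jump twice -> ['JUMP', 'JUMP']

# Training would loop over many such episodes:
#   prediction = model(study, command)
#   update(model, loss_fn(prediction, target))
```

Because every episode pairs the modifier with a different held-out word, the network is rewarded for inducing the rule itself rather than for memorizing any particular combination.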

To assess the success of their model, the researchers compared how it and humans solve simple algebra-like exercises. In the experiment, they invented a set of words and associated each one with a color or an action. They then asked the human participants and the machine to interpret sentences built from those words and translate them into the corresponding color code.
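A toy rendition of that task might look like this in Python; the invented words and their meanings below are assumptions for illustration, not the study's exact vocabulary.

```python
# Toy version of the color-code task. The invented words and their
# meanings here are illustrative, not the study's actual materials.

WORD_TO_COLOR = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}

def interpret(sentence):
    """Map a sentence of invented words to a color sequence.

    Here "fep" is assumed to be a function word meaning
    "repeat the previous color three times".
    """
    colors = []
    for word in sentence.split():
        if word == "fep":
            colors.extend([colors[-1]] * 2)  # previous color now appears 3x
        else:
            colors.append(WORD_TO_COLOR[word])
    return colors

print(interpret("dax fep wif"))  # ['RED', 'RED', 'RED', 'GREEN']
```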

With a success rate of 82%, the machine achieved slightly better results than the human participants. By comparison, "the most recent version of ChatGPT correctly solved about 58% of the problems," notes Baroni, "much worse" than their model.

Despite the revolutionary nature of the work, the scientists point out that its success is relative, since it occurs in a very limited system: an AI that lacks the wealth and variety of concepts needed to be useful in the real world. Generalizing to more complex systems shouldn't be too problematic, Baroni argues, although he acknowledges that things are never as simple as they seem.