The theory of artificial intelligence computing gets stuck in practice

If neuromorphic computing, in which computers process data through patterns modeled on the human brain, is so promising, why isn't it progressing faster? What is holding these architectures back from reaching their full potential? Today this is an intense debate among technology professionals and entrepreneurs in the digital world.

Oliver Thansan
08 January 2024 Monday 10:33

In the opinion of one of the leading experts on the subject, Mike Davies, who directs a laboratory dedicated to neuromorphic computing at the multinational Intel, the problem lies in the hardware. Without suitable machines or mature software, the benefits that neuromorphic computing can deliver, in energy savings and in latency (the time that elapses between a stimulus and the response it requires), will remain more theoretical than practical.

Today's deep learning neural networks are seen as the step that precedes an innovation in which great expectations are placed. To drive this transition, companies such as Intel are developing chips like Loihi 1 and Loihi 2. The second version of this neuromorphic chip has opened up "a regime of algorithms" without precedent, as Davies highlights.

Despite the biological inspiration behind these devices and programs, observers maintain that the best approach is not necessarily the one that most closely mimics the logic and workings of biology. When certain difficulties arise, nature is more limited than technology, argues Mike Davies. For this reason, he proposes moving beyond "the pure biological approach."

So-called "spiking neural networks" are more biologically realistic than classic artificial ones, since they exchange information through discrete spikes that fire only when needed rather than through continuous activations. SpikeGPT is a generative language model trained with this approach, and it was the subject of a comprehensive study published as a scientific article by researchers Rui-Jie Zhu, Qihang Zhao, Guoqi Li and Jason K. Eshraghian.
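To make the contrast with conventional activations concrete, here is a minimal, illustrative sketch of a leaky integrate-and-fire neuron, the basic unit typically used in spiking networks: it stays silent until its accumulated input crosses a threshold, and only then emits a spike. This example is not taken from the SpikeGPT paper, and the threshold and decay values are arbitrary assumptions chosen for illustration.

def lif_neuron(input_current, threshold=1.0, decay=0.9, dt=1.0):
    """Simulate a single leaky integrate-and-fire neuron.

    The membrane potential leaks toward zero each step, accumulates the
    incoming current, and emits a binary spike (1) whenever it crosses
    the threshold, after which it resets.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = decay * potential + current * dt  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)   # fire a spike
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)   # stay silent
    return spikes

# Example: a weak constant input produces only occasional spikes,
# which is where the potential energy savings come from.
print(lif_neuron([0.3] * 10))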

The paper aroused enormous interest, since SpikeGPT apparently solves many of the problems identified so far. Building on this knowledge, digital giants are collaborating with multiple companies and institutions from various fields and sectors, explains analyst Sally Ward-Foxton. "I have no doubt that we are on the path to commercialization," Davies concludes.