People who have lost their speech can communicate with brain implants

Two American women suffering from a paralysis that prevents them from speaking have been able to communicate with their environment again thanks to brain implants that decode neural activity and translate it into words.

Oliver Thansan
22 August 2023 Tuesday 22:22

The two technologies, presented this Wednesday in separate articles in the journal Nature, enable communication with unprecedented speed, accuracy and richness of language.

A team from the University of California, San Francisco (UCSF), in the United States, demonstrated in 2021 that it was possible to decode the brain signals a person produces when trying to speak and turn them into text. Its first attempt allowed a severely paralyzed man to communicate with a vocabulary of 50 words. The system showed that translation was possible, but it was limited: it erred one time in four and transcribed the signals at 18 words per minute, far slower than normal conversation, which runs at about 160.

The implants presented this Wednesday by UCSF itself and by Stanford University, also in the United States, multiply the speed and richness of communication. The former achieved a rate of 78 words per minute, forming sentences from a vocabulary of more than 1,000 terms, while the latter reached 62 words per minute with a vast vocabulary of 125,000 words. The results bring closer the day when people who have lost their voice can hold fluent conversations with those around them, and "are a true milestone in the field," says Edward Chang, a UCSF neurosurgeon and leader of one of the studies.

Both technologies capture the neural activity that would drive the patients' tongue, pharynx, jaw and facial muscles, letting them speak were they not paralyzed, and use artificial intelligence to transform those signals into words. The groups differ, however, in how they collect the data and how they train the AI.
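The mapping from neural activity to words can be illustrated with a deliberately simple sketch. The actual systems use deep neural networks trained on hours of recordings; the toy below, with entirely simulated data and invented channel counts and vocabulary, only shows the core idea of matching a noisy recorded signal to per-word activity patterns:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each word has a characteristic activity pattern across
# 64 hypothetical electrode channels (all data here is simulated).
VOCAB = ["yes", "no", "water", "hello", "thanks"]
prototypes = {w: rng.normal(size=64) for w in VOCAB}

def record_attempted_word(word: str) -> np.ndarray:
    """Simulate noisy neural activity while the patient attempts a word."""
    return prototypes[word] + rng.normal(scale=0.5, size=64)

def decode(signal: np.ndarray) -> str:
    """Decode by nearest prototype; the real systems use deep networks."""
    return min(VOCAB, key=lambda w: np.linalg.norm(signal - prototypes[w]))

decoded = [decode(record_attempted_word(w)) for w in ["water", "hello", "no"]]
print(decoded)
```

With well-separated patterns and modest noise, nearest-pattern matching recovers the attempted words; the real difficulty, which the published systems address with recurrent networks and language models, is that genuine neural signals are far noisier and the vocabularies far larger.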

While the University of California scientists read aggregate neural activity from the surface of the brain, the Stanford team inserted electrodes into the patient's cerebral cortex to read activity neuron by neuron. That both approaches gave similar results fills the researchers with optimism. "The most important message is that there is hope, that this will keep improving and that it will provide a solution in the coming years," concludes the UCSF neurosurgeon. For now, both technologies are purely experimental.

To translate the signal into words, the research teams and patients trained personalized artificial-intelligence models for hundreds of hours. Stanford University asked its volunteer to repeat more than 10,000 different phrases, randomly drawn from telephone conversations, over 25 days. The algorithm was able to translate neural impulses into words from a vast vocabulary of more than 125,000 terms, erring on 24% of them. Although that error rate is high, it equals the rate achieved two years ago with a far poorer vocabulary of only 50 words; restricted to such a small vocabulary, the new technology erred only one word in ten.

The University of California, by contrast, chose to train its artificial intelligence by repeating, over and over, phrases built from a vocabulary of about 1,000 words. With this, the system erred on only 5% of the terms when verbalizing phrases from a repertoire of 50 statements. For new formulations, however, the error rate rose again to one word in four.

The future of both lines of research lies in cutting the error rate, something that could be achieved, a priori, by increasing the number of electrodes reading neuronal activity. Researchers from both groups also stress the importance of developing wireless devices that transmit the neural activity to the computer without the patients having to be physically tethered to it.

“It is as if we were in the years of analog television,” explains Jaimie Henderson, a neurosurgeon at Stanford University and leader of the research carried out at that center, at a press conference. “We have to keep improving the resolution to get to HD first, and 4K later,” he concludes.