Thanks to this technique we can listen to what a person who cannot speak has to say

Using computational models known as neural networks, a group of researchers has reconstructed words and sentences that various patients intended to verbalize and that, in some cases, were intelligible to human listeners.

The development of this technique makes it possible to "listen" to people who cannot speak because of illness, simply by deciphering the activity recorded in their brains. In other words, it is enough for the person to imagine what they want to say.

Future voice prostheses

People who have lost the ability to speak after a stroke or illness can use their eyes, or other small movements, to control a cursor or select letters on a screen, just as Stephen Hawking did. But if a brain-computer interface could recreate speech directly, they would regain much more than words: control over the pitch and inflection of what they want to say, for example, or the ability to hold a more fluent conversation.

This small feat has been achieved thanks to the data used to train artificial neural networks that process complex patterns of brain activity. For the study, the researchers relied on recordings from five people with epilepsy.

The network analyzed recordings of the auditory cortex (which is active during both speaking and listening) made while those patients listened to recordings of stories and of people naming digits from zero to nine. A computer then reconstructed the spoken numbers from the neural data alone; when the computer "spoke" the numbers, a group of listeners identified them with 75% accuracy.
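As a rough illustration of what such a decoder does, here is a minimal sketch in Python. It uses scikit-learn and synthetic data, not the study's recordings or actual architecture: a small neural network learns to map neural-activity features to spectrogram frames, which in the real work would then be converted into audible speech by a vocoder.

```python
# Minimal sketch (synthetic data, not the study's recordings): train a neural
# network to map "auditory cortex" feature vectors to spectrogram frames.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, n_electrodes, n_spec_bins = 2000, 64, 32      # made-up sizes
neural = rng.normal(size=(n_samples, n_electrodes))       # stand-in for neural features
mixing = rng.normal(size=(n_electrodes, n_spec_bins))
spectrogram = neural @ mixing + 0.1 * rng.normal(size=(n_samples, n_spec_bins))

X_train, X_test, y_train, y_test = train_test_split(neural, spectrogram, random_state=0)

decoder = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300, random_state=0)
decoder.fit(X_train, y_train)

# Correlation between reconstructed and true spectrograms is a common quality
# metric; in the real study the frames would be fed to a vocoder to produce audio.
pred = decoder.predict(X_test)
corr = np.corrcoef(pred.ravel(), y_test.ravel())[0, 1]
print(f"Reconstruction correlation on held-out frames: {corr:.2f}")
```

In a listening test like the one described above, the synthesized digits would then be played to people, and identification accuracy (75% in the study) would serve as the final measure of intelligibility.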

The brain signals produced when a person silently "speaks" or "hears" their own voice in their head are not identical to the signals of actual speech or hearing. Without an external sound to match against the brain activity, it can be difficult for a computer even to determine where internal speech begins and ends.

However, impressive steps are being taken in this direction, toward perhaps one day creating an artificial voice prosthesis. Another example comes from neurosurgeon Edward Chang and his team at the University of California, San Francisco, who reconstructed entire sentences from brain activity captured in speech and motor areas while three patients with epilepsy read aloud. In an online test, 166 people listened to one of the sentences and had to select it from 10 written options. Some sentences were correctly identified more than 80% of the time.

Another great leap in the quality of these techniques might come from providing feedback to the user of the brain-computer interface: if users can hear the computer's interpretation of their speech in real time, they could adjust their thoughts to obtain the result they want.
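A minimal sketch of that closed-loop idea, in which every function is a hypothetical placeholder rather than a real BCI API, might look like this:

```python
# Illustrative sketch only: the closed feedback loop described above.
# All functions below are hypothetical placeholders, not a real BCI API.
import time

def read_neural_frame():
    """Hypothetical: return the latest window of neural features from the implant."""
    return [0.0] * 64

def decode_to_audio(features):
    """Hypothetical: map neural features to a short synthesized audio buffer."""
    return b"\x00" * 1024

def play(audio_buffer):
    """Hypothetical: send the audio buffer to the speakers with minimal latency."""
    pass

# Real-time loop: because the user hears the decoder's current interpretation
# immediately, they can adjust their imagined speech to steer the output.
for _ in range(200):                 # a few seconds of operation in this sketch
    features = read_neural_frame()
    play(decode_to_audio(features))
    time.sleep(0.05)                 # ~20 updates per second; real systems aim lower
```

The key design point is latency: the shorter the delay between imagined speech and the synthesized audio, the easier it is for the user to treat the prosthesis as their own voice and learn to control it.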

Previous studies of this kind focused on decoding the muscle movements that ultimately produce words; however, natural speech production involves more than a hundred muscles, and a given movement does not always result in a sound. According to Chethan Pandarinath and Yahia Ali, biomedical engineering experts at Emory University, "the approach of these authors results in less acoustic distortion than previous decoding systems."

Little by little, then, increasingly bold studies are leading us toward the possibility that people who have difficulty speaking could do so with greater fluency, perhaps, in the near future, in a way indistinguishable from normal speech.