

Decoding our thoughts to restore speech

A UNIGE team shows that individual training improves brain-machine
decoding of imagined speech, offering new hope for people with language disorders.

© Silvia Marchesotti. The signals generated by mental imagery are of low amplitude and therefore difficult to capture.

Brain-machine interfaces have the potential to transform care for individuals who are unable to speak. However, decoding internal language remains highly challenging because of the low-amplitude brain signals involved. By training volunteers to imagine specific syllables, a team from the University of Geneva (UNIGE) used machine learning algorithms to decode the corresponding signals in real time. The study shows that personalized training helps individuals control these interfaces more effectively, and it identifies the brain regions involved in this improvement. Published in , this research paves the way for practical applications for people with aphasia.

Neurological disorders affecting speech and language, such as aphasia following a stroke, amyotrophic lateral sclerosis or locked-in syndrome, can severely impair or even eliminate a person’s ability to communicate. These conditions have a profound impact on quality of life. In this context, decoding imagined speech via a brain-machine interface is a major research challenge.

“Initially developed to detect motor imagery, such as controlling a cursor on a screen, this technology is now being applied to decode speech,” explains Silvia Marchesotti, senior research and teaching assistant in the Department of Clinical Neurosciences at the UNIGE Faculty of Medicine, who co-directed the study. “However, research has mainly focused on training algorithms to classify and interpret acquired data retrospectively, with little emphasis on the individual.”

Training the Mind

The UNIGE team explored the possibility of training individuals to better control brain-machine interfaces. To achieve this, they aimed to decode, in real time, the neurophysiological signals generated by the brain when imagining the pronunciation of language elements. This presents a significant challenge, as mental imagery produces low-amplitude signals that are difficult to detect. Additionally, pinpointing the exact moment when a person begins to imagine syllables or words is complex, especially since this ability varies from one individual to another.

“Recent studies have shown that it is possible to decode attempted speech in patients who have lost the ability to speak due to a motor disorder. However, this is not feasible for people with aphasia because of the location of their brain damage. That is why we have chosen to focus on imagined speech,” explains Anne-Lise Giraud, professor in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine and director of the Hearing Institute, an Institut Pasteur center, who co-directed the study.

© Silvia Marchesotti. Participants connected to electrodes received immediate feedback on their imagery performance through a gauge displayed on the screen.

Fifteen healthy volunteers trained for five consecutive days using a brain-machine interface that decoded electroencephalography (EEG) signals associated with imagining two syllables (“fo” and “gi”). Participants, fitted with 61 electrodes, received immediate feedback on their imagery performance through a gauge displayed on the screen: the clearer their mental representation of the syllables, the more the gauge filled up, providing real-time insight into the quality of their mental imagery. This experiment was made possible by analyzing brain signals in real time with machine learning algorithms.
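The closed loop described above, classifying a short EEG window and turning the decoder’s confidence into a gauge level, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the study’s actual pipeline: the sampling rate, the log-variance features, the linear discriminant classifier, and the synthetic calibration data are all placeholders for whatever the authors used.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

N_CHANNELS = 61      # electrode count, as in the study
FS = 250             # sampling rate in Hz (assumed)
WIN = FS             # one-second decoding window (assumed)

def features(window):
    """Crude per-channel log-variance feature for a (channels, samples) window."""
    return np.log(window.var(axis=1) + 1e-12)

# Calibrate a binary classifier on labelled imagery trials ("fo" vs. "gi").
# Synthetic noise stands in for real calibration recordings.
X_train = rng.standard_normal((200, N_CHANNELS, WIN))
y_train = rng.integers(0, 2, 200)
clf = LinearDiscriminantAnalysis()
clf.fit(np.array([features(w) for w in X_train]), y_train)

def gauge_level(window, target):
    """Map the decoder's confidence in the cued syllable to a 0-1 gauge fill."""
    p = clf.predict_proba(features(window)[None, :])[0]
    return float(p[target])

# One simulated feedback update for a participant cued to imagine "fo" (class 0).
level = gauge_level(rng.standard_normal((N_CHANNELS, WIN)), target=0)
print(f"gauge fill: {level:.2f}")
```

The key design point is the feedback signal itself: rather than a binary right/wrong cue, the gauge exposes a graded confidence, which is what allows participants to adjust their imagery strategy from trial to trial.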

Significant Improvement

Despite considerable variability in performance and learning across individuals, a significant overall improvement in controlling the interface was observed. A control experiment, in which a separate group of volunteers received irregular visual feedback, confirmed that effective learning occurred only with continuous feedback on decoded brain activity, as in the main experiment. This learning process was accompanied by changes in neural activity associated with speech.

“The improvement in performance was linked to an increase in EEG power in the frontal region, specifically associated with theta waves, as well as a focal enhancement in the left temporal region associated with gamma waves,” says Kinkini Bhadra, a postdoctoral researcher in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine and first author of the study.
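Band-specific EEG power of the kind reported here is conventionally estimated from a power spectral density. A minimal sketch, assuming a Welch estimate, a 250 Hz sampling rate, and standard band limits of 4-8 Hz for theta and 30-80 Hz for gamma (none of which are stated in the article); the noise signal merely stands in for one recorded channel:

```python
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed)

def band_power(signal, fs, lo, hi):
    """Mean power spectral density in the [lo, hi] Hz band (Welch estimate)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

rng = np.random.default_rng(1)
eeg = rng.standard_normal(10 * FS)  # 10 s of noise standing in for one channel

theta = band_power(eeg, FS, 4, 8)    # theta band, frontal increase in the study
gamma = band_power(eeg, FS, 30, 80)  # gamma band, left temporal increase in the study
print(theta, gamma)
```

Comparing such band-power values before and after training, per region, is one standard way to quantify the learning-related changes the quote describes.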

This work highlights the previously underestimated importance of individual training in the use of brain-machine interfaces. It also identifies the brain regions involved in the production of imagined speech, a crucial factor for optimizing electrode placement in future interfaces. The next phase of the study will involve applying this method to individuals with aphasia, with the aim of developing a tool to accelerate their recovery. This research will be conducted in collaboration with the Neuro-Rehabilitation Department at Geneva AV¶ÌÊÓÆµ Hospitals (HUG).

24 Mar 2025
