Researchers have made a major advance in converting brain signals into audible speech with high accuracy. Using brain implants and artificial intelligence, the team mapped brain activity onto speech in patients with epilepsy. The technology could eventually restore communication to people who are paralyzed and unable to speak, effectively giving them a voice. The researchers see the result as a significant step forward for brain-computer interfaces.
Combining brain implants with AI, the researchers predicted spoken words with 92% to 100% accuracy. The experiments focused on people without paralysis who had temporary brain implants; by analyzing their brain activity, the team decoded the words they spoke aloud. For now, the technology is limited to predicting individual words, but the aim is to eventually decode entire sentences and paragraphs from brain activity.
According to Julia Berezutskaya, the study's lead author and a researcher at Radboud University's Donders Institute for Brain, Cognition and Behaviour and UMC Utrecht, the work marks a promising development for brain-computer interfaces. Using brain implants in epilepsy patients, Berezutskaya and her colleagues at UMC Utrecht and Radboud University were able to infer what those patients intended to say.
The ultimate goal, Berezutskaya says, is to make this technology available to patients in a locked-in state, who are paralyzed and unable to communicate. These individuals have lost the muscle control needed to speak; by building a brain-computer interface that reads their brain activity directly, the team hopes to restore their ability to express themselves.
In the new publication, the researchers worked with people who were not paralyzed but had temporary brain implants. While their brain activity was recorded, the participants were asked to say a set of words out loud.
Limitations
"There are still several limitations at present," says Berezutskaya. "In our experiments, we asked participants to say twelve words out loud, and those were the exact words we tried to detect."

"In general, predicting individual words is less complex than predicting whole sentences. In the future, large language models used in AI research could be helpful here."

"Our ultimate goal is to predict complete sentences and paragraphs based solely on the brain activity of people as they try to speak. Getting there will require more experiments, better implants, larger datasets and more advanced AI models."
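The article does not describe the team's decoding pipeline in detail. As a rough illustration of what twelve-way word prediction from brain activity involves, the sketch below trains a simple linear classifier on synthetic, electrode-like features. The feature dimensions, trial counts, and choice of classifier are all assumptions for illustration; this is not the authors' method.

```python
# Illustrative sketch only: a 12-way word classifier on synthetic "brain activity"
# features. Dimensions, classifier, and data are assumptions, not the study's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_words = 12          # the study asked participants to speak twelve words
trials_per_word = 20  # hypothetical number of repetitions per word
n_features = 64       # hypothetical number of electrode/time features

# Synthetic features: each word gets its own mean activity pattern plus noise,
# standing in for recordings from speech-related cortex.
word_templates = rng.normal(size=(n_words, n_features))
X = np.vstack([
    word_templates[w] + 0.8 * rng.normal(size=(trials_per_word, n_features))
    for w in range(n_words)
])
y = np.repeat(np.arange(n_words), trials_per_word)

# A plain linear classifier with cross-validation; chance level is 1/12, about 8%.
clf = LogisticRegression(max_iter=2000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f} (chance: {1/n_words:.2f})")
```

The point of the sketch is only that predicting one of a small, fixed set of words is a standard classification problem; decoding open-ended sentences, as the researchers intend, is a far harder sequence-prediction task, which is where large language models could help.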
Source: https://neurosciencenews.com/ai-brain-waves-words-23843/
