Meta’s Non-Invasive Brain-Computer Interface Predicts What You Are Hearing

"For some, brain-computer interfaces may be the key to restoring communication"

Artificial intelligence is, without a doubt, a champion of the deprived and differently-abled. If the many medical use cases are not enough, here is one more. Meta recently announced a brain-computer interface application that writes words onto a screen for people who are hearing impaired. Meta's brain-computer interface, which can interpret brain waves through non-invasive technology, has been in development since 2017, when it was described as an experiment in typing more than 100 words per minute without manually typing the words or using speech-to-text transcription services. As reported by TechCrunch, the experiment began with one question: "What if you could type directly from your brain?" The project is now at an interesting juncture, extending this technology to decipher the sounds a person hears. Jean-Remi King, a research scientist at Meta, told TIME, "There are a host of bad things that can rob someone of their ability to speak — but for some, brain-computer interfaces may be the key to restoring communication."

Reading brain waves is not a novel concept; the difference is that it has previously been achieved using brain implants. UCSF neurosurgeon Edward Chang successfully developed an experimental neuroprosthesis, a rare medical technology that could decode the brain activity of a paralyzed person. However, this was achieved through brain implants, an invasive procedure few can withstand. Meta writes in its blog, "These devices provide clearer signals than non-invasive methods but require neurosurgical interventions. While results from that work suggest that decoding speech from recordings of brain activity is feasible, decoding speech with non-invasive approaches would provide a safer and scalable solution." The model was evaluated on EEG and MEG recordings, which capture the electrical and magnetic activity of the brain, from 169 healthy volunteers who listened to Dutch and English audiobooks for over 150 hours in total. As per the study the team published as a preprint, they used an open-source algorithm created to analyze already existing data.

Compared to invasive brain-computer interface devices, non-invasive BCIs have a considerably worse signal-to-noise ratio. Because the sensors sit outside the skull, with skin and bone between them and the brain, the signal is heavily corrupted, and sophisticated processing is needed to recover it. Moreover, how the brain represents a particular language remains a largely unexplained phenomenon. Meta aims to crack these two hurdles by assigning the job to an AI system that learns to align representations of speech with representations of the brain activity recorded in response to that speech. Despite achieving encouraging results in decoding speech from non-invasive recordings, the system is not yet ready for practical application. In addition, Meta's BCI will have to be extended to speech production to enable patients to communicate.
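The alignment idea above can be illustrated with a contrastive objective: within a batch of paired windows, the embedding of a brain recording should be more similar to the embedding of the speech it was recorded in response to than to any other speech window. The following is only a minimal NumPy sketch of such a CLIP-style contrastive loss; all names, dimensions, the toy data, and the linear projections are illustrative assumptions, not Meta's actual architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
n_windows = 8      # paired (speech, brain) windows in a batch
speech_dim = 16    # dimensionality of speech-model features
brain_dim = 32     # dimensionality of EEG/MEG sensor features
shared_dim = 12    # shared embedding space

# Toy paired data: each brain window is a noisy linear mix of its speech window.
speech = rng.normal(size=(n_windows, speech_dim))
mixing = rng.normal(size=(speech_dim, brain_dim))
brain = speech @ mixing + 0.1 * rng.normal(size=(n_windows, brain_dim))

# Linear projections into the shared space (random here; in a real system
# they would be trained to minimise the contrastive loss below).
W_speech = rng.normal(size=(speech_dim, shared_dim))
W_brain = rng.normal(size=(brain_dim, shared_dim))

def l2_normalise(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def contrastive_loss(speech, brain, W_speech, W_brain, temperature=0.1):
    """InfoNCE-style loss: each row's correct match is on the diagonal
    of the similarity matrix between speech and brain embeddings."""
    s = l2_normalise(speech @ W_speech)
    b = l2_normalise(brain @ W_brain)
    logits = (s @ b.T) / temperature                      # (n_windows, n_windows)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

loss = contrastive_loss(speech, brain, W_speech, W_brain)
```

Minimising this loss over many batches is what would pull matching speech and brain representations together while pushing mismatched pairs apart; decoding then amounts to picking the speech segment whose embedding best matches a new brain recording.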

