Artificial Intelligence Helps Researchers Create Speech from Brain Signals

Recently, three separate research teams published studies on the use of artificial intelligence (AI), particularly neural networks, to create speech from brain signals. The studies reported convincing outcomes, producing identifiable sounds up to 80 percent of the time. At a high level, each study followed these initial steps:

•  Participants first had their brain signals measured while they read aloud or listened to specific words.

•  The data was then fed to a neural network, which learned to interpret the brain signals and reconstructed sounds that listeners could attempt to recognize (a minimal sketch of this pipeline appears after this list).

•  The outcomes point to a hopeful future for brain-computer interfaces (BCIs), in which thought-based communication is shifting from the domain of fiction to present-day reality.
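The general pipeline in these studies can be pictured as a supervised mapping from recorded brain activity to audio features, which are later turned into audible sound. The sketch below is a hypothetical illustration only: the variable names, array shapes, and the scikit-learn MLPRegressor model are assumptions made for demonstration, not the actual architectures or data used in any of the studies.

```python
# Hypothetical sketch of a decode-then-synthesize pipeline (not from the studies).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in data: 1,000 time windows of recorded brain activity (e.g. 128
# electrode features per window) paired with the audio spectrogram frames
# (e.g. 80 frequency bins) captured while the participant spoke or listened.
brain_features = rng.normal(size=(1000, 128))   # placeholder neural recordings
audio_frames = rng.normal(size=(1000, 80))      # placeholder spectrogram frames

# Train a small neural network to map brain activity to audio features.
decoder = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
decoder.fit(brain_features, audio_frames)

# At inference time, new brain recordings are decoded into spectrogram frames;
# a separate vocoder would then turn those frames into an audible waveform.
new_brain_activity = rng.normal(size=(10, 128))
predicted_frames = decoder.predict(new_brain_activity)
print(predicted_frames.shape)  # (10, 80)
```

The design choice worth noting is that the model only predicts intermediate audio features; converting those features into intelligible sound, and judging intelligibility with human listeners, are separate steps in the studies described below.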

The Concept Behind Human Brain Audio Decoding

The idea of linking the human brain to technology is not new. In recent years, several breakthroughs have been achieved, including enabling paralyzed people to operate tablets using their brain waves.

Additionally, SpaceX CEO Elon Musk has drawn a significant spotlight to the subject with Neuralink, his BCI company, which aspires to merge human cognition with technology.

BCI technology will undoubtedly expand and open new paths for brain-machine communication.

Remarkable Observations and Revelations from the Studies
First Study

•  The first study was conducted by researchers from Columbia University and Hofstra Northwell School of Medicine, both based in New York.

•  During the research, brain signals from the auditory cortices of five participants with epilepsy were recorded while they listened to stories and numbers being read.

•  The team handed the signal data to a neural network for analysis, which then reconstructed audio files.

•  Listeners correctly recognized the audio files 75 percent of the time.

Second Study

•  The second study was conducted by the University of Bremen (Germany), Maastricht University (Netherlands), Northwestern University (Illinois), and Virginia Commonwealth University (Virginia) collectively.

•  Brain signals were collected from the speech-planning and motor areas of six patients while they underwent tumor surgeries.

•  Each patient read specific words aloud while the data was collected.

•  After the brain data and audio data were used to train the team's neural network, the program was given brain signals not included in the training set and asked to recreate the audio; the resulting words were recognizable 40 percent of the time (a minimal sketch of this kind of held-out evaluation follows this list).
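The held-out test described above can be pictured as a standard train/test split: the decoder never sees the test recordings during training and must recreate audio from unfamiliar brain signals. The snippet below is a hypothetical illustration; the Ridge linear decoder, array shapes, and R^2 score are stand-ins, whereas the actual study used a neural network and judged the recreated words by listener recognizability.

```python
# Hypothetical sketch of a held-out evaluation (illustrative shapes and data only).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
brain = rng.normal(size=(600, 128))  # placeholder recordings from speech/motor areas
audio = rng.normal(size=(600, 80))   # placeholder spectrogram frames of spoken words

# Hold out 20% of the (brain, audio) pairs; the decoder is trained only on the rest.
X_train, X_test, y_train, y_test = train_test_split(
    brain, audio, test_size=0.2, random_state=1
)

decoder = Ridge(alpha=1.0).fit(X_train, y_train)   # simple linear decoder
reconstruction = decoder.predict(X_test)           # audio predicted from unseen signals

# A numeric score stands in here; the study instead asked listeners whether
# the recreated words were recognizable.
print("held-out R^2:", decoder.score(X_test, y_test))
```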

Third Study

•  The third study was conducted by the University of California, San Francisco.

•  Three participants with epilepsy were asked to read text aloud while brain activity was captured from the speech and motor areas of their brains.

•  The speech generated from the neural network's interpretation of the signal readings was presented to a group of 166 people, who were asked to identify the sentences in a multiple-choice test.

•  Some of those sentences were identified with 80 percent accuracy.

A Slightly Clouded Future with Challenges Ahead

•  Although the above outcomes encourage us to see only the positives, there are still several hurdles on the road to connecting human brains with computers.

•  Neural networks need to be trained separately for each individual, because the neural signal patterns produced in the brain vary from person to person.

•  Because the algorithms are heavily data-driven, the best results require the best data possible; in other words, more precise neural signals must be obtained, which currently can only happen by placing electrodes in the brain itself.

•  Large-scale data collection for research is restricted, as it currently relies on voluntary participation and consent to analysis.

•  Unlike the voluntary participants in the research above, patients who are unable to speak add to the difficulty of analyzing the brain's speech signals, since there is no spoken audio to pair with their recordings.

•  The matter is likely to become even more complicated when it comes to measuring the difference between brain signals produced during actual speech and those produced while merely thinking about speech.
