Decoding brain waves to eavesdrop on what we hear

January 31, 2012

Neuroscientists may one day be able to hear the imagined speech of a patient unable to speak due to stroke or paralysis, according to University of California, Berkeley, researchers.

These scientists have succeeded in decoding electrical activity in the brain's temporal lobe – the seat of the auditory system – as a person listens to normal conversation. Based on this correlation between sound and brain activity, they were then able to predict the words the person had heard solely from the temporal lobe activity.

"This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig's disease and can't speak," said co-author Robert Knight, a UC Berkeley professor of psychology and neuroscience. "If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit."

"This research is based on sounds a person actually hears, but to use it for reconstructing imagined conversations, these principles would have to apply to someone's internal verbalizations," cautioned first author Brian N. Pasley, a post-doctoral researcher in the center. "There is some evidence that hearing the sound and imagining the sound activate similar areas of the brain. If you can understand the relationship well enough between the brain recordings and sound, you could either synthesize the actual sound a person is thinking, or just write out the words with a type of interface device."

These are frequency spectrograms of the actual spoken words (top) and the sounds as reconstructed by two separate models based solely on recorded temporal lobe activity in a volunteer subject. The words -- Waldo, structure, doubt and property -- are more or less recognizable, even though the model had never encountered these specific words before. Credit: Brian Pasley, UC Berkeley

In addition to the potential for expanding the communication ability of the severely disabled, he noted, the research also "is telling us a lot about how the brain in normal people represents and processes speech sounds."

Pasley and his colleagues at UC Berkeley, UC San Francisco, University of Maryland and The Johns Hopkins University report their findings Jan. 31 in the open-access journal PLoS Biology.

Help from epilepsy patients

They enlisted the help of people undergoing brain surgery to determine the location of intractable seizures so that the affected area could be removed in a second surgery. Neurosurgeons typically cut a hole in the skull and safely place electrodes on the brain surface or cortex – in this case, up to 256 electrodes covering the temporal lobe – to record activity over a period of a week to pinpoint the seizures. For this study, 15 neurosurgical patients volunteered to participate.

Pasley visited each person in the hospital to record the brain activity detected by the electrodes as they heard 5-10 minutes of conversation. Pasley used this data to reconstruct and play back the sounds the patients heard. He was able to do this because there is evidence that the brain breaks down sound into its component acoustic frequencies – for example, from a low of about 1 Hertz (cycle per second) to a high of about 8,000 Hertz – that are important for speech sounds.
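To illustrate the kind of decomposition described above, here is a minimal spectrogram sketch in plain NumPy: a short-time Fourier transform breaks a signal into its component frequencies over time, keeping only the roughly 1 Hz to 8,000 Hz band the article mentions. The window sizes and the test tone are illustrative assumptions, not details from the study.

```python
import numpy as np

def spectrogram(signal, fs, win_len=256, hop=128):
    """Break a sound into its component frequencies over time -
    roughly the representation the article says the brain computes."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frame = signal[start:start + win_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    spec = np.array(frames).T                      # (frequencies, time)
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    band = (freqs >= 1) & (freqs <= 8000)          # speech-relevant band
    return freqs[band], spec[band]

# Example: a 1-second 440 Hz tone sampled at 16 kHz
fs = 16000
t = np.arange(fs) / fs
freqs, spec = spectrogram(np.sin(2 * np.pi * 440 * t), fs)

# Energy should peak near 440 Hz (within one frequency bin, fs/win_len = 62.5 Hz)
peak = freqs[np.argmax(spec.mean(axis=1))]
```

Real speech decoders typically use finer, perceptually spaced frequency bands, but the principle is the same: the time-frequency representation is the intermediate target that the neural recordings are mapped onto.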

Pasley tested two different computational models to match spoken sounds to the pattern of activity in the electrodes. The patients then heard a single word, and Pasley used the models to predict the word based on electrode recordings.

"We are looking at which cortical sites are increasing activity at particular acoustic frequencies, and from that, we map back to the sound," Pasley said. He compared the technique to a pianist who knows the sounds of the keys so well that she can look at the keys another pianist is playing in a sound-proof room and "hear" the music, much as Ludwig van Beethoven was able to "hear" his compositions despite being deaf.
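The mapping Pasley describes, from cortical activity at particular frequencies back to sound, is often done with a linear reconstruction model. The sketch below is a generic illustration of that idea using simulated data and closed-form ridge regression; the study's actual models, electrode counts, and fitting procedures differ, and all dimensions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 64 electrodes, 32 spectrogram frequency bins, 2000 time points
n_elec, n_freq, n_time = 64, 32, 2000

# Simulated ground truth: each frequency bin is a linear mixture of electrode signals
true_weights = rng.normal(size=(n_elec, n_freq))
neural = rng.normal(size=(n_time, n_elec))                      # recorded activity
spec = neural @ true_weights + 0.1 * rng.normal(size=(n_time, n_freq))

# Fit a linear decoder on the first half of the data (ridge regression, closed form)
train, test = slice(0, n_time // 2), slice(n_time // 2, n_time)
lam = 1.0
X, Y = neural[train], spec[train]
weights = np.linalg.solve(X.T @ X + lam * np.eye(n_elec), X.T @ Y)

# Reconstruct the held-out spectrogram from neural activity alone
recon = neural[test] @ weights
corr = np.corrcoef(recon.ravel(), spec[test].ravel())[0, 1]
```

Because the decoder is evaluated on time points it never saw during fitting, the held-out correlation is the honest measure of how well brain activity predicts sound.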

The better of the two methods was able to reproduce a sound close enough to the original word for Pasley and his fellow researchers to correctly guess the word.
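Guessing the word from a reconstruction can be framed as template matching: compare the reconstructed spectrogram against a spectrogram of each candidate word and pick the best correlate. This is a simplified stand-in for the researchers' evaluation, with made-up templates for the four words shown in the figure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical spectrogram templates (freq x time) for four candidate words
words = ["waldo", "structure", "doubt", "property"]
templates = {w: rng.normal(size=(32, 50)) for w in words}

# A noisy reconstruction of "doubt", as a decoder might produce
recon = templates["doubt"] + 0.5 * rng.normal(size=(32, 50))

def best_match(recon, templates):
    """Return the word whose template correlates best with the reconstruction."""
    scores = {w: np.corrcoef(recon.ravel(), t.ravel())[0, 1]
              for w, t in templates.items()}
    return max(scores, key=scores.get)

guess = best_match(recon, templates)
```

A realistic system would compare against many words heard only once, which is why the article stresses single-trial testing.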

"We think we would be more accurate with an hour of listening and recording and then repeating the word many times," Pasley said. But because any realistic device would need to accurately identify words heard the first time, he decided to test the models using only a single trial.

"This research is a major step toward understanding what features of speech are represented in the human brain," Knight said. "Brian's analysis can reproduce the sound the patient heard, and you can actually recognize the word, although not at a perfect level."

Knight predicts that this success can be extended to imagined, internal verbalizations, because scientific studies have shown that when people are asked to imagine speaking a word, similar brain regions are activated as when the person actually utters the word.

"With neuroprosthetics, people have shown that it's possible to control movement with brain activity," Knight said. "But that work, while not easy, is relatively simple compared to reconstructing language. This experiment takes that earlier work to a whole new level."

Based on earlier work with ferrets

The current research builds on work by other researchers about how animals encode sounds in the brain's auditory cortex. In fact, some researchers, including the study's coauthors at the University of Maryland, have been able to guess the words that scientists read aloud to ferrets, based solely on recordings from the ferrets' brains, even though the ferrets were unable to understand the words.

The ultimate goal of the UC Berkeley study was to explore how the human brain encodes speech and determine which aspects of speech are most important for understanding.

"At some point, the brain has to extract away all that auditory information and just map it onto a word, since we can understand speech and words regardless of how they sound," Pasley said. "The big question is, what is the most meaningful unit of speech? A syllable, a phone, a phoneme? We can test these hypotheses using the data we get from these recordings."

More information: Pasley BN, David SV, Mesgarani N, Flinker A, Shamma SA, et al. (2012) Reconstructing Speech from Human Auditory Cortex. PLoS Biol 10(1): e1001251. doi:10.1371/journal.pbio.1001251


