Look at what I'm saying: Engineers show brain depends on vision to hear

September 4, 2013, University of Utah
University of Utah bioengineers measured electrical signals on the brain surface of four adults with epilepsy to understand how the brain processes speech. Their findings show our understanding of language may depend more heavily on vision than previously thought. Credit: Department of Neurosurgery, University of Utah

University of Utah bioengineers discovered our understanding of language may depend more heavily on vision than previously thought: under the right conditions, what you see can override what you hear. These findings suggest artificial hearing devices and speech-recognition software could benefit from a camera, not just a microphone.

"For the first time, we were able to link the auditory signal in the brain to what a person said they heard when what they actually heard was something different. We found vision is influencing the hearing part of the brain to change your perception of reality—and you can't turn off the illusion," says the new study's first author, Elliot Smith, a bioengineering and neuroscience graduate student at the University of Utah. "People think there is this tight coupling between the world around us and what we experience subjectively, and that is not the case."

The brain considers both sight and sound when processing speech. However, if the two are slightly different, visual cues dominate sound. This phenomenon is named the McGurk effect after Scottish psychologist Harry McGurk, who pioneered studies of the link between hearing and vision in the 1970s. The McGurk effect has been observed for decades, but its origin in the brain has remained elusive.

In the new study, which appears today in the journal PLOS ONE, the University of Utah team pinpointed the source of the McGurk effect by recording and analyzing electrical signals in the temporal cortex, the region of the brain that typically processes sound.

Working with University of Utah bioengineering faculty member Bradley Greger and neurosurgeon Paul House, Smith recorded from the brain surfaces of four adults (two male, two female) from Utah and Idaho with severe epilepsy who were undergoing surgery to treat the condition. House placed three button-sized electrodes over the left side, the right side, or both sides of each patient's brain, depending on where each patient's seizures were thought to originate.

The four test subjects were then asked to watch and listen to videos focused on a person's mouth as it formed the syllables "ba," "va," "ga" and "tha." Depending on which of three video types was being watched, the patients had one of three possible experiences:

  • The motion of the mouth matched the sound. For example, the video showed "ba" and the audio sound also was "ba," so the patients saw and heard "ba."
  • The motion of the mouth obviously did not match the corresponding sound, like a badly dubbed movie. For example, the video showed "ga" but the audio was "tha," so the patients perceived this disconnect and correctly heard "tha."
  • The motion of the mouth was only slightly mismatched with the corresponding sound. For example, the video showed "ba" but the audio was "va," and patients heard "ba" even though the sound really was "va." This demonstrates the McGurk effect—vision overriding hearing.

By measuring the electrical signals in the brain while each video was being watched, Smith and Greger could pinpoint whether auditory or visual brain signals were being used to identify the syllable in each video. When the syllable being mouthed matched the sound, or clearly did not match it, brain activity correlated with the sound being heard. However, when the McGurk effect video was viewed, the activity pattern changed to resemble what the person saw, not what they heard. Statistical analyses confirmed the effect in all test subjects.
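The logic of this analysis can be illustrated with a toy sketch. Everything below is hypothetical: the "templates" are random vectors standing in for per-syllable patterns of temporal-cortex activity, and the trial simulation simply asserts the paper's finding (activity tracks audio in ordinary trials but shifts toward the seen syllable in McGurk trials) rather than modeling real ECoG data.

```python
import numpy as np

# Hypothetical neural-response "templates" for each syllable. In the real
# study these would be derived from ECoG recordings over temporal cortex;
# here they are random vectors purely to illustrate the classification idea.
rng = np.random.default_rng(0)
SYLLABLES = ["ba", "va", "ga", "tha"]
templates = {s: rng.normal(size=50) for s in SYLLABLES}

def classify(response):
    """Return the syllable whose template best correlates with the response."""
    return max(templates, key=lambda s: np.corrcoef(response, templates[s])[0, 1])

def simulate_trial(audio, video, mcgurk=False):
    """Toy trial: temporal-cortex activity tracks the audio syllable, except
    in a McGurk trial, where the pattern resembles the visually mouthed one."""
    driver = video if mcgurk else audio
    return templates[driver] + 0.1 * rng.normal(size=50)

# Congruent trial: audio "ba", video "ba" -> classified as "ba".
print(classify(simulate_trial("ba", "ba")))
# Slightly mismatched (McGurk) trial: audio "va", video "ba" ->
# the activity pattern resembles what was seen, not what was heard.
print(classify(simulate_trial("va", "ba", mcgurk=True)))
```

The key step mirrors the study's approach in spirit: decide, trial by trial, whether the recorded activity pattern looks more like the heard syllable or the seen one.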

"We've shown neural signals in the brain that should be driven by sound are being overridden by visual signals that say, 'Hear this!'" says Greger. "Your brain is essentially ignoring the physics of sound in the ear and following what's happening through your vision."

Greger was senior author of the study as an assistant professor of bioengineering at the University of Utah. He recently took a faculty position at Arizona State University.

The new findings could help researchers understand what drives language processing in humans, especially in a developing infant trying to connect sounds and lip movement to learn language. These findings also may help researchers sort out how language processing goes wrong when visual and auditory inputs are not integrated correctly, such as in dyslexia, Greger says.



2 comments


mauro48it — Sep 05, 2013
It is true; I have experienced this personally. I am Italian, and when I watch TV drama series that are not of Italian origin but dubbed, and the environment is noisy, I find the conversation more understandable without looking at the video.
beleg — Sep 08, 2013
"...visual cues dominate sound..."
You hear what you read.
What do subjects born deaf hear when reading?
