Healthy ears hear the first sound, ignoring the echoes

August 26, 2010
University of Oregon neuroscientists Terry Takahashi, left, and Brian Nelson, studying hearing in barn owls, have determined that the processing of sounds, such as a person's voice, is basically simple: We hear the direct sound and "ignore" the echoes. Credit: Photo by Jim Barlow

Voices carry, reflect off objects and create echoes. Most people rarely hear the echoes; instead they only process the first sound received. For the hard of hearing, though, being in an acoustically challenging room can be a problem. For them, echoes carry. Ever listen to a lecture recorded in a large room?

That most people only process the first-arriving sound is not new. Physicist Joseph Henry, the first secretary of the Smithsonian Institution, noted it in 1849, dubbing it the precedence effect. Since then, classrooms, lecture halls and public-gathering places have been designed to reduce reverberating sounds. And scientists have been trying to identify a precise neural mechanism that shuts down trailing echoes.

In a new paper published in the Aug. 26 issue of the journal Neuron, University of Oregon scientists Brian S. Nelson, a postdoctoral researcher, and Terry T. Takahashi, professor of biology and member of the UO Institute of Neuroscience, suggest that the filtering process is really simple.

When a sound reaching the ear is loud enough, auditory neurons simply accept that sound and ignore subsequent reverberations, Takahashi said. "If someone were to call out your name from behind you, that caller's voice would reach your ears directly from his or her mouth, but the sound waves would also bounce off your computer monitor and arrive at your ears a little later, mixed in with the direct sound. You aren't even aware of the echo."
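To put a rough number on "a little later": sound travels at about 343 m/s in room-temperature air, so a reflection's delay is simply its extra path length divided by that speed. A minimal sketch of the arithmetic (the 0.6 m monitor distance is an illustrative assumption, not a figure from the study):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_delay_ms(extra_path_m):
    """Delay of a reflection relative to the direct sound,
    given the extra distance (in meters) the reflected path travels."""
    return extra_path_m / SPEED_OF_SOUND * 1000.0

# A monitor ~0.6 m away adds roughly 1.2 m of round-trip travel:
print(round(echo_delay_ms(1.2), 1))  # about 3.5 ms
```

A few milliseconds is short enough that the reflection blends into the direct sound rather than being heard as a separate event.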

Takahashi studies hearing in barn owls with the goal of understanding the fundamentals of sound processing so that future hearing aids, for example, might be developed. In studying how his owls hear, he usually relies on clicking sounds presented one at a time.

For the new study, funded by the National Institute on Deafness and Other Communication Disorders, Nelson said: "We studied longer sounds, comparable in duration to many of the consonant sounds in human speech. As in previous studies, we showed that the sound that arrives first -- the direct sound -- evokes a neural and behavioral response that is similar to that of a single source. What makes our new study interesting is that the neural response to the reflection was not decreased in comparison to when two different sounds were presented."

The owls were subjected to two distinct sounds, direct and reflected, with the first-arriving sound causing neurons to discharge. "The owls' auditory neurons are very responsive to the leading edge of the peaks," said Takahashi, "and those leading edges in the echo are masked by the peak in the direct waveform that preceded them. The auditory cells therefore can't respond to the echo."

When the modulation of the leading sound is not deep enough and more time passes between the sounds, this simple filtering disappears and the owls respond to the sounds as coming from different locations, the researchers noted.
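The masking account above can be caricatured in a few lines of code. This is an illustrative toy model, not the authors' analysis: the sound is reduced to a smooth amplitude envelope, and a stand-in "neuron" fires on a rising edge only after the envelope has first returned to near silence, so an echo arriving while the direct sound is still loud evokes no second response.

```python
import numpy as np

def envelope(delay_ms, echo_gain=0.7, fs=10000, dur_ms=25):
    """Amplitude envelope of a direct sound plus one echo.

    The direct sound is a smooth 25 ms hump, roughly the duration of
    the consonant-like stimuli described in the article; the echo is a
    delayed, attenuated copy.  All parameter values are illustrative.
    """
    n = int(0.1 * fs)                     # 100 ms of signal
    t = np.arange(n) / fs
    dur = dur_ms / 1000.0
    direct = np.where(t < dur, np.sin(np.pi * t / dur) ** 2, 0.0)
    d = int(delay_ms / 1000.0 * fs)
    echo = np.zeros(n)
    echo[d:] = echo_gain * direct[: n - d]
    return direct + echo

def count_onsets(env, lo=0.01, hi=0.05):
    """Toy onset detector: fire when the envelope rises above `hi`,
    but only re-arm after it has dropped back below `lo`.  An echo
    arriving while the direct sound is still loud therefore produces
    no second response -- its leading edge is masked."""
    armed, count = True, 0
    for v in env:
        if armed and v > hi:
            count += 1
            armed = False
        elif v < lo:
            armed = True
    return count
```

With a 5 ms echo the model reports a single onset; push the delay past the direct sound's duration and it reports two, mirroring the owls' transition from fusing the sounds to hearing two separate sources.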

The significance, Takahashi said, is that for more than 60 years researchers have sought a physiological mechanism that actively suppresses echoes. "Our results suggest that you might not need such a sophisticated system."




Aug 28, 2010
If this theory is correct, then it would make no difference whether the sound was live in a room or was recorded in the lively room and then played back, say on headphones or in an acoustically dead room (so that no additional echoes are added to the original).

But this is not so. No matter how high the fidelity of the reproduction, echoes are heard on the reproduced sound but not on the original live sound, indicating that amplitude alone is an insufficient cue and cannot account for echo suppression.

It is also noteworthy that the hard of hearing often wear hearing aids that apply dynamic compression to the audio signal, making loud and soft sounds less differentiated; thus only listeners not wearing hearing aids should be considered.

A more lucid approach will note that people only have to make a sound themselves, e.g. speak, for the nature of a room's acoustics to be known and compensated for.
Aug 28, 2010
In other words, per-room acoustic compensation can occur, provided a person can gain sufficient feedback from self-generated acoustic sources.

That is why you can carry on a conversation as you walk from an acoustically dead lounge room to a highly reflective bathroom, provided you have experience with those rooms. People will note that on first exposure to an unfamiliar reflective or dead room, the sound will appear livelier or deader than anticipated.

We tend to process the otherwise unused echo into very simple echo-location that gives the impression of liveliness or deadness of sound or, more fundamentally, gives an impression of the size of the space we are in. That is how you could get an idea of the size of a cave you are in if you entered in pitch dark. Thus echo sounds are not, as the article claims, 'suppressed', but instead are redirected to areas of the brain that can synthesize spatial information from the echo information.
