Want to listen better? Lend a right ear

December 6, 2017
Figure: An example of dichotic digit stimuli presentation, with 'A' binaural separation (i.e., directed-ear) and 'B' binaural integration (i.e., free recall) instructions. Credit: Sacchinelli, Weaver, Wilson and Cannon / Auburn University

Listening is a complicated task. It requires sensitive hearing and the ability to process information into cohesive meaning. Add everyday background noise and constant interruptions by other people, and the ability to comprehend what is heard becomes that much more difficult.

Audiology researchers at Auburn University in Alabama have found that in such demanding environments, both children and adults depend more on their right ear for processing and retaining what they hear.

Danielle Sacchinelli will present this research with her colleagues at the 174th Meeting of the Acoustical Society of America, which will be held in New Orleans, Louisiana, Dec. 4-8.

"The more we know about listening in demanding environments, and listening effort in general, the better diagnostic tools, auditory management (including hearing aids) and auditory training will become," Sacchinelli said.

The research team's work is based on dichotic listening tests, used to diagnose, among other conditions, auditory processing disorders in which the brain has difficulty processing what is heard.

In a standard dichotic test, listeners receive different auditory inputs delivered to each ear simultaneously. The items are usually sentences (e.g., "She wore the red dress"), words or digits. Listeners either pay attention to the items delivered in one ear while dismissing the words in the other (i.e., separation), or are required to repeat all words heard (i.e., integration).
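The two instruction sets can be summarized in a toy scoring sketch. This is purely illustrative, assuming a digits version of the task; the function names, scoring rules, and digit lists below are not from the study:

```python
# Illustrative sketch of scoring a dichotic digits trial under
# "separation" (directed-ear) and "integration" (free recall)
# instructions. Not the researchers' actual protocol or software.

def score_separation(left, right, attended, response):
    """Proportion of attended-ear items correctly reported, in order."""
    target = left if attended == "left" else right
    return sum(r == t for r, t in zip(response, target)) / len(target)

def score_integration(left, right, response):
    """Proportion of all presented items reported, order-free."""
    presented = list(left) + list(right)
    hits = 0
    for item in response:
        if item in presented:
            presented.remove(item)  # each presented item counts only once
            hits += 1
    return hits / (len(left) + len(right))

# Example trial: three digit pairs delivered simultaneously, one per ear.
left_ear = [2, 7, 5]
right_ear = [9, 1, 4]

print(score_separation(left_ear, right_ear, "right", [9, 1, 4]))   # 1.0
print(score_integration(left_ear, right_ear, [2, 9, 7, 1, 5]))     # 5/6
```

Increasing the list length by one item per trial, as the study did, would simply mean growing `left_ear` and `right_ear` until performance drops.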

According to the researchers, children understand and remember what is being said much better when they listen with their right ear.

Sounds entering the right ear are processed by the left side of the brain, which controls speech, language development, and portions of memory. Each ear hears separate pieces of auditory information, which are then combined during processing throughout the auditory system.

However, young children's auditory systems cannot sort and separate the simultaneous information arriving from both ears. As a result, they rely heavily on their right ear to capture sounds and language because that pathway is more efficient.

What is less understood is whether this right-ear dominance is maintained through adulthood. To find out, Sacchinelli's research team asked 41 participants ages 19-28 to complete both dichotic separation and integration listening tasks.

With each subsequent test, the researchers increased the number of items by one. They found no significant differences between left- and right-ear performance at or below an individual's simple memory capacity. However, when the item lists exceeded an individual's memory span, participants' performance improved by an average of 8 percent (up to 40 percent for some individuals) when they focused on their right ear.

"Conventional research shows that right-ear advantage diminishes around age 13, but our results indicate this is related to the demand of the task. Traditional tests include four-to-six pieces of information," said Aurora Weaver, assistant professor at Auburn University and member of the research team. "As we age, we have better control of our attention for processing information as a result of maturation and our experience."

In essence, ear differences in processing ability do not show up on tests using only four items, because the mature auditory system can handle that much information with either ear.

"Cognitive skills, of course, are subject to decline with advanced aging, disease, or trauma," Weaver said. "Therefore, we need to better understand the impact of cognitive demands on listening."


More information: Abstract: 3aPPa3: "Does the right ear advantage persist in mature auditory systems when cognitive demand for processing increases?" by Danielle M. Sacchinelli, Dec. 6, 2017, in Studios Foyer (poster sessions) in the New Orleans Marriott. asa2017fall.abstractcentral.com/s/u/J4DDi4sip_s
