Brain training can improve our understanding of speech in noisy places

October 19, 2017, Cell Press
Photograph of Daniel Polley in the lab. Credit: John Earle / Mass. Eye and Ear

For many people with hearing challenges, trying to follow a conversation in a crowded restaurant or other noisy venue is a major struggle, even with hearing aids. Now researchers reporting in Current Biology on October 19th have some good news: time spent playing a specially designed, brain-training audiogame could help.

In fact, after playing the game, hearing-impaired older adults correctly made out 25 percent more words in the presence of high levels of background noise. The training provided about three times the benefit of hearing aids alone.

"These findings underscore that understanding in noisy listening conditions is a whole brain activity, and is not strictly governed by the ear," said Daniel Polley of Massachusetts Eye and Ear and Harvard Medical School. "The improvements in speech intelligibility following closed-loop audiomotor perceptual training did not arise from an improved signal being transferred from the ear to the brain. Our subjects' hearing, strictly speaking, did not get better." And yet, their ability to make sense of what they'd heard did.

Those improvements reflect better use of other cognitive resources, including selective auditory attention, Polley explained. In other words, participants were better able to filter out noise and distinguish between a target speaker and background distractions.

The study enrolled 24 older adults with an average age of 70. All participants had mild to severe hearing loss and had worn hearing aids for an average of seven years. Participants were randomly assigned to one of two training groups, and members of both groups were asked to spend 3.5 hours per week for eight weeks playing a game. One group played a game designed to improve players' ability to follow conversations: it challenged them to monitor subtle deviations between predicted and actual auditory feedback as they moved a fingertip through a virtual soundscape. As a "placebo" control, the other group played a game that challenged players' auditory working memory and wasn't expected to help with speech intelligibility.

The study was designed so that the 24 participants and the researchers did not know who trained with the audiogame programmed for therapeutic benefit and who trained with a "placebo" game without therapeutic intent. Participants from each group reported equivalent expectations that their speech understanding would be improved.

People in both groups improved on their respective auditory training tasks. Despite their comparable expectations, individuals who played the working memory game showed no improvement in their ability to make out words, nor on other working memory tasks. The other group showed marked improvements, correctly identifying 25 percent more words in spoken sentences or digit sequences presented in high levels of background noise. Those gains in speech intelligibility could also be predicted from the accuracy with which those individuals played the game.

Those benefits didn't persist in the absence of continuing practice, the researchers report. However, they say, the findings show that "perceptual learning on a computerized audiogame can transfer to 'real world' communication challenges." Polley envisions a time when hearing challenges might be managed through a combination of auditory training software coupled with the latest in-ear listening devices.

"We look forward to a future where auditory perceptual training software that has been inspired by principles of brain plasticity, not audiological testing, is packaged with new advances in these listening devices," he said. "There is reason to believe that the sum of these benefits would be greater than could be expected from any one approach applied in isolation."

More information: Whitton et al., "Audiomotor Perceptual Training Enhances Speech Intelligibility in Background Noise," Current Biology (2017). DOI: 10.1016/j.cub.2017.09.014, http://www.cell.com/current-biology/fulltext/S0960-9822(17)31178-8


