Computer algorithms reveal how the brain processes and perceives sound in noisy environments

July 19, 2013, National Science Foundation

Every day the human brain encounters a cacophony of sounds, often all simultaneously competing for its attention.

In the cafeteria, at the mall, in the street, at the playground, even at home, the noises are everywhere. People talk. Music plays. Babies cry. Cars drive by with their windows open and radios on. Fire engine sirens scream. Dogs bark. Lawn mowers roar. The list is endless.

With all of this going on, how does the brain sort out what is important, and what it needs to hear?

That's what Mounya Elhilali is trying to find out.

"We want to understand how the brain processes and perceives sounds in , when there are lots of background sounds," says Elhilali, an assistant professor of electrical and at the center for language and at the Johns Hopkins University. "The ultimate goal is to learn how the brain adapts to different acoustic environments."

Elhilali, who works at the interface of science and engineering, hopes to use what she learns from basic research to design better products that will enhance communication.

Two processes, known as "bottom-up" and "top-down," occur in the brain when it is exposed to a wide range of sounds, Elhilali says. Hearing many different sounds in a room is "bottom-up," that is, "driven by the sounds around you," while zeroing in on a particular sound, such as a conversation, is "top-down," that is, "controlled by your state of mind." She adds: "We are trying to understand the interaction between these two processes."
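In engineering terms, the distinction can be sketched in a few lines of code. The toy Python example below is purely illustrative and is not code from Elhilali's group; the signals, the 440 Hz "target," and the thresholds are all invented. It models bottom-up salience as an abrupt jump in frame energy (the scene grabs attention) and top-down listening as a matched filter tuned to a sound the listener expects (the mind directs attention).

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 8192                  # sample rate; a power of two so frames tile exactly
    frame = 256
    t = np.arange(fs) / fs     # roughly one second of audio

    # Stand-in for a busy scene: broadband background noise throughout
    scene = 0.3 * rng.standard_normal(fs)
    # A 440 Hz tone switching on at a frame boundary -- the "new" sound
    on = 16 * frame
    scene[on:] += 0.8 * np.sin(2 * np.pi * 440 * t[: fs - on])

    starts = range(0, fs - frame + 1, frame)

    # Bottom-up: stimulus-driven salience, modeled as a sudden jump
    # in short-term energy from one frame to the next
    energy = np.array([np.sum(scene[i:i + frame] ** 2) for i in starts])
    onsets = [k for k in range(1, len(energy)) if energy[k] > 3 * energy[k - 1]]
    print("bottom-up: abrupt change at frame(s)", onsets)

    # Top-down: the listener's goal biases processing toward an expected
    # target, modeled as a quadrature matched filter tuned to 440 Hz
    n = np.arange(frame)
    tpl_sin = np.sin(2 * np.pi * 440 * n / fs)
    tpl_cos = np.cos(2 * np.pi * 440 * n / fs)
    match = [np.hypot(np.dot(scene[i:i + frame], tpl_sin),
                      np.dot(scene[i:i + frame], tpl_cos)) for i in starts]
    print("top-down: strongest match to the target at frame",
          int(np.argmax(match)))

The first detector fires on any abrupt change, no matter what the listener wants; the second responds only where the scene resembles the expected target. Elhilali's question is how the brain combines the two.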

She and her research colleagues will take this information and construct mathematical models of how the brain perceives sounds in a complex environment "and build better machines (computers) that can hear like a human can hear," she says. "We are trying to build a computer brain that can process sounds in noisy environments like the human brain, so you can talk to the computer and it can understand what you are saying."
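As a rough illustration of one thing such a machine listener must do, the Python sketch below (again invented for this article, not taken from the research) uses classic spectral subtraction: it estimates the background spectrum from a stretch assumed to contain no target, strips that estimate from every frame of a noisy recording, and reports the improvement in signal-to-noise ratio for a stand-in "speech" tone.

    import numpy as np

    rng = np.random.default_rng(1)
    fs, frame = 8192, 256          # power-of-two length so frames tile exactly
    t = np.arange(fs) / fs

    # Stand-in for speech: a 200 Hz tone present only in the second half
    clean = np.sin(2 * np.pi * 200 * t) * (t >= 0.5)
    noisy = clean + 0.5 * rng.standard_normal(fs)

    # "Learn" the background: average magnitude spectrum of frames drawn
    # from a stretch assumed to be target-free (the first half here)
    noise_mag = np.mean([np.abs(np.fft.rfft(noisy[i:i + frame]))
                         for i in range(0, fs // 2 - frame + 1, frame)], axis=0)

    # Spectral subtraction: strip the estimated background from each frame
    out = np.zeros(fs)
    for i in range(0, fs - frame + 1, frame):
        spec = np.fft.rfft(noisy[i:i + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        out[i:i + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)

    def snr(x):
        return 10 * np.log10(np.sum(clean ** 2) / np.sum((x - clean) ** 2))

    print(f"SNR before: {snr(noisy):.1f} dB   after: {snr(out):.1f} dB")

The catch, and the motivation for brain-inspired models, is that this approach assumes the background is known and stationary. Real scenes offer no such guarantee, yet human listeners cope with them effortlessly.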

Ultimately, the research has the potential to benefit numerous engineering and communication applications: in cell phones, within industry, on the battlefield, in designing new and better hearing aids, and in improving the automated telephone systems that many consumers currently find both annoying and frustrating.

"The knowledge generated from our work should be able to target those robotic systems," she says. "The reason they don't work now is because they were not designed to deal with complex and unknown environments. They were designed to work in a controlled laboratory, so the system just falls apart."

The research ultimately aims to improve communication among humans or between humans and machines. "We are hoping to learn from the biology to design better systems," she says. "Performance of hearing systems and speech technologies can benefit greatly from a deeper appreciation and knowledge of how the brain processes and perceives sounds."

Elhilali is conducting her research under a National Science Foundation (NSF) Faculty Early Career Development (CAREER) award, which she received in 2009 as part of NSF's American Recovery and Reinvestment Act. The award supports junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education, and the integration of education and research within the context of the mission of their organization. NSF is funding her work with about $550,000 over five years.

Typically, "laboratory sounds are stripped down versions of the sounds that we hear in everyday life," Elhilali says. "To better understand the complex sound environments that surround us, we need to test our listeners in a variety of conditions. We bring people to a sound booth and play them various sounds that simulate different listening environments. We ask them to pay attention to different sound elements and report back what they hear, or simply tell us if they hear something unusual."

For example, when listening to a conversation among a group of males, the introduction of a female voice, or a melody, will attract attention, and listeners often indicate this as an unexpected event, she says. "We hope to learn more about how the listeners' different expectations bias how they hear sounds and how they interpret the world surrounding them," she says.

Eventually, the researchers intend to directly measure brain activity using electroencephalography (EEG), which detects electrical activity along the scalp "and tells us how the brain is processing sounds," she says. "Different parts of the brain communicate by sending electrical signals, and we are hoping to see how patterns of activity in the brain change when we change the listening environment from controlled to complex."
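The kind of comparison this enables can be sketched with synthetic data. The Python example below (using NumPy and SciPy) fabricates scalp-like signals for two listening conditions and compares the power of a 10 Hz "alpha" rhythm between them; the signals and the direction of the effect are assumptions made for illustration, not findings of the study.

    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(2)
    fs = 256                              # typical EEG sampling rate (Hz)
    t = np.arange(30 * fs) / fs           # 30 seconds per condition

    def scalp_signal(alpha_amp):
        # Synthetic EEG: slow drifting noise plus a 10 Hz alpha rhythm
        drift = np.cumsum(rng.standard_normal(t.size)) * 0.02
        return drift + alpha_amp * np.sin(2 * np.pi * 10 * t)

    # Assumed, illustrative effect: the rhythm weakens when the listening
    # environment changes from controlled to complex
    controlled = scalp_signal(alpha_amp=1.0)
    complex_scene = scalp_signal(alpha_amp=0.3)

    for name, sig in [("controlled", controlled), ("complex", complex_scene)]:
        f, psd = welch(sig, fs=fs, nperseg=fs * 2)
        band = (f >= 8) & (f <= 12)       # alpha band, 8-12 Hz
        power = np.sum(psd[band]) * (f[1] - f[0])
        print(f"{name:>10} condition: alpha-band power = {power:.2f}")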

With the education component of the grant, Elhilali is developing an interdisciplinary curriculum aimed at encouraging undergraduates to explore both science and engineering and to recognize that the two fields complement one another. "I want students to realize that they can translate their scientific knowledge into practical engineering experience," she says.
