Audio processing: Following the brain's lead

November 6, 2013, Agency for Science, Technology and Research (A*STAR), Singapore
A brain-based pattern-recognition process that searches for familiar features in the audio spectrum improves sound recognition in computers. Credit: A*STAR Institute for Infocomm Research

Computer sound recognition becomes more robust when the processing is guided by familiar patterns.

Computers, machines and even smartphones can process sounds and audio signals with apparent ease, but they all require significant computing resources. Researchers from the A*STAR Institute for Infocomm Research in Singapore have proposed a way to improve computer audio processing by applying lessons from the way the brain processes sounds1.

"The method proposed in our study may not only contribute to a better understanding of the mechanisms by which the biological acoustic systems operate, but also enhance both the effectiveness and efficiency of audio processing," comments Huajin Tang, an electrical engineer from the research team.

When listening to someone speaking in a quiet room, it is easy to identify the speaker and understand their words. The same words spoken in a loud bar are more difficult to process, yet the brain is still capable of distinguishing the voice of the speaker from the background noise. Computers, on the other hand, still have considerable problems identifying complex sounds against a noisy background; even smartphones must send audio data to a powerful centralized server for processing.

Considerable computing power is required at the server because the computer continuously processes the entire spectrum of human audio frequencies. The brain, however, analyzes information more selectively: it processes audio patterns localized in time and frequency (see image). When someone speaks with a deep voice, for example, the brain does not bother analyzing high-pitched sounds, and when a speaker in a loud bar stops talking, the brain stops trying to catch and process the sounds that form their words.
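
To make the contrast concrete, the minimal sketch below (illustrative only, not code from the study) computes a spectrogram and then keeps just a localized low-frequency band, much as a listener following a deep voice can ignore high-pitched content. The sample rate, tone frequency, noise level and band edge are all assumed values.

```python
import numpy as np
from scipy import signal

# Assumed toy signal: a deep, 220 Hz "voice-like" tone buried in noise.
fs = 16000                                   # sample rate in Hz (assumption)
t = np.arange(0, 1.0, 1 / fs)
audio = np.sin(2 * np.pi * 220 * t) + 0.3 * np.random.randn(t.size)

# Full time-frequency picture of the sound.
freqs, times, spec = signal.spectrogram(audio, fs=fs, nperseg=512)

# Selective, brain-like analysis: restrict attention to the band where the
# speaker's energy actually lies (below 1 kHz here) instead of the full range.
band = freqs < 1000
local_spec = spec[band, :]
print(f"full spectrogram: {spec.shape}, localized band: {local_spec.shape}")
```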

Tang and his team emulated the brain's sound-recognition strategy by identifying key points in the audio spectrum of a sound. These points could be characteristic frequencies in a voice or repeating patterns, such as those of an alarm bell. They then analyzed the signal in detail only around these key points, looking for familiar audio frequencies as well as time patterns. This analysis enabled a robust extraction of matching signals even when noise was present. To improve detection over time, the researchers fed the matching frequency patterns into a neurologically inspired algorithm that mimics the way the brain learns through the repetition of known patterns.
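
As a rough illustration of that first step, the sketch below treats local peaks in a spectrogram as key points and reports their time-frequency positions. The `local_keypoints` helper, its neighbourhood size and its energy threshold are assumptions made for this example; the published method's local spectrogram features and spike-based learning stage are more involved.

```python
import numpy as np
from scipy import signal
from scipy.ndimage import maximum_filter

def local_keypoints(audio, fs, patch=(5, 5), threshold_db=-40.0):
    """Return (frequency, time) pairs where the spectrogram has a local peak.

    Hypothetical helper for illustration; the patch size and threshold are
    assumptions, not parameters from the published method.
    """
    freqs, times, spec = signal.spectrogram(audio, fs=fs, nperseg=256)
    log_spec = 10 * np.log10(spec + 1e-12)

    # A point counts as a key point if it is the maximum of its local
    # time-frequency neighbourhood and lies within `threshold_db` of the
    # strongest peak in the whole spectrogram.
    neighbourhood_max = maximum_filter(log_spec, size=patch)
    peaks = (log_spec == neighbourhood_max) & (log_spec > log_spec.max() + threshold_db)

    rows, cols = np.nonzero(peaks)
    return [(freqs[r], times[c]) for r, c in zip(rows, cols)]

# Example: an alarm-like 2 kHz tone buried in noise still yields stable key points.
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
alarm = np.sin(2 * np.pi * 2000 * t) + 0.5 * np.random.randn(t.size)
print(local_keypoints(alarm, fs)[:5])
```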

In computer experiments, the algorithm successfully processed known target signals, even in the presence of noise. Expanding this approach, says Tang, "could lead to a greater understanding of the way the brain processes sound; and, beyond that, it could also include touch, vision and other senses."

The A*STAR-affiliated researchers contributing to this research are from the Institute for Infocomm Research.


More information: Dennis, J., Yu, Q., Tang, H., Tran, H. D. & Li, H. Temporal coding of local spectrogram features for robust sound recognition. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 26–31 May 2013. ieeexplore.ieee.org/xpl/articl … _Number%3A6637585%29
