Human auditory neurons more sensitive than those of other mammals

January 16, 2008

The human ear is exquisitely tuned to discern different sound frequencies, whether such tones are high or low, near or far. But the ability of our ears pales in comparison to the remarkable knack of single neurons in our brains to distinguish between the very subtlest of frequency differences.

Reporting in the Jan. 10 issue of the journal Nature, Dr. Itzhak Fried, professor of neurosurgery and director of the UCLA Epilepsy Surgery Program, and colleagues from Hebrew University and the Weizmann Institute of Science in Israel, show that in humans, a single auditory neuron in the brain exhibits an amazing selectivity to a very narrow sound-frequency range, roughly down to a tenth of an octave.

In fact, such neurons detect the slightest differences in sound frequency with as much as 30 times the sensitivity of the human auditory nerve, which carries information from the hair cells of the inner ear to the brain's auditory cortex. Indeed, such frequency tuning in the human auditory cortex is substantially superior to that typically found in the cortex of nonhuman mammals, with the exception of bats.

It is a paradox, the researchers note, that even the auditory neurons of musically untrained people can detect very small differences in frequency much better than their peripheral auditory nerve. With other peripheral nerves, such as those in the skin, the human ability to detect differences between two points — say from the prick of a needle — is limited by the receptors in the skin; the neurons associated with those peripheral nerves display no greater sensitivity. With hearing, however, the sensitivity of the neuron actually exceeds that of the peripheral nerve.

The researchers, including senior author Israel Nelken and first author Yael Bitterman from Hebrew University, determined how neurons in the human auditory cortex responded to various sounds by taking recordings of brain activity from four consenting clinical patients at UCLA Medical Center. These patients had intractable epilepsy and were being monitored with intracranial depth electrodes to identify the focal point of their seizures for potential surgical treatment.

Using clinical criteria, electrodes were implanted bilaterally at various brain sites suspected to be involved in the seizures, including the auditory cortex. Brain activity was recorded while patients listened to artificial random chords with varying numbers of tones per octave and to segments from the film "The Good, the Bad and the Ugly." Thus, the sounds the patients heard were both artificial (the random chords) and more natural (the voices and noise from the movie soundtrack).

The results surprised the researchers. A single human auditory neuron showed an amazing ability to distinguish between very subtle frequency differences, down to a tenth of an octave. That compares with a sensitivity of about one octave in cats, about a third of an octave in rats, and a half to a full octave in macaques.
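For context, an octave corresponds to a doubling of frequency, so a fraction f of an octave maps to a frequency ratio of 2^f. The sketch below (using illustrative values taken from the comparison above, not the study's raw data) converts each species' reported octave-fraction threshold into a percent change in frequency:

```python
def octave_fraction_to_percent(fraction):
    """Convert a threshold expressed as a fraction of an octave into a
    percent change in frequency. An octave doubles frequency, so the
    ratio for a fraction f of an octave is 2 ** f."""
    return (2 ** fraction - 1) * 100

# Approximate discrimination thresholds reported in the article:
thresholds = {
    "human": 1 / 10,    # about a tenth of an octave
    "rat": 1 / 3,       # about a third of an octave
    "macaque": 1 / 2,   # half to a full octave (lower bound shown)
    "cat": 1.0,         # about one octave
}

for species, frac in sorted(thresholds.items(), key=lambda kv: kv[1]):
    pct = octave_fraction_to_percent(frac)
    print(f"{species:8s} ~{frac:.2f} octave -> about {pct:.1f}% frequency change")
```

A full octave works out to a 100 percent (doubling) change in frequency, while a tenth of an octave is only about a 7 percent change, which illustrates how much finer the human neurons' tuning is than that reported for the other species.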

"This is remarkable selectivity," said Fried, who is also co-director of UCLA's Seizure Disorder Center. "It is indeed a mystery why such resolution in humans came to be. Why did we develop this? Such selectivity is not needed for speech comprehension, but it may have a role in musical skill. The 3 percent frequency differences that can be detected by single neurons may explain the fact that even musically untrained people can detect such frequency differences.

"There is also evidence that frequency discrimination in humans correlates with various cognitive skills, including working memory and the capability to learn, but more research is needed to clarify this puzzle," he said.

This study, Fried noted, is the latest example of the power of neurobiological research that uses data drawn directly from inside a living human brain at the single-neuron level. Previous studies from Fried's lab have identified single cells in the human hippocampus specific to place in human navigation, and single cells that can translate varied visual images of the same item — such as the identity of an individual — into a single concept that is instantly and consistently recognizable.

Source: UCLA


Comments


KB6 (Jan 17, 2008):
"Such selectivity is not needed for speech comprehension..."
---
It might not be needed to just comprehend words. But even very subtle tonal differences in speech can change the entire meaning and emotional content of what is being said, like:
"You aren't going to wear THAT to the party."
"YOU aren't going to wear that to the party."
barakn (Feb 21, 2008):
And it would be very much necessary for tonal languages like the various Chinese languages.
