New study uncovers brain's code for pronouncing vowels

August 21, 2012, University of California, Los Angeles
Brain regions (red) containing neurons that encode vowel articulation

(Medical Xpress) -- Scientists at UCLA and the Technion, Israel's Institute of Technology, have unraveled how our brain cells encode the pronunciation of individual vowels in speech. Published in the Aug. 21 edition of Nature Communications, the discovery could lead to new technology that verbalizes the unspoken words of people paralyzed by injury or disease.

"We know that brain cells fire in a predictable way before we move our bodies," explained Dr. Itzhak Fried, a professor of neurosurgery at the David Geffen School of Medicine at UCLA. "We hypothesized that neurons would also react differently when we pronounce specific sounds. If so, we may one day be able to decode these unique patterns of activity in the brain and translate them into speech."

Fried and the Technion's Ariel Tankus, formerly a researcher in Fried's lab, followed 11 UCLA epilepsy patients who had electrodes implanted in their brains to pinpoint the origin of their seizures. The researchers recorded neuron activity as the patients uttered one of five vowels, or syllables containing those vowels.

With Technion's Shy Shoham, the team studied how the neurons encoded vowel articulation at both the single-cell and collective level. The scientists found two areas—the superior temporal gyrus and a region in the medial frontal lobe—that housed neurons related to speech and attuned to vowels. The encoding in these sites, however, unfolded very differently.

Neurons in the superior temporal gyrus responded to all vowels, although at different rates of firing. In contrast, neurons in the medial frontal region fired exclusively for one or two vowels.

"Single neurons in the medial frontal lobe corresponded to the encoding of specific vowels," said Fried. "The neuron would fire only when a particular vowel was spoken, but not other vowels."
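The contrast between the two regions can be illustrated with a toy model. The firing rates below are invented for illustration, not taken from the study: one broadly tuned neuron (like those reported in the superior temporal gyrus) fires for every vowel at a vowel-specific rate, while one sparsely tuned neuron (like those reported in the medial frontal region) is nearly silent except for a single vowel.

```python
VOWELS = ["a", "e", "i", "o", "u"]

# Hypothetical firing rates (spikes/sec) for one broadly tuned neuron:
# active for all five vowels, but at different rates.
broad_neuron = {"a": 12.0, "e": 9.5, "i": 18.0, "o": 7.0, "u": 4.5}

# Hypothetical rates for one sparsely tuned neuron:
# essentially silent except when "i" is spoken.
sparse_neuron = {"a": 0.2, "e": 0.1, "i": 15.0, "o": 0.3, "u": 0.2}

def selectivity(rates):
    """Fraction of the neuron's total firing devoted to its preferred vowel.

    Near 1/5 means the neuron fires roughly equally for all five vowels;
    near 1.0 means it fires almost only for one vowel.
    """
    total = sum(rates.values())
    return max(rates.values()) / total

print(f"broad neuron selectivity:  {selectivity(broad_neuron):.2f}")
print(f"sparse neuron selectivity: {selectivity(sparse_neuron):.2f}")
```

The broad neuron spreads its activity across all vowels, so its selectivity stays low; the sparse neuron concentrates nearly all of its firing on one vowel, matching Fried's description of cells that "fire only when a particular vowel was spoken."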

At the collective level, neurons' encoding of vowels in the superior temporal gyrus reflected the anatomy that makes speech possible: specifically, the tongue's position inside the mouth.

"Once we understand the neuronal code underlying speech, we can work backwards from brain-cell activity to decipher speech," said Fried. "This suggests an exciting possibility for people who are physically unable to speak. In the future, we may be able to construct neuro-prosthetic devices or brain-machine interfaces that decode a person's neuronal firing patterns and enable the person to communicate."
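Working backwards from brain-cell activity, as Fried describes, is a classification problem: given a vector of firing rates from a population of neurons, recover the vowel that produced it. The sketch below uses invented firing-rate patterns and a simple nearest-centroid rule as a stand-in for whatever decoder a real brain-machine interface would use; none of the numbers or the method come from the study itself.

```python
import math

# Hypothetical mean firing-rate vectors (three neurons) for each vowel.
# This table plays the role of the learned "neuronal code".
codebook = {
    "a": [12.0, 0.2, 3.0],
    "e": [9.5, 0.1, 6.0],
    "i": [18.0, 15.0, 1.0],
    "o": [7.0, 0.3, 9.0],
    "u": [4.5, 0.2, 12.0],
}

def decode(observed):
    """Return the vowel whose stored firing pattern is closest (Euclidean
    distance) to the observed population firing-rate vector."""
    return min(codebook, key=lambda v: math.dist(observed, codebook[v]))

# A noisy observation near the stored "i" pattern decodes back to "i".
print(decode([17.2, 14.1, 1.4]))  # -> i
```

A working prosthesis would learn its codebook from recordings like those collected in this study and would need to handle far noisier, higher-dimensional data, but the principle is the same: each intended sound leaves a distinguishable firing pattern, and the decoder inverts the map.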


