Is there a musical method for interpreting speech?

December 7, 2017

Cochlear implants are a common treatment for sensorineural hearing loss, which results from damage to the inner ear or auditory nerve. The implanted device uses an electrode array inserted into the cochlea to stimulate auditory nerve fibers directly. However, the speech heard through a cochlear implant is spectrally degraded and can be difficult to understand. Vocoded speech, a distorted form of speech that imitates voice transduction by a cochlear implant, is used throughout acoustic and auditory research to explore speech comprehension under such conditions.
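Noise vocoding of the kind described above works by splitting speech into frequency bands, keeping only each band's slow amplitude envelope, and using those envelopes to modulate band-limited noise. The sketch below illustrates the general technique; the channel count, frequency edges, and filter settings are illustrative assumptions, not the parameters used by the researchers.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=6000.0):
    """Replace the fine structure of `signal` with band-limited noise,
    keeping only the slow amplitude envelope in each frequency channel.
    Parameters here are illustrative, not the study's actual settings."""
    rng = np.random.default_rng(0)
    # Logarithmically spaced channel edges, roughly mimicking cochlear spacing.
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    # Low-pass filter for envelope extraction (~30 Hz preserves speech rhythm).
    env_sos = butter(2, 30.0, btype="low", fs=fs, output="sos")
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)
        envelope = sosfiltfilt(env_sos, np.abs(band))   # rectify and smooth
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))
        out += np.clip(envelope, 0.0, None) * carrier   # modulate noise carrier
    return out

# Example: vocode one second of a synthetic, speech-like frequency sweep.
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * (200 + 150 * t) * t)
vocoded = noise_vocode(speech_like, fs)
```

Because the envelope filter passes only slow modulations, the output keeps the rhythm of the input while discarding its fine spectral detail, which is why rhythm-based cues survive vocoding.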

Researchers Kieran E. Laursen, Sara L. Prostko and Terry L. Gottfried from Lawrence University, along with collaborators Iain C. Williams and Tahnee Marquardt from the University of North Carolina at Wilmington and the University of Oxford, respectively, will present their work on the effect of musical experience on the ability to understand vocoded speech at the 174th Meeting of the Acoustical Society of America, being held Dec. 4-8, 2017, in New Orleans, Louisiana.

Musical ability, defined as a person's aptitude for playing an instrument, interpreting sound patterns, or recognizing different tones, has long been linked to higher cognitive capacity and better communication skills.

"We are testing to see if someone's musicality or levels of musical experience affects their perceptions of vocoded speech," Laursen said in an email. "So, the question lies in how does music affect one's abilities to hear different pitches, intonations, and rhythms within distorted speech."

"The acoustic information in vocoded speech is quite different from that of natural speech in the presence of noise," said Gottfried. The rhythmic patterns of natural speech are often maintained in vocoded speech, so musicians may have the upper hand at interpretation due to their experience with rhythm production. However, musicians may also fare similarly to nonmusicians because of the information lost in vocoding.

Gottfried has been researching speech perception and its relation to music since graduate school. "Over the years, I've continued my studies of this relation between speech and music perception, and there's been considerable recent research that suggests musical experience is related not only to improved second-language speech perception, but also to improved phonetic perception in one's first language and to better recognition of speech in noise," he said, referring to a study on the perception of Mandarin tones by nonnative listeners.

Using a commercially available program called SuperLab, research participants (both musicians and nonmusicians) were asked to transcribe vocoded sentences and words. They were then assigned to a training method on either vocoded or natural speech and asked to again transcribe vocoded sentences. The initial results showed that musicians had no significant advantage over nonmusicians in interpreting vocoded speech patterns, but this may be due to limited sample variation.
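Transcription performance in studies like this is typically scored as the proportion of target words a listener reports correctly. A minimal sketch of such a scoring rule follows; this order-insensitive word-matching metric is an assumption for illustration, not the study's actual scoring procedure.

```python
def word_score(reference, transcript):
    """Fraction of reference words that appear in the transcript,
    order-insensitive. A simplified stand-in for the kind of
    word-level scoring used in transcription experiments."""
    ref = reference.lower().split()
    hyp = transcript.lower().split()
    hits = 0
    for word in ref:
        if word in hyp:
            hyp.remove(word)  # each transcript word can match only once
            hits += 1
    return hits / len(ref)

# Example: a listener reports 4 of the 6 reference words correctly.
score = word_score("the cat sat on the mat", "a cat sat on mat")
```

Averaging such per-sentence scores across listeners in each group, before and after training, is one simple way to compare musicians against nonmusicians.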

"Both groups scored well above chance on the Musical Ear Test, so it's possible that, if we tested listeners with very poor musical ears, they would also not do so well on the vocoded speech," Gottfried said. He also noted that the results remain useful in assessing the extent to which musical experience may relate to the understanding of degraded speech.

The applications of this research extend beyond vocoded speech to a variety of acoustical interpretation problems. Understanding normal speech in a noisy environment depends on rhythmic interpretation and is acoustically similar to attempting to understand vocoded speech. If musical experience improves the understanding of vocoded speech, it may also aid day-to-day listening in noisy environments.


More information: Abstract: 4pSC10: "Effect of musical experience on learning to understand vocoded speech," by Kieran E. Laursen, Iain C. Williams, Tahnee Marquardt, Sara L. Prostko and Terry L. Gottfried, Thursday, Dec. 7, 2017, in Studios Foyer in the New Orleans Marriott. asa2017fall.abstractcentral.com/s/u/OBlFa5aSFG8

