Theory: Music underlies language acquisition

September 18, 2012 by B. J. Almond

(Medical Xpress)—Contrary to the prevailing theories that music and language are cognitively separate or that music is a byproduct of language, theorists at Rice University's Shepherd School of Music and the University of Maryland, College Park (UMCP) advocate that music underlies the ability to acquire language.

"Spoken language is a special type of music," said Anthony Brandt, co-author of a theory paper published online this month in the journal Frontiers in Auditory Cognitive Neuroscience. "Language is typically viewed as fundamental to human intelligence, and music is often treated as being dependent on or derived from language. But from a developmental perspective, we argue that music comes first and language arises from music."

Brandt, associate professor of composition and theory at the Shepherd School, co-authored the paper with Shepherd School graduate student Molly Gebrian and L. Robert Slevc, UMCP assistant professor of psychology and director of the Language and Music Cognition Lab.

"Infants listen first to sounds of language and only later to its meaning," Brandt said. He noted that newborns' extensive abilities in different aspects of speech perception depend on the discrimination of the sounds of language – "the most musical aspects of speech."

The paper cites various studies that show what the newborn brain is capable of, such as the ability to distinguish phonemes, the basic distinctive units of language, and such attributes as pitch, rhythm and timbre.

The authors define music as "creative play with sound." They said the term "music" implies an attention to the acoustic features of sound irrespective of any referential function. As adults, people focus primarily on the meaning of speech. But babies begin by hearing language as "an intentional and often repetitive vocal performance," Brandt said. "They listen to it not only for its emotional content but also for its rhythmic and phonemic patterns and consistencies. The meaning of words comes later."

Brandt and his co-authors challenge the prevailing view that music cognition matures more slowly than language cognition and is more difficult. "We show that music and language develop along similar time lines," he said.

Infants initially don't distinguish well between their native language and all the languages of the world, Brandt said. Over the first year of life, they gradually home in on their native language. Similarly, infants initially don't distinguish well between their native musical traditions and those of other cultures; they begin to home in on their own musical culture at the same time that they home in on their native language, he said.

The paper explores many connections between listening to speech and music. For example, recognizing the sound of different consonants requires rapid processing in the temporal lobe of the brain. Similarly, recognizing the timbre of different instruments requires temporal processing at the same speed—a feature of musical hearing that has often been overlooked, Brandt said.

"You can't distinguish between a piano and a trumpet if you can't process what you're hearing at the same speed that you listen for the difference between 'ba' and 'da,'" he said. "In this and many other ways, listening to music and speech overlap." The authors argue that from a musical perspective, speech is a concert of phonemes and syllables.

"While music and language may be cognitively and neurally distinct in adults, we suggest that language is simply a subset of music from a child's view," Brandt said. "We conclude that music merits a central place in our understanding of human development."

Brandt said more research on this topic might lead to a better understanding of why music therapy is helpful for people with reading and speech disorders. People with dyslexia often have problems with the performance of musical rhythm. "A lot of people with language deficits also have musical deficits," Brandt said.

More research could also shed light on rehabilitation for people who have suffered a stroke. "Music helps them reacquire language, because that may be how they acquired language in the first place," Brandt said.


More information: For the full text of the theory paper, visit www.frontiersin.org/Auditory_C … .2012.00327/abstract

