Theory: Music underlies language acquisition

by B. J. Almond

(Medical Xpress)—Contrary to the prevailing theories that music and language are cognitively separate or that music is a byproduct of language, theorists at Rice University's Shepherd School of Music and the University of Maryland, College Park (UMCP) advocate that music underlies the ability to acquire language.

"Spoken language is a special type of music," said Anthony Brandt, co-author of a theory paper published online this month in the journal Frontiers in Auditory Cognitive Neuroscience. "Language is typically viewed as fundamental to human intelligence, and music is often treated as being dependent on or derived from language. But from a developmental perspective, we argue that music comes first and language arises from music."

Brandt, associate professor of composition and theory at the Shepherd School, co-authored the paper with Shepherd School graduate student Molly Gebrian and L. Robert Slevc, UMCP assistant professor of psychology and director of the Language and Music Cognition Lab.

"Infants listen first to the sounds of language and only later to its meaning," Brandt said. He noted that newborns' extensive abilities in different aspects of speech perception depend on the discrimination of the sounds of language – "the most musical aspects of speech."

The paper cites various studies that show what the newborn brain is capable of, such as the ability to distinguish phonemes, the basic distinctive units of speech, and such attributes as pitch, rhythm and timbre.

The authors define music as "creative play with sound." They said the term "music" implies an attention to the acoustic features of sound irrespective of any referential function. As adults, people focus primarily on the meaning of speech. But babies begin by hearing language as "an intentional and often repetitive vocal performance," Brandt said. "They listen to it not only for its emotional content but also for its rhythmic and phonemic patterns and consistencies. The meaning of words comes later."

Brandt and his co-authors challenge the prevailing view that music cognition matures more slowly than language cognition and is more difficult. "We show that music and language develop along similar time lines," he said.

Infants initially don't distinguish well between their native language and the other languages of the world, Brandt said. Throughout the first year of life, they gradually home in on their native language. Similarly, infants initially don't distinguish well between their native musical traditions and those of other cultures; they start to home in on their own musical culture at the same time that they home in on their native language, he said.

The paper explores many connections between listening to speech and music. For example, recognizing the sound of different consonants requires rapid processing in the temporal lobe of the brain. Similarly, recognizing the timbre of different instruments requires temporal processing at the same speed—a feature of musical hearing that has often been overlooked, Brandt said.

"You can't distinguish between a piano and a trumpet if you can't process what you're hearing at the same speed that you listen for the difference between 'ba' and 'da,'" he said. "In this and many other ways, listening to music and speech overlap." The authors argue that from a musical perspective, speech is a concert of phonemes and syllables.

"While music and language may be cognitively and neurally distinct in adults, we suggest that language is simply a subset of music from a child's view," Brandt said. "We conclude that music merits a central place in our understanding of human development."

Brandt said more research on this topic might lead to a better understanding of why music therapy is helpful for people with reading and speech disorders. People with dyslexia often have problems with the performance of musical rhythm. "A lot of people with language deficits also have musical deficits," Brandt said.

More research could also shed light on rehabilitation for people who have suffered a stroke. "Music helps them reacquire language, because that may be how they acquired it in the first place," Brandt said.

More information: For the full text of the theory paper, visit www.frontiersin.org/Auditory_C… .2012.00327/abstract
