The great orchestral work of speech

The brain first collects grammatical information about a word before it compiles information about its sound. Credit: Susanne Schauer

What goes on inside our heads is similar to an orchestra. For Peter Hagoort, Director at the Max Planck Institute for Psycholinguistics, this image is a very apt one for explaining how speech arises in the human brain. "There are different orchestra members and different instruments, all playing in time with each other, and sounding perfect together."

When we speak, we transform our thoughts into a sequence of sounds. When we understand language, exactly the opposite occurs: we deduce an interpretation from the speech sounds we hear. Closely connected regions of the brain – such as Broca's area and Wernicke's area – are involved in both processes, and they form the neurobiological basis of our capacity for language.

The 58-year-old scientist, who has had a strong interest in language and literature since his youth, has been searching for the neurobiological foundations of our communication since the 1990s. Using imaging processes, he observes the brain "in action" and tries to find out how this complex organ controls the way we speak and understand speech.

Making language visible

Hagoort is one of the first researchers to combine psychological theories with neuroscientific methods in his efforts to understand this complex interaction. Because this work is not possible without the very latest technology, Hagoort established the Nijmegen-based Donders Centre for Cognitive Neuroimaging in 1999. There, an interdisciplinary team of researchers uses state-of-the-art equipment, such as MRI and PET scanners, to find out how the brain succeeds in combining functions like memory, speech, perception, attention, emotion and consciousness.

The Dutch scientist is particularly fascinated by the temporal sequence of speech. He discovered, for example, that the brain begins by collecting grammatical information about a word before it compiles information about its sound. This first reliable real-time measurement of speech production in the brain provided researchers with a basis for observing speakers in the act of speaking. They were then able to gain new insights into why the complex orchestral work of language is impaired, for example, after strokes and in disorders such as dyslexia and autism.

"Language is an essential component of human culture, which distinguishes us from other species," says Hagoort. "Young children understand language before they even start to speak. They master complex grammatical structures before they can add 3 and 13. Our is tuned for at a very early stage," stresses Hagoort, referring to research findings. The exact composition of the orchestra in our heads and the nature of the score on which the process of is based are topics which Hagoort continues to research.

User comments

Tausch
Mar 06, 2013
This is how organization is done:
http://medicalxpr...tal.html

No. Our brain is tuned for sound during embryonic development.
Your ears produce sounds continuously. Record them.

Sound has all the structure you will ever need for any meaning, understanding, and/or association that will ever occur during your existence.

Young children understand SOUND before they even start to speak. And before they are born. They master complex FOURIER structures before they can add 3 and 13.

The key is tonotopy. Those are your 'players'. Their shapes, the instruments.

"[The brain] begins by collecting grammatical information about a word before it compiles information about its sound."


This is not even wrong. I understand you only state this without any research whatsoever. This is why this article refers to no research.