Study of jazz players shows common brain circuitry processes music and language

February 19, 2014

The brains of jazz musicians engrossed in spontaneous, improvisational musical conversation showed robust activation of brain areas traditionally associated with spoken language and syntax, which are used to interpret the structure of phrases and sentences. But this musical conversation shut down brain areas linked to semantics—those that process the meaning of spoken language, according to results of a study by Johns Hopkins researchers.

The study used functional magnetic resonance imaging (fMRI) to track the activity of jazz musicians in the act of "trading fours," a process in which musicians participate in spontaneous back and forth instrumental exchanges, usually four bars in duration. The musicians introduce new melodies in response to each other's musical ideas, elaborating and modifying them over the course of a performance.

The results of the study suggest that the brain regions that process syntax aren't limited to spoken language, according to Charles Limb, M.D., an associate professor in the Department of Otolaryngology-Head and Neck Surgery at the Johns Hopkins University School of Medicine. Rather, he says, the brain uses the syntactic areas to process communication in general, whether through language or through music.

Limb, who is himself a musician and holds a faculty appointment at the Peabody Conservatory, says the work sheds important new light on the complex relationship between music and language.

"Until now, studies of how the brain processes auditory communication between two individuals have been done only in the context of spoken language," says Limb, the senior author of a report on the work that appears online Feb. 19 in the journal PLOS ONE. "But looking at jazz lets us investigate the neurological basis of interactive, musical communication as it occurs outside of spoken language.

"We've shown in this study that there is a fundamental difference between how meaning is processed by the brain for music and language. Specifically, it's syntactic and not semantic processing that is key to this type of musical communication. Meanwhile, conventional notions of semantics may not apply to musical processing by the brain."

To study the response of the brain to improvisational musical conversation between musicians, the Johns Hopkins researchers recruited 11 men aged 25 to 56 who were highly proficient in jazz piano performance. During each 10-minute session of trading fours, one musician lay on his back inside the MRI machine with a plastic piano keyboard resting on his lap while his legs were elevated with a cushion. A pair of mirrors was placed so the musician could look directly up while in the MRI machine and see the placement of his fingers on the keyboard. The keyboard was specially constructed with no metal parts that would be attracted to the scanner's large magnet.

The improvisation between the musicians activated areas of the brain linked to syntactic processing for language, called the inferior frontal gyrus and posterior superior temporal gyrus. In contrast, the musical exchange deactivated brain structures involved in semantic processing, called the angular gyrus and supramarginal gyrus.

"When two jazz musicians seem lost in thought while trading fours, they aren't simply waiting for their turn to play," Limb says. "Instead, they are using the syntactic areas of their brain to process what they are hearing so they can respond by playing a new series of notes that hasn't previously been composed or practiced."

Feb 22, 2014
The obvious implication is that the forms of processing underlying both music and language are more fundamental than either. Contrary to the commonly held dictum, it is not music that piggybacks on all our other faculties; rather, all these other faculties, including our affinity for music itself, depend upon common principles of processing that are elementary and universal, and which enumerate, qualify and centrally underpin this form of mental play, whether its output is semantic or more abstract.

Most fundamentally, a brain's whole raison d'etre is spatiotemporal freq / waveform analysis; however, this costs energy. Moreover though, erroneous outputs cost even more... This has inevitably culminated in a finely-honed system teetering on the brink of critical efficiency; a system that has to solve waveform and freq analysis problems not merely in spite of energy constraints, but moulded, driven and conducted by efficiency at the most fundamental level.
Feb 22, 2014
This thermodynamic envelope naturally quantises the potentially-infinite field of possible inputs into discrete, easily managed elementary forms, that all possible inputs can be mapped onto, resolved by, and encoded against. The prime unit is thus the octave - not merely doublings or halvings of a freq, but all factors of two of it within the respective faculty's bandwidth. Factors of two of a given freq are the simplest possible relationships, resolving to the smallest processing nuclei, and thus octaves represent a baseline informational 'ground state' - registering as a difference of 'zero' with respect to this emergent system of enumeration. There's no consonance or dissonance here, only degrees of inequivalence - i.e. degree of divergence from this thermodynamically-imposed 'zero' state. Thus, everything else - everything that is non-zero - is defined in relation to this entropic minimum.
Feb 22, 2014
Octaves represent the natural 'byte size' of our processing scheme - the divergence from zero puts the 'bits' in the byte. However this applies equally to temporal freqs, as to spatial ones - underwriting rhythm as much as harmony. Since spatiotemporal processing is, ultimately, all there is, language likewise maps onto this same emergent negentropic field.

Consider the opening cadence of Twinkle Twinkle (using case to denote pitch direction): ccGGAAg - the climb to g, lying in a factor of three to the fundamental c, poses a problem. The responding cadence ffeeddc solves the problem, by resolving the sequence back to the fundamental - however the range of possible solutions is practically unlimited, and the cadence could resolve to any factor of two of c, quite satisfactorily. The sequence ffeeddc is just the simplest answer to the question; it's spatiotemporally symmetrical, equitable, untaxing and thus satisfactorily resolved as 'complete'.
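A minimal sketch of the octave-reduction arithmetic described above (my own illustration, not from the study or the comment's author): if every factor of two counts as the 'zero' state, an interval is characterized only by where it lands within a single octave. The ratios assume just intonation, and `octave_reduce` is a hypothetical helper name.

```python
# Sketch: fold frequency ratios into one octave, so pure factors of
# two (octaves) collapse to the unit ratio -- the 'zero' state above.
# Ratios assume just intonation; all names here are illustrative.
from fractions import Fraction

def octave_reduce(ratio):
    """Fold a frequency ratio into the octave [1, 2)."""
    while ratio >= 2:
        ratio /= 2
    while ratio < 1:
        ratio *= 2
    return ratio

# The 'factor of three' fifth (g over the fundamental c) survives
# reduction as 3/2, while a double octave vanishes entirely.
print(octave_reduce(Fraction(3, 1)))  # -> 3/2
print(octave_reduce(Fraction(4, 1)))  # -> 1 ('zero' divergence)
```

On this toy metric, any cadence ending a factor of two from the fundamental reduces to the same unit ratio, which is one way to read the claim that the tune could "resolve to any factor of two of c, quite satisfactorily."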
Feb 22, 2014
The sequence began as a divergence from zero, but then resolved back to it again. Subsequent verses, resolving to ever larger temporal integrations, merely expand upon this intrinsic self-similarity, nesting the same form of Q&A problems in longer sequences.

Of course, words and language DON'T need to neatly resolve to elementary forms like this, since that deviation is the stuff of the information they encode. But whether in private dialogue or a crowded bar, the auditory field maps onto a space in which all spatial or temporal factors of two of one another are aligned upon a zero plane, against which all other frequencies are enumerated and encoded. It's a natural, emergent 'default bandwidth' in time and space, inherent to multicellular information processing.
Feb 23, 2014
For contrast, consider how waveforms are represented via raw binary, as a stack of freq-blind absolute amplitude samples, sequenced at constant speed; this is an accurate way to represent the audio field, provided the speed is a factor of two greater than the highest freq in the source, per Nyquist. However, it's terribly inefficient, thermodynamically.

Hence the system brains use instead has more in common with a Fourier-transformed format such as MP3 - mapping not the absolute frequency values, but simply the relationships between them... and the metric of difference for this meta-information is the 'zero' arising at all factors of two, which we perceive as octave equivalence (ie. maximum, or elementary consonance) in the spatial domain, and elementary rhythm in the temporal domain.

Spatiotemporal components of vocal information likewise map onto this same system, but any 'higher' semantic value is independent from the syntactic freq spread and metrics emerging from its symmetries.
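To make the raw-samples-versus-frequency-relationships contrast concrete, here is a toy sketch (my own construction, not from the article or the comments): a tone plus its octave, sampled above the Nyquist rate, passed through a naive DFT so the factor-of-two pair shows up as two related spectral peaks. All rates and frequencies are arbitrary illustration values.

```python
# Sketch: raw amplitude samples vs. a frequency-domain view.
# A 4 Hz tone plus its 8 Hz octave, sampled at 64 Hz -- comfortably
# above twice the highest frequency, per Nyquist. Naive O(n^2) DFT,
# fine for a toy example; all parameter values are illustrative.
import math, cmath

rate = 64                      # samples per second
n = 64                         # one second of audio
samples = [math.sin(2 * math.pi * 4 * t / rate)
           + 0.5 * math.sin(2 * math.pi * 8 * t / rate)
           for t in range(n)]

def dft(x):
    """Discrete Fourier transform, direct summation."""
    N = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / N)
                for t in range(N))
            for k in range(N)]

spectrum = [abs(v) for v in dft(samples)]
# With a 1 Hz bin width, the tone and its octave land in bins 4 and 8.
peaks = [k for k in range(n // 2) if spectrum[k] > 1]
print(peaks)  # -> [4, 8]
```

The raw sample list says nothing directly about pitch; the spectrum exposes the two components and their 2:1 relationship at a glance, which is the sense in which a transform-domain encoding captures "the relationships between" frequencies rather than their absolute amplitudes.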
