Study of jazz players shows common brain circuitry processes music and language

The brains of jazz musicians engrossed in spontaneous, improvisational musical conversation showed robust activation of brain areas traditionally associated with spoken language and syntax, which are used to interpret the structure of phrases and sentences. But this musical conversation shut down brain areas linked to semantics—those that process the meaning of spoken language, according to results of a study by Johns Hopkins researchers.

The study used functional magnetic resonance imaging (fMRI) to track the brain activity of jazz musicians in the act of "trading fours," a process in which musicians participate in spontaneous back-and-forth instrumental exchanges, usually four bars in duration. The musicians introduce new melodies in response to each other's musical ideas, elaborating and modifying them over the course of a performance.

The results of the study suggest that the brain regions that process syntax aren't limited to spoken language, according to Charles Limb, M.D., an associate professor in the Department of Otolaryngology-Head and Neck Surgery at the Johns Hopkins University School of Medicine. Rather, he says, the brain uses the syntactic areas to process communication in general, whether through language or through music.

Limb, who is himself a musician and holds a faculty appointment at the Peabody Conservatory, says the work sheds important new light on the complex relationship between music and language.

"Until now, studies of how the brain processes auditory communication between two individuals have been done only in the context of spoken language," says Limb, the senior author of a report on the work that appears online Feb. 19 in the journal PLOS ONE. "But looking at jazz lets us investigate the neurological basis of interactive, musical communication as it occurs outside of .

"We've shown in this study that there is a fundamental difference between how meaning is processed by the brain for music and language. Specifically, it's syntactic and not semantic processing that is key to this type of musical communication. Meanwhile, conventional notions of semantics may not apply to musical processing by the brain."

To study the response of the brain to improvisational musical conversation between musicians, the Johns Hopkins researchers recruited 11 men aged 25 to 56 who were highly proficient in jazz piano performance. During each 10-minute session of trading fours, one musician lay on his back inside the MRI machine with a plastic piano keyboard resting on his lap while his legs were elevated with a cushion. A pair of mirrors was placed so the musician could look directly up while in the MRI machine and see the placement of his fingers on the keyboard. The keyboard was specially constructed so it did not have metal parts that would be attracted to the large magnet in the fMRI scanner.

The improvisation between the musicians activated areas of the brain linked to syntactic processing for language, called the inferior frontal gyrus and posterior superior temporal gyrus. In contrast, the musical exchange deactivated brain structures involved in semantic processing, called the angular gyrus and supramarginal gyrus.

"When two jazz musicians seem lost in thought while trading fours, they aren't simply waiting for their turn to play," Limb says. "Instead, they are using the syntactic areas of their brain to process what they are hearing so they can respond by playing a new series of notes that hasn't previously been composed or practiced."


User comments

MrVibrating, Feb 22, 2014:
The obvious implication is that the forms of processing underlying both music and language are more fundamental than either. Contrary to the commonly held dictum, it is not music that piggybacks on all our other faculties, but rather all these other faculties, including our affinity for music itself, that depend upon common principles of processing that are elementary and universal, and that enumerate, qualify and centrally underpin this form of mental play, whether its output is semantic or more abstract.

Most fundamentally, a brain's whole raison d'etre is spatiotemporal freq / waveform analysis; however, this costs energy. Moreover, erroneous outputs cost even more... This has inevitably culminated in a finely-honed system teetering on the brink of critical efficiency; a system that has to solve waveform and freq analysis problems not merely in spite of energy constraints, but moulded, driven and conducted by efficiency at the most fundamental level.
MrVibrating, Feb 22, 2014:
This thermodynamic envelope naturally quantises the potentially-infinite field of possible inputs into discrete, easily managed elementary forms that all possible inputs can be mapped onto, resolved by, and encoded against. The prime unit is thus the octave - not merely doublings or halvings of a freq, but all factors of two of it within the respective faculty's bandwidth. Factors of two of a given freq are the simplest possible relationships, resolving to the smallest processing nuclei, and thus octaves represent a baseline informational 'ground state' - registering as a difference of 'zero' with respect to this emergent system of enumeration. There's no consonance or dissonance here, only degrees of inequivalence - i.e. degree of divergence from this thermodynamically-imposed 'zero' state. Thus, everything else - everything that is non-zero - is defined in relation to this entropic minimum.
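A minimal numeric sketch of that claim, assuming nothing beyond ordinary logarithm arithmetic: two frequencies related by a pure factor of two sit a whole number of octaves apart, so the fractional part of their log-2 interval is zero, while any other ratio leaves a non-zero remainder. The function names and example frequencies below are illustrative only.

    import math

    def octave_distance(f1_hz, f2_hz):
        """Interval between two frequencies, measured in octaves."""
        return math.log2(f2_hz / f1_hz)

    def pitch_class_offset(f1_hz, f2_hz):
        """Fractional part of the octave distance: 0.0 means the two
        frequencies are related by a pure factor of two - the 'zero'
        state described in the comment above."""
        return octave_distance(f1_hz, f2_hz) % 1.0

    # 110 Hz against 220, 440 and 880 Hz: all factors of two, offset 0.0
    for f in (220.0, 440.0, 880.0):
        print(pitch_class_offset(110.0, f))    # -> 0.0

    # A perfect fifth (3:2) is not a factor of two: offset is about 0.585
    print(pitch_class_offset(110.0, 165.0))    # -> 0.5849...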
MrVibrating, Feb 22, 2014:
Octaves represent the natural 'byte size' of our processing scheme - the divergence from zero puts the 'bits' in the byte. However this applies equally to temporal freqs, as to spatial ones - underwriting rhythm as much as harmony. Since spatiotemporal processing is, ultimately, all there is, language likewise maps onto this same emergent negentropic field.

Consider the opening cadence of Twinkle Twinkle (using case to denote pitch direction): ccGGAAg - the climb to g, lying in a factor of three to the fundamental c, poses a problem. The responding cadence ffeeddc solves the problem, by resolving the sequence back to the fundamental - however the range of possible solutions is practically unlimited, and the cadence could resolve to any factor of two of c, quite satisfactorily. Ffeeddc is just the simplest answer to the question; it's spatiotemporally symmetrical, equitable, untaxing and thus satisfactorily resolved as 'complete'.
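To put numbers on the 'factor of three' and the resolution back to the fundamental, here is a short sketch using standard just-intonation ratios; the note spelling follows the comment's ccGGAAg / ffeeddc, and the tonic frequency is an arbitrary illustrative choice.

    # Just-intonation ratios relative to the tonic C (standard 5-limit values).
    JUST_RATIOS = {"C": 1/1, "D": 9/8, "E": 5/4, "F": 4/3, "G": 3/2, "A": 5/3}

    opening  = ["C", "C", "G", "G", "A", "A", "G"]   # ccGGAAg
    response = ["F", "F", "E", "E", "D", "D", "C"]   # ffeeddc

    tonic_hz = 261.63   # middle C, purely illustrative

    for phrase in (opening, response):
        print([round(tonic_hz * JUST_RATIOS[n], 1) for n in phrase])

    # The opening phrase ends on G, whose ratio 3/2 is the third harmonic
    # (a factor of three) brought down one octave, so it does not reduce to
    # the 'zero' octave state. The response ends on C (ratio 1/1), resolving
    # the sequence to the fundamental.
    print(JUST_RATIOS["G"] * 2)   # -> 3.0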
MrVibrating, Feb 22, 2014:
The sequence began as a divergence from zero, but then resolved back to it again. Subsequent verses, resolving to ever larger temporal integrations, merely expand upon this intrinsic self-similarity, nesting the same form of Q&A problems in longer sequences.

Of course, words and language DON'T need to neatly resolve to elementary forms like this, since that deviation is the stuff of the information they encode. But whether in private dialogue or a crowded bar, the auditory field maps onto a space in which all spatial or temporal factors of two of one another are aligned upon a zero plane, against which all other frequencies are enumerated and encoded. It's a natural, emergent 'default bandwidth' in time and space, inherent to multicellular information processing.
MrVibrating, Feb 23, 2014:
For contrast, consider how waveforms are represented via raw binary, as a stack of freq-blind absolute amplitude samples, sequenced at constant speed; this is an accurate way to represent the audio field, provided the speed is a factor of two greater than the highest freq in the source, per Nyquist, however it's terribly inefficient, thermodynamically.
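The storage arithmetic behind that point is easy to sketch; the figures below are standard CD-audio values (44.1 kHz, 16-bit, mono) rather than anything taken from the comment.

    # Raw PCM stores frequency-blind amplitude samples at a constant rate.
    # Per the Nyquist criterion, that rate must be at least twice the
    # highest frequency present in the source.
    highest_freq_hz = 20_000            # approximate upper limit of hearing
    nyquist_rate_hz = 2 * highest_freq_hz
    sample_rate_hz  = 44_100            # CD audio, just above the Nyquist rate
    bits_per_sample = 16

    raw_bytes_per_second = sample_rate_hz * bits_per_sample // 8
    print(nyquist_rate_hz)              # -> 40000
    print(raw_bytes_per_second)         # -> 88200 bytes for one second, mono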

Hence the system brains use instead has more in common with a Fourier-transformed format such as MP3 - mapping not the absolute frequency values, but simply the relationships between them... and the metric of difference for this meta-information is the 'zero' arising at all factors of two, which we perceive as octave equivalence (ie. maximum, or elementary consonance) in the spatial domain, and elementary rhythm in the temporal domain.
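As a rough illustration of that contrast (the MP3 analogy is loose: MP3 actually uses a modified discrete cosine transform plus a psychoacoustic model), here is a minimal NumPy sketch in which thousands of raw amplitude samples of a two-tone signal collapse to two spectral peaks, and it is their ratio (a factor of two) that carries the octave relationship.

    import numpy as np

    sample_rate = 8_000                        # Hz, illustrative
    t = np.arange(sample_rate) / sample_rate   # one second of samples

    # Two tones an octave apart: 440 Hz and 880 Hz.
    signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1 / sample_rate)

    # 8000 raw samples reduce to two dominant spectral peaks.
    peaks = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
    print(peaks)                               # -> [440.0, 880.0]
    print(peaks[1] / peaks[0])                 # -> 2.0, the octave relationship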

Spatiotemporal components of vocal information likewise map onto this same system, but any 'higher' semantic value is independent from the syntactic freq spread and metrics emerging from its symmetries.