Brain's 'thesaurus' mapped to help decode inner thoughts

April 27, 2016
This image shows a 3D view of one person's cerebral cortex. The color of each voxel indicates its semantic selectivity, or which category of words it is selective for. For example, green voxels are mostly selective for visual and tactile concepts, while red voxels are mostly selective for social concepts. White lines show the outlines of known functional brain regions. Credit: Visualizations created by Alexander Huth using pycortex software (pycortex.org) by James Gao, Mark Lescroart, and Alexander Huth.

What if a map of the brain could help us decode people's inner thoughts?

Scientists at the University of California, Berkeley, have taken a step in that direction by building a "semantic atlas" that shows in vivid colors and multiple dimensions how the brain organizes language. The atlas identifies brain areas that respond to words with similar meanings.

The findings, to be published April 28, 2016 in the journal Nature, are based on a brain imaging study that recorded neural activity while study volunteers listened to stories from the "Moth Radio Hour." They show that at least one-third of the brain's cerebral cortex, including areas dedicated to high-level cognition, is involved in language processing.

Notably, the study found that different people share similar language maps: "The similarity in semantic topography across different subjects is really surprising," said study lead author Alex Huth, a postdoctoral researcher in neuroscience at UC Berkeley.

Detailed maps showing how the brain organizes different words by their meanings could eventually help give voice to those who cannot speak, such as people affected by stroke, brain damage, or diseases such as ALS.

The Brain Dictionary. Credit: Nature Video

While mind-reading technology remains far off on the horizon, charting how language is organized in the brain brings the decoding of inner dialogue a step closer to reality, the researchers said.

For example, clinicians could track the brain activity of patients who have difficulty communicating and then match that data to semantic language maps to determine what their patients are trying to express. Another potential application is a decoder that translates what you say into another language as you speak.
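One simple way to picture that matching step: given a fitted semantic map, predict the brain activity each candidate concept should evoke and pick the candidate whose prediction best correlates with the measured activity. The toy sketch below illustrates the idea only; the weight matrix, concept vectors and activity pattern are all simulated, not a working brain decoder.

```python
import numpy as np

# Hypothetical inputs: a fitted semantic map as a (voxels x features) weight
# matrix, feature vectors for a few candidate concepts, and one measured
# activity pattern (here simulated to match the concept "mother").
rng = np.random.default_rng(1)
weights = rng.standard_normal((500, 50))
concepts = {name: rng.standard_normal(50) for name in ["dog", "mother", "week"]}
measured = weights @ concepts["mother"] + 0.5 * rng.standard_normal(500)

def decode(measured_activity):
    """Return the candidate concept whose predicted voxel pattern best matches."""
    scores = {name: np.corrcoef(weights @ vec, measured_activity)[0, 1]
              for name, vec in concepts.items()}
    return max(scores, key=scores.get)

print(decode(measured))  # should usually print "mother"
```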

"To be able to map out semantic representations at this level of detail is a stunning accomplishment," said Kenneth Whang, a program director in the National Science Foundation's Information and Intelligent Systems division. "In addition, they are showing how data-driven computational methods can help us understand the brain at the level of richness and complexity that we associate with human cognitive processes."

This video shows a 3D view of one person's cerebral cortex as it is inflated to reveal what is inside the cortical folds ("sulci"). The color of each voxel indicates its semantic selectivity, or which category of words it is selective for. For example, green voxels are mostly selective for visual and tactile concepts, while red voxels are mostly selective for social concepts. White lines show the outlines of known functional brain regions. Credit: Visualizations created by Alexander Huth using pycortex software (pycortex.org) by James Gao, Mark Lescroart, and Alexander Huth.

Huth and six other native English speakers served as subjects for the experiment, which required volunteers to remain still inside a functional magnetic resonance imaging (fMRI) scanner for hours at a time.

Each study participant's brain activity was measured as they listened, with eyes closed and headphones on, to more than two hours of stories from the "Moth Radio Hour," a public radio show in which people recount humorous and/or poignant autobiographical experiences.

Their brain imaging data were then matched against time-coded, phonemic transcriptions of the stories. Phonemes are units of sound that distinguish one word from another.
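To make that matching concrete, the sketch below shows one common way to align time-stamped word information with fMRI volumes: average the features of all words spoken during each scanner acquisition window. The onset times, feature dimensionality and 2-second repetition time are illustrative assumptions, not details taken from the study, and a real analysis would also account for the hemodynamic delay.

```python
import numpy as np

# Hypothetical inputs: one onset time (seconds) and one feature vector per word
# from a time-coded transcription, plus the scanner's repetition time (TR).
word_onsets = np.array([0.4, 1.1, 2.3, 3.0, 4.8])       # toy onset times
word_features = np.random.randn(len(word_onsets), 985)   # toy per-word features
TR = 2.0                                                  # assumed repetition time
n_volumes = 4                                             # toy number of fMRI volumes

# Average the features of all words spoken during each TR, yielding one
# stimulus feature vector per fMRI volume (zeros if no word was spoken).
stimulus = np.zeros((n_volumes, word_features.shape[1]))
for t in range(n_volumes):
    in_window = (word_onsets >= t * TR) & (word_onsets < (t + 1) * TR)
    if in_window.any():
        stimulus[t] = word_features[in_window].mean(axis=0)
```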

This video shows spinning 3D views of the cerebral cortex of three people. The color of each voxel indicates its semantic selectivity, or which category of words it is selective for. For example, green voxels are mostly selective for visual and tactile concepts, while red voxels are mostly selective for social concepts. White lines show the outlines of known functional brain regions. The pattern of semantic selectivity is very similar across people. Credit: Visualizations created by Alexander Huth using pycortex software (pycortex.org) by James Gao, Mark Lescroart, and Alexander Huth.

That information was then fed into a word-embedding algorithm that scored words according to how closely they are related semantically.
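As a rough illustration of what "scoring words by how closely they are related semantically" means, the sketch below builds co-occurrence-based word vectors from a toy corpus and compares them with cosine similarity. This is a generic embedding recipe, not the specific model used in the study.

```python
import numpy as np
from itertools import combinations

# Toy corpus; in practice an embedding is built from a very large text collection.
sentences = [
    "the dog chased the cat",
    "the cat chased the mouse",
    "stocks fell as markets closed",
]

# Build a word-by-word co-occurrence matrix (words sharing a sentence co-occur).
vocab = sorted({w for s in sentences for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for a, b in combinations(s.split(), 2):
        counts[index[a], index[b]] += 1
        counts[index[b], index[a]] += 1

def similarity(w1, w2):
    """Cosine similarity between two words' co-occurrence vectors."""
    v1, v2 = counts[index[w1]], counts[index[w2]]
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9))

print(similarity("dog", "cat"))      # semantically related words score higher
print(similarity("dog", "stocks"))   # unrelated words score near zero
```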

The results were converted into a thesaurus-like map where the words are arranged on the flattened cortices of the left and right hemispheres of the brain rather than on the pages of a book. Words were grouped under various headings: visual, tactile, numeric, locational, abstract, temporal, professional, violent, communal, mental, emotional and social.
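The flattened-cortex views credited to pycortex in the captions can, in general, be produced with a few lines of that library. The sketch below assumes the example subject ("S1") and transform ("fullhead") that pycortex's documentation uses, and random per-voxel values standing in for a semantic-category map, so it illustrates the plotting step rather than reproducing the study's figures.

```python
import numpy as np
import cortex  # pycortex, https://pycortex.org

# Hypothetical per-voxel values, e.g. an index of each voxel's best-matching
# semantic category; the array shape must match the subject's functional space.
voxel_values = np.random.rand(31, 100, 100)

# "S1" and "fullhead" are the example subject and transform from the pycortex
# documentation; a real analysis would use its own subject and alignment.
volume = cortex.Volume(voxel_values, "S1", "fullhead", cmap="viridis")

# Render the data on the flattened cortical surface (or open a 3D web viewer).
cortex.quickflat.make_figure(volume)
# cortex.webshow(volume)  # interactive 3D view, like the videos above
```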

Not surprisingly, the maps show that many areas of the human brain represent language that describes people and social relations rather than abstract concepts.

This video shows a spinning 3D view of one person's cerebral cortex. The color of each voxel indicates its semantic selectivity, or which category of words it is selective for. For example, green voxels are mostly selective for visual and tactile concepts, while red voxels are mostly selective for social concepts. White lines show the outlines of known functional brain regions. Credit: Visualizations created by Alexander Huth using pycortex software (pycortex.org) by James Gao, Mark Lescroart, and Alexander Huth.

"Our semantic models are good at predicting responses to language in several big swaths of cortex," Huth said. "But we also get the fine-grained information that tells us what kind of information is represented in each area. That's why these maps are so exciting and hold so much potential."

"Although the maps are broadly consistent across individuals, there are also substantial individual differences," said study senior author Jack Gallant, a UC Berkeley neuroscientist. "We will need to conduct further studies across a larger, more diverse sample of people before we will be able to map these individual differences in detail."

More information: Alexander G. Huth et al. Natural speech reveals the semantic maps that tile human cerebral cortex, Nature (2016). DOI: 10.1038/nature17637
