Training computers to understand the human brain

October 8, 2012, Tokyo Institute of Technology
Activation maps of the two contrasts (hot colors: mammal > tool; cool colors: tool > mammal), computed from the 10 datasets of the participants.

Understanding how the human brain categorizes information through signs and language is a key part of developing computers that can 'think' and 'see' in the same way as humans. Hiroyuki Akama at the Graduate School of Decision Science and Technology, Tokyo Institute of Technology, together with co-workers in Yokohama, the USA, Italy and the UK, has completed a study using fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people.

The participants were asked to look at pictures of animals and hand tools, each presented together with an auditory or written (orthographic) description, and to silently 'label' each pictured object with certain properties whilst undergoing an fMRI brain scan. The resulting scans were analysed using algorithms that identified patterns relating to the two semantic groups (animal or tool).

After 'training' the algorithms in this way on part of the auditory session data, the computer correctly identified the remaining scans 80-90% of the time. Similar results were obtained with the orthographic session data. A cross-modal approach, namely training the computer on auditory data but testing it on orthographic data, reduced performance to 65-75%. Continued research in this area could lead to systems that allow people to speak through a computer simply by thinking about what they want to say.

Understanding how the human brain categorizes information through signs and language is a key part of developing computers that can 'think' and 'see' in the same way as humans. It is only in recent years that the field of semantics has been explored through the analysis of brain scans and brain activity in response to both language-based and visual inputs. Teaching computers to read brain scans and interpret the language encoded in brain activity could have a variety of uses in medical science and beyond.

Now, Hiroyuki Akama at the Graduate School of Decision Science and Technology, Tokyo Institute of Technology, together with co-workers in Yokohama, the USA, Italy and the UK, has completed a study using fMRI data to train a computer to predict the semantic category of an image originally viewed by five different people.

The five participants in the project were shown two sets of forty randomly arranged pictures during the experiment. The pictures came from two distinct categories: either an animal or a hand tool. In the first session, twenty images of animals and twenty of hand tools were accompanied by the spoken Japanese name of each object (auditory). In the second session, shown to the participants several days later, the same forty randomly ordered images were accompanied by Japanese written characters (orthographic). Each participant was asked to silently 'label' each image with properties they associated with that object in their mind.

During each session, the participants were scanned using fMRI technology. This provided Akama and his team with 240 individual scans showing brain activity for each session. The researchers analyzed the brain scans using a technique called multi-voxel pattern analysis (MVPA). This involves using computer algorithms to identify repeating patterns of activity across voxels, the cube-shaped elements that make up the 3D scan images. Interestingly, animal pictures tended to induce activity in the visual part of the brain, whereas tool pictures triggered a response more from sensory-motor areas, a phenomenon reported in previous studies.
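As a rough illustration of the kind of analysis involved (a minimal sketch, not the authors' actual pipeline), MVPA decoding is commonly set up as a linear classifier trained on vectors of voxel values, one vector per scan, with held-out scans used to measure accuracy. The Python example below uses scikit-learn on synthetic data standing in for the 240 scans of one session; the voxel count, the injected signal and all variable names are assumptions made purely for the demonstration.

```python
# Minimal MVPA-style decoding sketch (illustrative only, not the study's code).
# Each scan is flattened into a vector of voxel values; a linear classifier
# then learns which voxel patterns distinguish "animal" from "tool" scans.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_scans, n_voxels = 240, 5000                    # 240 scans per session, as in the article
X = rng.standard_normal((n_scans, n_voxels))     # synthetic stand-in for voxel data
y = np.repeat([0, 1], n_scans // 2)              # 0 = animal, 1 = tool

# Give a small subset of voxels a weak category-dependent signal so the toy
# example has something to learn, loosely mimicking category-selective regions.
X[y == 1, :50] += 0.5

clf = LinearSVC()
scores = cross_val_score(clf, X, y, cv=5)        # accuracy on held-out scans, per fold
print("mean decoding accuracy:", scores.mean())
```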

The MVPA results were then used to find out whether the computer could predict, from the patterns in the scans alone, whether a participant had been viewing an animal or a hand-tool image.

Several different tests were given to the computer. After training the machine to recognise patterns related to 'animals' and 'tools' in part of the auditory session data, for example, the computer correctly identified the remaining auditory scans as animal or tool 80-90% of the time. The auditory data proved slightly easier to predict, although the success rate for the orthographic session data was very similar.

Akama and his team then decided to try a cross-modal approach, namely training the computer using one session's data set but testing it using the other. As might be expected, the brain scans for the auditory and orthographic sessions differed, as people think in different ways when listening and reading. However, the computer suffered an even stronger performance penalty than anticipated, with success rates down to 65-75%. The exact reasons for this are unclear, although the researchers point to a combination of timing differences (the time taken for the participants to respond to written as opposed to auditory information) and spatial differences (the anatomy of the individuals' brains differing slightly and thereby affecting the voxel distributions).
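Hypothetically, the cross-modal test described above amounts to fitting the classifier on every scan from one session and scoring it on the scans from the other, rather than splitting a single session. Continuing the synthetic sketch from earlier (again an assumption-laden illustration, not the published analysis), it might look like this, with a crude session-wide offset standing in for the timing and anatomical differences the researchers mention:

```python
# Cross-modal decoding sketch (illustrative): train on one session, test on the other.
# X_aud and X_orth are synthetic stand-ins for the auditory and orthographic sessions.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_scans, n_voxels = 240, 5000
y = np.repeat([0, 1], n_scans // 2)              # 0 = animal, 1 = tool

def make_session(offset):
    """Build one synthetic session: shared category signal plus a session-specific offset."""
    X = rng.standard_normal((n_scans, n_voxels))
    X[y == 1, :50] += 0.5                        # category signal shared across sessions
    return X + offset                            # crude stand-in for cross-session differences

X_aud, X_orth = make_session(0.0), make_session(0.3)

clf = LinearSVC().fit(X_aud, y)                  # train on the "auditory" session
print("cross-modal accuracy:", clf.score(X_orth, y))  # test on the "orthographic" session
```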

One future application of experiments such as this could be the development of real-time brain-computer interfaces. Such devices could allow patients with communication impairments to speak through a computer simply by thinking about what they want to say.

More information: H. Akama et al., "Decoding semantics across fMRI sessions with different stimulus modalities: a practical MVPA study," Frontiers in Neuroinformatics 6:24 (2012). doi: 10.3389/fninf.2012.00024

1 comment

Tausch
Oct 13, 2012
A heroic effort.

Akama and his team then decided to try a cross-modal approach, namely training the computer using one session data set but testing it using the other. As perhaps would be expected, the brain scans for auditory and orthographic sessions differed, as people think in different ways when listening and reading. - Author of article (unknown)


Changing one word in the above paragraph might help readers and researchers grasp where they stray from a path that will lead to one of their goals: real-time brain-computer interfaces.

...as people ASSOCIATE (origin word is 'think') in different ways when listening and reading.


It's like the language of math - one symbol makes or breaks the proof.
