Dissecting artificial intelligence to better understand the human brain

March 25, 2018, Cognitive Neuroscience Society

In the natural world, intelligence takes many forms. It could be a bat using echolocation to expertly navigate in the dark, or an octopus quickly adapting its behavior to survive in the deep ocean. Likewise, in the computer science world, multiple forms of artificial intelligence are emerging - different networks each trained to excel at a different task. And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists are increasingly using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain.

"The fundamental questions cognitive neuroscientists and computer scientists seek to answer are similar," says Aude Oliva of MIT. "They have a complex system made of components - for one, they're called neurons and for the other, they're called units - and we are doing experiments to try to determine what those components calculate."

In Oliva's work, which she is presenting at the CNS symposium, neuroscientists are learning much about the role of contextual clues in human image recognition. By using "artificial neurons" - essentially units of software - in neural network models, they can parse out the various elements that go into recognizing a specific place or object.

"The brain is a deep and complex neural network," says Nikolaus Kriegeskorte of Columbia University, who is chairing the symposium. "Neural network models are brain-inspired models that are now state-of-the-art in many artificial intelligence applications, such as computer vision."

In one recent study of more than 10 million images, Oliva and colleagues taught an artificial network to recognize 350 different places, such as a kitchen, bedroom, park, or living room. They expected the network to learn objects associated with each place, such as a bed with a bedroom. What they didn't expect was that the network would also learn to recognize people and animals, for example, dogs at parks and cats in living rooms.

The machine programs learn very quickly when given lots of data, which is what enables them to parse contextual learning at such a fine level, Oliva says. While it is not possible to dissect human neurons at that level, a computer performing a similar task is entirely transparent. The networks serve as "mini-brains that can be studied, changed, evaluated, compared against responses given by human neural networks, so the cognitive neuroscientists have some sort of sketch of how a real brain may function."
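The transparency Oliva describes can be illustrated with a toy network. The sketch below (pure NumPy, with made-up random weights - not the actual model from the study) records every intermediate activation on a forward pass, so each "artificial neuron" can be read out directly, something that is impossible with biological neurons:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer feedforward network with random weights -- a toy
# stand-in for the large scene-recognition networks described above.
W1 = rng.standard_normal((4, 8))   # input features -> hidden units
W2 = rng.standard_normal((8, 3))   # hidden units -> output classes


def forward(x):
    """Run the network and keep every intermediate activation.

    Every unit's response is stored and returned, making the whole
    computation inspectable -- the 'transparency' of artificial networks.
    """
    h = np.maximum(0.0, x @ W1)        # ReLU hidden layer
    logits = h @ W2
    p = np.exp(logits - logits.max())
    p /= p.sum()                       # softmax over output classes
    return {"hidden": h, "logits": logits, "probs": p}


acts = forward(rng.standard_normal(4))
# Which hidden units fired for this input can be read off directly:
active_units = np.flatnonzero(acts["hidden"] > 0)
```

Researchers probe real networks in just this way, asking which units respond to which image features, and comparing those responses with recordings from biological neurons.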

Indeed, Kriegeskorte says that these models have helped neuroscientists understand how people can recognize the objects around them in the blink of an eye. "This involves millions of signals emanating from the retina that sweep through a sequence of layers of neurons, extracting semantic information, for example that we're looking at a street scene with several people and a dog," he says. "Current neural network models can perform this kind of task using only computations that biological neurons can perform. Moreover, these neural network models can predict to some extent how a neuron deep in the brain will respond to any image."

Using computer science to understand the human brain is a relatively new field that is expanding rapidly thanks to advancements in computing speed and power, along with neuroscience imaging tools. The artificial networks cannot yet replicate human visual abilities, Kriegeskorte says, but by modeling the human brain, they are furthering understanding of both cognition and artificial intelligence. "It's a uniquely exciting time to be working at the intersection of neuroscience, cognitive science, and AI," he says.

Indeed, Oliva says: "Human cognitive and computational neuroscience is a fast-growing area of research, and knowledge about how the brain is able to see, hear, feel, think, remember, and predict is mandatory to develop better diagnostic tools, to repair the brain, and to make sure it develops well."


More information: Oliva and Kriegeskorte are presenting in the symposium "Human and machine cognition: The deep learning challenge" at the CNS annual meeting in Boston. More than 1,500 scientists are attending the meeting from March 24-27, 2018.