Neurons have the right shape for deep learning

December 4, 2017, Canadian Institute for Advanced Research
A neuron recorded in Blake Richards' lab. Credit: Blake Richards

Deep learning has brought about machines that can 'see' the world more like humans do, and recognize language. And while deep learning was inspired by the human brain, the question remains: Does the brain actually learn this way? The answer has the potential to create more powerful artificial intelligence and unlock the mysteries of human intelligence.

In a study published December 5th in eLife, CIFAR Fellow Blake Richards and his colleagues unveiled an algorithm that simulates how deep learning could work in our brains. The network shows that certain mammalian neurons have the shape and electrical properties well-suited for deep learning. Furthermore, it offers a more biologically realistic account of how real brains could do deep learning.

The research was conducted by Richards and his graduate student Jordan Guerguiev at the University of Toronto, Scarborough, in collaboration with Timothy Lillicrap at Google DeepMind. Their algorithm was based on neurons in the neocortex, which is responsible for higher order thought.

"Most of these neurons are shaped like trees, with 'roots' deep in the and 'branches' close to the surface," says Richards. "What's interesting is that these roots receive a different set of inputs than the branches that are way up at the top of the tree."

Using this knowledge of the neurons' structure, Richards and Guerguiev built a model that similarly received signals in segregated compartments. These sections allowed simulated neurons in different layers to collaborate, achieving deep learning.
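The model described in the eLife paper is far more detailed, but the core idea can be sketched in a few lines of code: bottom-up input drives a "basal" compartment, top-down feedback arrives at a separate "apical" compartment through its own fixed weights, and each layer adjusts its connections using only signals available locally. The toy NumPy sketch below is an illustrative assumption based on that description, not the paper's actual model; the variable names, network sizes and learning rule are simplifications.

import numpy as np

# Toy network with one hidden layer whose units have two segregated
# compartments: a basal compartment for bottom-up input and an apical
# compartment for top-down feedback. Purely illustrative; not the
# model from Guerguiev, Lillicrap and Richards (2017).
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2

W_in = rng.normal(scale=0.5, size=(n_hid, n_in))    # feedforward (basal) weights
W_out = rng.normal(scale=0.5, size=(n_out, n_hid))  # hidden-to-output weights
B = rng.normal(scale=0.5, size=(n_hid, n_out))      # fixed feedback (apical) weights

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))                 # logistic activation

x = rng.normal(size=n_in)        # a single input pattern
target = np.array([1.0, 0.0])    # desired output for that pattern
lr = 0.2

for _ in range(500):
    h = sigma(W_in @ x)          # basal compartment drives the hidden activity
    y = sigma(W_out @ h)         # output layer prediction

    err = target - y
    apical = B @ err             # top-down error signal via separate feedback weights

    # Each layer updates from quantities it can compute locally.
    W_out += lr * np.outer(err * y * (1 - y), h)
    W_in += lr * np.outer(apical * h * (1 - h), x)

print(np.round(y, 3))            # output approaches the target over training

Because the feedback weights B are fixed and separate from the forward weights, the hidden layer never needs an exact copy of the downstream weights, which is the kind of relaxation of backpropagation's rules that Lillicrap's earlier work introduced.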

"It's just a set of simulations so it can't tell us exactly what our brains are doing, but it does suggest enough to warrant further experimental examination if our own brains may use the same sort of algorithms that they use in AI," Richards says.

Illustration of a multi-compartment neural network model for deep learning. Left: Reconstruction of pyramidal neurons from mouse primary visual cortex. Right: Illustration of simplified pyramidal neuron models. Credit: CIFAR

This research idea goes back to AI pioneers Geoffrey Hinton, a CIFAR Distinguished Fellow and founder of the Learning in Machines & Brains program, and program Co-Director Yoshua Bengio, and was one of the main motivations for founding the program in the first place. These researchers sought not only to develop artificial intelligence, but also to understand how the human brain learns, says Richards.

In the early 2000s, Richards and Lillicrap took a course with Hinton at the University of Toronto and were convinced deep learning models were capturing "something real" about how human brains work. At the time, there were several challenges to testing that idea. Firstly, it wasn't clear that deep learning could achieve human-level skill. Secondly, the algorithms violated biological facts proven by neuroscientists.

Now, Richards and a number of researchers are looking to bridge the gap between neuroscience and AI. This paper builds on research from Bengio's lab on a more biologically plausible way to train neural nets and an algorithm developed by Lillicrap that further relaxes some of the rules for training neural nets. The paper also incorporates research from Matthew Larkum on the structure of neurons in the neocortex. By combining these insights from neuroscience with existing algorithms, Richards' team was able to create a better and more realistic algorithm simulating learning in the brain.

The tree-like neocortex neurons are only one of many types of cells in the brain. Richards says future research should model different brain cells and examine how they could interact together to achieve deep learning. In the long term, he hopes researchers can overcome major challenges, such as how to learn through experience without receiving feedback.

"What we might see in the next decade or so is a real virtuous cycle of research between neuroscience and AI, where neuroscience discoveries help us to develop new AI and AI can help us interpret and understand our experimental data in neuroscience," Richards says.

"Towards deep learning with segregated dendrites" was published in eLife on Dec. 5.


More information: "Towards deep learning with segregated dendrites," eLife (2017). DOI: 10.7554/eLife.22901
