Mimicking the brain, in silicon: New computer chip models how neurons communicate

by Anne Trafton
Fabricated analog very-large-scale integration (VLSI) chip used to mimic neuronal processes involved in memory and learning. Image: Guy Rachmuth

For decades, scientists have dreamed of building computer systems that could replicate the human brain’s talent for learning new tasks.

MIT researchers have now taken a major step toward that goal by designing a computer chip that mimics how the brain’s neurons adapt in response to new information. This phenomenon, known as plasticity, is believed to underlie many brain functions, including learning and memory.

With about 400 transistors, the silicon chip can simulate the activity of a single brain synapse — a connection between two neurons that allows information to flow from one to the other. The researchers anticipate this chip will help neuroscientists learn much more about how the brain works, and could also be used in neural prosthetic devices such as artificial retinas, says Chi-Sang Poon, a principal research scientist in the Harvard-MIT Division of Health Sciences and Technology.

Poon is the senior author of a paper describing the chip in the Proceedings of the National Academy of Sciences the week of Nov. 14. Guy Rachmuth, a former postdoc in Poon’s lab, is lead author of the paper. Other authors are Mark Bear, the Picower Professor of Neuroscience at MIT, and Harel Shouval of the University of Texas Medical School.

Modeling synapses

There are about 100 billion neurons in the brain, each of which forms synapses with many other neurons. A synapse is the gap between two neurons (known as the presynaptic and postsynaptic neurons). The presynaptic neuron releases neurotransmitters, such as glutamate and GABA, which bind to receptors on the postsynaptic cell membrane, activating ion channels. Opening and closing those channels changes the cell’s electrical potential. If the potential changes dramatically enough, the cell fires an electrical impulse called an action potential.

All of this synaptic activity depends on the ion channels, which control the flow of charged atoms such as sodium, potassium and calcium. Those channels are also key to two processes known as long-term potentiation (LTP) and long-term depression (LTD), which strengthen and weaken synapses, respectively.
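To make the mechanics above concrete, the sketch below shows a deliberately simplified, conductance-based membrane model in Python: each term stands in for one class of ion channel, and an action potential is emitted when the potential crosses a threshold. It is an illustration of the general idea only, with placeholder parameters, not the model implemented on the MIT chip.

    # Minimal conductance-based membrane sketch, for illustration only.
    # All constants are placeholder values, not the chip's parameters.
    DT = 0.1                                    # time step (ms)
    C_M = 1.0                                   # membrane capacitance (arbitrary units)
    E_LEAK, E_EXC, E_INH = -70.0, 0.0, -80.0    # reversal potentials (mV)
    V_THRESH, V_RESET = -54.0, -70.0            # spike threshold / reset (mV)

    def simulate(g_exc, g_inh, g_leak=0.05, t_max=200.0):
        """Integrate the membrane potential and report action-potential times."""
        v = E_LEAK
        spikes = []
        for step in range(int(t_max / DT)):
            t = step * DT
            # Each term mimics one class of ion channel: current = conductance * driving force.
            i_leak = g_leak * (v - E_LEAK)
            i_exc = g_exc(t) * (v - E_EXC)      # e.g. glutamate-gated channels
            i_inh = g_inh(t) * (v - E_INH)      # e.g. GABA-gated channels
            v += DT * (-(i_leak + i_exc + i_inh) / C_M)
            if v >= V_THRESH:                   # potential changed "dramatically enough"
                spikes.append(t)                # fire an action potential
                v = V_RESET
        return spikes

    # Example: a burst of excitatory input between 50 and 100 ms drives spiking.
    print(simulate(g_exc=lambda t: 0.3 if 50.0 <= t <= 100.0 else 0.0,
                   g_inh=lambda t: 0.0))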

The MIT researchers designed their computer chip so that the transistors could mimic the activity of different ion channels. While most chips operate in a binary, on/off mode, current flows through the transistors on the new brain chip in analog, not digital, fashion. A gradient of electrical potential drives current to flow through the transistors just as ions flow through ion channels in a cell.

“We can tweak the parameters of the circuit to match specific ion channels,” Poon says. “We now have a way to capture each and every ionic process that’s going on in a neuron.”

Previously, researchers had built circuits that could simulate the firing of an action potential, but not all of the circumstances that produce the potentials. “If you really want to mimic brain function realistically, you have to do more than just spiking. You have to capture the intracellular processes that are ion channel-based,” Poon says.

The new chip represents a “significant advance in the efforts to incorporate what we know about the biology of neurons and synaptic plasticity onto CMOS [complementary metal-oxide-semiconductor] chips,” says Dean Buonomano, a professor of neurobiology at the University of California at Los Angeles, adding that “the level of biological realism is impressive.”

The MIT researchers plan to use their chip to build systems to model specific neural functions, such as the visual processing system. Such systems could be much faster than digital computers. Even on high-capacity computer systems, it takes hours or days to simulate a simple brain circuit. With the analog chip system, the simulation is even faster than the biological system itself.

Another potential application is building chips that can interface with biological systems. This could be useful in enabling communication between neural prosthetic devices such as artificial retinas and the brain. Further down the road, these chips could also become building blocks for artificial intelligence devices, Poon says.

Debate resolved

The MIT researchers have already used their chip to propose a resolution to a longstanding debate over how LTD occurs.

One theory holds that LTD and LTP depend on the frequency of action potentials stimulated in the postsynaptic cell, while a more recent theory suggests that they depend on the timing of the action potentials’ arrival at the synapse.

Both require the involvement of ion channels known as NMDA receptors, which detect postsynaptic activation. Recently, it has been theorized that both models could be unified if there were a second type of receptor involved in detecting that activity. One candidate for that second receptor is the endo-cannabinoid receptor.

Endo-cannabinoids, similar in structure to the active compounds in marijuana, are produced in the brain and are involved in many functions, including appetite, pain sensation and memory. Some neuroscientists had theorized that endo-cannabinoids produced in the postsynaptic cell are released into the synapse, where they activate presynaptic endo-cannabinoid receptors. If NMDA receptors are active at the same time, LTD occurs.
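One heavily simplified way to picture that coincidence rule, without claiming to reproduce the biophysical model in the PNAS paper, is to treat NMDA-receptor activation and presynaptic endo-cannabinoid signaling as two inputs to a weight-update rule. The function and thresholds below are hypothetical illustrations.

    # Toy coincidence rule inspired by the description above; not the published model.
    def update_weight(weight, nmda_activity, ecb_receptor_active,
                      ltp_threshold=0.8, ltd_threshold=0.3,
                      ltp_rate=0.05, ltd_rate=0.05):
        """Return an updated synaptic weight.

        nmda_activity       -- value in [0, 1] for postsynaptic NMDA-receptor activation
        ecb_receptor_active -- True if presynaptic endo-cannabinoid receptors are active
        """
        if nmda_activity >= ltp_threshold:
            # Strong postsynaptic activation alone: long-term potentiation (strengthen).
            return weight + ltp_rate * (1.0 - weight)
        if ecb_receptor_active and nmda_activity >= ltd_threshold:
            # Moderate NMDA activity coinciding with retrograde endo-cannabinoid
            # signaling at the presynaptic terminal: long-term depression (weaken).
            return weight - ltd_rate * weight
        return weight  # no lasting change

    print(update_weight(0.5, nmda_activity=0.9, ecb_receptor_active=False))  # LTP
    print(update_weight(0.5, nmda_activity=0.4, ecb_receptor_active=True))   # LTD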

When the researchers included on their chip transistors that model endo-cannabinoid receptors, they were able to accurately simulate both LTD and LTP. Although previous experiments supported this theory, until now, “nobody had put all this together and demonstrated computationally that indeed this works, and this is how it works,” Poon says.


This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.


User comments


Vendicar_Decarian
2.3 / 5 (3) Nov 15, 2011
A valuable tool to discover how groups of existing neurons function but entirely unnecessary for the development of neuron based AI simulation.

They should instead be developing chips that can simulate with hardware assistance the non-linear summation of hundreds of inputs to cause the triggering of hundreds of outputs, with each input and output having a programmed trigger threshold sensitivity.

The chip should have multiple processing cores capable of churning through large numbers of such "vectors" at neuronal like speeds so that one chip can emulate - although not exactly - thousands of neurons.

Then you build a series of brains, starting from the most primitive organisms and then working up the complexity through worms, Republicans, mice and eventually the brain of an ape.
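(For readers who want to see the scheme described in this comment spelled out, here is a minimal Python sketch of that kind of thresholded, non-linear summation of many inputs, each output with its own programmed threshold. It illustrates the idea only; it is not any chip's design.)

    # Illustrative non-linear summation with per-output thresholds; not a chip design.
    import math

    def neuron_output(inputs, weights, threshold, gain=1.0):
        """Weighted sum pushed through a sigmoid; output only if it clears the threshold."""
        total = sum(w * x for w, x in zip(weights, inputs))
        activation = 1.0 / (1.0 + math.exp(-gain * total))
        return activation if activation >= threshold else 0.0

    def layer(inputs, weight_rows, thresholds):
        """Many inputs feeding many outputs, each with its own trigger threshold."""
        return [neuron_output(inputs, w, th) for w, th in zip(weight_rows, thresholds)]

    # Example: 3 inputs feeding 2 outputs with different programmed thresholds.
    print(layer([1.0, 0.5, -0.2],
                weight_rows=[[0.8, -0.1, 0.4], [0.2, 0.9, -0.3]],
                thresholds=[0.6, 0.7]))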
Isaacsname
not rated yet Nov 15, 2011
The ENIAC of our generation ?
socean
not rated yet Nov 15, 2011

Then you build a series of brains, starting from the most primitive organisms and then working up the complexity through worms, Republicans, mice and eventually the brain of an ape.


Nicely done.
Code_Warrior
5 / 5 (1) Nov 15, 2011
A valuable tool to discover how groups of existing neurons function but entirely unnecessary for the development of neuron based AI simulation.

They should instead be developing chips that can simulate with hardware assistance the non-linear summation of hundreds of inputs to cause the triggering of hundreds of outputs, with each input and output having a programmed trigger threshold sensitivity.

The chip should have multiple processing cores capable of churning through large numbers of such "vectors" at neuronal like speeds so that one chip can emulate - although not exactly - thousands of neurons.

Then you build a series of brains, starting from the most primitive organisms and then working up the complexity through worms, Republicans, mice and eventually the brain of an ape.

Decari-Tard, striving desperately to raise himself to Quantum Conundrum levels of intellect with his post, only achieves worm status, falling short of his dream to become a Republican.
Deesky
5 / 5 (2) Nov 15, 2011
A valuable tool to discover how groups of existing neurons function but entirely unnecessary for the development of neuron based AI simulation.

Disagree, somewhat. Yes, you can use conventional supercomputers to simulate neurons, but the problem is, they're still way too slow to simulate fully biologically accurate neurons with a large number of interconnections (project BlueBrain notwithstanding).

If they can do this in hardware and achieve decent densities (ie, large numbers of interconnected neurons), then I think this will prove to be very valuable indeed for AI research.

Also remember, even animals with minuscule brains (few neurons) are capable of doing some amazing things that current computers struggle to do.
Deesky
5 / 5 (1) Nov 15, 2011
They should instead be developing chips that can simulate with hardware assistance the non-linear summation of hundreds of inputs to cause the triggering of hundreds of outputs, with each input and output having a programmed trigger threshold sensitivity

I thought that's what they were doing - potentials leading to trigger events and neural rewiring.

The chip should have multiple processing cores capable of churning through large numbers...

Cores? I don't think so. That's precisely what they're trying to move away from with this architecture.

...at neuronal like speeds so that one chip can emulate - although not exactly - thousands of neurons.

Neural speeds are slow compared to electronics. This research points out that they can emulate in analog fashion what real neurons do, but much faster. And the whole point is to increase the faithfulness of the simulation, not to take shortcuts leading to less accuracy.
hush1
not rated yet Nov 16, 2011
Where to start? At the beginning? First breath maybe? Use a portable fMRI on the newborn before, during, and after whacking him/her.

Breathing is really, really complicated. So getting those first pictures of a brain 'springing or going into action' will shed light on where, what, how, and when a 'newbie' handles such a humongous first task.

Of course, all newborns make this look 'easy'.
Those images will make the moon landing look insignificant.
And enjoy more viewers too. NCC fans will go ape s..t and be thankful too. Something worth modeling: breath of life.

Were you conscious? Of your first breath? Do you have to be conscious to take your first breath?

Digressing and daydreaming again. Nothing new from me.
Vendicar_Decarian
1 / 5 (1) Nov 16, 2011
"I thought that's what they were doing - potentials leading to trigger events and neural rewiring." - Deesky

400 transistors is way too many. At current levels of integration you might be able to get to 1 million neurons per wafer that way. So you would need 100,000 or so wafers to build a human brain provided you could get the chips wired in 3d.

One downside is that it would operate thousands of times faster than a human brain making it difficult to diagnose problems, etc.

Better to put the speed to use in reducing the number of silicon neurons required. That way you may sacrifice 999 times the speed but reduce your chip count to 100.

In any case, simulating neurons with precision is certainly not necessary. If it were needed, then intelligence would never have evolved in the first place.

Intelligence is plastic and must be reasonably independent of the underlying processing elements. It must be robust because neurons are quite unstable in their response to stimuli.

cont
Vendicar_Decarian
1 / 5 (1) Nov 16, 2011
For the study of how existing systems of neurons function this chip will probably make life easier.

However for the construction of a brain variations on the perceptron are all that should be needed.
Vendicar_Decarian
1 / 5 (1) Nov 16, 2011
"Breathing is really, really complicated." - hush1

Is it? It can be emulated with a 4004.

Problem solved.
Vendicar_Decarian
1 / 5 (1) Nov 16, 2011
"Cores? I don't think so. That's precisely what they're trying to move away from with this architecture." - Deesky

They want to move away from traditional general purpose cores.

Specifically designed neural processing cores make much more sense, where one long 2000 entry vector assigns in one go all of the synapse connections, input and output weights, thresholds, and trigger sensitivities, and in one clock cycle all of those values are churned together in parallel and made available at the outputs, and written out to a local on chip store.

The next cycle does the next neuron, etc, etc until a logical array of n by n by n neurons is computed.
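(A rough software analogue of that "one packed vector per neuron per cycle" idea, with an entirely hypothetical field layout, might look like this:)

    # Hypothetical illustration of packing one neuron's parameters into a single vector.
    def process_neuron(state_vector, inputs):
        """One 'clock cycle' for one neuron: weights, threshold and gain come from the vector."""
        *weights, threshold, gain = state_vector
        summed = sum(w * x for w, x in zip(weights, inputs))
        return gain * summed if summed >= threshold else 0.0

    def process_array(neuron_table, inputs):
        """Sweep the whole logical array, one neuron per 'cycle', writing outputs in order."""
        return [process_neuron(vec, inputs) for vec in neuron_table]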

Vendicar_Decarian
1 / 5 (1) Nov 16, 2011
"Decari-Tard, striving desperately to raise himself to Quantum Conundrum levels of intellect with his post" - Code Warrior

Neural nets have been a mixed success, primarily because their programming is time-consuming, making large arrays of them impractical.

They are essentially layers of matrix multipliers where the input is multiplied by some matrix to produce an output to another layer. Over several layers a desired output is programmed into the layers by a method called backpropagation.

The multiplication isn't linear though. If it were then all of the layers could be reduced to a single layer by simply multiplying all of the intermediate matrices together.

The multiplications are made non-linear and a single perceptron threshold is generally thrown in for good measure to swizzle up the mix.

The technique remains valid but requires higher speeds so that more and larger layers can be produced, and then layered themselves to produce ever higher levels of heuristic logic.
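(The layer-collapse point made in this comment can be checked in a few lines of NumPy: two purely linear layers reduce to one matrix, while a non-linearity between them does not. A hedged illustration, not a full network:)

    # Why the non-linearity matters: linear layers collapse, non-linear ones do not.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((4, 3))   # layer-1 matrix
    W2 = rng.standard_normal((2, 4))   # layer-2 matrix
    x = rng.standard_normal(3)

    # Purely linear layers: W2 @ (W1 @ x) equals (W2 @ W1) @ x, so depth adds nothing.
    print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))        # True

    # A non-linearity between layers breaks the collapse.
    print(np.allclose(W2 @ np.tanh(W1 @ x), (W2 @ W1) @ x))  # False (in general)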
hush1
not rated yet Nov 16, 2011
The project faces failure.
The researchers must account for microglia.

http://medicalxpr...une.html

This comment is without digression or daydreaming.
Deesky
not rated yet Nov 16, 2011
One downside is that it would operate thousands of times faster than a human brain making it difficult to diagnose problems, etc.

I think you're looking at this from the 'old school' engineering perspective, where everything is designed, controlled and timed to a set specification. These types of circuits (while still needing to function as designed) are about forming connections and modes of operation that emerge 'organically', ie, not programmed in. As such, the system as a whole would be inherently difficult, if not impossible, to diagnose.

Better to put the speed to use in reducing the number of silicon neurons required.

But that is the opposite of what this research is trying to attempt.

In any case, simulating neurons with precision is certainly not necessary.

Tell that to the BlueBrain project! Even with neural nets, more faithful neural simulations have led to more 'intelligent' behaviors.
hush1
not rated yet Nov 17, 2011
This is what this project must perform and accomplish on the same grand scale at which the brain functions:

http://www.newsci...und.html

Every physical object accessible to all the percepts from all the senses must have a representation (association) of every physical object/event a human being will encounter on earth.

Once those original physical objects and/or events are 'imprinted' (committed to and occupied by neuronal pathways), each and every single physical object/event must have an interchangeable representational association (pathway) to each percept under each other representing the original physical objects/events:

Seeing with sound, hearing with sight, and so forth.

Once that grand original foundation is established, all additional informational percepts/inputs from physical objects/events must undergo the same process as was used to establish the original grand foundation.

cont...
hush1
not rated yet Nov 17, 2011
cont...
That is the tip of the iceberg... we move on:

The assignment of the physiological representations committed to and occupied by neuronal pathways are to be added to the grand original foundation as well as to the expected future additional inputs sourced from the continuous physical objects/events.

And we stop only here because of character limit. And sum up the entire two comments written by me with the first sentence of the article:

MIT researchers have now taken a major step toward that goal by designing a computer chip that mimics how the brain's neurons adapt in response to new information. This phenomenon, known as plasticity, is believed to underlie many brain functions, including learning and memory.


...with a single correction to the quote - where you read the word "major", you must imagine the word "minor" instead.

The word "major" is a journalistic favor/ploy done on behalf, and for the MIT researchers, so as not to discourage the public and fans ...
cont...
hush1
not rated yet Nov 17, 2011
cont...
...not to discourage the public and fans from losing their interest after the truth is told that the researchers have, in the most optimistic of all scenarios, another 100 years to go.