
How AI can help uncover the way memory works


Over the past few years, artificial intelligence—or AI—has started to revolutionize the world as we know it: some people now ask AI-based chatbots to write essays and summarize documents, others use AI-powered virtual assistants to send messages and control smart-home devices, and still others leverage the technology for drug discovery and development. Computational neuroscientist Friedemann Zenke uses AI to interrogate how the brain works.

In a study published in Nature Neuroscience, researchers led by Zenke—a research group leader at the FMI—investigated how specific groups of neurons adjust their connections in response to external stimuli. The work could help neuroscientists to understand how sensory networks, which carry information about changes in the environment, make sense of the external world.

Zenke and his colleagues use mathematical tools and theories to study how networks of neurons in the brain work together to learn and store memories. By developing approaches to deal with the complexity of the human brain, Zenke's team creates AI-based models of networks of neurons that can tell us useful things about the real organ.

"Everybody has a brain, yet we don't really understand how it functions," Zenke says. "Utimately, our goal is to acquire some form of understanding—that's because before we can get to disease, we have to understand how the healthy system works."

Models of the mind

Zenke had an early glimpse of what a career in science would be like. His father, who is a cell biologist, introduced him early on to the environment of a biomedical lab. During weekends and school holidays, Zenke used to join his dad at work. "I fondly remember putting my finger into the Vortex as a child," Zenke says. "But even at the time, I was most fascinated by the computers in the lab."

Zenke eventually set out to study physics, and in the late 2000s he went on to work in particle physics, the branch of physical science that investigates the fundamental building blocks of matter. Although Zenke found the field fascinating, the timescales of the experiments were too long. He hoped that his research could have a more immediate impact.

The blossoming field of computational neuroscience, he realized, was prompting warp-speed advances in our understanding of the brain. "That's what made me switch," he says. Another aspect that drew Zenke to neuroscience is its intricacy. "It's probably one of the most complex research topics at the moment, and it requires a diverse approach."

After setting up his own group at the FMI, Zenke set out to study how individual neurons contribute to the formation of memories—a process that plays a vital role in learning, problem-solving and personal identity. When we see someone for the first time, for example, the brain activates specific groups of neurons, resulting in a unique pattern of neuronal activity that helps create a memory.

But the only information that an individual neuron has about the external world comes in the form of electrical spikes that it receives from—and then transmits to—other neurons. "How does a single neuron contribute to this computation, to the memory and recognition of, say, someone you have met?" Zenke says.

Researchers in his group address this question using diverse approaches from mathematics, computer science and physics. Memories are made by changes in groups of neurons and the connections, or synapses, between them. So, the researchers simulate these groups of neurons, or neural networks, in the computer. Then, they use approaches from physics to get a theoretical understanding of what's happening in the networks. "Physics brings the power of abstraction—trying to boil down a problem to the bare minimum, the simplest parts that you can understand," Zenke says.
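For readers who want a concrete picture of what "simulating neurons in the computer" can mean, here is a minimal sketch in Python: a small network of leaky integrate-and-fire neurons driven by noisy input. The parameters, the random connectivity, and the network size are all illustrative assumptions, not the models used in Zenke's lab.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons, n_steps = 100, 1000
dt, tau_m, v_thresh, v_reset = 1e-3, 20e-3, 1.0, 0.0

# Random recurrent connectivity; real models would constrain its structure.
w = rng.normal(0.0, 0.1, size=(n_neurons, n_neurons))
v = np.zeros(n_neurons)        # membrane potentials
spikes = np.zeros(n_neurons)   # spikes emitted on the previous step
spike_count = 0.0

for t in range(n_steps):
    i_ext = rng.normal(0.5, 0.2, size=n_neurons)  # noisy external drive
    i_rec = w @ spikes                            # input from other neurons
    v += dt / tau_m * (-v + i_ext + i_rec)        # leaky integration
    spikes = (v >= v_thresh).astype(float)        # threshold crossing -> spike
    v[spikes > 0] = v_reset                       # reset after a spike
    spike_count += spikes.sum()

print(f"mean firing rate: {spike_count / n_neurons / (n_steps * dt):.1f} Hz")
```

Each model neuron integrates its inputs, fires when its voltage crosses a threshold, and communicates with the rest of the network only through those spikes, which is exactly the constraint Zenke highlights above.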

But in the brain, hundreds or thousands of neurons interact to form memories, and sometimes purely analytic approaches are not enough to understand how these cells compute information. That's when the researchers turn to machine learning methods to generate large-scale simulations. One such technology is deep learning, which has been used in many recent advances, including autonomous driving.

Deep learning is based on neural networks that mimic the information processing of the human brain, allowing it to "learn" from large amounts of data. "A neural network per se doesn't do anything useful; it only starts doing something useful when you train it with an algorithm," Zenke says.

As the algorithm "feeds" data to the neural network, the strengths of the connections within the network change, and the network gradually learns to perform its task. Such neural network models allow computational neuroscientists to explore questions about how the brain works, much as biologists do experiments with living animals.
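As a cartoon of that training process, the sketch below trains a tiny two-layer network on the XOR problem with plain gradient descent. The task, architecture, and learning rate are invented for illustration: the point is only that an untrained network produces useless output, and repeated exposure to data adjusts every connection until the task is solved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four input patterns and their XOR labels.
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

w1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
w2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: at first, the output is essentially random.
    h = np.tanh(x @ w1 + b1)
    out = sigmoid(h @ w2 + b2)
    # Backward pass: the algorithm nudges every connection weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ w2.T) * (1 - h**2)
    w2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    w1 -= lr * x.T @ d_h;   b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))  # approaches [0, 1, 1, 0]
```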

In-silico neuronal circuits

If researchers can design neural network models that perform similarly to the brain, that may offer an explanation for how the real organ computes information and stores memories, Zenke says.

Over the past few years, his team has developed mathematical descriptions of how synapses change through experience. The researchers trained a spiking neural network, which mimics the electrical spikes that neurons use to communicate with each other, and found that this network has some remarkable similarities to the workings of real brains.
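The article doesn't say how the spiking network was trained, and training through a hard spike threshold is not straightforward, because the threshold's derivative is zero almost everywhere. One widely used workaround is the surrogate gradient: keep the hard threshold on the forward pass but substitute a smooth stand-in derivative on the backward pass. A minimal PyTorch sketch, where the fast-sigmoid surrogate and its steepness constant are illustrative choices:

```python
import torch

class SurrGradSpike(torch.autograd.Function):
    """Hard spike threshold forward, smooth surrogate derivative backward."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()          # spike wherever voltage exceeds threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate lets gradients flow through the spike.
        return grad_output / (10.0 * v.abs() + 1.0) ** 2

spike_fn = SurrGradSpike.apply

# Toy check: gradients now pass through the spiking nonlinearity.
v = torch.randn(5, requires_grad=True)
spike_fn(v).sum().backward()
print(v.grad)   # nonzero, despite the step function's zero derivative
```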

For example, experiments in animal models have shown that the proper balance of excitatory and inhibitory electric signals enables neurons to be active in some circumstances and muted in others. When Zenke's team trained the spiking neural network to perform a specific task—for example, recognize spoken words from a sentence—the artificial neurons in the network developed a balance between excitatory and inhibitory inputs, without being told to do so. "That's where the circle closes: The model reaches a balance that we can find in biology," he says.
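To see what such a balance looks like in numbers: in a balanced network, the excitatory and inhibitory currents a neuron receives are individually large and strongly correlated, yet nearly cancel. The synthetic traces below are invented for illustration (they are not data from the study) but show that signature.

```python
import numpy as np

rng = np.random.default_rng(1)

# A shared, fluctuating drive that both input streams track.
t = np.linspace(0, 1, 1000)
drive = np.sin(2 * np.pi * 3 * t) + rng.normal(0, 0.1, t.size)

exc = 5.0 + 2.0 * drive + rng.normal(0, 0.2, t.size)      # excitatory input
inh = -(4.8 + 2.0 * drive) + rng.normal(0, 0.2, t.size)   # inhibitory input

net = exc + inh                       # small residual despite large components
balance = np.corrcoef(exc, -inh)[0, 1]
print(f"E/I correlation: {balance:.2f}, mean net input: {net.mean():.2f}")
```

Excitation and inhibition mirror each other moment to moment, so the neuron stays quiet by default but can respond sharply when the cancellation briefly breaks.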

In their latest study, Zenke and his colleagues asked how sensory networks represent the external world in neuronal activity. Sensory networks in the brain typically update their connections in response to external stimuli, but artificial networks don't—unless specific data are fed into the algorithms to predict outcomes. The researchers found a simple solution to this problem by tweaking some of the learning rules that help the artificial network learn from existing conditions.

Previous learning rules were derived from Hebbian plasticity, but they were missing one fundamental aspect: prediction. So, Zenke's team developed learning rules that try to predict future sensory inputs for each neuron. "That's the key ingredient that seems to change everything about what these networks can do," Zenke says. The findings could help neuroscientists to make sense of many experimental results obtained in animal models.
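In the same spirit, here is a deliberately simplified cartoon of what combining a Hebbian term with a predictive term can look like as a single weight update. The actual plasticity rule in the Nature Neuroscience paper is more elaborate; in this sketch the "prediction" is simply that a neuron's response should change little between two consecutive views of the same object, and the decay term is an ad-hoc stabilizer.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, lr, decay = 50, 10, 1e-2, 0.1
w = rng.normal(0, 0.1, (n_out, n_in))

def plasticity_step(x_now, x_next, w):
    """One toy update: Hebbian term + predictive term + weight decay."""
    y_now, y_next = w @ x_now, w @ x_next
    hebbian = np.outer(y_now, x_now)              # "fire together, wire together"
    predictive = np.outer(y_next - y_now, x_now)  # pull response toward its prediction
    return w + lr * (hebbian + predictive - decay * w)

# Two consecutive "views" of the same object differ only slightly.
x_now = rng.normal(0, 1, n_in)
x_next = x_now + rng.normal(0, 0.1, n_in)
w = plasticity_step(x_now, x_next, w)
```

The predictive term rewards responses that stay stable as the input changes superficially, which is one intuition for how such rules can produce invariant object representations.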

In the future, Zenke plans to create larger networks from connected neural circuits—a design principle used by the brain—to investigate how the real organ models the outside world, for example to make decisions or to evaluate other people's actions.

Combining neural circuits into large networks will give artificial models a rudimentary form of behavior, allowing Zenke to compare them with experimental findings from other researchers, including neurobiologists at the FMI working with animal models. The predictions generated by AI-powered models could also be tested in living animals, giving neuroscientists extra tools for exploring how the brain works and encouraging breakthroughs that would otherwise take decades.

"At the FMI and in Basel, we have an excellent circuit neuroscience community that provides a vibrant, collaborative atmosphere," Zenke says. "It's a fantastic place to do this type of research."

More information: Manu Srinath Halvagal et al, The combination of Hebbian and predictive plasticity learns invariant object representations in deep sensory networks, Nature Neuroscience (2023). DOI: 10.1038/s41593-023-01460-y

