
To adapt to their environment and learn from past experience, animals need to associate stimuli in their environment (e.g., a particular sound or scent) with the rewards or threats these stimuli are likely to signal (e.g., food or the presence of a predator). Past studies rooted in psychology and neuroscience have explored at length how animals form associations between environmental stimuli, introducing the idea of 'conditioning.'

Dopaminergic neurons (DANs), the primary source of the neurotransmitter dopamine in the mammalian brain, have consistently been found to play a crucial role in how animals are conditioned to associate different stimuli with pleasure or pain. Yet while DANs are known to be central to this kind of associative learning, the upstream circuits that regulate their activity in the brain are still poorly understood.

Researchers at the University of Cambridge recently carried out a study aimed at better understanding the upstream circuitry of DANs involved in associative learning. Their paper, published in Nature Neuroscience, introduces a new model that can be used to test hypotheses about the role of different circuit motifs in associative learning, applying it specifically to the larva of the fruit fly Drosophila.

"All individuals learn in different ways and we are curious about the ways in which learning itself could be regulated," Marta Zlatic, one of the researchers who carried out the study, told Medical Xpress. "Since are the ones that provide the teaching signals for learning we wanted to understand how their activity is regulated."

Zlatic and her colleagues set out to identify the upstream circuitry of DANs involved in associative learning by systematically identifying all the neurons that synapse onto DANs. To do so, they used a technique known as electron microscopy reconstruction to examine the brain of the Drosophila larva.

Their work draws inspiration from reinforcement learning theory, a renowned theoretical framework in neuroscience and psychology that is now also being used to train computational models. Reinforcement learning theory suggests that dopamine neurons encode errors between predicted and actual outcomes.

"If reinforcement learning theory is true, we would expect to find two kinds of inputs onto dopamine neurons: feedback from the output neurons of the learning centre that might encode predicted outcomes and feedforward inputs from sensory systems that encode rewards and punishments," Zlatic said. "In our study, we did indeed observe both of these types of inputs."

Using electron microscopy reconstruction, Zlatic and her colleagues comprehensively identified all the neurons that make direct synaptic connections onto DANs in Drosophila larvae. This technique entails imaging the brain with a high-resolution electron microscope, allowing researchers to observe individual neurons and the connections between them, and then to trace each neuron, its connections and the other neurons connected to it.

"We collaborated with Ashok Litwin-Kumar, who developed a model of the circuit constrained by the wiring diagram," Zlatic explained. "Using this model, we could then 'perform experiments' that would take too long in vivo and test many hypotheses about possible roles of different kinds of identified circuit motifs."

After painting an exhaustive picture of the neurons that make direct synaptic connections onto DANs in Drosophila larvae, the researchers used the model devised by Litwin-Kumar at Columbia University to test a series of hypotheses about the roles that different neural circuits may play in associative learning. This allowed them to identify specific types of circuit motifs that could enhance the power of the animal's overall learning circuit.

"We provide a complete synaptic-resolution connectome of a recurrent learning circuit in an animal," Zlatic said. "The circuitry shows that learning is heavily regulated by prior learning and hence no two individuals can learn in the same way. This could explain why learning varies so much between individuals."

The model devised by Zlatic and her colleagues allowed them to identify newly discovered types of feedback motifs that could enhance the performance and flexibility of an associative learning circuit on complex learning tasks. This ultimately offers clues about how animals may learn while completing different tasks, and it could also serve as an inspiration for future computational techniques.

"Many models suggest that dopamine neurons encode a uniform scalar prediction error signal," Zlatic said. "We found that while dopamine receive a huge amount of feedback from outputs of the learning center each dopamine neuron receives a unique pattern of feedback and therefore likely encodes a slightly different feature,"

The recent study carried out by Zlatic and her colleagues provides valuable new insight into the upstream neural circuits that regulate the activity of DANs and thus play a crucial role in the associative learning of insects, and potentially of other animals as well. The findings could serve as the basis for new studies exploring the neural mechanisms that drive learning in animals, and they could also inspire the development of new bio-inspired machine learning algorithms or other computational techniques.

"Our model generates testable predictions about the roles of identify circuit motifs in different types of complex learning tasks," Zlatic added. "Our plan is to develop more complex learning tasks in the larva and manipulate individual circuit motifs using excellent genetic tools to test the predictions of the models."

More information: Claire Eschbach et al. Recurrent architecture for adaptive regulation of learning in the insect brain, Nature Neuroscience (2020). DOI: 10.1038/s41593-020-0607-9

Journal information: Nature Neuroscience