Scientists use machine learning to 'see' how the brain adapts to different environments

Resolving AMPAR clusters at individual excitatory synapses in vivo. a, CRISPR-based transgenic labeling of the GluA2 AMPA receptor subunit with a pH-dependent fluorescent tag (SEP) enables in vivo visualization of endogenous GluA2-containing synapses. b, Single high-resolution imaging plane from fixed-slice tissue with endogenous fluorescence, acquired using Airyscan detectors. Magenta, tdT; green, SEP–GluA2. Arrows mark examples of SEP–GluA2/spine overlap. Colored arrows show the same synapse across image channels. Data are representative of three SEP–GluA2 mice examined over one independent experiment. c, Tradeoffs of different imaging modalities. d–f, Example xy slice (top) and xz slice (bottom) of different imaging modalities. Scale bar, 5 µm in xy and z. Data are representative of three SEP–GluA2 mice, each imaged with all three microscopy modalities in three independent trials. g–i, Diagrams of training, validation and application workflow. Representative images of single xy plane of each color-coded imaging modality (left). Workflow of training, validation or application (right). Insets are representative images of tissue from six SEP–GluA2 mice, examined over three independent experiments. CNN was trained using 1p confocal images from acute slices of SEP–GluA2 tissue (x_i). CNN output (y_o) was compared to ground truth (high-resolution Airyscan imaging of the same tissue, y_i) to improve network performance (g). Network output was validated by comparing to ground truth and annotations by expert humans, enabling quantification of error rates (h). Trained restoration CNN was applied to in vivo 2p images, restoring optimal 'Airyscan-like' resolution to in vivo imaging volumes (i). x_v, slice 2p data; y_v, XTC-restored slice 2p data; x_a, raw in vivo 2p data; y_a, XTC-restored in vivo 2p data. j, Pipeline for longitudinal tracking of fluorescently labeled SEP–GluA2 synapses in vivo. Daily imaging volumes were aligned using pairwise affine registration, followed by slice-by-slice pairwise affine registration to compensate for depth-dependent local tissue shift. Registered volumes were restored with XTC. Individual synapses were segmented with an ilastik-trained random forest model, followed by watershed to separate adjacent objects. Finally, a tracker trained through structured learning was used to longitudinally track synapses. t indicates current timepoint; n indicates number of subsequent timepoints. Credit: Nature Methods (2023). DOI: 10.1038/s41592-023-01871-6
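
For readers who want a concrete feel for the segmentation step described in panel j, the sketch below shows the distance-transform-plus-watershed idea in Python with scikit-image. It is not the authors' code: a plain intensity threshold stands in for their ilastik-trained random forest, and the segment_synapses function and its threshold parameter are illustrative assumptions.

```python
# Minimal sketch of watershed-based synapse segmentation (not the paper's code).
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_synapses(restored, threshold):
    """Label individual synapses in one restored 2D image plane."""
    foreground = restored > threshold            # stand-in for the random forest
    distance = ndi.distance_transform_edt(foreground)
    # One marker per local maximum of the distance map (one per synapse core).
    coords = peak_local_max(distance, min_distance=3, labels=foreground)
    markers = np.zeros(distance.shape, dtype=bool)
    markers[tuple(coords.T)] = True
    markers, _ = ndi.label(markers)
    # Watershed splits touching synapses that the threshold merged together.
    return watershed(-distance, markers, mask=foreground)
```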

Johns Hopkins scientists have developed a method involving artificial intelligence to visualize and track changes in the strength of synapses—the connection points through which nerve cells in the brain communicate—in live animals. The technique, described in Nature Methods, should lead, the scientists say, to a better understanding of how such connections in human brains change with learning, aging, injury and disease.

"If you want to learn more about how an orchestra plays, you have to watch individual players over time, and this new method does that for in the brains of living animals," says Dwight Bergles, Ph.D., the Diana Sylvestre and Charles Homcy Professor in the Solomon H. Snyder Department of Neuroscience at the Johns Hopkins University (JHU) School of Medicine.

Bergles co-authored the study with colleagues Adam Charles, Ph.D., M.E., and Jeremias Sulam, Ph.D., both assistant professors in the biomedical engineering department, and Richard Huganir, Ph.D., Bloomberg Distinguished Professor at JHU and director of the Solomon H. Snyder Department of Neuroscience. All four researchers are members of Johns Hopkins' Kavli Neuroscience Discovery Institute.

Nerve cells transfer information from one cell to another by exchanging chemical messages at synapses ("junctions"). In the brain, the authors explain, different life experiences, such as exposure to new environments and learning skills, are thought to induce changes at synapses, strengthening or weakening these connections to allow learning and memory.

Understanding how these minute changes occur across the trillions of synapses in our brains is a daunting challenge, but it is central to uncovering how the brain works when healthy and how it is altered by disease.

To determine which synapses change during a particular life event, scientists have long sought better ways to visualize the shifting chemistry of synaptic messaging, a need driven by the high density of synapses in the brain and their small size, traits that make them extremely hard to visualize even with state-of-the-art microscopes.

"We needed to go from challenging, blurry, noisy imaging data to extract the signal portions we need to see," Charles says.

To do so, Bergles, Sulam, Charles, Huganir and their colleagues turned to machine learning, a computational framework that allows flexible development of automatic data processing tools.

Machine learning has been successfully applied to many domains across biomedical imaging, and in this case, the scientists leveraged the approach to enhance the quality of images containing thousands of synapses. Although it can be a powerful tool for automated detection, greatly surpassing human speeds, the system must first be "trained," teaching the algorithm what high quality images of synapses should look like.

In these experiments, the researchers worked with genetically altered mice in which glutamate receptors—the chemical sensors at synapses—glowed green (fluoresced) when exposed to light. Because each receptor emits the same amount of light, the amount of fluorescence generated by a synapse in these mice is an indication of the number of receptors it contains, and therefore its strength.
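
Under that assumption, estimating the strength of a synapse reduces to summing fluorescence over its segmented footprint. A minimal sketch of that bookkeeping follows; the synapse_strengths helper is hypothetical, not from the paper, and it assumes a label image like the watershed output sketched earlier.

```python
# Illustrative sketch: integrated fluorescence per synapse as a strength proxy.
import numpy as np

def synapse_strengths(image, labels):
    """Return {synapse_id: integrated fluorescence} for each labeled region."""
    ids = np.unique(labels)
    ids = ids[ids != 0]                          # 0 is reserved for background
    # Each receptor contributes roughly equal light, so the summed intensity
    # inside a synapse approximates its receptor count, and hence its strength.
    return {int(i): float(image[labels == i].sum()) for i in ids}
```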

As expected, imaging in the intact brain produced low quality pictures in which individual clusters of glutamate receptors at synapses were difficult to see clearly, let alone individually detect and track over time. To convert these into higher quality images, the scientists trained a machine learning algorithm with images taken of brain slices (ex vivo) derived from the same type of genetically altered mice.

Because these images weren't from living animals, it was possible to produce much higher quality images using a different microscopy technique, as well as low quality images of the same views, similar to those taken in living animals.

This cross-modality data collection framework enabled the team to develop an enhancement algorithm that can produce higher resolution images from low quality ones, similar to the images collected from living mice. In this way, data collected from the intact brain can be significantly enhanced, making it possible to detect and track thousands of individual synapses during multiday experiments.
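
The paired-supervision idea at the heart of this step can be shown in a few lines of PyTorch: give the network a low quality image, compare its output against the high quality Airyscan image of the same field of view, and update. The tiny three-layer network, Adam optimizer and mean-squared-error loss below are illustrative assumptions, not the architecture or loss of the paper's XTC network.

```python
# Hedged sketch of cross-modality supervised training on paired images.
import torch
import torch.nn as nn

restorer = nn.Sequential(                        # toy stand-in for the real CNN
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(restorer.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(low_quality, airyscan_gt):
    """One update: restore a low quality image, compare to Airyscan ground truth."""
    optimizer.zero_grad()
    restored = restorer(low_quality)             # y_o in the figure's notation
    loss = loss_fn(restored, airyscan_gt)        # penalize deviation from y_i
    loss.backward()
    optimizer.step()
    return loss.item()
```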

To follow changes in receptors over time in living mice, the researchers then used two-photon microscopy to take repeated images of the same synapses over several weeks. After capturing baseline images, the team placed the animals in a chamber with new sights, smells and tactile stimulation for a single five-minute period. They then imaged the same area of the brain every other day to see if and how the new stimuli had affected the number of glutamate receptors at synapses.
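
A drastically simplified sketch of that longitudinal step is shown below: translation-only alignment by phase correlation stands in for the paper's pairwise affine registration, and greedy nearest-neighbor matching of synapse centroids stands in for its structured-learning tracker. The track_day_pair function and its max_dist threshold are illustrative assumptions.

```python
# Simplified stand-in for registration plus day-to-day synapse tracking.
import numpy as np
from scipy import ndimage as ndi
from skimage.registration import phase_cross_correlation

def track_day_pair(img1, img2, labels1, labels2, max_dist=2.0):
    """Match synapse labels across two days; returns {day1_id: day2_id}."""
    # Translation aligning day 2 to day 1 (the paper uses affine registration).
    shift, _, _ = phase_cross_correlation(img1, img2)
    ids1 = np.unique(labels1)[1:]                # drop background label 0
    ids2 = np.unique(labels2)[1:]
    if not len(ids1) or not len(ids2):
        return {}
    c1 = np.array(ndi.center_of_mass(img1, labels1, ids1))
    c2 = np.array(ndi.center_of_mass(img2, labels2, ids2)) + shift
    matches = {}
    # Greedy nearest-neighbor matching (a real tracker resolves conflicts).
    for i, p in zip(ids1, c1):
        d = np.linalg.norm(c2 - p, axis=1)       # distances to day-2 centroids
        j = int(d.argmin())
        if d[j] <= max_dist:                     # close enough: same synapse
            matches[int(i)] = int(ids2[j])
        # unmatched day-1 ids read as lost synapses; unmatched day-2 ids as new
    return matches
```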

Although the focus of the work was on developing a set of methods to analyze synapse-level changes in many different contexts, the researchers found that this simple change in environment caused a spectrum of alterations in fluorescence across synapses in the cerebral cortex: some connections strengthened while others weakened, with an overall bias toward strengthening in animals exposed to the novel environment.

The studies were enabled through close collaboration among scientists with distinct expertise, ranging from molecular biology to artificial intelligence, who don't normally work closely together. But such collaboration is encouraged at the cross-disciplinary Kavli Neuroscience Discovery Institute, Bergles says.

The researchers are now using this machine learning approach to study synaptic changes in animal models of Alzheimer's disease, and they believe the method could shed new light on synaptic changes that occur in other disease and injury contexts.

"We are really excited to see how and where the rest of the scientific community will take this," Sulam says.

More information: Yu Kang T. Xu et al, Cross-modality supervised image restoration enables nanoscale tracking of synaptic plasticity in living mice, Nature Methods (2023). DOI: 10.1038/s41592-023-01871-6

Journal information: Nature Methods
Citation: Scientists use machine learning to 'see' how the brain adapts to different environments (2023, June 5) retrieved 29 March 2024 from https://medicalxpress.com/news/2023-06-scientists-machine-brain-environments.html
