How our brain deconstructs a world in constant motion

Was it a stop sign? I didn’t notice. Credit: US Marine Corps, via Wikimedia Commons

It's a miracle that people aren't constantly getting into car accidents.

As a car whizzes along at 65 miles per hour, the brain rapidly decodes millions of photons' worth of information from the eyes, and then must use that information to instantly figure out where the car is and where it needs to go. Is that a pedestrian approaching the sidewalk or a mailbox? Do I need to take this offramp or the next one? What color is the traffic light up ahead?

Most motorists, miraculously, get to work or school without a scratch.

After nearly a decade of research, Duke scientists have figured out how the brain juggles all of this so effortlessly and tirelessly, and in a surprisingly unsophisticated way: by making quick, low-level analyses of the scene, rather than building a full model of the world, to form a clear view of the road ahead. The new findings expand our understanding of how the brain sees the world, and might one day help clinicians better understand what goes awry in people with psychiatric conditions marked by perceptual problems, like schizophrenia.

Most neuroscientists think our brains figure out what we're looking at by quickly comparing what's in front of us to past experiences and prior expectations. Like a biological detective, the brain might determine you are looking at a house by drawing on past experiences of neighborhoods you have been in and houses you have lived in. Enthusiasts of this Bayesian theory have long reasoned that these quick, probability-based analyses are what help people see a stable world despite sensory and motor noise from eye movements and constant environmental uncertainties, like glare from the sun or the backdrop of a moving crowd.
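In rough formal terms (our gloss; the notation below is illustrative and not taken from the study), Bayes' rule says the brain's belief in a scene hypothesis s after receiving sensory input x combines a prior with a likelihood:

$$P(s \mid x) \propto P(x \mid s)\,P(s)$$

Here P(s) encodes past experience ("streets like this one usually contain houses") and P(x | s) measures how well the current sensory evidence fits that hypothesis.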

A recent paper in the online journal eNeuro, however, suggests neuroscientists have overlooked a simpler explanation: that brain cells are also rapidly decoding a constant stream of information from the eyes using simple pattern recognition, like determining you're looking at a house from the visual evidence of windows, a tall rectangular opening, and a manicured lawn.
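To make the contrast concrete, here is a minimal toy sketch in Python of the two strategies. All feature names and probability values are invented for illustration; this is not the model tested in the paper:

```python
# Toy contrast between the two perceptual strategies described above.
# Invented features and probabilities; not the model from the eNeuro paper.

def bayesian_classify(features, hypotheses):
    """Bayesian (generative) route: weigh each hypothesis by
    prior belief times how well it explains the sensory evidence."""
    posteriors = {
        name: h["prior"] * h["likelihood"](features)
        for name, h in hypotheses.items()
    }
    return max(posteriors, key=posteriors.get)

def discriminative_classify(features):
    """Discriminative route: no model of the world, just a learned
    boundary over low-level features ('windows + door + lawn => house')."""
    if features["windows"] >= 2 and features["has_door"] and features["lawn"]:
        return "house"
    return "not a house"

scene = {"windows": 4, "has_door": True, "lawn": True}

hypotheses = {
    "house": {
        "prior": 0.3,  # past experience: how often such scenes are houses
        "likelihood": lambda f: 0.9 if f["has_door"] else 0.1,
    },
    "not a house": {
        "prior": 0.7,
        "likelihood": lambda f: 0.2,
    },
}

print(bayesian_classify(scene, hypotheses))   # -> 'house' (0.27 vs. 0.14)
print(discriminative_classify(scene))         # -> 'house', no priors needed
```

The point of the contrast: the discriminative route never represents how scenes generate sensory data. It only learns where the decision boundary lies, which is why it can be faster and computationally cheaper, as Sommer describes below.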

"That discriminative model has some advantages because it's really quick, logical, and flexible," said Marc Sommer, Ph.D., a professor of biomedical engineering at Duke and senior author of the new study. "You can learn the boundaries between decisions, and you can apply all sorts of statistical pattern-matching at a very low level. You don't have to create a model of the world, which is a big task for a brain."

Sommer initially hoped to confirm the general consensus in neuroscience: that the brain builds on a working model of the world instead of recognizing patterns from the ground up. But after putting the Bayesian theory to the test with Duke neurobiology alumna Divya Subramanian, Ph.D., now a postdoctoral researcher at the National Institutes of Health, he's hoping to extend their newfound results to other processes in the brain.

To ferret out which theory would hold up, Sommer and Subramanian recruited 45 adults for an eye test. Participants looked at a screen and were quizzed about where a shape moved to, or whether it moved at all. Throughout the test, Subramanian subtly made the movements trickier and less obvious, changing everything from the shape's contrast to the shape itself, to tease out how the brain compensates under increasing uncertainty.

After scoring the eye exams, Sommer and Subramanian were surprised to find that the brain didn't solely rely on a Bayesian approach.

People scored worse when the visual noise was dialed up, but only when they were asked where the target had moved. When they were asked whether a shape had moved at all, test scores were mostly unaffected by the noisier scenes, suggesting, to the team's surprise, that people don't always lean on prior experience when they are more uncertain about what they are seeing, as our biological detective would.

The team spent the next several years parsing through results and replicating their findings "three times to believe it," Subramanian said, but it always led them to the same conclusion: for some forms of perception, brain cells stick to low-level patterns to draw conclusions about the world around them.

"You can collect data forever and ever. And at some point, you just realize you have enough," Sommer said.

Sommer now plans to disrupt the dogma for other perceptual processes, like spoken language, to see if beloved theories hold up to the scrutiny of testing.

The hope is that by understanding how the brain solves other perceptual problems, Sommer and others can better understand psychiatric and motor disorders, like Parkinson's disease and schizophrenia, and develop more effective treatments as a result.

"There are some sub-circuits of the that are probably pretty well-understood to be involved with these disorders. That's a biological description," Sommer said. "And there's also neurotransmitter deficits, like lacking dopamine in Parkinson's. That's a chemical explanation. But there are very few big-picture, explanations of why people have certain psychiatric or motor disorders."

More information: Divya Subramanian et al, Bayesian and Discriminative Models for Active Visual Perception across Saccades, eNeuro (2023). DOI: 10.1523/ENEURO.0403-22.2023

Journal information: eNeuro
Provided by Duke Research Blog
Citation: How our brain deconstructs a world in constant motion (2023, October 25) retrieved 27 April 2024 from https://medicalxpress.com/news/2023-10-brain-deconstructs-world-constant-motion.html
