Human brain recalls visual features in reverse order from how it detects them

October 9, 2017
Visual depiction of one- and two-line tasks that participants were asked to complete and that was key to the paper's findings. Credit: Ning Qian/Columbia's Zuckerman Institute

Scientists at Columbia's Zuckerman Institute have contributed to solving a paradox of perception, literally upending models of how the brain constructs interpretations of the outside world. When observing a scene, the brain first processes details—spots, lines and simple shapes—and uses that information to build internal representations of more complex objects, like cars and people. But when recalling that information, the brain remembers those larger concepts first to then reconstruct the details—representing a reverse order of processing. The research, which involved people and employed mathematical modeling, could shed light on phenomena ranging from eyewitness testimony to stereotyping to autism.

This study was published today in Proceedings of the National Academy of Sciences.

"The order by which the brain reacts to, or encodes, information about the outside world is very well understood," said Ning Qian, PhD, a neuroscientist and a principal investigator at Columbia's Mortimer B. Zuckerman Mind Brain Behavior Institute. "Encoding always goes from simple things to the more complex. But recalling, or decoding, that information is trickier to understand, in large part because there was no method—aside from mathematical modeling—to relate the activity of brain cells to a person's perceptual judgment."

Without any direct evidence, researchers have long assumed that decoding follows the same hierarchy as encoding: you start from the ground up, building from the details. The main contribution of this work with Misha Tsodyks, PhD, the paper's co-senior author, who performed this research while at Columbia and is now at the Weizmann Institute of Science in Israel, "is to show that this standard notion is wrong," Dr. Qian said. "Decoding actually goes backward, from high levels to low."

As an analogy for this reversed decoding, Dr. Qian cites last year's presidential election.

"As you observed the things one candidate said and did over time, you may have formed a categorical negative or positive impression of that person. From that moment forward, the way in which you recall the candidate's words and actions is colored by that overall impression," said Dr. Qian. "Our findings revealed that higher-level categorical decisions—'this candidate is trustworthy'—tend to be stable. But lower-level memories—'this candidate said this or that'—are not as reliable. Consequently, high-level decoding constrains low-level decoding."

To explore this decoding hierarchy, Drs. Qian and Tsodyks and their team conducted an experiment that was simple in design in order to have a clear interpretation of the results. They asked 12 people to perform a series of similar tasks. In the first, they viewed a line angled at 50 degrees on a computer screen for half a second. Once it disappeared, the participants repositioned two dots on the screen to match what they remembered to be the angle of the line. They then repeated this task 50 more times. In a second task, the researchers changed the angle of the line to 53 degrees. And in a third task, the participants were shown both lines at the same time, and then had to orient pairs of dots to match each angle.

Previously held models of decoding predicted that in the two-line task, people would first decode the individual angle of each line (a lower-level feature) and then use that information to decode the two lines' relationship (a higher-level feature).

"Memories of exact angles are usually imprecise, which we confirmed during the first set of one-line tasks. So, in the two-line task, traditional models predicted that the angle of the 50-degree line would frequently be reported as greater than the angle of the 53-degree line," said Dr. Qian.

But that is not what happened. Traditional models also failed to explain several other aspects of the data, which revealed bi-directional interactions in the way participants recalled the angles of the two lines. The brain appeared to encode one line, then the other, and finally their relative orientation. But during decoding, when participants were asked to report the individual angle of each line, their brains used the lines' relationship—which angle is greater—to estimate the two individual angles.

"This was striking evidence of participants employing this reverse decoding method," said Dr. Qian.

The authors argue that reverse decoding makes sense, because context is more important than details. Looking at a face, you want to assess quickly if someone is frowning, and only later, if need be, estimate the exact angles of the eyebrows. "Even your daily experience shows that perception seems to go from high to low levels," Dr. Qian added.

To lend further support, the authors then constructed a mathematical model of what they think happens in the brain. They used something called Bayesian inference, a statistical method of estimating probability based on prior assumptions. Unlike typical Bayesian models, however, this new model used the higher-level features as the prior information for decoding lower-level features. Going back to the visual line task, they developed an equation to estimate individual lines' angles based on the lines' relationship. The model's predictions fit the behavioral data well.
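The core idea of such a model can be sketched in code: decode the high-level feature (which angle is greater) first, then use it as a prior that constrains the low-level estimates of the individual angles. This is a minimal illustration only, not the authors' actual model; the noise level, the sampling approach, and the function names here are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_ANGLES = np.array([50.0, 53.0])  # degrees, as in the experiment
SIGMA = 3.0  # assumed memory noise (standard deviation); hypothetical value

def decode_trial(noisy, sigma=SIGMA, n_samples=100_000):
    """Reverse (high-to-low) decoding sketch.

    1. Decode the high-level feature first: which angle is greater.
    2. Treat that relationship as prior information that constrains
       the low-level estimates of the two individual angles.
    """
    # High-level decision made from the noisy memories
    first_is_smaller = noisy[0] < noisy[1]

    # Posterior samples for each angle: flat prior + Gaussian memory noise
    samples = rng.normal(noisy, sigma, size=(n_samples, 2))

    # Keep only samples consistent with the decoded relationship,
    # then report the mean of the constrained posterior
    if first_is_smaller:
        consistent = samples[samples[:, 0] < samples[:, 1]]
    else:
        consistent = samples[samples[:, 0] >= samples[:, 1]]
    return consistent.mean(axis=0)

# Simulate one two-line trial: noisy memories of the two angles
noisy_memory = rng.normal(TRUE_ANGLES, SIGMA)
estimate = decode_trial(noisy_memory)
print(estimate)
```

Because the individual-angle estimates are conditioned on the decoded relationship, this sketch never reports the 50-degree line as steeper than the 53-degree line within a trial, matching the qualitative pattern the study observed.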

In the future, the researchers plan to extend their work beyond these simple tasks of perception and into studies of long-term memory, which could have broad implications—from how we assess a presidential candidate, to if a witness is offering reliable testimony.

"The work will help to explain the brain's underlying cognitive processes that we employ every day," said Dr. Qian. "It might also help to explain complex disorders of cognition, such as autism, where people tend to overly focus on details while missing important context."

This paper is titled: "Visual perception as retrospective Bayesian decoding from high- to low-level features."


More information: Stephanie Ding et al., "Visual perception as retrospective Bayesian decoding from high- to low-level features," PNAS (2017).


