How the brain sees the world in 3-D

March 21, 2017 by Jeff Grabmeier, The Ohio State University
Researcher Nonie Finlayson about to enter the fMRI scanner. Credit: Ohio State University

We live in a three-dimensional world, but everything we see is first recorded on our retinas in only two dimensions.

So how does the brain represent 3-D information? In a new study, researchers have for the first time shown how different parts of the brain represent an object's location in depth compared to its 2-D location.

Researchers at The Ohio State University had volunteers view simple images with 3-D glasses while they were in a functional magnetic resonance imaging (fMRI) scanner. The fMRI showed what was happening in the participants' brains while they looked at the three-dimensional images.

The results showed that as an image first enters our visual cortex, the brain mostly codes its two-dimensional location. But as processing continues, the emphasis shifts to decoding the depth information as well.

"As we move to later and later visual areas, the representations care more and more about depth in addition to 2-D location. It's as if the representations are being gradually inflated from flat to 3-D," said Julie Golomb, senior author of the study and assistant professor of psychology at Ohio State.

"The results are surprising because a lot of people assumed we might find depth information in early visual areas. What we found is that even though there might be individual neurons that have some depth information, they don't seem to be organized into any map or pattern for 3-D space perception."

Golomb said many scientists have investigated where and how the brain decodes two-dimensional information. Others had looked at how the brain perceives depth. Researchers have found that depth information must be inferred in our brain by comparing the slightly different views from the two eyes (what is called binocular disparity) or from other visual cues.
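The binocular-disparity idea mentioned above can be illustrated with simple pinhole-camera geometry. This is an illustrative sketch, not anything from the study; the baseline and focal-length values are assumed round numbers for a human observer.

```python
# Illustrative sketch: recovering depth from binocular disparity
# using pinhole-camera geometry. The baseline (~6.4 cm interpupillary
# distance) and focal length (~17 mm effective eye focal length) are
# assumed values for illustration, not figures from the study.

def depth_from_disparity(disparity_m, baseline_m=0.064, focal_m=0.017):
    """Depth Z = f * B / d for a fronto-parallel point.

    disparity_m -- difference between the point's image positions
                   on the two retinas, in meters.
    """
    return focal_m * baseline_m / disparity_m

# A retinal disparity of 1 mm corresponds to a point roughly 1 m away:
print(round(depth_from_disparity(0.001), 3))  # → 1.088
```

Because depth is inversely proportional to disparity, small measurement errors in disparity translate into large depth errors for far objects, which is one reason the visual system also relies on other cues.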

But this is the first study to directly compare both 2-D and depth information at one time to see how 3-D representations (2-D plus depth) emerge and interact in the brain, she said.

The study was led by Nonie Finlayson, a former postdoctoral researcher at Ohio State, who is now at University College London. Golomb and Xiaoli Zhang, a graduate student at Ohio State, are the other co-authors. The study was published recently in the journal NeuroImage.

Participants in the study viewed a screen in the fMRI while wearing 3-D glasses. They were told to focus on a dot in the middle of the screen. While they were watching the dot, objects would appear in different peripheral locations: to the left, right, top, or bottom of the dot (horizontal and vertical dimensions). Each object would also appear to be at a different depth relative to the dot: behind or in front (visible to participants wearing the 3-D glasses).
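The design described above crosses four 2-D locations with two depths. A minimal sketch of that condition structure (the labels are assumed for illustration, not taken from the study's materials):

```python
# Illustrative sketch of the experimental condition structure:
# 4 peripheral locations x 2 depths relative to the fixation dot.
# Labels are assumed, not taken from the study's code.
from itertools import product

locations = ["left", "right", "above", "below"]  # 2-D position vs. fixation
depths = ["in_front", "behind"]                  # depth relative to fixation

conditions = list(product(locations, depths))
print(len(conditions))  # → 8
```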

The fMRI data allowed the researchers to see what was happening in the brains of the participants when the various objects appeared on the screen. In this way, the scientists could compare how activity patterns in the visual cortex differed when participants saw objects in different locations.

"The pattern of activity we saw in the early visual cortex allowed us to tell if someone was seeing an object that was to the left, right, above or below the fixation dot," Golomb said. "But we couldn't tell from the early visual cortex if they were seeing something in front of or behind the dot.

"In the later areas of the visual cortex, there was a bit less information about the objects' two-dimensional locations. But the tradeoff was that we could also decode what position they were perceiving in depth."
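The kind of pattern decoding described here can be sketched with synthetic data. This is a toy demonstration, not the study's actual analysis: the "voxel" patterns, signal strengths, and nearest-centroid classifier are all assumptions chosen for illustration.

```python
# Toy sketch of multivariate pattern decoding on synthetic "voxel"
# data. Not the study's analysis: signal strengths and the
# nearest-centroid classifier are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_patterns(n_trials, location_signal, depth_signal, n_voxels=50):
    """Simulate voxel patterns for 4 conditions: (left/right) x (near/far).

    location_signal / depth_signal scale how strongly each factor is
    encoded in the patterns -- hypothetical knobs for illustration.
    """
    loc_axis = rng.standard_normal(n_voxels)
    dep_axis = rng.standard_normal(n_voxels)
    X, loc, dep = [], [], []
    for _ in range(n_trials):
        for l in (-1, 1):
            for d in (-1, 1):
                noise = rng.standard_normal(n_voxels)
                X.append(l * location_signal * loc_axis +
                         d * depth_signal * dep_axis + noise)
                loc.append(l)
                dep.append(d)
    return np.array(X), np.array(loc), np.array(dep)

def decode_accuracy(X, y):
    """Train-on-first-half, test-on-second-half nearest-centroid decoding."""
    half = len(X) // 2
    train, test, y_tr, y_te = X[:half], X[half:], y[:half], y[half:]
    c_neg = train[y_tr == -1].mean(axis=0)
    c_pos = train[y_tr == 1].mean(axis=0)
    pred = np.where(np.linalg.norm(test - c_neg, axis=1) <
                    np.linalg.norm(test - c_pos, axis=1), -1, 1)
    return (pred == y_te).mean()

# An "early-area" scenario: strong 2-D location code, weak depth code.
X, loc, dep = make_patterns(40, location_signal=2.0, depth_signal=0.1)
print(decode_accuracy(X, loc))  # well above chance: location decodable
print(decode_accuracy(X, dep))  # near chance (0.5): depth barely decodable
```

Raising `depth_signal` mimics the later visual areas, where depth becomes decodable alongside 2-D location.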

Golomb said future studies will look to more closely quantify and model the nature of three-dimensional visual representations in the brain.

"This is an important step in understanding how we perceive our rich three-dimensional environment," she said.


1 comment


RobertKarlStonjek
Mar 21, 2017
Parallax, the difference between the images in the two eyes, only gives depth information out to around 3 to 5 meters. Most of the depth information our brains interpret comes from other cues, including focal length, motion in depth (distant objects appear to move slower), relative size (distant objects appear smaller), and other cues which may or may not have been included in the experiment.

Most of the depth cues must be learned and are not innate, which is why people who gain sight as adults (e.g., after congenital cataracts are removed) are unable to determine depth or distance to objects.

The assumption that depth is exclusively or mainly determined from stereo vision alone is pervasive and entirely unwarranted. One-eyed pilots, for instance, have never been restricted; up to 10% of pilots in the USA fly with monocular vision, including those flying under visual-only rules and even commercial airline pilots. The other depth cues are sufficient for them to function normally.
