How the brain recognizes what the eye sees

June 8, 2017, Salk Institute
The illustration on the right shows how the brain's V1 and V2 areas might use information about edges and textures to represent objects like the teddy bear on the left. Credit: Salk Institute

If you think self-driving cars can't get here soon enough, you're not alone. But programming computers to recognize objects is very technically challenging, especially since scientists don't fully understand how our own brains do it.

Now, Salk Institute researchers have analyzed how neurons in a critical part of the brain, called V2, respond to natural scenes, providing a better understanding of vision processing. The work is described in Nature Communications on June 8, 2017.

"Understanding how the brain recognizes visual objects is important not only for the sake of vision, but also because it provides a window on how the brain works in general," says Tatyana Sharpee, an associate professor in Salk's Computational Neurobiology Laboratory and senior author of the paper. "Much of our brain is composed of a repeated computational unit, called a cortical column. In vision especially we can control inputs to the brain with exquisite precision, which makes it possible to quantitatively analyze how signals are transformed in the brain."

Although we often take the ability to see for granted, this ability derives from sets of complex mathematical transformations that we are not yet able to reproduce in a computer, according to Sharpee. In fact, more than a third of our brain is devoted exclusively to the task of parsing visual scenes.

Salk Institute researchers have analyzed how neurons in a critical part of the brain, called V2, respond to natural scenes, providing a better understanding of vision processing. The work could improve self-driving cars and point to therapies for sensory impairment. Credit: Salk Institute

Our visual perception starts in the eye with light and dark pixels. These signals are sent to the back of the brain, to an area called V1, where they are transformed to correspond to edges in the visual scene. Somehow, as a result of several subsequent transformations of this information, we can then recognize faces, cars and other objects, and whether they are moving. How precisely this recognition happens is still a mystery, in part because neurons that encode objects respond in complicated ways.
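The V1 stage described here, which turns pixel intensities into edge signals, is classically modeled with oriented Gabor filters. As a rough sketch of that idea only (this is not the paper's method, and the filter sizes and parameters below are arbitrary choices for the demonstration):

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, wavelength=6.0, sigma=3.0):
    """Oriented sine-phase Gabor filter, a textbook model of a V1 edge detector."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate frame to the preferred orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.sin(2 * np.pi * xr / wavelength)  # odd phase: responds to edges
    return envelope * carrier

# A vertical light/dark boundary, like one edge of an object
image = np.zeros((15, 15))
image[:, 8:] = 1.0

resp_vertical = abs(np.sum(image * gabor_kernel(theta=0.0)))
resp_horizontal = abs(np.sum(image * gabor_kernel(theta=np.pi / 2)))
print(resp_vertical > resp_horizontal)  # True: the filter matching the edge wins
```

A bank of such filters at many orientations and positions is a common stand-in for the V1 output that V2 then recombines.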

Now, Sharpee and Ryan Rowekamp, a postdoctoral research associate in Sharpee's group, have developed a statistical method that takes these complex responses and describes them in interpretable ways, which could be used to help decode vision for computers. To develop their model, the team used publicly available data showing brain responses of primates watching movies of natural scenes (such as forest landscapes) from the Collaborative Research in Computational Neuroscience (CRCNS) database.
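The study's actual fitting procedure is more sophisticated, but the general idea of inferring a neuron's features from its responses to stimuli can be sketched with classic spike-triggered covariance on synthetic data. Everything below, including the filters, the spiking rule, and the sample counts, is invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "movie": 2000 random stimulus frames, each a 16-pixel patch
stimuli = rng.standard_normal((2000, 16))

# A hidden quadratic neuron: excited by filter f1, suppressed by filter f2
f1 = np.zeros(16); f1[4:8] = 1.0
f2 = np.zeros(16); f2[8:12] = 1.0
drive = (stimuli @ f1) ** 2 - (stimuli @ f2) ** 2
spikes = drive > np.percentile(drive, 80)  # the cell fires on the top 20% of frames

# Spike-triggered covariance: eigenvectors of the covariance change between
# spike-triggering frames and all frames recover the hidden filters
c_all = np.cov(stimuli.T)
c_spk = np.cov(stimuli[spikes].T)
eigvals, eigvecs = np.linalg.eigh(c_spk - c_all)  # ascending eigenvalues

recovered_suppressive = eigvecs[:, 0]   # most negative eigenvalue
recovered_excitatory = eigvecs[:, -1]   # most positive eigenvalue
overlap = abs(recovered_excitatory @ f1) / np.linalg.norm(f1)
print(overlap)  # overlap near 1 means the excitatory filter was recovered
```

The appeal of such analyses is exactly what the article describes: they turn a neuron's opaque responses into a small set of interpretable stimulus features.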

"We applied our new statistical technique in order to figure out what features in the movie were causing V2 neurons to change their responses," says Rowekamp. "Interestingly, we found that V2 neurons were responding to combinations of edges."

The team revealed that V2 neurons process visual information according to three principles. First, they combine edges that have similar orientations, making perception robust to small changes in the position of the curves that form object boundaries. Second, if a neuron is activated by an edge of a particular orientation and position, then an edge oriented 90 degrees away at the same location will suppress it, a combination termed "cross-orientation suppression." These cross-oriented edge combinations are assembled in various ways to allow us to detect a variety of visual shapes; the team found that cross-orientation suppression was essential for accurate shape detection. Third, relevant patterns are repeated in space in ways that help us perceive textured surfaces, such as trees or water, and the boundaries between them, as in impressionist paintings.
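A toy reading of the second principle: drive from the preferred orientation is reduced by edge energy at the orthogonal orientation in the same place. The function and numbers here are purely illustrative, not the paper's model:

```python
def v2_unit(pref_energy, ortho_energy):
    """Toy V2 unit: excited by edge energy at its preferred orientation,
    suppressed by energy at the orthogonal (90-degree) orientation
    at the same location, with a rectified (non-negative) output."""
    return max(0.0, pref_energy - ortho_energy)

print(v2_unit(1.0, 0.0))  # 1.0: a preferred edge alone drives the unit
print(v2_unit(1.0, 0.5))  # 0.5: an orthogonal edge at the same spot suppresses it
print(v2_unit(0.3, 1.0))  # 0.0: suppression dominates and the unit stays silent
```

Tiling such units across the image, so the same selectivity repeats at every position, gives a crude picture of the third principle as well.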

The researchers Tatyana Sharpee and Ryan Rowekamp. Credit: Salk Institute

The researchers incorporated the three organizing principles into a model they named the Quadratic Convolutional model, which can be applied to other sets of experimental data. Visual processing is likely to be similar to how the brain processes smells, touch or sounds, the researchers say, so the work could elucidate processing of data from these areas as well.
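The paper defines the model precisely; as a loose structural sketch only, a "quadratic convolutional" unit can be pictured as one shared quadratic filter pair (excitatory minus suppressive, echoing the principles above) applied at every spatial position, with the pooled sum rectified. The filters, sizes, and output nonlinearity below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

patch = 8  # pixels covered by the filter pair at each position

# One shared quadratic kernel J: the same excitatory/suppressive filter
# pair is reused at every position -- the "convolutional" weight sharing
f_exc = np.sin(np.linspace(0, np.pi, patch))  # assumed excitatory filter
f_sup = np.cos(np.linspace(0, np.pi, patch))  # assumed suppressive filter
J = np.outer(f_exc, f_exc) - np.outer(f_sup, f_sup)

def quadratic_conv_response(stimulus):
    """Pool the quadratic form s_p @ J @ s_p over all patch positions p,
    then apply a rectifying output nonlinearity."""
    total = sum(
        stimulus[p:p + patch] @ J @ stimulus[p:p + patch]
        for p in range(len(stimulus) - patch + 1)
    )
    return max(0.0, total)

stimulus = rng.standard_normal(32)  # a 1-D stand-in for one image row
print(quadratic_conv_response(stimulus) >= 0.0)  # True: output is rectified
```

Because the model is just a quadratic form plus weight sharing, the same machinery could in principle be fit to recordings from other sensory areas, as the researchers suggest.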

"Models I had worked on before this weren't entirely compatible with the data, or weren't cleanly compatible," says Rowekamp. "So it was really satisfying when the idea of combining edge recognition with sensitivity to texture started to pay off as a tool to analyze and understand complex visual data."

But the more immediate application might be to improve object-recognition algorithms for self-driving cars or other robotic devices. "It seems that every time we add elements of computation that are found in the brain to computer-vision algorithms, their performance improves," says Sharpee.


More information: Ryan J. Rowekamp et al, Cross-orientation suppression in visual area V2, Nature Communications (2017). DOI: 10.1038/NCOMMS15739


3 comments


Eikka
Jun 09, 2017
But the more immediate application might be to improve object-recognition algorithms for self-driving cars or other robotic devices. "It seems that every time we add elements of computation that are found in the brain to computer-vision algorithms, their performance improves," says Sharpee.


Duh.

But the more immediate problem is that they don't have the computational capacity in a self-driving car or a robot to facilitate the kind of calculations they wish to perform, which inherently limits the performance of the systems.

People use a third of the brain to detect visual objects. One might argue that it doesn't take that much just to drive, but I would say it does because you have to understand a whole lot more about your surroundings than just the obvious items. In the past, when we used creatures with lesser brains as "self-driving" vehicles - we actually had to put blinders on them so they wouldn't get confused and run away, and yet they still did.
Eikka
Jun 09, 2017
Or to put it in other words, a chimp has the brains to see everything we see, but not the brains to understand what we understand of it, which is why we won't let a chimp drive a limousine no matter how well it has been trained. We know it will make many mistakes because it does not understand the subtleties of its environment.

The same problem applies to the robots, except we are apparently willing to let them drive even though they understand a million times less than the chimp.
rebeccafr2
Jun 11, 2017
The same problem applies to the robots, except we are apparently willing to let them drive even though they understand a million times less than the chimp.
