Brain's vision secrets unraveled

February 3, 2013

A new study led by scientists at the Universities of York and Bradford has identified the two areas of the brain responsible for our perception of orientation and shape.

Using sophisticated brain imaging at the York Neuroimaging Centre (YNiC), the research found that two areas of the extra-striate visual cortex—each about the size of a 5p coin and known as human visual field maps—process the two types of visual information independently.

The scientists, from the Department of Psychology at York and the Bradford School of Optometry & Vision Science, established how the two areas work by subjecting each to brief magnetic fields that disrupted its normal activity. The research, which is reported in Nature Neuroscience, represents an important step forward in understanding how the brain processes visual information.

Attention now switches to a further four areas of the extra-striate cortex which are also responsible for visual function but whose specific individual roles are unknown.

The study was designed by Professor Tony Morland, of York's Department of Psychology and the Hull York Medical School, and Dr Declan McKeefry, of the Bradford School of Optometry and Vision Science at the University of Bradford. It was undertaken as part of a PhD by Edward Silson at York.

Researchers used functional magnetic resonance imaging (fMRI) equipment at YNiC to pinpoint the two brain areas, which they subsequently targeted with magnetic fields that temporarily disrupt neural activity. They found that one area had a specialised and causal role in processing orientation while neural activity in the other underpinned the processing of shape defined by differences in curvature.

Professor Morland said: "Measuring activity across the brain with fMRI can't tell us what causal role different areas play in our perception. It is only by disrupting brain function in specific areas that the causal role of each area can be assessed.

"Historically, neuropsychologists have learned a lot about the human brain by examining people who have suffered permanent disruption of certain parts of the brain through injury. Unfortunately, brain damage seldom occurs at the spatial scale that allows the function of small neighbouring areas to be understood. Our approach is to temporarily disrupt brain activity by applying brief magnetic fields. When these fields are applied to one small area of the brain, we find that orientation tasks become harder, while disrupting activity in that area's nearest neighbour affects only the ability to perceive shapes."

Dr McKeefry added: "The combination of modern brain scanning technology along with magnetic neuro-stimulation techniques provides us with a powerful means by which we can study the workings of the living human brain.

"The results that we report in this paper provide new insights into how the human brain embarks upon the complex task of analysing objects that we see in the world around us.

"Our work demonstrates how the processing of different aspects of visual objects, such as orientation and shape, occurs in different brain areas that lie side by side. The ultimate challenge will be to reveal how this information is combined across these and other areas and how it ultimately leads to object recognition."

More information: "Specialized and independent processing of orientation and shape in visual field maps LO1 and LO2," Nature Neuroscience, DOI: 10.1038/nn.3327



Comments

Feb 03, 2013
A new study doesn't mean that the same finding is a new discovery. This has been shown before.
Feb 03, 2013
I suggest there are discrete stages in visual recognition. The first stage is saccade integration, where small movements caused by perspective and parallax are used to maintain object focus via feedback to the eye muscles. The feedback loops created by causal experience of success or failure at this task are used for object identification.

For example, if a cube is observed from the front and the eye fixates on the top-left corner, there must be a mechanism to keep looking at that corner when the head moves, the eyes wiggle, or the cube moves. The pattern of eye corrections needed to keep focus on the corner while the cube turns, together with the unique distractions/focal points of the cube (other corners rotated or moved into view), are all the factors necessary for identifying an object as a simplified concept (a cube). Trying to directly comprehend a large unsimplified graphic is impossibly difficult compared with pattern recognition of saccade corrections to the eye muscles.
Feb 03, 2013
Are there any reported cases of studying people who have lost the use of the eye muscles needed for saccades? My guess is they'll show greatly diminished ability to recognize objects if their eyes are unable to move. They'll probably try to move their heads uncontrollably when seeking to identify objects. I suspect they may not be able to distinguish even simple shapes, such as a square vs a triangle vs a star, without saccade integration.

Scale is important in saccade integration, so testing with a small square occupying 2% of the visual field might yield different results than a large square occupying 60% of it. I think the same mechanism must work for smaller-scale object recognition, but this could be tested by gathering timing statistics for recognition of variously scaled geometric shapes. You would need to test this in a dark room with a single monitor, so there are no visual distractions, as with eye exams.
