Neurons in our brain do a remarkable job of translating sensory information into reliable representations of our world that are critical to effectively guide our behavior. The parts of the brain that are responsible for vision have long been center stage for scientists' efforts to understand the rules that neural circuits use to encode sensory information. Years of research have led to a fairly detailed picture of the initial steps of this visual process, carried out in the retina, and how information from this stage is transmitted to the visual part of the cerebral cortex, a thin sheet of neurons that forms the outer surface of the brain. We have also learned much about the way that neurons represent visual information in visual cortex, as well as how different this representation is from the information initially supplied by the retina. Scientists are now working to understand the set of rules—the neural blueprint— that explains how these representations of visual information in the visual cortex are constructed from the information provided by the retina. Using the latest functional imaging techniques, scientists at MPFI have recently discovered a surprisingly simple rule that explains how neural circuits combine information supplied by different types of cells in the retina to build a coherent, information-rich representation of our visual world.

Vision begins with the spatial pattern of light and dark that falls on the retinal surface. One important function performed by the neural circuits of the visual cortex is the preservation of the orderly spatial relationships of light versus dark that exist on the retinal surface. These neural circuits form an orderly map of visual space in which each point on the surface of the cortex contains a column of neurons that each respond to a small region of visual space, and adjacent columns respond to adjacent regions of visual space. But these cortical circuits do more than build a map of visual space: individual neurons within these columns each respond selectively to the specific orientation of edges in their region of visual space; some neurons respond preferentially to vertical edges, some to horizontal edges, and others to angles in between. This property is also mapped in a columnar fashion: all neurons in a radial column share the same orientation preference, and adjacent columns prefer slightly different orientations.
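To make the columnar arrangement concrete, here is a minimal toy sketch in Python (my own illustration, not the study's data or model): each simulated cortical column is assigned a visual-field center through a smooth retinotopic mapping and an orientation preference that varies gradually across the sheet, arranged here as a pinwheel. The grid size, magnification factor, and pinwheel layout are arbitrary choices for illustration.

```python
import numpy as np

# Toy columnar map: every cortical column gets
#  (1) a visual-space center from a smooth retinotopic mapping, and
#  (2) an orientation preference that changes gradually across the sheet,
# here arranged as a pinwheel around the center of a small cortical patch.

cortex = np.linspace(-1.0, 1.0, 5)     # cortical coordinates (mm), a 5 x 5 patch of columns
magnification = 2.0                    # degrees of visual space per mm of cortex (arbitrary)

for cy in cortex:
    for cx in cortex:
        # retinotopy: neighboring columns map to neighboring regions of visual space
        vis_x, vis_y = magnification * cx, magnification * cy
        # orientation preference: varies smoothly with angle around the patch center (0-180 deg)
        ori = (np.degrees(np.arctan2(cy, cx)) / 2.0) % 180
        print(f"column ({cx:+.1f}, {cy:+.1f}) mm -> "
              f"visual field ({vis_x:+.1f}, {vis_y:+.1f}) deg, prefers {ori:5.1f} deg edges")
```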

Things would be easy if all the cortex had to do was build a map of visual space: a simple one-to-one mapping of points on the retinal surface to columns in the cortex would be all that was necessary. But building a map of orientation that coexists with the map of visual space is a much greater challenge, because the neurons of the retina do not distinguish edge orientation in the first step of vision. Instead, information about the orientation of edges must be constructed by neural circuits in the visual cortex. This is done using information supplied by two distinct types of retinal cells: those that respond to increases in light (ON-cells) and those that respond to decreases in light (OFF-cells). Adding to the complexity, orientation selectivity depends on individual cortical neurons receiving their ON and OFF signals from non-overlapping regions of visual space, and the spatial arrangement of these regions determines the orientation preference of the cell. Cortical neurons that prefer vertical edges have ON and OFF responsive regions that are displaced horizontally in visual space, those that prefer horizontal edges have their ON and OFF regions displaced vertically, and this systematic relationship holds for all other edge orientations.
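The link between the displacement of ON and OFF regions and the preferred edge orientation can be sketched with a toy receptive-field model (an illustrative simplification on my part, not the model used in the study): the ON and OFF subfields are each modeled as a Gaussian in visual space, and the cell's response to an edge is the overlap between the edge pattern and the ON-minus-OFF receptive field. Displacing the ON subfield horizontally from the OFF subfield yields a preference for vertical edges, as described above.

```python
import numpy as np

# Toy simple-cell receptive field: an ON Gaussian subfield minus an OFF Gaussian
# subfield. The grid, subfield size, and displacement below are illustrative
# choices, not values from the study.

x = np.linspace(-2, 2, 101)                  # visual space, degrees
xx, yy = np.meshgrid(x, x)

def gaussian(cx, cy, sigma=0.5):
    return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))

# OFF subfield at the origin, ON subfield displaced 0.6 deg horizontally
rf = gaussian(0.6, 0.0) - gaussian(0.0, 0.0)

def edge_response(theta_deg):
    """Response to a light/dark edge; theta = 0 deg means a vertical edge."""
    theta = np.deg2rad(theta_deg)
    edge = np.sign(np.cos(theta) * xx + np.sin(theta) * yy)   # +1 on the light side, -1 on the dark side
    return abs(np.sum(rf * edge))

orientations = np.arange(0, 180, 15)
responses = [edge_response(th) for th in orientations]
print("preferred edge orientation:", orientations[int(np.argmax(responses))], "deg")
# Horizontally displaced ON/OFF subfields give the strongest response to a
# vertical edge (0 deg), matching the relationship described in the text.
```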

So cortical circuits face a paradox: How do they take the spatial information from the retina and distort it to create an orderly map of orientation selectivity, while at the same time preserving fine retinal spatial information in order to generate an orderly map of visual space? Nature's solution might best be called 'divide and conquer'. By using imaging technologies that allow visualization of the ON and OFF response regions of hundreds of individual cortical neurons, Kuo-Sheng Lee and Sharon Huang in David Fitzpatrick's lab at MPFI have discovered that fine-scale retinal spatial information is preserved by the OFF response regions of cortical neurons, while the ON response regions exhibit the systematic spatial displacements that are necessary to build an orderly map of edge orientation. Preserving the detailed spatial information from the retina in the OFF response regions is consistent with evidence that dark elements of natural scenes convey more fine-scale information than light elements, and that OFF cells have properties that allow them to better extract this information. In addition, Lee et al. show that this OFF-anchored cortical architecture enables the emergence of an additional orderly map of absolute spatial phase, a property that has received little attention from neuroscientists but that computer vision research has shown to contain a wealth of information about the visual scene, information that can be used to efficiently encode spatial patterns, motion, and depth.
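A small extension of the same toy model (again my own illustrative assumption, not the paper's analysis) captures the gist of the OFF-anchored arrangement: if each cell's OFF subfield is pinned to its retinotopic position and only the ON subfield is displaced, the direction of displacement sets the orientation preference, while the cell's preferred grating phase, measured in absolute visual-space coordinates, shifts smoothly with retinotopic position. The grid resolution, subfield size, and spatial frequency below are arbitrary.

```python
import numpy as np

# Toy OFF-anchored receptive fields: the OFF subfield stays at the cell's
# retinotopic position, the ON subfield is displaced. The preferred grating
# phase is reported in absolute visual-space coordinates.

x = np.linspace(-2, 2, 201)                  # visual space, degrees
xx, yy = np.meshgrid(x, x)

def gaussian(cx, cy, sigma=0.4):
    return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))

def preferred_phase(off_center, on_offset, spatial_freq=0.5):
    """Grating phase (deg) giving the largest response, for a grating
    oriented along the ON-OFF displacement axis."""
    ox, oy = off_center
    dx, dy = on_offset
    rf = gaussian(ox + dx, oy + dy) - gaussian(ox, oy)        # ON minus OFF
    theta = np.arctan2(dy, dx)                                # ON-OFF displacement axis
    proj = np.cos(theta) * xx + np.sin(theta) * yy            # position along that axis
    phases = np.deg2rad(np.arange(0, 360, 10))
    resp = [np.sum(rf * np.cos(2 * np.pi * spatial_freq * proj - p)) for p in phases]
    return np.rad2deg(phases[int(np.argmax(resp))])

# Two cells with OFF subfields anchored at nearby retinotopic positions:
print(preferred_phase(off_center=(0.0, 0.0), on_offset=(0.6, 0.0)))
print(preferred_phase(off_center=(0.5, 0.0), on_offset=(0.6, 0.0)))
# Because the OFF subfield is anchored, the preferred absolute phase shifts
# smoothly with retinotopic position, giving an orderly map of spatial phase.
```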

While these are important new insights into how visual information is transformed from retinal to cortical representations, they pose a host of new questions about the network of synaptic connections that performs this transformation and the developmental mechanisms that construct it, questions that the Fitzpatrick Lab continues to explore.

More information: Kuo-Sheng Lee et al., "Topology of ON and OFF inputs in visual cortex enables an invariant columnar architecture," Nature (2016). DOI: 10.1038/nature17941


Provided by Max Planck Florida Institute for Neuroscience