Despite what you may think, your brain is a mathematical genius

April 11, 2013
From left, scientists Sergei Gepshtein and Thomas D. Albright, Salk Institute. Credit: Salk Institute for Biological Studies

The irony of getting away to a remote place is you usually have to fight traffic to get there. After hours of dodging dangerous drivers, you finally arrive at that quiet mountain retreat, stare at the gentle waters of a pristine lake, and congratulate your tired self on having "turned off your brain."

"Actually, you've just given your brain a whole new challenge," says Thomas D. Albright, director of the Vision Center Laboratory at the Salk Institute and an expert on how the brain works. "You may think you're resting, but your brain is automatically assessing the spatio-temporal properties of this novel environment: what objects are in it, are they moving, and if so, how fast are they moving?"

The dilemma is that our brains can only dedicate so many neurons to this assessment, says Sergei Gepshtein, a staff scientist in Salk's Vision Center Laboratory. "It's a problem in economy of resources: If the visual system has limited resources, how can it use them most efficiently?"

Albright, Gepshtein and Luis A. Lesmes, a specialist in measuring human performance and a former Salk Institute post-doctoral researcher now at the Schepens Eye Research Institute, proposed an answer to that question in a recent issue of the Proceedings of the National Academy of Sciences, one that may reconcile the puzzling contradictions in many previous studies.

Previously, scientists expected that extended exposure to a novel environment would make you better at detecting its subtle details, such as the slow motion of waves on that lake. Yet those who tried to confirm that idea were surprised when their experiments produced contradictory results. "Sometimes people got better at detecting a stimulus, sometimes they got worse, sometimes there was no effect at all, and sometimes people got better, but not for the expected stimulus," says Albright, holder of Salk's Conrad T. Prebys Chair in Vision Research.

The answer, according to Gepshtein, came from asking a new question: What happens when you look at the problem of resource allocation from a system's perspective?

It turns out something's got to give.

"It's as if the brain's on a budget; if it devotes 70 percent here, then it can only devote 30 percent there," says Gepshtein. "When the adaptation happens, if now you're attuned to high speeds, you'll be able to see faster moving things that you couldn't see before, but as a result of allocating resources to that stimulus, you lose sensitivity to other things, which may or may not be familiar."

Summing up, Albright says, "Simply put, it's a tradeoff: The price of getting better at one thing is getting worse at another."

Gepshtein, a computational neuroscientist, analyzes the brain from a theoretician's point of view, and the PNAS paper details the computations the visual system uses to accomplish the adaptation. The computations are similar to a method of signal processing known as the Gabor transform, which is used to extract features in both the spatial and temporal domains.

Yes, while you may struggle to balance your checkbook, it turns out your brain is using operations it took a Nobel Laureate to describe. Dennis Gabor won the 1971 Nobel Prize in Physics for his invention and development of holography. But that wasn't his only accomplishment. Like his contemporary Claude Shannon, he worked on some of the most fundamental questions in communications theory, such as how a great deal of information can be compressed into narrow channels.

"Gabor proved that measurements of two fundamental properties of a signal, its location and its frequency content, are not independent of one another," says Gepshtein.

The location of a signal is simply that: where the signal is at a given point in time. The content, the "what" of a signal, is "written" in the language of frequencies and is a measurement of the amount of variation, such as the different shades of gray in a photograph.

The challenge comes when you're trying to measure both location and frequency, because location is more accurately determined in a short time window, while variation needs a longer time window (imagine how much more accurately you can guess a song the longer it plays).
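This tradeoff can be seen with a few lines of numerical code. Below is a minimal sketch (the function name `freq_uncertainty` and all parameter values are illustrative, not from the study): a pure 50 Hz tone is observed through a short window and a long one, and the width of the resulting frequency peak is measured. The short window pins down when the signal occurred but smears its frequency; the long window sharpens the frequency estimate at the cost of temporal precision.

```python
import numpy as np

def freq_uncertainty(window_len, fs=1000.0, f0=50.0):
    """Width (Hz) of the frequency peak obtained from a pure tone of
    frequency f0 observed through a rectangular window lasting
    window_len seconds. Shorter windows smear the peak."""
    n = int(window_len * fs)
    t = np.arange(n) / fs
    sig = np.cos(2 * np.pi * f0 * t)
    # zero-pad the FFT so the peak shape is finely sampled
    spectrum = np.abs(np.fft.rfft(sig, n=8192))
    freqs = np.fft.rfftfreq(8192, d=1 / fs)
    # resolution ~ width of the main lobe at half its maximum height
    peak = spectrum.max()
    return np.ptp(freqs[spectrum >= peak / 2])

short = freq_uncertainty(0.02)   # 20 ms window: broad, vague peak
long_ = freq_uncertainty(0.50)   # 500 ms window: narrow, precise peak
```

With the 20 ms window the frequency estimate is uncertain by tens of hertz; with the 500 ms window it narrows to a couple of hertz, just as a song becomes easier to name the longer it plays.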

The obvious answer is that you're stuck with a compromise: You can get a precise measurement of one or the other, but not both. But how can you be sure you've come up with the best possible compromise? Gabor's answer was what has become known as a "Gabor filter," which helps obtain the most precise measurements possible for both qualities. Our brains employ a similar strategy, says Gepshtein.
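Gabor's result can be checked numerically. The sketch below (names and parameters are illustrative, not from the paper) builds a complex Gabor function, a Gaussian window modulated by a sinusoid, and measures its RMS spread in time and in frequency. The product of the two spreads comes out near the theoretical minimum of 1/(4π), the best compromise any signal can achieve.

```python
import numpy as np

def gabor(t, sigma, f0):
    """Complex 1-D Gabor function: a Gaussian window of width sigma
    modulated by a sinusoid of frequency f0."""
    return np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * t)

def rms_width(x, density):
    """Root-mean-square spread of a (non-negative) density over x."""
    density = density / density.sum()
    mean = (x * density).sum()
    return np.sqrt(((x - mean) ** 2 * density).sum())

dt = 1e-4
t = np.arange(-0.5, 0.5, dt)
g = gabor(t, sigma=0.05, f0=20.0)

spread_t = rms_width(t, np.abs(g) ** 2)          # spread in time
spec = np.fft.fftshift(np.fft.fft(g))
f = np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))
spread_f = rms_width(f, np.abs(spec) ** 2)       # spread in frequency

product = spread_t * spread_f   # close to 1 / (4 * pi) ~ 0.0796
```

Narrowing the Gaussian (smaller `sigma`) shrinks `spread_t` but inflates `spread_f` by the same factor, so the product stays pinned at the limit: that is the compromise the text describes.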

"In human vision, stimuli are first encoded by neural cells whose response characteristics, called receptive fields, have different sizes," he explains. "The neural cells that have larger receptive fields are sensitive to lower spatial frequencies than the cells that have smaller receptive fields. For this reason, the operations performed by biological vision can be described by a Gabor wavelet transform."

In essence, the first stages of the visual process act like a filter. "It describes which stimuli get in, and which do not," Gepshtein says. "When you change the environment, the filter changes, so certain stimuli, which were invisible before, become visible, but because you moved the filter, other stimuli, which you may have detected before, no longer get in."

"When you see only small parts of this filter, you find that visual sensitivity sometimes gets better and sometimes worse, creating an apparently paradoxical picture," Gepshtein continues. "But when you see the entire filter, you discover that the pieces - the gains and losses - add up to a coherent pattern."
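The budget picture can be sketched in a toy model (all names and numbers below are illustrative, not from the paper): sensitivity is modeled as a bell curve of fixed total area over stimulus speed, and adaptation shifts its peak toward higher speeds. Fast stimuli gain sensitivity, slow ones lose it, and the total never changes.

```python
import numpy as np

speeds = np.linspace(0, 10, 1001)

def sensitivity(peak_speed, width=1.5):
    """Sensitivity profile over speed: a bell curve normalized to a
    constant total, standing in for the brain's fixed budget."""
    curve = np.exp(-(speeds - peak_speed) ** 2 / (2 * width**2))
    return curve / curve.sum()              # same budget every time

before = sensitivity(peak_speed=3.0)        # tuned to slow motion
after = sensitivity(peak_speed=6.0)         # adapted to fast motion

fast = speeds > 5.0
# adaptation improves detection of fast stimuli...
gain_fast = after[fast].sum() - before[fast].sum()
# ...at the cost of sensitivity to slow ones
loss_slow = before[~fast].sum() - after[~fast].sum()
```

Looked at one speed at a time, the change seems paradoxical (better here, worse there); summed over the whole curve, the gains and losses exactly offset, which is the coherent pattern Gepshtein describes.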

From a psychological point of view, according to Albright, what makes this especially intriguing is that the assessing and adapting happen automatically: all of this processing occurs whether or not you consciously "pay attention" to the change in scene.

Yet, while the adaptation happens automatically, it does not appear to happen instantaneously. Their current experiments take approximately thirty minutes to conduct, but the scientists believe the adaptation may take less time in nature.



