Despite what you may think, your brain is a mathematical genius

April 11, 2013, Salk Institute
From left, scientists Sergei Gepshtein and Thomas D. Albright, Salk Institute. Credit: Salk Institute for Biological Studies

The irony of getting away to a remote place is you usually have to fight traffic to get there. After hours of dodging dangerous drivers, you finally arrive at that quiet mountain retreat, stare at the gentle waters of a pristine lake, and congratulate your tired self on having "turned off your brain."

"Actually, you've just given your brain a whole new challenge," says Thomas D. Albright, director of the Vision Center Laboratory at the Salk Institute and an expert on how the brain works. "You may think you're resting, but your brain is automatically assessing the spatio-temporal properties of this novel environment: what objects are in it, are they moving, and if so, how fast are they moving?"

The dilemma is that our brains can only dedicate so many neurons to this assessment, says Sergei Gepshtein, a staff scientist in Salk's Vision Center Laboratory. "It's a problem in economy of resources: If the visual system has limited resources, how can it use them most efficiently?"

Albright, Gepshtein and Luis A. Lesmes, a specialist in measuring human performance and a former Salk Institute postdoctoral researcher now at the Schepens Eye Research Institute, proposed an answer to that question in a recent issue of the Proceedings of the National Academy of Sciences. Their answer may reconcile the puzzling contradictions found in many previous studies.

Previously, scientists expected that extended exposure to a novel environment would make you better at detecting its subtle details, such as the slow motion of waves on that lake. Yet those who tried to confirm that idea were surprised when their experiments produced contradictory results. "Sometimes people got better at detecting a stimulus, sometimes they got worse, sometimes there was no effect at all, and sometimes people got better, but not for the expected stimulus," says Albright, holder of Salk's Conrad T. Prebys Chair in Vision Research.

The answer, according to Gepshtein, came from asking a new question: What happens when you look at the problem of resource allocation from a system's perspective?

It turns out something's got to give.

"It's as if the brain's on a budget; if it devotes 70 percent here, then it can only devote 30 percent there," says Gepshtein. "When the adaptation happens, if now you're attuned to high speeds, you'll be able to see faster moving things that you couldn't see before, but as a result of allocating resources to that stimulus, you lose sensitivity to other things, which may or may not be familiar."

Summing up, Albright says, "Simply put, it's a tradeoff: The price of getting better at one thing is getting worse at another."

Gepshtein, a computational neuroscientist, analyzes the brain from a theoretician's point of view, and the PNAS paper details the computations the visual system uses to accomplish the adaptation. The computations are similar to a method of signal processing known as the Gabor transform, which is used to extract features in both the spatial and temporal domains.
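As a rough illustration (a toy NumPy sketch, not the computation from the paper), a Gabor transform correlates a signal with Gaussian-windowed sinusoids, which lets it localize features in both time and frequency at once. Here the "signal" is a tone whose frequency jumps halfway through, a crude stand-in for a scene whose motion changes:

```python
import numpy as np

# Illustrative signal: 5 Hz for the first second, 20 Hz for the second.
fs = 200                                  # samples per second (made up)
t = np.arange(0, 2.0, 1 / fs)
signal = np.where(t < 1.0, np.cos(2 * np.pi * 5 * t),
                           np.cos(2 * np.pi * 20 * t))

def gabor_coefficient(center_time, freq, sigma=0.1):
    """Inner product of the signal with a Gaussian-windowed complex
    exponential centered at a given time and frequency."""
    window = np.exp(-(t - center_time) ** 2 / (2 * sigma ** 2))
    return abs(np.sum(signal * window * np.exp(-2j * np.pi * freq * t))) / fs

# The transform localizes each frequency in time: 5 Hz dominates early,
# 20 Hz dominates late.
print(gabor_coefficient(0.5, 5) > gabor_coefficient(0.5, 20))    # True
print(gabor_coefficient(1.5, 20) > gabor_coefficient(1.5, 5))    # True
```

All parameter values here are invented for the demonstration; the point is only that a Gaussian-windowed analysis recovers both the "when" and the "what" of the signal.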

Yes, while you may struggle to balance your checkbook, it turns out your brain is using operations it took a Nobel Laureate to describe. Dennis Gabor won the 1971 Nobel Prize in Physics for his invention and development of holography. But that wasn't his only accomplishment. Like his contemporary Claude Shannon, he worked on some of the most fundamental questions in communications theory, such as how a great deal of information can be compressed into narrow channels.

"Gabor proved that measurements of two fundamental properties of a signal, its location and its frequency content, are not independent of one another," says Gepshtein.

The location of a signal is simply that: where the signal is at a given point in time. The content, the "what" of a signal, is "written" in the language of frequencies and measures the amount of variation, such as the different shades of gray in a photograph.

The challenge comes when you're trying to measure both location and frequency, because location is more accurately determined in a short time window, while variation needs a longer time window (imagine how much more accurately you can guess a song the longer it plays).
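A quick numerical way to see the tradeoff (a hypothetical sketch; the sampling rate is arbitrary): the frequency resolution of a windowed measurement is inversely proportional to the window's duration, so a short window gives good timing but a blurry frequency estimate, and vice versa:

```python
fs = 1000  # samples per second (illustrative value)

def freq_resolution(window_seconds):
    """Spacing between distinguishable frequencies (Hz) for a
    measurement window of the given duration: resolution in frequency
    is inversely proportional to the length of the window."""
    n = int(window_seconds * fs)   # number of samples in the window
    return fs / n

# A short window pins down *when* something happened but smears *what*
# it is; a long window does the reverse -- like guessing a song more
# accurately the longer it plays.
print(freq_resolution(0.05))   # 50 ms window -> 20.0 Hz bins (coarse "what")
print(freq_resolution(1.0))    # 1 s window   ->  1.0 Hz bins (fine "what")
```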

The obvious answer is that you're stuck with a compromise: you can get a precise measurement of one or the other, but not both. But how can you be sure you've struck the best possible compromise? Gabor's answer was what has become known as the "Gabor filter," which helps obtain the most precise measurements possible for both quantities. Our brains employ a similar strategy, says Gepshtein.
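A one-dimensional Gabor filter is simply a sinusoid tapered by a Gaussian envelope. The sketch below (illustrative NumPy code with made-up parameter values, not anything from the study) builds one and confirms that it responds most strongly to stimuli near its preferred frequency:

```python
import numpy as np

def gabor_1d(t, center_freq, sigma):
    """1-D Gabor filter: a cosine carrier under a Gaussian envelope."""
    return np.exp(-t ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * center_freq * t)

t = np.linspace(-0.5, 0.5, 1001)
filt = gabor_1d(t, center_freq=8.0, sigma=0.1)   # tuned to 8 Hz (arbitrary)

def response(stimulus_freq):
    """Strength of the filter's response to a pure-frequency stimulus."""
    return abs(np.dot(filt, np.cos(2 * np.pi * stimulus_freq * t)))

# The filter responds most strongly near its preferred frequency and
# falls off for frequencies farther away.
print(response(8.0) > response(4.0) > response(1.0))   # True
```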

"In human vision, stimuli are first encoded by neural cells whose response characteristics, called receptive fields, have different sizes," he explains. "The neural cells that have larger receptive fields are sensitive to lower spatial frequencies than the cells that have smaller receptive fields. For this reason, the operations performed by biological vision can be described by a Gabor wavelet transform."
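Gepshtein's description can be sketched in code (a toy model, not code from the study): Gabor-style receptive fields whose envelope width scales inversely with preferred frequency, so that large fields prefer low-frequency gratings and small fields prefer high-frequency ones:

```python
import numpy as np

def receptive_field(x, preferred_freq, cycles=2.0):
    """Gabor-style 1-D receptive field whose envelope width scales
    inversely with preferred frequency, so every cell sees roughly the
    same number of cycles -- the hallmark of a wavelet-style code."""
    sigma = cycles / preferred_freq        # larger field <-> lower frequency
    return np.exp(-x ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * preferred_freq * x)

x = np.linspace(-4, 4, 8001)
coarse = receptive_field(x, preferred_freq=0.5)   # large field, low frequency
fine = receptive_field(x, preferred_freq=4.0)     # small field, high frequency

def grating_response(rf, grating_freq):
    """Response of a receptive field to a sinusoidal grating."""
    return abs(np.dot(rf, np.cos(2 * np.pi * grating_freq * x)))

# The large-field cell prefers the low-frequency grating, and vice versa.
print(grating_response(coarse, 0.5) > grating_response(fine, 0.5))   # True
print(grating_response(fine, 4.0) > grating_response(coarse, 4.0))   # True
```

The specific frequencies and the two-cycle envelope are assumptions made for the demonstration; real receptive-field profiles vary.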

In essence, the first stages of the visual process act like a filter. "It describes which stimuli get in, and which do not," Gepshtein says. "When you change the environment, the filter changes, so certain stimuli, which were invisible before, become visible, but because you moved the filter, other stimuli, which you may have detected before, no longer get in."
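One way to picture this tradeoff (a deliberately simplified toy, not the paper's model) is a fixed-shape sensitivity window over stimulus speed that adaptation merely re-centers: whatever is gained at the newly preferred speeds is lost at the old ones, because the window's total area, the resource budget, stays fixed:

```python
import numpy as np

def sensitivity(speed, preferred_speed, width=1.0):
    """Toy sensitivity profile: a Gaussian window over stimulus speed.
    Adaptation shifts the center; the shape (the budget) is unchanged."""
    return np.exp(-(speed - preferred_speed) ** 2 / (2 * width ** 2))

def before(s):
    return sensitivity(s, preferred_speed=2.0)   # attuned to slow motion

def after(s):
    return sensitivity(s, preferred_speed=6.0)   # adapted to fast motion

# Gain at high speeds comes at the cost of sensitivity at low speeds.
print(after(6.0) > before(6.0))   # True: faster stimuli now get in
print(after(2.0) < before(2.0))   # True: formerly visible slow motion is lost
```

All speeds and widths here are invented; the point is only the conservation-like pattern the scientists describe, in which gains and losses add up across the whole filter.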

"When you see only small parts of this filter, you find that visual sensitivity sometimes gets better and sometimes worse, creating an apparently paradoxical picture," Gepshtein continues. "But when you see the entire filter, you discover that the pieces, the gains and losses, add up to a coherent pattern."

From a psychological point of view, according to Albright, what makes this especially intriguing is that the assessing and adapting happen automatically: all of this processing occurs whether or not you consciously "pay attention" to the change in scene.

Yet, while the adaptation happens automatically, it does not appear to happen instantaneously. The scientists' current experiments take approximately thirty minutes to conduct, though they believe the adaptation may take less time in nature.


