Sizing things up: The evolutionary neurobiology of scale invariance

February 28, 2013 by Stuart Mason Dambrot feature
Organization of Jij by D⊥(i, j). The distances D⊥(i, j) within layer λ and between λ and λ' are ranked after fixing i on layer λ. The site labeled 0 is the site with D⊥(i, j) = 0, the sites labeled 1 are the sites with D⊥(i, j) = 1, the sites labeled 2 are the sites with D⊥(i, j) = √2, etc. The site i is chosen at the center for the sake of presentation. The interaction on the blue link contributes to J0 between layers λ and λ', and the interactions on the red links contribute to J1 between these two layers. The rest of the links (not shown here) are obtained by varying the site i and repeating the procedure. Copyright © PNAS, doi:10.1073/pnas.1222618110

(Medical Xpress)—Visual perception is far more complex and powerful than our experience suggests. Moreover, any attempt to understand vision, or to implement it in a computational device, must account for the fact that a species' senses developed in concert with the ecological niche in which that species evolved. In our case, that means an evolutionary visual context consisting of natural objects, including mountains, rivers, trees, and other animals. Noting that neural representations of visual inputs are related to their statistical structure, that natural structures display an inseparable size hierarchy indicative of scale invariance, and that scale invariance also occurs near a critical point in a wide range of physical systems (including ferromagnets), researchers at the Salk Institute for Biological Studies and the University of California, San Diego recently demonstrated what their paper describes as "a unique approach to studying natural images by decomposing images into a hierarchy of layers at different logarithmic intensity scales and mapping them to a quasi-2D magnet."

Prof. Terrence J. Sejnowski describes the research he and Dr. Saeed Saremi conducted, starting with the challenges they faced. "The traditional way images are represented in vision is by an array of pixels with gray levels," Sejnowski tells Medical Xpress. "However, we know that vision is based on a log scale of luminance. The challenge was to find a new representation that would make the log levels explicit." In addition, Sejnowski points out that it was Saremi who came up with the idea of using bit planes, later generalized to powers of any integer base.
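The bit-plane idea is easy to sketch in code. The helper below (a hypothetical illustration, not the authors' implementation) splits an integer image into digit planes in a given base; for base 2 these are the classic bit planes, and summing the planes weighted by powers of the base recovers the image exactly.

```python
import numpy as np

def bit_planes(image, n_bits=8, base=2):
    """Decompose an integer image into digit planes in the given base.

    For base 2, plane k holds the k-th binary digit of every pixel, so
    the image equals the sum of the planes weighted by base**k.
    """
    return [(image // base**k) % base for k in range(n_bits)]

# Toy 2x2 "image" with 8-bit gray levels.
img = np.array([[200, 13], [7, 255]])
planes = bit_planes(img)

# The decomposition is exact: reweighting the planes recovers the image.
recon = sum(p * 2**k for k, p in enumerate(planes))
```

Because pixel intensity is roughly log-distributed, the high planes carry the slowly varying structure while the low planes look increasingly noise-like, which is what motivates the temperature analogy below.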

"Once we started looking at the bit planes of natural images," Sejnowski continues, "it became apparent that each layer looked like a 2D Ising model at a different temperature – that is, the high-order bits were cold and the low-order bits were hot." An Ising model is a mathematical model of ferromagnetism in statistical mechanics, consisting of discrete variables that represent the magnetic dipole moments of atomic spins, each of which can be in one of two states (+1 or −1). Taken together, Sejnowski explains, these bit planes represent a 3D quasimagnet with interesting properties.
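The cold/hot analogy can be made concrete with a toy Metropolis simulation of the 2D Ising model (a standard textbook sketch, not the paper's procedure): at low temperature (large beta) the lattice stays ordered like a high-order bit plane, while at high temperature it stays disordered like a low-order one.

```python
import numpy as np

def ising_sweep(spins, beta, rng):
    """One Metropolis sweep of a 2D Ising ferromagnet with periodic
    boundaries: a flip that changes the energy by dE is accepted
    with probability min(1, exp(-beta * dE))."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        neighbors = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                     + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * spins[i, j] * neighbors
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
cold = np.ones((16, 16))                      # "high-order bits": ordered
hot = rng.choice([-1.0, 1.0], size=(16, 16))  # "low-order bits": disordered
for _ in range(50):
    ising_sweep(cold, beta=1.0, rng=rng)   # well below the critical temperature
    ising_sweep(hot, beta=0.1, rng=rng)    # well above it
```

After the sweeps, the cold lattice retains nearly full magnetization while the hot one fluctuates around zero, mirroring the visual contrast between high and low bit planes.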

Understanding retinal encoding – and possibly obtaining further insight into how the neocortex represents scale invariance – requires, in turn, an understanding of the statistical structure in natural image hierarchies. Moreover, the brain is not a passive image receptor, but rather actively generates sensory models derived from sensory experience. Since the so-called Boltzmann machine (a spin glass with arbitrary connectivity running at a finite temperature, generalizing Hopfield nets, which run at zero temperature) can represent image statistical structure, Sejnowski and Saremi developed a unique approach in which certain aspects of the Boltzmann machine's input representations are learned from natural images.
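A minimal sketch of what "a spin glass running at a finite temperature" means in code, assuming a fully visible Boltzmann machine with ±1 units (function names are ours, for illustration): the energy is quadratic in the state, and Gibbs sampling resamples each unit from its conditional distribution at inverse temperature beta. A Hopfield net corresponds to the beta → ∞ limit, where each unit deterministically aligns with its field.

```python
import numpy as np

def energy(s, W, b):
    """Energy E(s) = -1/2 s^T W s - b^T s of a state s in {-1,+1}^n,
    with W symmetric and zero on the diagonal."""
    return -0.5 * s @ W @ s - b @ s

def gibbs_step(s, W, b, beta, rng):
    """One sweep of Gibbs sampling at inverse temperature beta:
    P(s_i = +1 | rest) = sigmoid(2 * beta * (W s + b)_i)."""
    for i in rng.permutation(len(s)):
        field = W[i] @ s + b[i]   # W[i, i] == 0, so s_i does not bias itself
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
        s[i] = 1.0 if rng.random() < p_plus else -1.0
    return s

# Four units with uniform ferromagnetic couplings (a tiny magnet).
W = np.ones((4, 4)) - np.eye(4)
b = np.zeros(4)
aligned = np.ones(4)   # the fully aligned state has low energy
```

At finite temperature the sampler visits high-energy states occasionally, which is exactly what lets the machine represent a probability distribution rather than a single stored pattern.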

Scrambled natural images. The scrambled image (Upper) and the layers 4, 5, and 6 used for its construction are shown. Layer 6 is taken from the example of Fig. 1, and the other layers are taken randomly from the binary decomposition of different images in the database. Layers 1–3 and 7–15 are not shown for reasons of space; altogether, they contain only 5% of the information in this example. Copyright © PNAS, doi:10.1073/pnas.1222618110

"Geoffrey Hinton and I introduced the Boltzmann machine in the 1980s as a model for multilayer neural networks," notes Sejnowski. "We showed that there is a remarkably simple learning algorithm that finds the connection weights for a network that could represent the probability distribution for an ensemble of inputs." When Sejnowski and Saremi applied Boltzmann machine learning to natural images as inputs, they found positive pairwise connections that fell off with distance on each layer, much like the 2D Ising model for a ferromagnet, and negative pairwise weights between the layers, representing antiferromagnetic interactions.
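The "remarkably simple learning algorithm" Sejnowski refers to updates each weight by the difference between pairwise correlations measured on the data and on samples from the model. A sketch, assuming rows of ±1 states (the function name is hypothetical):

```python
import numpy as np

def boltzmann_weight_update(data_states, model_states, lr=0.01):
    """Boltzmann machine learning rule for visible-visible weights:
    dW_ij = lr * (<s_i s_j>_data - <s_i s_j>_model),
    where each input array holds one +/-1 state per row."""
    c_data = data_states.T @ data_states / len(data_states)
    c_model = model_states.T @ model_states / len(model_states)
    dW = lr * (c_data - c_model)
    np.fill_diagonal(dW, 0.0)   # no self-connections
    return dW

# Data where the two units co-vary; model samples where they anti-correlate.
data = np.array([[1.0, 1.0], [-1.0, -1.0]])
model = np.array([[1.0, -1.0], [-1.0, 1.0]])
dW = boltzmann_weight_update(data, model)   # dW[0, 1] > 0: ferromagnetic push
```

With natural images as data, this rule is what produced the positive (ferromagnetic) within-layer weights and negative (antiferromagnetic) between-layer weights the authors describe.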

The theory of second-order phase transitions was key to understanding the significance of what the scientists had found, Sejnowski says. "There were 15 bit planes (corresponding to pixels encoded as 15-bit integers), each corresponding to a different temperature. We were astonished to find that there was a phase transition at bit plane 6 with the same critical exponent as the 2D Ising model."
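What "the same critical exponent" means in practice can be illustrated with a toy fit. At the 2D Ising critical point the spin-spin correlation decays as a power law, C(r) ~ r^(-η) with η = 1/4, so the exponent is the slope of a log-log regression. Here synthetic data stands in for measured correlations:

```python
import numpy as np

# Synthetic critical correlations: C(r) = r^(-1/4), the exact 2D Ising form.
r = np.arange(1.0, 50.0)
eta_true = 0.25
C = r ** (-eta_true)

# On log-log axes a power law is a straight line; the slope gives -eta.
slope, intercept = np.polyfit(np.log(r), np.log(C), 1)
```

Matching a measured exponent against the exactly known 2D Ising value is what places a system in the same universality class, independent of microscopic details.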

"Scale invariance had been observed in natural images for decades based on the power law drop-off in power as a function of spatial scale," Sejnowski explains. "At a phase transition, the spatial correlation length becomes infinite and there is a critical slowing." This suggests, he adds, that the reason there is structure at every spatial scale in the natural world is because nature is, in some sense, sitting at a phase transition between order and disorder.
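The power-law drop-off Sejnowski mentions is conventionally measured with a radially averaged 2D power spectrum. A minimal sketch (the function name is ours, not the paper's):

```python
import numpy as np

def radial_power_spectrum(image):
    """Radially averaged power |F(k)|^2 as a function of spatial frequency |k|."""
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    y, x = np.indices(power.shape)
    cy, cx = power.shape[0] // 2, power.shape[1] // 2
    r = np.hypot(x - cx, y - cy).astype(int)       # integer radius bins
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts

# Sanity check: a constant image has all its power at zero frequency.
spectrum = radial_power_spectrum(np.ones((8, 8)))
```

For natural images this spectrum falls off roughly as a power of spatial frequency, which is the decades-old scale-invariance observation the phase-transition picture sets out to explain.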

In terms of the evolution and neurobiology of perceptual invariants, Sejnowski notes that biological systems that have evolved to survive in this world may take advantage of this structure; in particular, the organization of the visual system may reflect those statistics. Most of the information in natural images is captured in three bit planes, which may be why photoreceptors are linear over a single order of magnitude. Finally, he adds, adaptation mechanisms in the retina shift the linear region over 10 orders of magnitude in luminance.

Moving forward, Sejnowski says, "We've trained the Boltzmann machine on only the connections between pixels in the 'visible' input layer. The next step is to use this as the input layer in a hierarchy of 'hidden' layers, such as that found in our visual systems, which is around 12 layers deep." He adds that since the 1980s, there have been great advances in computer power and algorithms that now allow Boltzmann machines to be trained in deep networks.

In the longer term, Sejnowski concludes, this new input representation may benefit computer vision.

Explore further: Thermodynamics of visual images may help us see the world

More information: Hierarchical model of natural images and the origin of scale invariance, PNAS February 19, 2013 vol. 110 no. 8 3071-3076, doi:10.1073/pnas.1222618110





1 / 5 (1) Feb 28, 2013
perceptual invariants

All depends on an event one's labels independent.
Probability's existence proof is math, not nature.
1 / 5 (1) Feb 28, 2013

Cosmologists Albrecht and Phillips argue that classical probability is a special case of quantum mechanics – "fractally complex" nature (Benoit Mandelbrot).
1 / 5 (1) Feb 28, 2013
You can constrain imagination. Those are accessible to language.
1 / 5 (1) Feb 28, 2013
math, not nature.

One disentangles math from nature and nature from math how again?
2 / 5 (4) Mar 01, 2013
If Occam were to read the article, he'd spin in his grave.

Scale invariance in perception exists because we can approach an object, and the object has to "look the same". The way this article is written, these guys make it way too complex. Also, I wouldn't take it for granted that their model represents actual visual perception in animals. Or humans.

However, their methods could actually be useful in machine vision, such as face recognition or other specific tasks.
2.8 / 5 (4) Mar 01, 2013
Ah, so you're not "Tausch" with doxastic commitment. LOL. Mandelbrot's fractal sets are typically self-similar patterns, where self-similar means they are "the same from near as from far" (Jean-François Gouyet, 1996). Sounds like scale invariance to me.
3 / 5 (2) Mar 01, 2013
Mandan asked "One disentangles math from nature and nature from math how again?"

One is a map, and the other is the territory. E.g. try to find "twenty-three" in nature. Not twenty-three of something; find the number twenty-three. ... Correct, The number twenty-three has no existence in the real world. Neither does any part or portion of mathematics. Maths is useful, but it's a map, not a set of laws that the real world is somehow constrained to follow.
1 / 5 (1) Mar 01, 2013
Ah, so you're not "Tausch" with doxastic commitment. LOL. -DB

You committed yourself to consistency.
That makes no sense.
You only need one innocuous inconsistency to circumvent incompleteness.
With Pattern chaser I agree.
not rated yet Mar 01, 2013
The inconsistency consists of the choice ZF or ZFC.
not rated yet Mar 01, 2013
Maths is useful, but it's a map, not a set of laws that the real world is somehow constrained to follow.

And where did I say it was a set of laws or anything about constraints? I simply asked a question.

It seems fairly obvious that nothing exists outside the "real world", aka Universe or Universes. Whether our senses-- or more importantly our ape minds-- are capable of detecting or understanding what the "real world" is, let alone what it can or cannot contain or is or is not constrained by is another question entirely. You seem very certain of your certainty, as does Tausch.

In the meantime, let's not get into any ranking wars, shall we?
1 / 5 (1) Mar 02, 2013
Talking about scale invariance and no mention of the Mellin transform? Kinda misses the mark, don't ya think?
not rated yet Mar 03, 2013
Pattern chaser is Tausch.

Do i win £5?
not rated yet Mar 04, 2013
I would have guessed the scale invariance is a misdiagnosis of a topological transformation. A cube, for example, has the same topology, or number of connections between points, at any scale. Also, it's easier to identify the cube by first taking the mathematical derivative between neurons or pixels: a sudden change in color or light is easily detected as an edge. The fact that we usually dream in black and white indicates color is an attribute easily and early discarded in visual processing.
not rated yet Mar 04, 2013
Talking about scale invariance and no mention of the Mellin transform? Kinda misses the mark, don't ya think?-SC

Doesn't matter.
Connection weights aren't independent events.
