Clear vision despite a heavy head: Model explains the choice of simple movements

November 9, 2011

The brain likes stereotypes - at least for movements. Simple actions are most often performed in the same manner. A mathematical model explains why this is the case and could be used to generate more natural robot movements and to adapt prosthetic movements.

In one respect, handling a car is just like looking in the rearview mirror: well-established movements help the brain to concentrate on the essentials. But even a simple gaze shift to a new target opens up an almost infinite number of combinations of eye and head movement: how fast do we move the eyes and the head? How much does the eye rotate, how much the head? Until now, it was unclear why the brain chooses a particular movement option from the set of all possible combinations.

A team led by Dr. Stefan Glasauer (LMU Munich, Germany), project leader at the Bernstein Center Munich, has now developed a mathematical model that accurately predicts horizontal gaze movements. Besides the eye and head contributions to the gaze shift, it also predicts movement duration and velocity.

In contrast to most previous models, the researchers considered not only the movement of head and eye to the target but also the counter-movement of the eye after the gaze has reached the target while the head is still moving. "The longer the movement, the more errors add up," says Glasauer. "However, the faster the movement, the more errors arise from acceleration and large muscle forces." On the basis of this information, the Munich researchers calculated eye and head movements and determined the combination that caused the fewest disturbances. This movement matched the one chosen by healthy volunteers, not only under natural conditions but also in an experiment in which subjects' head movements were altered by an experimental increase in the head's rotational inertia (see picture).
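The trade-off described above can be illustrated with a toy sketch (not the authors' actual model, and the cost terms and constants below are assumptions for illustration only): slow movements let noise accumulate over time, while fast movements require large forces that themselves generate errors. Picking the movement with the fewest disturbances then amounts to minimizing the summed cost over candidate durations.

```python
# Toy illustration of the speed-accuracy trade-off for a gaze shift.
# NOTE: hypothetical cost terms, not the model from Saglam et al. (2011).

def movement_cost(duration, amplitude, c_accum=1.0, c_signal=1.0):
    """Total 'disturbance' cost for a movement of a given duration (seconds)
    and amplitude (degrees). Two competing noise sources:
      - accumulating noise: grows with movement duration
      - signal-dependent noise: grows with the forces needed to move fast
    """
    accumulating = c_accum * duration
    signal_dependent = c_signal * (amplitude / duration) ** 2
    return accumulating + signal_dependent

def optimal_duration(amplitude, durations):
    """Choose the duration with minimal total cost by grid search."""
    return min(durations, key=lambda t: movement_cost(t, amplitude))

# Candidate durations from 0.1 s to 19.9 s (coarse grid, for illustration).
durations = [0.1 * k for k in range(1, 200)]

t_small = optimal_duration(10.0, durations)  # small gaze shift
t_large = optimal_duration(40.0, durations)  # large gaze shift

# Larger gaze shifts favor longer movements: forcing a large shift into a
# short duration inflates the signal-dependent term.
assert t_large > t_small
print(t_small, t_large)
```

Under this toy cost, the optimum shifts toward longer durations as the gaze amplitude grows, mirroring the qualitative behavior the article describes: the brain settles on the combination that keeps total disturbance lowest, rather than the fastest or slowest option.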

These findings could help teach robots humanoid movements and thus facilitate interaction with service robots. They may also be helpful in the construction of "smart" prostheses, which could offer the wearer a choice of movements that come closest to natural human ones. As a next step, Glasauer and colleagues want to examine three-dimensional eye-head movements and aim to better understand how simple movements are learned.

More information: Saglam M., Lehnen N., Glasauer S. (2011): Optimal control of natural eye-head movements minimizes the impact of noise. J Neurosci. 31(45):16185
