Device that converts images into music helps people without vision reach for objects in space

Left: An illustration of the EyeMusic SSD, showing a user wearing a glasses-mounted camera and headphones, hearing musical notes that create a mental image of the visual scene in front of him; he is reaching for the red apple in a pile of green ones. Top right: A close-up of the glasses-mounted camera and headphones. Bottom right: The hand-held camera pointed at the object of interest. Credit: Maxim Dupliy, Amir Amedi and Shelly Levy-Tzedek

Sensory substitution devices (SSDs) use sound or touch to help the visually impaired perceive the visual scene around them. The ideal SSD would assist not only in sensing the environment but also in performing daily activities based on this input, such as accurately reaching for a coffee cup or shaking a friend's hand. In a new study, scientists trained blindfolded sighted participants to perform fast and accurate movements using a new SSD called EyeMusic. Their results are published in the July issue of Restorative Neurology and Neuroscience.

The EyeMusic, developed by a team of researchers at the Hebrew University of Jerusalem, employs pleasant musical tones and scales to help the visually impaired "see" using music. This non-invasive device converts images into a combination of musical notes, or "soundscapes."

The device was developed by the senior author, Prof. Amir Amedi, and his team at the Edmond and Lily Safra Center for Brain Sciences (ELSC) and the Institute for Medical Research Israel-Canada at the Hebrew University. The EyeMusic scans an image and represents pixels at high vertical locations as high-pitched musical notes and pixels at low vertical locations as low-pitched notes, according to a musical scale chosen so that many note combinations sound pleasant. The image is scanned continuously, from left to right, and an auditory cue marks the start of each scan. The horizontal location of a pixel is indicated by the timing of its musical note relative to the cue (the later it sounds after the cue, the farther it is to the right), and brightness is encoded by the loudness of the sound.
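The mapping described above can be illustrated with a minimal sketch. This is not the actual EyeMusic implementation: the pentatonic scale, the base MIDI note, the two-second sweep length and the (onset, pitch, loudness) event format are all assumptions made here for illustration.

```python
# Minimal sketch of a left-to-right image-to-soundscape scan.
# Assumptions (not from the article): a pentatonic scale, a base note
# of MIDI 36, a 2-second sweep, and (onset_time, midi_note, loudness)
# event tuples as output.

PENTATONIC_STEPS = [0, 2, 4, 7, 9]  # pentatonic intervals within one octave
BASE_MIDI = 36                      # assumed lowest note of the 5-octave range

def row_to_midi(row, n_rows, n_octaves=5):
    """Map a vertical pixel position to a pitch: higher rows -> higher notes."""
    notes = [BASE_MIDI + 12 * octave + step
             for octave in range(n_octaves)
             for step in PENTATONIC_STEPS]
    # row 0 is the top of the image, so it gets the highest note
    idx = (n_rows - 1 - row) * (len(notes) - 1) // max(n_rows - 1, 1)
    return notes[idx]

def encode_image(image, sweep_seconds=2.0):
    """Scan a grayscale image (rows of 0..1 brightness values) from left
    to right, emitting (onset_time, midi_note, loudness) events:
    later onset = farther right, louder = brighter, black = silence."""
    n_rows, n_cols = len(image), len(image[0])
    events = []
    for col in range(n_cols):
        onset = sweep_seconds * col / n_cols
        for row in range(n_rows):
            brightness = image[row][col]
            if brightness > 0:  # black pixels produce no sound
                events.append((onset, row_to_midi(row, n_rows), brightness))
    return sorted(events)
```

For example, a 2x2 image with a bright top-left pixel and a dim bottom-right pixel yields one high note at the start of the sweep and one quieter, low note halfway through.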

The EyeMusic's algorithm uses a different musical instrument for each of five colors: white (vocals), blue (trumpet), red (reggae organ), green (synthesized reed) and yellow (violin); black is represented by silence. Prof. Amedi notes that "The notes played span five octaves and were carefully chosen by musicians to create a pleasant experience for the users." Sample sound recordings are available at http://brain.huji.ac.il/em/.
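Before an instrument can be assigned, each pixel's color has to be reduced to one of the device's categories. A simple nearest-color quantizer could do this; the sketch below is an illustration only, and the RGB palette values and the yellow-to-violin assignment are assumptions, not details taken from the article.

```python
# Hypothetical sketch: quantize an RGB pixel to the nearest of the six
# EyeMusic categories (five instrument colors plus black/silence).
# The palette RGB values below are illustrative assumptions.
PALETTE = {
    "white (vocals)":           (255, 255, 255),
    "blue (trumpet)":           (0, 0, 255),
    "red (reggae organ)":       (255, 0, 0),
    "green (synthesized reed)": (0, 255, 0),
    "yellow (violin)":          (255, 255, 0),
    "black (silence)":          (0, 0, 0),
}

def classify_pixel(rgb):
    """Return the palette entry closest to rgb in squared RGB distance."""
    return min(PALETTE,
               key=lambda name: sum((a - b) ** 2
                                    for a, b in zip(rgb, PALETTE[name])))
```

A reddish pixel such as (250, 10, 5) would then be rendered on the reggae organ, while near-black pixels stay silent.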

"We demonstrated in this study that the EyeMusic, which employs pleasant musical scales to convey visual information, can be used after a short training period (in some cases, less than half an hour) to guide movements, similar to movements guided visually," explain lead investigators Drs. Shelly Levy-Tzedek, an ELSC researcher at the Faculty of Medicine, Hebrew University, Jerusalem, and Prof. Amir Amedi. "The level of accuracy reached in our study indicates that performing daily tasks with an SSD is feasible, and indicates a potential for rehabilitative use."

The study tested the ability of 18 blindfolded sighted individuals to perform movements guided by the EyeMusic, and compared these movements with movements performed under visual guidance. The blindfolded participants first underwent a short familiarization session, in which they learned to identify the location of a single object (a white square) or of two adjacent objects (a white and a blue square).

In the test sessions, participants used a stylus on a digitizing tablet to point to a white square located to the north, south, east or west. In one block of trials (the SSD block) they were blindfolded and received feedback via the EyeMusic; in the other (the VIS block) the arm was placed under an opaque cover, so they could see the screen but had no direct visual feedback from the hand, and the feedback was visual. In both blocks, the endpoint location of the hand was marked by a blue square.

"Participants were able to use auditory information to create a relatively precise spatial representation," notes Dr. Levy-Tzedek.

The study lends support to the hypothesis that the brain's representation of space may not depend on the modality through which spatial information is received, and that very little training is needed to create a representation of space without vision, using sounds to guide fast and accurate movements. "SSDs may have great potential to provide detailed spatial information for the visually impaired, allowing them to interact with their external environment and successfully make movements based on this information, but further research is now required to evaluate the use of our device in the blind," concludes Dr. Levy-Tzedek. These results demonstrate the potential application of the EyeMusic in performing everyday tasks, from accurately reaching for the red (but not the green!) apples in the produce aisle to, perhaps one day, playing a Kinect/Xbox game.

More information: “Fast, Accurate Reaching Movements with a Visual-to-Auditory Sensory Substitution Device,” by S. Levy-Tzedek, S. Hanassy, S. Abboud, S. Maidenbaum and A. Amedi, Restorative Neurology and Neuroscience 30(4), July 2012. DOI: 10.3233/RNN-2012-110219



User comments


Tesla2
Jul 05, 2012
The vOICe has been out for some time, and does something similar, but I think more accurately.
http://www.seeing...und.com/

I understand the incentive to create a more pleasant sound for people to listen to, and to make the system easier to pick up. The vOICe is not pleasant to listen to, but the authors of this paper are also adding significant harmonic content to the encoding. I think if you were to convert the sound back to an image, using the simple rules of pitch, volume and time, you would end up with "shadows" of the image all over the place. I think this significantly limits the effective resolution that is available with this method. The vOICe may be harder to learn, and unpleasant to listen to, but I think it has more accuracy potential.