Device converting images into music helps individuals without vision reach for objects in space

July 5, 2012
Left: An illustration of the EyeMusic SSD in use: a camera mounted on the user's glasses feeds the device, and headphones play the musical notes that build a mental image of the visual scene in front of him as he reaches for the red apple in a pile of green ones. Top right: A close-up of the glasses-mounted camera and headphones. Bottom right: The hand-held camera pointed at the object of interest. Credit: Maxim Dupliy, Amir Amedi and Shelly Levy-Tzedek

Sensory substitution devices (SSDs) use sound or touch to help the visually impaired perceive the visual scene surrounding them. The ideal SSD would assist not only in sensing the environment but also in performing daily activities based on this input, such as accurately reaching for a coffee cup or shaking a friend's hand. In a new study, scientists trained blindfolded sighted participants to perform fast and accurate movements using a new SSD, called EyeMusic. Their results are published in the July issue of Restorative Neurology and Neuroscience.

The EyeMusic, developed by a team of researchers at the Hebrew University of Jerusalem, employs pleasant musical tones and scales to help the visually impaired "see" using music. This non-invasive device converts images into a combination of musical notes, or "soundscapes."

The device was developed by senior author Prof. Amir Amedi and his team at the Edmond and Lily Safra Center for Brain Sciences (ELSC) and the Institute for Medical Research Israel-Canada at the Hebrew University. The EyeMusic scans an image and represents pixels at high vertical locations as high-pitched musical notes and pixels at low vertical locations as low-pitched notes, according to a musical scale chosen so that the notes sound pleasant in many possible combinations. The image is scanned continuously from left to right, and an auditory cue marks the start of each scan. The horizontal location of a pixel is indicated by the timing of its musical note relative to the cue (the later the note sounds after the cue, the farther the pixel is to the right), and brightness is encoded by the loudness of the sound.
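The mapping described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the EyeMusic's actual implementation: the scan duration, the pentatonic scale, and the function name `image_to_soundscape` are all assumptions standing in for the design choices the team actually made.

```python
import numpy as np

# A pentatonic scale (intervals in semitones), repeated over octaves:
# notes drawn from such a scale sound consonant in almost any combination.
# This is an assumed stand-in for the scale the EyeMusic team chose.
PENTATONIC = [0, 2, 4, 7, 9]

def image_to_soundscape(image, scan_duration=2.0):
    """Map a 2-D grayscale image (rows x cols, values 0-1) to sound events.

    Each non-black pixel becomes an (onset_time, semitone, loudness) tuple:
      - column index -> onset time (left-to-right scan after a start cue)
      - row index    -> pitch (top rows are high notes, bottom rows low)
      - brightness   -> loudness (black pixels are silent)
    """
    rows, cols = image.shape
    events = []
    for col in range(cols):
        onset = scan_duration * col / cols   # later onset = farther right
        for row in range(rows):
            loudness = float(image[row, col])
            if loudness == 0.0:              # black is silence
                continue
            # Invert the row index so row 0 (the top) gets the highest
            # note, then pick a scale degree spanning several octaves.
            degree = rows - 1 - row
            semitone = (12 * (degree // len(PENTATONIC))
                        + PENTATONIC[degree % len(PENTATONIC)])
            events.append((onset, semitone, loudness))
    return events
```

Feeding the events to a synthesizer, ordered by onset time, would reproduce the left-to-right "sweep" a user hears on each scan.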

The EyeMusic's algorithm uses a different musical instrument for each of the five colors: white (vocals), blue (trumpet), red (reggae organ), green (synthesized reed) and yellow (violin); black is represented by silence. Prof. Amedi notes that "The notes played span five octaves and were carefully chosen by musicians to create a pleasant experience for the users." Sample sound recordings are available at http://brain.huji.ac.il/em/.
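In code, the color scheme amounts to a lookup from pixel color to timbre. The sketch below covers the colors the article names; the table name and string labels are illustrative, not the device's actual synthesis parameters.

```python
# Illustrative color-to-instrument table following the article's scheme.
# Black maps to None, i.e. silence.
INSTRUMENTS = {
    "white": "vocals",
    "blue": "trumpet",
    "red": "reggae organ",
    "green": "synthesized reed",
    "black": None,
}

def timbre_for(color):
    """Return the instrument used to render a pixel of the given color,
    or None for silence (black) and for colors outside the scheme."""
    return INSTRUMENTS.get(color.lower())
```

Quantizing every camera pixel to one of these few colors keeps the soundscape sparse enough to parse by ear.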

"We demonstrated in this study that the EyeMusic, which employs pleasant musical scales to convey visual information, can be used after a short training period (in some cases, less than half an hour) to guide movements, similar to movements guided visually," explain lead author Dr. Shelly Levy-Tzedek, an ELSC researcher at the Faculty of Medicine of the Hebrew University of Jerusalem, and senior author Prof. Amir Amedi. "The level of accuracy reached in our study indicates that performing daily tasks with an SSD is feasible, and indicates a potential for rehabilitative use."

The study tested the ability of 18 blindfolded sighted individuals to perform movements guided by the EyeMusic, and compared those movements to those performed with visual guidance. At first, the blindfolded participants underwent a short familiarization session, where they learned to identify the location of a single object (a white square) or of two adjacent objects (a white and a blue square).

In the test sessions, participants used a stylus on a digitizing tablet to point to a white square located to the north, south, east or west. In one block of trials they were blindfolded (SSD block); in the other (VIS block), their arm was placed under an opaque cover, so they could see the screen but had no direct visual feedback from the hand. The endpoint location of the hand was marked by a blue square. In the SSD block, participants received this feedback via the EyeMusic; in the VIS block, the feedback was visual.

"Participants were able to use auditory information to create a relatively precise spatial representation," notes Dr. Levy-Tzedek.

The study lends support to the hypothesis that representation of space in the brain may not be dependent on the modality with which the spatial information is received, and that very little training is required to create a representation of space without vision, using sounds to guide fast and accurate movements. "SSDs may have great potential to provide detailed spatial information for the visually impaired, allowing them to interact with their external environment and successfully make movements based on this information, but further research is now required to evaluate the use of our device in the blind," concludes Dr. Levy-Tzedek. These results demonstrate the potential application of the EyeMusic in performing everyday tasks, from accurately reaching for the red (but not the green!) apples in the produce aisle, to, perhaps one day, playing a Kinect/Xbox game.

More information: “Fast, Accurate Reaching Movements with a Visual-to-Auditory Sensory Substitution Device,” by S. Levy-Tzedek, S. Hanassy, S. Abboud, S. Maidenbaum, A. Amedi. Restorative Neurology and Neuroscience, 30: 4 (July 2012). DOI: 10.3233/RNN-2012-110219

1 comment

Tesla2 (Jul 05, 2012):
The vOICe has been out for some time, and does something similar, but I think more accurately.
http://www.seeing...und.com/

I understand the incentive to create a more pleasant sound for people to listen to, and ease of picking this up. The vOICe is not pleasant to listen to, but the authors of this paper are also adding significant harmonic content to the image. I think if you were to back-convert the sound back to an image, using the simple rules of pitch, volume and time, you would end up with "shadows" of the image all over the place. I think this significantly limits the effective resolution that is available with this method. vOICe may be harder to learn, and unpleasant to listen to, but I think it has more accuracy potential.
