People make music with instruments and music software. For hundreds of years, musical compositions have been stored graphically in a notation that allows musicians to see the notes and recreate the original music. Is it possible to use music to help the blind not only see, but build a spatial perception akin to 3D imaging?
Music has been said to calm, invigorate, and inspire. Found in the most ancient cultures, music has been at the core of communication and survival. Now music is being used to help the blind see by converting light into music. In essence, music helps move us in new directions, or dimensions.
DJ and computer software have long been used to turn music into displays of light. In a new study, the conversion runs the other way: light is converted into music that helps the blind decipher the world around them.
When people become blind, they generally do not lose the ability to see at the level of the brain; rather, damage to the eye prevents sensory signals from being transmitted from the periphery. When no sensory signals reach the neuronal networks of the brain, the brain produces no responses.
The concept of 3D, for example, is brought about through brain processing. Some people cannot see in 3D as a result of a visual handicap. A 3D television requires sighted viewers to wear glasses; the images reaching each eye are sent to the brain, which fuses them into a 3D percept. Those glasses serve as a form of sensory substitution, altering what normal vision receives so the brain interprets televised images in 3D. Special substitution devices can meld our senses in many new ways. One such approach takes music and transforms it into a type of vision.
Game systems like the Kinect use forms of sensory substitution.
Substituting senses for reinterpretation by the brain has been explored since the 1960s. Sensory substitution devices (SSDs) use sound or touch to help the visually impaired perceive the visual scene surrounding them. The ideal SSD would assist not only in sensing the environment but also in performing daily activities based on this input — accurately reaching for a coffee cup, for example, or shaking a friend’s hand. In a new study, scientists trained blindfolded sighted participants to perform fast and accurate movements using a new SSD, called EyeMusic.
The EyeMusic, developed by a team of researchers at the Hebrew University of Jerusalem, employs pleasant musical tones and scales to help the visually impaired “see” using music. This non-invasive SSD converts images into a combination of musical notes, or “soundscapes.”
The EyeMusic scans an image and represents pixels at high vertical locations as high-pitched musical notes and low vertical locations as low-pitched notes according to a musical scale that will sound pleasant in many possible combinations. The image is scanned continuously, from left to right, and an auditory cue is used to mark the start of the scan. The horizontal location of a pixel indicates the timing of the musical notes relative to the cue (the later it is sounded after the cue, the farther it is to the right), and the brightness is encoded by the loudness of the sound. The EyeMusic’s algorithm uses different musical instruments for each of the five colors: white (vocals), blue (trumpet), red (reggae organ), green (synthesized reed), and yellow (violin). Black is represented by silence.
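The mapping just described — vertical position to pitch, horizontal position to onset time, brightness to loudness, and color to instrument — can be sketched in code. This is a hypothetical illustration, not the published EyeMusic implementation: the scale choice, scan rate, and base pitch below are assumptions.

```python
# Hypothetical sketch of an EyeMusic-style image-to-soundscape encoding.
# Scale, scan rate, and base pitch are illustrative assumptions.

# A pentatonic scale (semitone offsets above a base pitch) sounds pleasant
# in most note combinations, as the article notes.
PENTATONIC = [0, 2, 4, 7, 9]

# The five colors the article lists, plus black (silence, handled below).
INSTRUMENTS = {
    "white": "vocals",
    "blue": "trumpet",
    "red": "reggae organ",
    "green": "synthesized reed",
    "yellow": "violin",
}

def encode_image(pixels, column_duration=0.1, base_midi=48):
    """Convert a grid of (color, brightness) pixels into timed note events.

    pixels[row][col] = (color_name, brightness in 0..1); row 0 is the top
    of the image, so it maps to the highest pitch. Returns a list of
    (onset_seconds, midi_pitch, instrument, volume) tuples.
    """
    events = []
    n_rows = len(pixels)
    for col in range(len(pixels[0])):
        # Left-to-right scan: a later onset means farther to the right.
        onset = col * column_duration
        for row in range(n_rows):
            color, brightness = pixels[row][col]
            if color == "black" or brightness == 0:
                continue  # black is represented by silence
            # Higher rows -> higher notes, stepping through the scale
            # and up an octave each time the scale wraps around.
            degree = n_rows - 1 - row
            pitch = (base_midi
                     + 12 * (degree // len(PENTATONIC))
                     + PENTATONIC[degree % len(PENTATONIC)])
            # Brightness is encoded as loudness (volume).
            events.append((onset, pitch, INSTRUMENTS[color], brightness))
    return events

# A tiny 2x2 image: a white square top-left, a dimmer blue square bottom-right.
image = [
    [("white", 1.0), ("black", 0.0)],
    [("black", 0.0), ("blue", 0.5)],
]
for event in encode_image(image):
    print(event)
```

Running the sketch on the tiny image yields two events: the white pixel sounds first (higher pitch, "vocals"), and the blue pixel sounds one scan step later (lower pitch, "trumpet", half volume) — exactly the left-to-right, high-to-low ordering the paragraph describes.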
The study tested the ability of 18 blindfolded sighted individuals to perform movements guided by the EyeMusic, and compared them with movements performed under visual guidance. First, the blindfolded participants underwent a short familiarization session in which they learned to identify the location of a single object (a white square) or of two adjacent objects (a white and a blue square).
In the test sessions, participants used a stylus on a digitizing tablet to point to a white square located to the north, south, east, or west. In one block of trials they were blindfolded (SSD block); in the other (VIS block), the arm was placed under an opaque cover, so they could see the screen but had no direct visual feedback from the hand. The endpoint location of the hand was marked by a blue square. In the SSD block, participants received feedback via the EyeMusic; in the VIS block, the feedback was visual.
The study lends support to the hypothesis that representation of space in the brain may not be dependent on the modality with which the spatial information is received, and that very little training is required to create a representation of space without vision, using sounds to guide fast and accurate movements. These results demonstrate the potential application of the EyeMusic in performing everyday tasks – from accurately reaching for the red (but not the green!) apples in the produce aisle, to, perhaps one day, playing a Kinect / Xbox game.
While the study used a small sample to test its hypothesis, the results suggest it may be possible to train visually challenged individuals to perceive the visual world through music. With that perception, and with sophisticated SSD transducers, the blind may one day truly see in 3D.