Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device

2016 ◽  
Vol 29 (4-5) ◽  
pp. 337-363 ◽  
Author(s):  
Giles Hamilton-Fletcher ◽  
Thomas D. Wright ◽  
Jamie Ward

Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, giving a visually impaired user access to visual information. Previous SSDs have largely avoided colour, and those that do encode it have assigned sounds to colours in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users whose device was coded either in line with or opposite to sound–colour correspondences. Users given the correspondence-based mappings showed improved colour memory and made fewer colour errors. Interestingly, the colour–sound mappings that produced the largest improvements in the associative memory task also produced the greatest gains in recognising realistic objects featuring those colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance to both colour and correspondences for sensory substitution use.
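To make the idea of correspondence-based coding concrete, here is a minimal Python sketch. It is not the authors' Creole algorithm: it simply maps luminance to pitch (lighter colours sound higher) and saturation to loudness, two commonly reported cross-modal correspondences, while the frequency range and the hue-to-timbre table are illustrative assumptions.

```python
# A minimal sketch of correspondence-based colour-to-sound coding (not the
# authors' Creole algorithm). Lighter colours -> higher pitch, more saturated
# colours -> louder; the hue-to-timbre table is a hypothetical placeholder.
import colorsys

def colour_to_sound(r: float, g: float, b: float) -> dict:
    """Map an RGB colour (components in 0..1) to illustrative sound parameters."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return {
        "pitch_hz": 220.0 * 2 ** (2 * l),  # luminance -> pitch: 220 Hz (black) to 880 Hz (white)
        "loudness": 0.2 + 0.8 * s,         # saturation -> loudness
        "timbre_id": int(h * 8) % 8,       # hue -> one of 8 hypothetical timbres
    }

print(colour_to_sound(1.0, 1.0, 0.0))  # bright, saturated yellow -> high pitch, loud
```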

Author(s):  
Michael J. Proulx ◽  
David J. Brown ◽  
Achille Pasqualotto

Vision is the default sensory modality for normal spatial navigation in humans. Touch is restricted to providing information about peripersonal space, whereas detecting and avoiding obstacles in extrapersonal space is key for efficient navigation. Hearing is restricted to the detection of objects that emit noise, yet many obstacles such as walls are silent. Sensory substitution devices provide a means of translating distal visual information into a form that visually impaired individuals can process through either touch or hearing. Here we will review findings from various sensory substitution systems for the processing of visual information that can be classified as what (object recognition), where (localization), and how (perception for action) processing. Different forms of sensory substitution excel at some tasks more than others. Spatial navigation brings together these different forms of information and provides a useful model for comparing sensory substitution systems, with important implications for rehabilitation, neuroanatomy, and theories of cognition.


2014 ◽  
Vol 8 (2) ◽  
pp. 77-94 ◽  
Author(s):  
Juan D. Gomez ◽  
Guido Bologna ◽  
Thierry Pun

Purpose – The purpose of this paper is to overcome the limitations of sensory substitution devices (SSDs) in representing the high-level or conceptual information involved in vision, limitations that stem mainly from the biological sensory mismatch between sight and the substituting senses, and thus to provide the visually impaired with a more practical and functional SSD.
Design/methodology/approach – Unlike other approaches, this SSD extends beyond a sensing prototype by integrating computer vision methods to produce reliable knowledge about the physical world (at the lowest cost to the user). Importantly though, the authors do not abandon the typical encoding of low-level features into sound. The paper simply argues that any visual perception that can be achieved through hearing needs to be reinforced or enhanced by techniques that lie beyond mere visual-to-audio mapping (e.g. computer vision, image processing).
Findings – Experiments reported in this paper reveal that See ColOr is learnable and functional, and provides easy interaction. In moderate time, participants were able to grasp visual information of the world from which they could derive spatial awareness, the ability to find someone, the location of everyday objects, and the skill to walk safely while avoiding obstacles. These encouraging results open a door toward autonomous mobility for the blind.
Originality/value – The paper uses the "extended" approach to introduce and justify that the system is brand new, as are the experimental studies on computer-vision extension of SSDs presented here. This is also the first paper reporting on a completed, integrated and functional system.
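As a rough illustration of the "extended" approach, the sketch below (hypothetical names; it is not the See ColOr implementation) keeps a low-level pixel-to-tone layer while a computer-vision layer contributes sparse, conceptual speech cues such as a detected person:

```python
# A schematic sketch of an SSD extended with computer vision (hypothetical,
# not the See ColOr implementation): low-level tones encode pixel luminance,
# while detections from a vision module add higher-level spoken cues.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # e.g., "person", "door"
    azimuth: float  # horizontal position, -1 (far left) .. +1 (far right)

def sonify_frame(pixel_row, detections):
    events = []
    # Low-level layer: one tone per pixel, panned by horizontal position.
    for i, luminance in enumerate(pixel_row):
        pan = 2 * i / max(len(pixel_row) - 1, 1) - 1
        events.append(("tone", pan, 200 + 600 * luminance))
    # High-level layer: computer vision contributes sparse speech cues.
    for d in detections:
        events.append(("speech", d.azimuth, d.label))
    return events

print(sonify_frame([0.1, 0.8, 0.4], [Detection("person", 0.5)]))
```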


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7351 ◽
Author(s):  
Dominik Osiński ◽  
Marta Łukowska ◽  
Dag Roar Hjelme ◽  
Michał Wierzchoń

The successful development of a system realizing color sonification would enable auditory representation of the visual environment. The primary beneficiaries of such a system would be people who cannot directly access visual information, the visually impaired community. Despite the plethora of sensory substitution devices, developing systems that provide intuitive color sonification remains a challenge. This paper presents the design considerations, development, and usability audit of a sensory substitution device that converts spatial color information into soundscapes. The implemented wearable system uses a dedicated color space and continuously generates natural, spatialized sounds based on the information acquired from a camera. We developed two head-mounted prototype devices and two graphical user interface (GUI) versions. The first GUI is dedicated to researchers, while the second has been designed to be easily accessible for visually impaired persons. Finally, we ran fundamental usability tests to evaluate the new spatial color sonification algorithm and to compare the two prototypes. Furthermore, we propose recommendations for the development of the next iteration of the system.
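A minimal sketch of this kind of spatial colour sonification is given below; the pitch range, the mapping of hue bands to sound classes, and the function names are illustrative assumptions, not the authors' dedicated colour space:

```python
# An illustrative sketch of spatial colour sonification: each image region
# produces a sound whose virtual source direction matches the region's
# horizontal position (parameter choices are assumptions, not the authors').
import math

def regions_to_soundscape(regions, image_width):
    """regions: list of (x_centre_px, hue_deg, lightness) tuples."""
    events = []
    for x, hue, lightness in regions:
        azimuth = math.radians(90 * (2 * x / image_width - 1))  # -90 deg (left) .. +90 deg (right)
        pitch_hz = 200 + 700 * lightness  # lighter regions sound higher
        sound_id = int(hue // 45) % 8     # 8 hue bands -> 8 natural-sound classes
        events.append({"azimuth_rad": azimuth, "pitch_hz": pitch_hz, "sound": sound_id})
    return events

print(regions_to_soundscape([(32, 120.0, 0.6)], image_width=640))
```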


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0250281 ◽
Author(s):  
Galit Buchs ◽  
Benedetta Haimler ◽  
Menachem Kerem ◽  
Shachar Maidenbaum ◽  
Liraz Braun ◽  
...  

Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck to the adoption of SSDs in everyday life by blind users is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof of concept for the efficacy of an online self-training program, developed for learning the basics of the EyeMusic visual-to-auditory SSD and tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory vs. unisensory as well as perceptual vs. descriptive feedback approaches. To these aims, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups differing by the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual simultaneous, audio-visual perceptual interleaved, and a control group that had no training. At baseline, before any EyeMusic training, participants' identification of SSD objects was significantly above chance, highlighting the algorithm's intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training suggest a trend toward an advantage of multisensory over unisensory feedback strategies, while no trend emerged for perceptual vs. descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given that SSD algorithms are based on such correspondences. Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning, and suggest that for these initial stages unisensory training, which is easily implemented for blind and visually impaired individuals as well, may suffice. Together, these findings can potentially boost the use of SSDs for rehabilitation.
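The EyeMusic is described in the literature as a left-to-right sweep of the image in which vertical position maps to pitch on a pentatonic scale and colour selects the instrument. The sketch below is a simplified rendering of that scan style; the scale, brightness threshold, and timing are illustrative choices, not the EyeMusic's actual parameters.

```python
# A simplified sketch of EyeMusic-style sweep sonification: columns are played
# left to right, row height sets pitch, and each pixel keeps a colour label
# standing in for the instrument choice (parameters are illustrative only).
PENTATONIC = [261.6, 293.7, 329.6, 392.0, 440.0]  # C major pentatonic, one octave

def sweep(image, colours, ms_per_column=50):
    """image: 2D list of brightness values (rows x cols); colours: matching labels."""
    timeline = []
    n_rows = len(image)
    for col in range(len(image[0])):
        onset_ms = col * ms_per_column
        for row in range(n_rows):
            if image[row][col] > 0.5:  # sound only sufficiently bright pixels
                pitch = PENTATONIC[(n_rows - 1 - row) % len(PENTATONIC)]  # higher row -> higher pitch
                timeline.append((onset_ms, pitch, colours[row][col]))
    return timeline

img = [[0.0, 1.0], [1.0, 0.0]]
labels = [["blue", "white"], ["red", "blue"]]
print(sweep(img, labels))  # [(0, 261.6, 'red'), (50, 293.7, 'white')]
```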


Author(s):  
Kavita Pandey ◽  
Dhiraj Pandey ◽  
Vatsalya Yadav ◽  
Shriya Vikhram

Background: According to the WHO report, around 4.07% of the world's population is visually impaired, and about 90% of visually impaired users live in the lower economic strata. Most fast-moving technological inventions miss the needs of these people: technologies are designed mainly for mainstream users, so visually impaired people are often unable to access them. This inability arises primarily for reasons such as cost (for example, a Perkins Brailler costs 80-248 dollars for the simple purpose of Braille input) and the hassle of carrying bulky equipment.
Objective: Keeping all this in mind, and aiming to make technology their best friend, MAGIC-1 has been designed. The goal is to provide a solution in the form of an application that helps visually impaired users in their daily life activities.
Method: The proposed solution assists visually impaired users through smartphone technology. For visually impaired users who ever wished for a touch guide on a smartphone, MAGIC-1 is a solution that consolidates all the important features of their daily activities.
Results: The performance of the solution as a whole, and of its individual features, has been tested with sample visually impaired users in terms of usability, utility, and other metrics. Their performance in terms of Errors per Word and Words per Minute has also been observed.
Conclusion: MAGIC-1, the proposed solution, works as an assistant to visually impaired users, helping them overcome their daily struggles and stay more connected to the world. A visually impaired user can communicate via a mobile device with features like eyes-free texting using Braille, voice calling, etc., and can easily get help in an emergency situation through SOS emergency calling and video assistance.
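As an illustration of eyes-free Braille texting, the sketch below (a hypothetical layout, not MAGIC-1's actual input scheme) treats six fixed screen regions as the six Braille dots and decodes the set of touched regions into a character using standard Braille dot assignments:

```python
# A minimal sketch of eyes-free Braille input (hypothetical layout, not
# MAGIC-1's actual scheme): the set of simultaneously touched dot regions
# is looked up in a table of standard Braille dot assignments.
BRAILLE = {
    frozenset({1}): "a", frozenset({1, 2}): "b", frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d", frozenset({1, 5}): "e",  # table truncated for brevity
}

def decode_tap(touched_dots):
    """touched_dots: set of dot numbers (1..6) pressed at once."""
    return BRAILLE.get(frozenset(touched_dots), "?")  # '?' for unmapped patterns

print(decode_tap({1, 2}))  # -> 'b'
```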


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jacques Pesnot Lerousseau ◽  
Gabriel Arnold ◽  
Malika Auvray

Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on the visual versus auditory or tactile nature of sensory substitution, over the past decade the idea has emerged that it reflects a mixture of both. In order to investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device which translates visual images into sounds. In addition, participants' auditory abilities and their phenomenologies were measured. Our study revealed that, after training, processes shared with vision were involved when participants were asked to identify sounds, as their performance in sound identification was influenced by the simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.


2017 ◽  
Vol 11 ◽  
Author(s):  
Steven D. Shirk ◽  
Donald G. McLaren ◽  
Jessica S. Bloomfield ◽  
Alex Powers ◽  
Alec Duffy ◽  
...  

2017 ◽  
Vol 118 (4) ◽  
pp. 2458-2469 ◽  
Author(s):  
Wei Song Ong ◽  
Koorosh Mirpour ◽  
James W. Bisley

We can search for and locate specific objects in our environment by looking for objects with similar features. Object recognition involves stimulus similarity responses in ventral visual areas and task-related responses in prefrontal cortex. We tested whether neurons in the lateral intraparietal area (LIP) of posterior parietal cortex could form an intermediary representation, collating information from object-specific similarity map representations to allow general decisions about whether a stimulus matches the object being looked for. We hypothesized that responses to stimuli would correlate with how similar they are to a sample stimulus. When animals compared two peripheral stimuli to a sample at their fovea, the response to the matching stimulus was similar, independent of the sample identity, but the response to the nonmatch depended on how similar it was to the sample: the more similar, the greater the response to the nonmatch stimulus. These results could not be explained by task difficulty or confidence. We propose that LIP uses its known mechanistic properties to integrate incoming visual information, including that from the ventral stream about object identity, to create a dynamic representation that is concise, low dimensional, and task relevant and that signifies the choice priorities in mental matching behavior.

NEW & NOTEWORTHY Studies in object recognition have focused on the ventral stream, in which neurons respond as a function of how similar a stimulus is to their preferred stimulus, and on prefrontal cortex, where neurons indicate which stimulus is being looked for. We found that parietal area LIP uses its known mechanistic properties to form an intermediary representation in this process. This creates a perceptual similarity map that can be used to guide decisions in prefrontal areas.
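The central finding can be summarised in a toy response model (illustrative only, not the authors' analysis): a match evokes a uniformly high response, while a nonmatch response scales with its feature similarity to the sample.

```python
# A toy model of the LIP similarity-map finding (illustrative only): matches
# evoke a high response regardless of identity; nonmatch responses grow with
# the stimulus's similarity to the sample.
def lip_response(stimulus, sample, similarity):
    """similarity: callable returning 0..1; output is a nominal firing rate."""
    if stimulus == sample:
        return 1.0                                   # match: high, identity-independent
    return 0.3 + 0.6 * similarity(stimulus, sample)  # nonmatch: scales with similarity

sim = lambda a, b: 1 - abs(a - b)  # 1-D feature space for illustration
print(lip_response(0.8, 0.2, sim), lip_response(0.2, 0.2, sim))  # nonmatch ~0.54, match 1.0
```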


2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Moritz Köster ◽  
Holger Finger ◽  
Sebastian Graetz ◽  
Maren Kater ◽  
Thomas Gruber
