See ColOr: an extended sensory substitution device for the visually impaired

2014 ◽  
Vol 8 (2) ◽  
pp. 77-94 ◽  
Author(s):  
Juan D. Gomez ◽  
Guido Bologna ◽  
Thierry Pun

Purpose – The purpose of this paper is to overcome the limitations of sensory substitution devices (SSDs) in representing high-level or conceptual visual information, limitations that stem mainly from the biological sensory mismatch between sight and the substituting senses, and thus to provide the visually impaired with a more practical and functional SSD.

Design/methodology/approach – Unlike other approaches, this SSD extends beyond a sensing prototype by integrating computer vision methods to produce reliable knowledge about the physical world (at the lowest cost to the user). Importantly, the authors do not abandon the typical encoding of low-level features into sound. The paper simply argues that any visual perception that can be achieved through hearing needs to be reinforced or enhanced by techniques that lie beyond mere visual-to-audio mapping (e.g. computer vision, image processing).

Findings – Experiments reported in this paper reveal that See ColOr is learnable, functional, and provides easy interaction. In moderate time, participants were able to grasp visual information about the world from which they could derive spatial awareness, the ability to find someone, the location of daily objects, and the skill to walk safely while avoiding obstacles. The encouraging results open a door toward autonomous mobility for the blind.

Originality/value – The paper uses the "extended" approach to introduce and justify that the system is brand new, as are the presented experimental studies on computer-vision extension of SSDs. This is also the first paper reporting on a completed, integrated and functional system.

2016 ◽  
Vol 29 (4-5) ◽  
pp. 337-363 ◽  
Author(s):  
Giles Hamilton-Fletcher ◽  
Thomas D. Wright ◽  
Jamie Ward

Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have largely avoided colour, and when they have encoded it, they have assigned sounds to colours in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users whose device was coded either in line with or opposite to sound–colour correspondences. Users who had the correspondence-based mappings showed improved colour memory and made fewer colour errors. Interestingly, the colour–sound mappings that provided the greatest improvements during the associative memory task also produced the greatest gains in recognising realistic objects featuring those colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance for both colour and correspondences for sensory substitution use.
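The abstract above describes correspondence-based colour-to-sound coding (e.g. lighter colours mapped to higher pitch). The Creole's actual algorithm is not given here, so the sketch below is purely illustrative: hypothetical constants map luminance to pitch, saturation to loudness, and hue to one of eight timbres, following the general direction of the cross-modal correspondence literature.

```python
import colorsys

def colour_to_sound(r, g, b):
    """Map an RGB colour (0-255 channels) to hypothetical sound parameters.

    Correspondence-based direction only: lighter -> higher pitch,
    more saturated -> louder. All constants are illustrative assumptions,
    not the Creole's published mapping.
    """
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    pitch_hz = 220.0 * (2.0 ** (2.0 * l))   # luminance -> pitch (A3 to A5)
    loudness = 0.2 + 0.8 * s                # saturation -> loudness
    timbre_index = int(h * 8) % 8           # hue -> one of 8 timbres
    return pitch_hz, loudness, timbre_index

# White is maximally light and unsaturated: highest pitch, quietest level
print(colour_to_sound(255, 255, 255))
```

A reversed mapping (as in the study's "opposite" condition) would simply invert the luminance-to-pitch term.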


Author(s):  
Michael J. Proulx ◽  
David J. Brown ◽  
Achille Pasqualotto

Vision is the default sensory modality for normal spatial navigation in humans. Touch is restricted to providing information about peripersonal space, whereas detecting and avoiding obstacles in extrapersonal space is key for efficient navigation. Hearing is restricted to the detection of objects that emit noise, yet many obstacles such as walls are silent. Sensory substitution devices provide a means of translating distal visual information into a form that visually impaired individuals can process through either touch or hearing. Here we will review findings from various sensory substitution systems for the processing of visual information that can be classified as what (object recognition), where (localization), and how (perception for action) processing. Different forms of sensory substitution excel at some tasks more than others. Spatial navigation brings together these different forms of information and provides a useful model for comparing sensory substitution systems, with important implications for rehabilitation, neuroanatomy, and theories of cognition.


2015 ◽  
Vol 9 (2) ◽  
pp. 71-85
Author(s):  
Catherine Todd ◽  
Swati Mallya ◽  
Sara Majeed ◽  
Jude Rojas ◽  
Katy Naylor

Purpose – VirtuNav is a haptic- and audio-enabled virtual reality simulator that enables persons with visual impairment to explore a 3D computer model of a real-life indoor location, such as a room or building. The purpose of this paper is to aid pre-planning and spatial awareness, so that a user can become more familiar with the environment before experiencing it in reality.

Design/methodology/approach – The system offers two unique interfaces: a free-roam interface where the user can navigate, and an edit mode where the administrator can manage test users and maps and retrieve test data.

Findings – System testing reveals that spatial awareness and memory mapping improve with repeated user sessions in VirtuNav.

Research limitations/implications – VirtuNav is a research tool for investigating user familiarity developed after repeated exposure to the simulator, to determine the extent to which haptic and/or sound cues improve a visually impaired user’s ability to navigate a room or building with or without occlusion.

Social implications – The application may prove useful for greater real-world engagement: building confidence in real-world experiences and enabling persons with sight impairment to more comfortably and readily explore and interact with environments formerly unfamiliar or inaccessible to them.

Originality/value – VirtuNav is developed as a practical application offering several unique features, including map design, semi-automatic 3D map reconstruction, and object classification from 2D map data. Visual and haptic rendering of real-time 3D map navigation is provided, as well as automated administrative functions for shortest-path determination, actual-path comparison, and performance-indicator assessment: exploration time and collision data.
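The administrative functions mentioned above include shortest-path determination for comparison against the path a user actually walked. The abstract does not specify VirtuNav's algorithm, so the following is a minimal sketch assuming a 4-connected occupancy grid and breadth-first search, which suffices for unweighted indoor maps.

```python
from collections import deque

def shortest_path_length(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid (0 = free, 1 = wall).

    Illustrative stand-in for a shortest-path computation used in
    actual-path comparison; VirtuNav's real method is not specified.
    Returns the number of 4-connected steps, or -1 if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

room = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(shortest_path_length(room, (0, 0), (2, 0)))  # route around the wall
```

Dividing the user's actual step count by this optimal length would give one simple path-efficiency performance indicator.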


2018 ◽  
Vol 95 (9) ◽  
pp. 757-765 ◽  
Author(s):  
Rebekka Hoffmann ◽  
Simone Spagnol ◽  
Árni Kristjánsson ◽  
Runar Unnthorsson

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7351
Author(s):  
Dominik Osiński ◽  
Marta Łukowska ◽  
Dag Roar Hjelme ◽  
Michał Wierzchoń

The successful development of a system realizing color sonification would enable auditory representation of the visual environment. The primary beneficiary of such a system would be people that cannot directly access visual information—the visually impaired community. Despite the plethora of sensory substitution devices, developing systems that provide intuitive color sonification remains a challenge. This paper presents design considerations, development, and the usability audit of a sensory substitution device that converts spatial color information into soundscapes. The implemented wearable system uses a dedicated color space and continuously generates natural, spatialized sounds based on the information acquired from a camera. We developed two head-mounted prototype devices and two graphical user interface (GUI) versions. The first GUI is dedicated to researchers, and the second has been designed to be easily accessible for visually impaired persons. Finally, we ran fundamental usability tests to evaluate the new spatial color sonification algorithm and to compare the two prototypes. Furthermore, we propose recommendations for the development of the next iteration of the system.
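The system above converts spatial colour information into spatialized soundscapes. Its dedicated colour space and natural-sound synthesis are not reproduced here; the sketch below only illustrates the general idea under stated assumptions: horizontal pixel position becomes stereo azimuth, and hue selects a frequency within one octave.

```python
import colorsys

def sonify_pixel(x, width, r, g, b):
    """Toy spatial colour sonification for a single camera pixel.

    Assumptions (not the published system): azimuth spans -90 to +90
    degrees across the image, and hue sweeps one octave above middle C.
    """
    azimuth = -90.0 + 180.0 * x / max(width - 1, 1)  # degrees, left to right
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    freq_hz = 261.63 * (2.0 ** h)  # hue -> frequency, one-octave sweep
    return azimuth, freq_hz

print(sonify_pixel(0, 641, 255, 0, 0))  # leftmost red pixel
```

A real implementation would render each source through head-related transfer functions rather than simple panning, which is one reason head-mounted prototypes and usability audits matter.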


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0250281
Author(s):  
Galit Buchs ◽  
Benedetta Haimler ◽  
Menachem Kerem ◽  
Shachar Maidenbaum ◽  
Liraz Braun ◽  
...  

Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck to adopting SSDs in everyday life by blind users is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof-of-concept for the efficacy of an online self-training program, developed for learning the basics of the EyeMusic visual-to-auditory SSD, tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory vs. unisensory as well as perceptual vs. descriptive feedback approaches. To these aims, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups, differing by the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual (simultaneous and interleaved), and a control group that had no training. At baseline, before any EyeMusic training, participants’ identification of SSD objects was significantly above chance, highlighting the algorithm’s intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training suggest a trend toward an advantage of multisensory over unisensory feedback strategies, while no trend emerged for perceptual vs. descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given that SSD algorithms are based on such correspondences. Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning, and suggest that for these initial stages, unisensory training, easily implemented also for blind and visually impaired individuals, may suffice. Together, these findings will potentially boost the use of SSDs for rehabilitation.
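The EyeMusic algorithm referenced above sweeps an image column by column from left to right, with vertical position mapped to pitch and colour mapped to instrument timbre. The sketch below captures only that scan structure; the published EyeMusic uses a pentatonic scale and specific instrument samples, so the note numbers here are simplified placeholders.

```python
def eyemusic_like_scan(image):
    """Sketch of an EyeMusic-style sweep over a 2D grid of colour names.

    Columns are played left to right over time, higher rows produce
    higher notes, and black pixels are silent. Colour would select an
    instrument timbre in the real device; here it is passed through.
    Returns (time_step, midi_note, colour) triples.
    """
    events = []
    n_rows = len(image)
    for t, col in enumerate(zip(*image)):        # iterate columns over time
        for row, colour in enumerate(col):
            if colour != "black":                # black pixels are silent
                midi_note = 60 + (n_rows - 1 - row)  # top row is highest
                events.append((t, midi_note, colour))
    return events

tiny = [["black", "white"],
        ["red",   "black"]]
print(eyemusic_like_scan(tiny))
```

Self-training programs like the one described can present such scans alongside textual or visual feedback, which is exactly what distinguished the five participant groups.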


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jacques Pesnot Lerousseau ◽  
Gabriel Arnold ◽  
Malika Auvray

Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on the visual versus auditory or tactile nature of sensory substitution, over the past decade the idea has emerged that it reflects a mixture of both. In order to investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device which translates visual images into sounds. In addition, participants' auditory abilities and their phenomenologies were measured. Our study revealed that, after training, when asked to identify sounds, processes shared with vision were involved, as participants’ performance in sound identification was influenced by the simultaneously presented visual distractors. In addition, participants’ performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.
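In a Stroop-like crossmodal paradigm as described above, interference is typically quantified as the performance cost on incongruent trials (visual distractor conflicts with the sound) relative to congruent ones. The trial format below is an assumption for illustration, not the study's actual data structure.

```python
def interference_effect(trials):
    """Mean reaction time on incongruent trials minus mean RT on
    congruent trials. A positive score indicates the visual distractor
    interfered with sound identification, i.e. processes shared with
    vision were engaged. Trial format (condition, rt_seconds) is a
    hypothetical simplification.
    """
    congruent = [rt for cond, rt in trials if cond == "congruent"]
    incongruent = [rt for cond, rt in trials if cond == "incongruent"]
    return (sum(incongruent) / len(incongruent)
            - sum(congruent) / len(congruent))

trials = [("congruent", 0.50), ("congruent", 0.54),
          ("incongruent", 0.61), ("incongruent", 0.65)]
print(interference_effect(trials))
```

Comparing this score before and after device training is what reveals whether training recruits visual processes.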


2017 ◽  
Vol 51 (1) ◽  
pp. 82-98 ◽  
Author(s):  
Nina Åkestam ◽  
Sara Rosengren ◽  
Micael Dahlen

Purpose – This paper aims to investigate whether portrayals of homosexuality in advertising can generate social effects in terms of consumer-perceived social connectedness and empathy.

Design/methodology/approach – In three experimental studies, the effects of advertising portrayals of homosexuality were compared with those of portrayals of heterosexuality. Study 1 uses a thought-listing exercise to explore whether portrayals of homosexuality (vs heterosexuality) can evoke more other-related thoughts and whether such portrayals affect consumer-perceived social connectedness and empathy. Study 2 replicates the findings while introducing attitudes toward homosexuality as a boundary condition and measuring traditional advertising effects. Study 3 replicates the findings while controlling for gender, perceived similarity and targetedness.

Findings – The results show that portrayals of homosexuality in advertising can prime consumers to think about other people, thereby affecting them socially. In line with previous studies of portrayals of homosexuality in advertising, these effects are moderated by attitudes toward homosexuality.

Research limitations/implications – This paper adds to a growing body of literature on the potentially positive extended effects of advertising. The findings also challenge some previous findings regarding homosexuality in advertising.

Practical implications – The finding that portrayals of homosexuality in advertising can (at least temporarily) affect consumers socially in terms of social connectedness and empathy should encourage marketers to explore the possibilities of creating advertising that benefits consumers and brands alike.

Originality/value – The paper challenges the idea that the extended effects of advertising have to be negative. By showing how portrayals of homosexuality can increase social connectedness and empathy, it adds to the discussion of the advantages and disadvantages of advertising on a societal level.


Motor Control ◽  
1999 ◽  
Vol 3 (3) ◽  
pp. 237-271 ◽  
Author(s):  
Jeroen B.J. Smeets ◽  
Eli Brenner

Reaching out for an object is often described as consisting of two components that are based on different visual information. Information about the object's position and orientation guides the hand to the object, while information about the object's shape and size determines how the fingers move relative to the thumb to grasp it. We propose an alternative description, which consists of determining suitable positions on the object—on the basis of its shape, surface roughness, and so on—and then moving one's thumb and fingers more or less independently to these positions. We modeled this description using a minimum-jerk approach, whereby the finger and thumb approach their respective target positions approximately orthogonally to the surface. Our model predicts how experimental variables such as object size, movement speed, fragility, and required accuracy will influence the timing and size of the maximum aperture of the hand. An extensive review of experimental studies on grasping showed that the predicted influences correspond to human behavior.
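The minimum-jerk approach named above has a well-known closed-form position profile (Flash & Hogan, 1985): each digit moves from start to target with zero velocity and acceleration at both endpoints. The sketch below implements that standard profile for one coordinate; applying it independently to thumb and finger targets is the essence of the model's alternative description of grasping.

```python
def minimum_jerk(x0, x1, t, duration):
    """Classic minimum-jerk position profile for one coordinate.

    x(tau) = x0 + (x1 - x0) * (10*tau^3 - 15*tau^4 + 6*tau^5),
    with tau = t / duration. Velocity and acceleration are zero at
    both endpoints, giving the smooth bell-shaped speed profile
    characteristic of human reaching.
    """
    tau = t / duration
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (x1 - x0) * s

# By symmetry, the movement is exactly halfway at mid-time
print(minimum_jerk(0.0, 10.0, 0.5, 1.0))
```

In the model, the apparent "grip aperture" is simply the distance between the independently moving thumb and finger trajectories, which is how it predicts the timing and size of the maximum aperture.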

