Colorophone 2.0: A Wearable Color Sonification Device Generating Live Stereo-Soundscapes—Design, Implementation, and Usability Audit

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7351
Author(s):  
Dominik Osiński ◽  
Marta Łukowska ◽  
Dag Roar Hjelme ◽  
Michał Wierzchoń

The successful development of a system realizing color sonification would enable auditory representation of the visual environment. The primary beneficiaries of such a system would be people who cannot directly access visual information—the visually impaired community. Despite the plethora of sensory substitution devices, developing systems that provide intuitive color sonification remains a challenge. This paper presents design considerations, development, and a usability audit of a sensory substitution device that converts spatial color information into soundscapes. The implemented wearable system uses a dedicated color space and continuously generates natural, spatialized sounds based on the information acquired from a camera. We developed two head-mounted prototype devices and two graphical user interface (GUI) versions. The first GUI is dedicated to researchers; the second has been designed to be easily accessible for visually impaired persons. Finally, we ran fundamental usability tests to evaluate the new spatial color sonification algorithm and to compare the two prototypes. Furthermore, we propose recommendations for the development of the next iteration of the system.
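The paper's dedicated color space and sound-generation pipeline are specific to Colorophone. As a rough sketch of the general idea only (hypothetical hue-to-pitch, brightness-to-loudness, and position-to-pan mappings, not the published algorithm), a camera pixel could be converted into sound parameters like this:

```python
import colorsys

def color_to_sound(r, g, b, x, width):
    """Hypothetical color-to-sound mapping (illustrative, not the
    Colorophone algorithm): hue -> pitch, brightness -> loudness,
    horizontal pixel position -> stereo pan."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    freq = 220.0 * 2 ** (h * 2)        # spread hue over two octaves from A3
    amplitude = v                      # brighter pixels sound louder
    pan = x / max(width - 1, 1)        # 0.0 = hard left, 1.0 = hard right
    return freq, amplitude, pan
```

Under this toy mapping, a pure red pixel at the left image edge yields the base pitch at full loudness, panned hard left.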

2018 ◽  
Vol 2018 ◽  
pp. 1-17 ◽  
Author(s):  
Simone Spagnol ◽  
György Wersényi ◽  
Michał Bujacz ◽  
Oana Bălan ◽  
Marcelo Herrera Martínez ◽  
...  

Electronic travel aids (ETAs) have been in focus since technology allowed the design of relatively small, light, and mobile devices for assisting the visually impaired. Since visually impaired persons rely on spatial audio cues as their primary sense of orientation, providing an accurate virtual auditory representation of the environment is essential. This paper gives an overview of the current state of spatial audio technologies that can be incorporated in ETAs, with a focus on user requirements. Most currently available ETAs either fail to address user requirements or underestimate the potential of spatial sound itself, which may explain, among other reasons, why no single ETA has gained widespread acceptance in the blind community. We believe there is ample space for applying the technologies presented in this paper, with the aim of progressively bridging the gap between accessibility and accuracy of spatial audio in ETAs.
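Accurate virtual spatialization of the kind the paper surveys typically relies on HRTF filtering. As a minimal baseline for comparison (a constant-power stereo pan using interaural level differences only, far simpler than an HRTF renderer), azimuth could be rendered like this:

```python
import math

def ild_pan(signal, azimuth_deg):
    """Toy interaural-level-difference panning with a constant-power
    pan law. azimuth_deg in [-90, 90]: -90 = hard left, +90 = hard right.
    A real ETA would use HRTF filtering for accurate spatialization."""
    theta = math.radians((azimuth_deg + 90) / 2)   # map [-90, 90] -> [0°, 90°]
    left = [s * math.cos(theta) for s in signal]
    right = [s * math.sin(theta) for s in signal]
    return left, right
```

At azimuth 0 both channels receive equal level, and total power is constant across the pan range, which avoids a perceived loudness dip at the center.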


2016 ◽  
Vol 29 (4-5) ◽  
pp. 337-363 ◽  
Author(s):  
Giles Hamilton-Fletcher ◽  
Thomas D. Wright ◽  
Jamie Ward

Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have generally avoided colour and, when they do encode it, have assigned sounds to colours in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users whose device was coded either in line with or opposite to sound–colour correspondences. Users given the correspondence-based mappings showed improved colour memory and made fewer colour errors. Interestingly, the colour–sound mappings that provided the highest improvements during the associative memory task also saw the greatest gains for recognising realistic objects that featured these colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance for both colour and correspondences for sensory substitution use.
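One well-established cross-modal correspondence of the kind the study builds on is luminance-to-pitch (brighter maps to higher). A minimal sketch of the study's congruent-versus-reversed manipulation, with made-up frequency bounds rather than the Creole's actual parameters, could look like this:

```python
def luminance_to_pitch(lum, congruent=True, f_low=200.0, f_high=1600.0):
    """Map luminance in [0, 1] to a frequency in Hz.
    congruent=True follows the established correspondence
    (brighter -> higher pitch); congruent=False reverses it,
    as in an incongruent coding condition. Frequency bounds
    are illustrative assumptions."""
    t = lum if congruent else 1.0 - lum
    return f_low * (f_high / f_low) ** t   # log-spaced interpolation
```

Log-spaced interpolation keeps equal luminance steps mapping to equal musical intervals, which matches how pitch is perceived.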


Author(s):  
Michael J. Proulx ◽  
David J. Brown ◽  
Achille Pasqualotto

Vision is the default sensory modality for normal spatial navigation in humans. Touch is restricted to providing information about peripersonal space, whereas detecting and avoiding obstacles in extrapersonal space is key for efficient navigation. Hearing is restricted to the detection of objects that emit noise, yet many obstacles such as walls are silent. Sensory substitution devices provide a means of translating distal visual information into a form that visually impaired individuals can process through either touch or hearing. Here we will review findings from various sensory substitution systems for the processing of visual information that can be classified as what (object recognition), where (localization), and how (perception for action) processing. Different forms of sensory substitution excel at some tasks more than others. Spatial navigation brings together these different forms of information and provides a useful model for comparing sensory substitution systems, with important implications for rehabilitation, neuroanatomy, and theories of cognition.


2014 ◽  
Vol 8 (2) ◽  
pp. 77-94 ◽  
Author(s):  
Juan D. Gomez ◽  
Guido Bologna ◽  
Thierry Pun

Purpose – The purpose of this paper is to overcome the limitations of sensory substitution devices (SSDs) in representing high-level or conceptual information involved in vision, limitations mainly produced by the biological sensory mismatch between sight and the substituting senses, and thus to provide the visually impaired with a more practical and functional SSD. Design/methodology/approach – Unlike any other approach, the SSD extends beyond a sensing prototype by integrating computer vision methods to produce reliable knowledge about the physical world (at the lowest cost to the user). Importantly, though, the authors do not abandon the typical encoding of low-level features into sound. The paper simply argues that any visual perception that can be achieved through hearing needs to be reinforced or enhanced by techniques that lie beyond mere visual-to-audio mapping (e.g. computer vision, image processing). Findings – Experiments reported in this paper reveal that See ColOr is learnable and functional, and provides easy interaction. In moderate time, participants were enabled to grasp visual information of the world from which they could derive: spatial awareness, the ability to find someone, the location of daily objects, and the skill to walk safely while avoiding obstacles. The encouraging results open a door toward autonomous mobility of the blind. Originality/value – The paper uses the “extended” approach to introduce and justify that the system is brand new, as are the experimental studies on the computer-vision extension of SSDs that are presented. Also, this is the first paper reporting on a completed, integrated, and functional system.


2021 ◽  
Vol 21 (1) ◽  
pp. 3-10
Author(s):  
Ingmar BEŠIĆ ◽  
Zikrija AVDAGIĆ ◽  
Kerim HODŽIĆ

Visual impairments often pose serious restrictions on a visually impaired person, and a considerable number of people, especially among the aging population, depend on assistive technology to sustain their quality of life. Developing and testing assistive technology for the visually impaired requires gathering information and conducting studies on both healthy and visually impaired individuals in a controlled environment. We propose a test setup for visually impaired persons by creating an RFID-based assistive environment – the Visual Impairment Friendly RFID Room. The test setup can be used to evaluate RFID object localization and its use by visually impaired persons. To a certain extent, every impairment has individual characteristics, as different individuals may respond better to different subsets of visual information. We use a virtual reality prototype both to simulate visual impairment and to map full visual information to the subset that a visually impaired person can perceive. Real-time image processing with time-domain color mapping is used to evaluate the virtual reality prototype, targeting color vision deficiency.
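The paper's time-domain color mapping is its own contribution; as a deliberately crude illustration of simulating a red-green color vision deficiency by remapping pixels (not a validated CVD model and not the authors' method), one could collapse the red-green axis of each pixel:

```python
def simulate_red_green_deficiency(r, g, b):
    """Crude illustrative approximation of red-green color vision
    deficiency: replace the R and G channels with their mean so that
    colors differing only along the red-green axis become identical.
    Inputs and outputs are 8-bit channel values (0-255)."""
    m = (r + g) // 2
    return m, m, b
```

Note that pure red (255, 0, b) and pure green (0, 255, b) map to the same output, which is exactly the confusion such a simulation is meant to reproduce.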


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0250281
Author(s):  
Galit Buchs ◽  
Benedetta Heimler ◽  
Menachem Kerem ◽  
Shachar Maidenbaum ◽  
Liraz Braun ◽  
...  

Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck to the adoption of SSDs in everyday life by blind users is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof-of-concept for the efficacy of an online self-training program developed for learning the basics of the EyeMusic visual-to-auditory SSD, tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory vs. unisensory as well as perceptual vs. descriptive feedback approaches. To these aims, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups, differing by the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual simultaneous and interleaved, and a control group which had no training. At baseline, before any EyeMusic training, participants' identification of SSD stimuli was significantly above chance, highlighting the algorithm's intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training suggest a trend for an advantage of multisensory vs. unisensory feedback strategies, while no trend emerged for perceptual vs. descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given that SSD algorithms are based on such correspondences.
Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning, and suggest that for these initial stages, unisensory training, easily implemented also for blind and visually impaired individuals, may suffice. Together, these findings will potentially boost the use of SSDs for rehabilitation.
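Visual-to-auditory SSDs in the EyeMusic family are reported to scan an image column by column, with a pixel's vertical position mapped to pitch and its color to timbre. The sketch below illustrates only the column-scan, height-to-pitch idea; the base frequency, the semitone scale, and the brightness threshold are illustrative assumptions (the actual EyeMusic uses a pentatonic scale and instrument timbres for color):

```python
def sonify_image(image, f_base=262.0):
    """image: 2D list of brightness values in [0, 1], row 0 at the top.
    Returns, column by column (a left-to-right scan), lists of
    (frequency_hz, amplitude) pairs for the active pixels in that column.
    Higher rows map to higher pitches, one semitone per row (an
    illustrative choice)."""
    n_rows = len(image)
    events = []
    for col in range(len(image[0])):
        active = []
        for row in range(n_rows):
            v = image[row][col]
            if v > 0:
                semitones = n_rows - 1 - row        # top row = highest pitch
                active.append((f_base * 2 ** (semitones / 12), v))
        events.append(active)
    return events
```

Playing the per-column event lists in sequence turns the image into a short left-to-right soundscape, which is the core intuition behind such scanning SSDs.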


2017 ◽  
Vol 7 (1) ◽  
pp. 42 ◽  
Author(s):  
Christopher Nkiko ◽  
Morayo I. Atinmo ◽  
Happiness Chijioke Michael-Onuoha ◽  
Julie E. Ilogho ◽  
Michael O. Fagbohun ◽  
...  

Studies have shown inadequate reading materials for the visually impaired in Nigeria. Information technology has greatly advanced the provision of information to the visually impaired in other industrialized climes. This study investigated the extent of the application of information technology to the transcription of reading materials for the visually impaired in Nigeria. The study adopted an ex-post facto survey research design, selecting 470 personnel as respondents. A questionnaire titled the Information Technology Use Scale (α=0.74) and an interview schedule (α=0.75) were used. Data were analyzed using descriptive statistics and the Pearson product-moment correlation. The findings indicate that the application of information technology to transcription was low and that there was a significant positive relationship between the application of information technology and the transcription of information materials (r=0.62; p<0.05). The study recommended, among other things, that multinational corporations be sensitized to extend their corporate social responsibility (CSR) activities to help procure modern information technology devices and software to enhance transcription.
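The study's reported r=0.62 is a Pearson product-moment correlation. For readers unfamiliar with the statistic, a minimal sketch of its computation (the data below are made-up illustrative values, not the study's):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient:
    covariance of x and y divided by the product of their
    standard deviations. Ranges from -1 to +1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A perfectly linear increasing relationship gives r = 1, a perfectly decreasing one gives r = -1; the study's r = 0.62 indicates a moderately strong positive association.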

