The Processing of What, Where, and How

Author(s):  
Michael J. Proulx ◽  
David J. Brown ◽  
Achille Pasqualotto

Vision is the default sensory modality for normal spatial navigation in humans. Touch is restricted to providing information about peripersonal space, whereas detecting and avoiding obstacles in extrapersonal space is key for efficient navigation. Hearing is restricted to the detection of objects that emit noise, yet many obstacles such as walls are silent. Sensory substitution devices provide a means of translating distal visual information into a form that visually impaired individuals can process through either touch or hearing. Here we will review findings from various sensory substitution systems for the processing of visual information that can be classified as what (object recognition), where (localization), and how (perception for action) processing. Different forms of sensory substitution excel at some tasks more than others. Spatial navigation brings together these different forms of information and provides a useful model for comparing sensory substitution systems, with important implications for rehabilitation, neuroanatomy, and theories of cognition.
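
To make the translation concrete, here is a minimal sketch of the classic image-to-sound scan used by visual-to-auditory devices such as the vOICe: the image is read column by column over time, row position maps to pitch, and pixel brightness maps to loudness. The function name and parameter values (frequency range, scan duration) are illustrative assumptions, not any specific device's specification.

```python
import numpy as np

def image_to_soundscape(image, sample_rate=22050, scan_seconds=1.0,
                        f_lo=500.0, f_hi=5000.0):
    """image: 2D float array in [0, 1]; row 0 is the top of the scene."""
    n_rows, n_cols = image.shape
    samples_per_col = int(sample_rate * scan_seconds / n_cols)
    t = np.arange(samples_per_col) / sample_rate
    # Top rows get higher frequencies; log spacing matches pitch perception.
    freqs = np.geomspace(f_hi, f_lo, n_rows)
    columns = []
    for c in range(n_cols):                               # left-to-right scan = time
        tones = np.sin(2 * np.pi * freqs[:, None] * t)    # one sine per row
        column_audio = (image[:, c:c + 1] * tones).sum(axis=0)  # brightness = gain
        columns.append(column_audio)
    audio = np.concatenate(columns)
    return audio / (np.abs(audio).max() + 1e-9)           # normalize to [-1, 1]

# A bright diagonal from bottom-left to top-right yields a rising sweep.
wave = image_to_soundscape(np.eye(16)[::-1])
```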

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jacques Pesnot Lerousseau ◽  
Gabriel Arnold ◽  
Malika Auvray

Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on the visual versus auditory or tactile nature of sensory substitution, over the past decade the idea has emerged that it reflects a mixture of both. To investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device that translates visual images into sounds. In addition, participants’ auditory abilities and their phenomenology were measured. Our study revealed that, after training, processes shared with vision were involved when participants were asked to identify sounds, as their performance in sound identification was influenced by the simultaneously presented visual distractors. In addition, participants’ performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing also finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.
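
As an illustration of how interference in such a Stroop-like paradigm can be quantified, the sketch below computes the accuracy and response-time costs of incongruent visual distractors. The trial structure, field names, and values are hypothetical, not the authors' data or code.

```python
from statistics import mean

# One record per trial of the sound-identification task; the distractor is
# the image shown at the same time. All values here are made up.
trials = [
    {"congruent": True,  "correct": True,  "rt": 0.61},
    {"congruent": True,  "correct": True,  "rt": 0.58},
    {"congruent": False, "correct": False, "rt": 0.85},
    {"congruent": False, "correct": True,  "rt": 0.79},
]

def interference(trials):
    con = [t for t in trials if t["congruent"]]
    inc = [t for t in trials if not t["congruent"]]
    # Positive costs mean incongruent visual distractors hurt performance,
    # i.e., evidence that sound identification shares processes with vision.
    accuracy_cost = mean(t["correct"] for t in con) - mean(t["correct"] for t in inc)
    rt_cost = mean(t["rt"] for t in inc) - mean(t["rt"] for t in con)
    return accuracy_cost, rt_cost

print(interference(trials))
```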


2020 ◽  
Vol 11 ◽  
Author(s):  
Crescent Jicol ◽  
Tayfun Lloyd-Esenkaya ◽  
Michael J. Proulx ◽  
Simon Lange-Smith ◽  
Meike Scheller ◽  
...  

1998 ◽  
Vol 10 (5) ◽  
pp. 581-589 ◽  
Author(s):  
Elisabetta Làdavas ◽  
Giuseppe di Pellegrino ◽  
Alessandro Farnè ◽  
Gabriele Zeloni

Current interpretations of extinction suggest that the disorder is due to an unbalanced competition between ipsilesional and contralesional representations of space. The question addressed in this study is whether the competition between left and right representations of space in one sensory modality (i.e., touch) can be reduced or exacerbated by the activation of an intact spatial representation in a different modality that is functionally linked to the damaged representation (i.e., vision). This hypothesis was tested in 10 right-hemisphere lesioned patients who suffered from reliable tactile extinction. We found that a visual stimulus presented near the patient's ipsilesional hand (i.e., visual peripersonal space) inhibited the processing of a tactile stimulus delivered on the contralesional hand (cross-modal visuotactile extinction) to the same extent as did an ipsilesional tactile stimulation (unimodal tactile extinction). It was also found that a visual stimulus presented near the contralesional hand improved the detection of a tactile stimulus applied to the same hand. In striking contrast, less modulatory effects of vision on touch perception were observed when a visual stimulus was presented far from the space immediately around the patient's hand (i.e., extrapersonal space). This study clearly demonstrates the existence of a visual peripersonal space centered on the hand in humans and its modulatory effects on tactile perception. These findings are explained by referring to the activity of bimodal neurons in premotor and parietal cortex of macaque, which have tactile receptive fields on the hand and corresponding visual receptive fields in the space immediately adjacent to the tactile fields.


2016 ◽  
Vol 29 (4-5) ◽  
pp. 337-363 ◽  
Author(s):  
Giles Hamilton-Fletcher ◽  
Thomas D. Wright ◽  
Jamie Ward

Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have typically avoided colour, and those that do encode it have assigned sounds to colours in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users whose device was coded either in line with or opposite to sound–colour correspondences. Users given the correspondence-based mappings showed improved colour memory and made fewer colour errors. Interestingly, the colour–sound mappings that produced the greatest improvements in the associative memory task also yielded the greatest gains in recognising realistic objects featuring those colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed in terms of the relevance of both colour and cross-modal correspondences for sensory substitution.
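
A concrete example of a correspondence-based versus reversed mapping, in the spirit of the study above, is the well-documented luminance-to-pitch correspondence (brighter maps to higher pitch). The sketch below is only illustrative; the Creole's actual colour space and mappings are not reproduced here, and the pitch range is an assumption.

```python
def luminance(rgb):
    """Relative luminance of an sRGB triple (Rec. 709 weights, gamma ignored)."""
    r, g, b = (v / 255.0 for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def pitch_hz(rgb, low=200.0, high=2000.0, congruent=True):
    lum = luminance(rgb)
    if not congruent:          # "opposite" device: brighter maps to LOWER pitch
        lum = 1.0 - lum
    return low * (high / low) ** lum   # log spacing: equal luminance steps = equal intervals

for rgb in [(0, 0, 0), (128, 128, 128), (255, 255, 255)]:
    print(rgb, round(pitch_hz(rgb)), "Hz |", round(pitch_hz(rgb, congruent=False)), "Hz")
```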


2021 ◽  
pp. 101-116
Author(s):  
Catherine L. Reed ◽  
George D. Park

Human perceptual and attentional systems operate to help us perform functional and adaptive actions in the world around us. In this review, we consider different regions of peripersonal space: peri-hand space, reachable space, and tool space when tools are used in both peri- and extrapersonal space. Focusing on behavioural and electrophysiological/event-related potential (EEG/ERP) studies using comparable target detection paradigms, we examine how visuospatial attention is facilitated or differentiated by the current proximity and functional capabilities of our hands and the tools we hold in them. The functionality of the hand and tool is defined by the action goals of the user and the functional affordances or parts available to achieve those goals. Finally, we report recent tool-use studies examining how the distribution of attention to tool space can change as a result of tool functionality and directional action crossing the boundary between peripersonal and extrapersonal space. We propose that the functional capabilities of the hand and tools direct attention to action-relevant regions of peripersonal space. Although neural mechanisms such as bimodal neurons may enhance the processing of visual information presented in near-hand regions of peripersonal space, functional experience and the relevance of the space for upcoming actions more strongly direct attention within regions of peripersonal space. While some aspects of functionality can be extended into extrapersonal space, the multimodal nature of peripersonal space makes it more readily modifiable in the service of action.


2014 ◽  
Vol 8 (2) ◽  
pp. 77-94 ◽  
Author(s):  
Juan D. Gomez ◽  
Guido Bologna ◽  
Thierry Pun

Purpose – The purpose of this paper is to overcome the limitations of sensory substitution devices (SSDs) in representing high-level or conceptual information involved in vision, limitations mainly produced by the biological sensory mismatch between sight and the substituting senses, and thus to provide the visually impaired with a more practical and functional SSD.
Design/methodology/approach – Unlike previous approaches, the SSD extends beyond a sensing prototype by integrating computer vision methods to produce reliable knowledge about the physical world (at the lowest cost to the user). Importantly, though, the authors do not abandon the typical encoding of low-level features into sound. The paper argues that any visual perception that is to be achieved through hearing needs to be reinforced or enhanced by techniques that lie beyond mere visual-to-audio mapping (e.g., computer vision, image processing).
Findings – Experiments reported in this paper reveal that See ColOr is learnable and functional, and provides easy interaction. In moderate time, participants were able to grasp visual information about the world, from which they could derive spatial awareness, the ability to find someone, the location of daily objects, and the skill to walk safely while avoiding obstacles. The encouraging results open a door toward autonomous mobility for the blind.
Originality/value – The paper introduces and justifies the “extended” approach on which the brand-new system is based, along with the experimental studies on the computer-vision extension of SSDs. This is also the first paper reporting on a completed, integrated, and functional system.
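
The "extended" architecture can be pictured as two parallel output streams: a continuous low-level sonification plus sparse, conceptual announcements from a computer-vision module. The sketch below is a schematic with placeholder stubs, not the actual See ColOr implementation.

```python
def sonify_low_level(frame):
    """Continuous colour/depth-to-sound encoding (stub for the audio mapping)."""
    return f"soundscape:{frame}"

def describe_high_level(frame, detect):
    """Sparse conceptual announcements produced by a vision module (stub)."""
    return [f"announce:{label}" for label in detect(frame)]

def pipeline(frames, detect):
    for frame in frames:
        yield sonify_low_level(frame)                  # always-on perceptual stream
        yield from describe_high_level(frame, detect)  # occasional conceptual cues

# Toy detector that "finds" a person only in the second frame.
detect = lambda frame: ["person"] if frame == 1 else []
print(list(pipeline([0, 1], detect)))  # ['soundscape:0', 'soundscape:1', 'announce:person']
```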


2017 ◽  
Vol 30 (6) ◽  
pp. 579-600 ◽  
Author(s):  
Gabriel Arnold ◽  
Jacques Pesnot-Lerousseau ◽  
Malika Auvray

Sensory substitution devices were developed in the context of perceptual rehabilitation; they aim at compensating for one or several functions of a deficient sensory modality by converting stimuli normally accessed through that modality into stimuli accessible to another sensory modality. For instance, they can convert visual information into sounds or tactile stimuli. In this article, we review the studies that investigated individual differences at the behavioural, neural, and phenomenological levels in the use of sensory substitution devices. We highlight how taking individual differences into account has consequences for the optimization and learning of sensory substitution devices. We also discuss the extent to which these studies allow a better understanding of the experience with sensory substitution devices, and in particular how the resulting experience is not akin to a single sensory modality. Rather, it should be conceived as a multisensory experience, involving both perceptual and cognitive processes, and emerging from each user’s pre-existing sensory and cognitive capacities.


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7351
Author(s):  
Dominik Osiński ◽  
Marta Łukowska ◽  
Dag Roar Hjelme ◽  
Michał Wierzchoń

The successful development of a system realizing color sonification would enable auditory representation of the visual environment. The primary beneficiaries of such a system would be people who cannot directly access visual information: the visually impaired community. Despite the plethora of sensory substitution devices, developing systems that provide intuitive color sonification remains a challenge. This paper presents the design considerations, development, and usability audit of a sensory substitution device that converts spatial color information into soundscapes. The implemented wearable system uses a dedicated color space and continuously generates natural, spatialized sounds based on information acquired from a camera. We developed two head-mounted prototype devices and two graphical user interface (GUI) versions. The first GUI is dedicated to researchers, while the second has been designed to be easily accessible for visually impaired persons. Finally, we ran basic usability tests to evaluate the new spatial color sonification algorithm and to compare the two prototypes. We conclude with recommendations for the development of the next iteration of the system.
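
One common way to spatialize colour sound sources is to render horizontal image position as stereo panning while colour drives the sound's parameters. The sketch below is only illustrative: the real system uses a dedicated colour space and natural sounds, whereas this toy maps HSV hue to pitch and value to loudness under a constant-power pan law.

```python
import colorsys
import numpy as np

def sonify_pixel(rgb, x_norm, sample_rate=22050, dur=0.5):
    """rgb: ints in [0, 255]; x_norm: horizontal position in [0, 1], 0 = far left."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    freq = 220.0 * 2 ** (2 * h)                 # hue -> pitch over two octaves
    t = np.arange(int(sample_rate * dur)) / sample_rate
    tone = v * np.sin(2 * np.pi * freq * t)     # brightness (HSV value) -> loudness
    pan = x_norm * np.pi / 2                    # constant-power stereo pan law
    return np.stack([np.cos(pan) * tone, np.sin(pan) * tone], axis=1)  # L, R channels

stereo = sonify_pixel((255, 0, 0), x_norm=0.8)  # a red pixel right of centre
```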


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0250281
Author(s):  
Galit Buchs ◽  
Benedetta Haimler ◽  
Menachem Kerem ◽  
Shachar Maidenbaum ◽  
Liraz Braun ◽  
...  

Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck to the adoption of SSDs in everyday life by blind users is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof-of-concept for the efficacy of an online self-training program developed for learning the basics of the EyeMusic visual-to-auditory SSD, tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory versus unisensory as well as perceptual versus descriptive feedback approaches. To these ends, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups differing in the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual simultaneous, audio-visual perceptual interleaved, and a control group that had no training. At baseline, before any EyeMusic training, participants’ identification of SSD objects was significantly above chance, highlighting the algorithm’s intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training suggest a trend toward an advantage of multisensory over unisensory feedback strategies, while no trend emerged for perceptual versus descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given that SSD algorithms are based on such correspondences. Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning, and suggest that for these initial stages unisensory training, which can also be easily implemented for blind and visually impaired individuals, may suffice. Together, these findings could boost the use of SSDs for rehabilitation.
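
In broad strokes, EyeMusic-style algorithms scan the image left to right, map vertical position to pitch on a musical scale, and map colour to instrument timbre, with dark pixels silent. The specific notes, colour set, and timbres in the sketch below are illustrative simplifications, not the EyeMusic specification.

```python
# Ascending pentatonic MIDI notes; a real device uses a longer scale.
PENTATONIC_MIDI = [57, 60, 62, 64, 67, 69, 72, 74, 76, 79]
TIMBRE_BY_COLOR = {"white": "choir", "blue": "brass",
                   "red": "organ", "yellow": "strings"}   # black -> silence

def sonify_column(column):
    """column: colour names for one image column, index 0 = top row."""
    notes = []
    for row, color in enumerate(column):
        if color == "black":
            continue                                    # dark pixels stay silent
        pitch = PENTATONIC_MIDI[len(column) - 1 - row]  # higher rows -> higher notes
        notes.append((pitch, TIMBRE_BY_COLOR[color]))
    return notes   # every note of one time slice; columns play left to right

print(sonify_column(["black", "white", "black", "red"]))  # [(62, 'choir'), (57, 'organ')]
```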

