Sensory Substitution and the Transparency of Visual Experience

Author(s):  
Barry C. Smith

Sensory substitution devices make use of information in one sensory modality to deliver information usually provided by another. But when information usually presented visually is presented to a subject in an auditory or haptic way, is the resulting experience in any sense visual? Or does sensory substitution show that dimensions of experience—about the spatial layout of objects and properties in the environment—that were previously taken to be essentially visual can be experienced in other modalities too? I will consider this question by looking at whether a property such as the transparency of visual experience can be transferred to, and enhance, experience in other modalities.

2021, Vol 11 (1)
Author(s):  
Jacques Pesnot Lerousseau ◽  
Gabriel Arnold ◽  
Malika Auvray

Abstract Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on the visual versus auditory or tactile nature of sensory substitution, over the past decade the idea has emerged that it reflects a mixture of both. In order to investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device which translates visual images into sounds. In addition, participants' auditory abilities and their phenomenology were measured. Our study revealed that, after training, processes shared with vision were involved when participants were asked to identify sounds, as their performance in sound identification was influenced by simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.


2021
Author(s):  
Katarzyna Ciesla ◽  
T. Wolak ◽  
A. Lorens ◽  
H. Skarżyński ◽  
A. Amedi

Abstract Understanding speech in background noise is challenging, and wearing face masks during the COVID-19 pandemic made it even harder. We developed a multi-sensory setup, including a sensory substitution device (SSD), that can deliver speech simultaneously through audition and as vibrations on the fingertips. After a short training session, 16 out of 17 participants significantly improved in speech-in-noise understanding when the added vibrations corresponded to low frequencies extracted from the sentence. The level of understanding was maintained after training even when the loudness of the background noise doubled (mean group improvement of ~ 10 decibels). This result indicates that our solution can be very useful for hearing-impaired patients. Even more interestingly, the improvement transferred to a post-training situation in which the touch input was removed, showing that the setup could be applied to auditory rehabilitation in cochlear implant users. Future wearable implementations of our SSD could also be used in real-life situations, such as talking on the phone or learning a foreign language. We discuss the basic science implications of our findings; in particular, we show that even in adulthood a new pairing can be established between a neuronal computation (speech processing) and an atypical sensory modality (touch). Speech is indeed a multisensory signal, but one learned from birth in an audio-visual context. Interestingly, adding lip-reading cues to speech in noise provides a benefit of the same or lower magnitude than the one we report here for adding touch.


Author(s):  
Michael J. Proulx ◽  
David J. Brown ◽  
Achille Pasqualotto

Vision is the default sensory modality for normal spatial navigation in humans. Touch is restricted to providing information about peripersonal space, whereas detecting and avoiding obstacles in extrapersonal space is key for efficient navigation. Hearing is restricted to the detection of objects that emit noise, yet many obstacles such as walls are silent. Sensory substitution devices provide a means of translating distal visual information into a form that visually impaired individuals can process through either touch or hearing. Here we will review findings from various sensory substitution systems for the processing of visual information that can be classified as what (object recognition), where (localization), and how (perception for action) processing. Different forms of sensory substitution excel at some tasks more than others. Spatial navigation brings together these different forms of information and provides a useful model for comparing sensory substitution systems, with important implications for rehabilitation, neuroanatomy, and theories of cognition.


2020, pp. 1-26
Author(s):  
Louise P. Kirsch ◽  
Xavier Job ◽  
Malika Auvray

Abstract Sensory Substitution Devices (SSDs) are typically used to restore functionality of a sensory modality that has been lost, like vision for the blind, by recruiting another sensory modality such as touch or audition. Sensory substitution has given rise to many debates in psychology, neuroscience and philosophy regarding the nature of experience when using SSDs. Questions first arose as to whether the experience of sensory substitution is represented by the substituted information, the substituting information, or a multisensory combination of the two. More recently, parallels have been drawn between sensory substitution and synaesthesia, a rare condition in which individuals involuntarily experience a percept in one sensory or cognitive pathway when another one is stimulated. Here, we explore the efficacy of understanding sensory substitution as a form of ‘artificial synaesthesia’. We identify several problems with previous suggestions for a link between these two phenomena. Furthermore, we find that sensory substitution does not fulfil the essential criteria that characterise synaesthesia. We conclude that sensory substitution and synaesthesia are independent of each other and thus, the ‘artificial synaesthesia’ view of sensory substitution should be rejected.


2017, Vol 30 (6), pp. 579-600
Author(s):  
Gabriel Arnold ◽  
Jacques Pesnot-Lerousseau ◽  
Malika Auvray

Sensory substitution devices were developed in the context of perceptual rehabilitation. They aim at compensating one or several functions of a deficient sensory modality by converting stimuli normally accessed through that modality into stimuli accessible by another sensory modality. For instance, they can convert visual information into sounds or tactile stimuli. In this article, we review the studies that investigated individual differences at the behavioural, neural, and phenomenological levels when using a sensory substitution device. We highlight how taking individual differences into account has consequences for the optimization and learning of sensory substitution devices. We also discuss the extent to which these studies allow a better understanding of the experience with sensory substitution devices, and in particular how the resulting experience is not akin to a single sensory modality. Rather, it should be conceived as a multisensory experience, involving both perceptual and cognitive processes, and building on each user's pre-existing sensory and cognitive capacities.


2018, Vol 26 (3), pp. 111-127
Author(s):  
Weronika Kałwak ◽  
Magdalena Reuter ◽  
Marta Łukowska ◽  
Bartosz Majchrowicz ◽  
Michał Wierzchoń

Information that is normally accessed through one sensory modality (the substituted modality, e.g., vision) is provided by sensory substitution devices (SSDs) through an alternative modality such as hearing or touch (the substituting modality). SSDs usually support disabled users by replacing sensory inputs that have been lost, but they also offer a unique opportunity to study adaptation and flexibility in human perception. Current debates in the sensory substitution (SS) literature focus mostly on its neural correlates and behavioural consequences. In particular, studies have demonstrated the neural plasticity of the visual brain regions that are activated by the substituting modality. Participants also adapt to using the devices for a broad spectrum of cognitive tasks that usually require sight. However, little is known about the SS experience, and there is no agreement on how the phenomenology of SS should be studied. Here, we offer guidelines for the methodology of studies investigating behavioural adaptation to SS and the effects of this adaptation on the subjective SS experience. We also discuss factors that may influence the results of SS studies: (1) the type of SSD, (2) the effects of training, (3) the role of sensory deprivation, (4) the role of the experimental environment, (5) the role of the tasks participants perform, and (6) the characteristics of the participants. In addition, we propose combining qualitative and quantitative methods and discuss how this should be achieved when studying the neural, behavioural, and experiential consequences of SS.


2019
Author(s):  
AT Zai ◽  
S Cavé-Lopez ◽  
M Rolland ◽  
N Giret ◽  
RHR Hahnloser

Abstract Sensory substitution is a promising therapeutic approach for replacing a missing or diseased sensory organ by translating inaccessible information into another sensory modality. Which aspects of substitution are important such that subjects accept an artificial sense and that it benefits their voluntary action repertoire? To obtain an evolutionary perspective on the affective valence implied in sensory substitution, we introduce an animal model of deaf songbirds. As a substitute for auditory feedback, we provide binary visual feedback. Deaf birds respond appetitively to song-contingent visual stimuli and skillfully adapt their songs to increase the rate of visual stimuli, showing that auditory feedback is not required for making targeted changes to a vocal repertoire. We find that visually instructed song learning is basal-ganglia dependent. Because hearing birds respond aversively to the same visual stimuli, sensory substitution reveals a bias for actions that elicit feedback meeting animals' manipulation drive, which has implications beyond rehabilitation.


Author(s):  
Jennifer Corns

Deroy and Auvray together with Ptito et al. have argued against what they dub ‘the perceptual assumption’, which they claim underlies all previous research into sensory substitution devices (SSDs). In this chapter, I argue that the perceptual assumption needs to be disambiguated in three distinct ways: (A) SSD use is best modelled as a known, ‘natural’ modality; (B) SSD use is best modelled as a unique sensory modality full stop; and (C) SSD use is best modelled as a perceptual process. Different theorists are variously committed to these distinct claims. More importantly, evaluating A, B, or C for rejection depends on distinct evidence of difference between SSD use and (A) each natural modality, (B) any modality, and (C) perceptual processing. I argue that even if the offered evidence of difference for A–C is granted, Auvray and Deroy’s advocated rejections are not entailed; it remains to be shown that the identified differences undermine the appropriate use of the corresponding models.


1980, Vol 28 (3), pp. 185-190
Author(s):  
John J. Rieser ◽  
Jeffrey J. Lockman ◽  
Herbert L. Pick

i-Perception, 10.1068/ic896, 2011, Vol 2 (8), pp. 896-896
Author(s):  
Lior Reich ◽  
Ella Striem-Amit ◽  
Marcin Szwed ◽  
Ornella Dakwar ◽  
Miri Guendelman ◽  
...  
