Disambiguating the Perceptual Assumption

Author(s):  
Jennifer Corns

Deroy and Auvray together with Ptito et al. have argued against what they dub ‘the perceptual assumption’, which they claim underlies all previous research into sensory substitution devices (SSDs). In this chapter, I argue that the perceptual assumption needs to be disambiguated in three distinct ways: (A) SSD use is best modelled as a known, ‘natural’ modality; (B) SSD use is best modelled as a unique sensory modality full stop; and (C) SSD use is best modelled as a perceptual process. Different theorists are variously committed to these distinct claims. More importantly, evaluating A, B, or C for rejection depends on distinct evidence of difference between SSD use and (A) each natural modality, (B) any modality, and (C) perceptual processing. I argue that even if the offered evidence of difference for A–C is granted, Auvray and Deroy’s advocated rejections are not entailed; it remains to be shown that the identified differences undermine the appropriate use of the corresponding models.

2021
Vol 11 (1)
Author(s):
Jacques Pesnot Lerousseau
Gabriel Arnold
Malika Auvray

Abstract: Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on whether sensory substitution is visual or auditory/tactile in nature, over the past decade the idea has emerged that it reflects a mixture of both. To investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device that translates visual images into sounds. In addition, participants' auditory abilities and their phenomenology were measured. Our study revealed that, after training, processes shared with vision were involved when participants were asked to identify sounds, as their performance in sound identification was influenced by simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing is rooted in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.


1969
Vol 29 (3)
pp. 827-834
Author(s):
Viola Mecke

This was a pilot study to determine whether centration, as a perceptual process, could serve as a criterion for differentiating between neurologically impaired and emotionally disturbed children. Centration was defined by Piaget as a prolonged involuntary attachment of a sensory modality to one part of a visual field that, in turn, affects motor behavior, producing effects on drawing tasks through a separation of designs or their parts coincident with distortions. The neurologically impaired children were seen as having basic difficulties with perception, whereas the emotionally disturbed children would have basic difficulties with intellection. Therefore, the centration-distortion error would characterize drawings of the neurologically impaired but not of the emotionally disturbed children. A sample of 12 children for each group was selected, with EEG records, psychological tests, and psychiatric interviews used as defining criteria. The hypothesis was upheld: each child in the neurologically impaired group made at least three of a possible four errors, while only one child in the emotionally disturbed group made a centration-distortion error.


2021
Author(s):
Katarzyna Ciesla
T. Wolak
A. Lorens
H. Skarżyński
A. Amedi

Abstract: Understanding speech in background noise is challenging, and wearing face masks during the COVID-19 pandemic made it even harder. We developed a multi-sensory setup, including a sensory substitution device (SSD), that can deliver speech simultaneously through audition and as vibrations on the fingertips. After a short training session, participants (16 out of 17) significantly improved in speech-in-noise understanding when the added vibrations corresponded to low frequencies extracted from the sentence. The level of understanding was maintained after training even when the loudness of the background noise doubled (mean group improvement of ~10 decibels). This result indicates that our solution can be very useful for hearing-impaired patients. Even more interestingly, the improvement transferred to a post-training situation in which the touch input was removed, showing that the setup can be applied to auditory rehabilitation in cochlear-implant users. Future wearable implementations of our SSD could also be used in real-life situations, such as talking on the phone or learning a foreign language. We discuss the basic-science implications of our findings, such as showing that even in adulthood a new pairing can be established between a neuronal computation (speech processing) and an atypical sensory modality (touch). Speech is indeed a multisensory signal, but it is learned from birth in an audio-visual context. Interestingly, adding lip-reading cues to speech in noise provides a benefit of the same or lower magnitude as the one we report here for adding touch.


Author(s):  
Vivian Mizrahi

In this chapter, I argue that perceptual media like air or water are imperceptible. I show that, despite their lack of phenomenological features, perceptual media crucially affect what we see by selecting what is perceptually available to the perceiver. In the second part of the chapter, I argue that mirrors are visual media like air, water, and glass. According to this account, mirrors are transparent and invisible and cannot therefore have a distinctive look or appearance. In the last part of the chapter, I extend the general account of perceptual media to the sense organs themselves by showing that perceptual media not only include external entities causally involved in the perceptual process but also comprise the perceptual system itself.


Author(s):
Michael J. Proulx
David J. Brown
Achille Pasqualotto

Vision is the default sensory modality for normal spatial navigation in humans. Touch is restricted to providing information about peripersonal space, whereas detecting and avoiding obstacles in extrapersonal space is key for efficient navigation. Hearing is restricted to the detection of objects that emit noise, yet many obstacles such as walls are silent. Sensory substitution devices provide a means of translating distal visual information into a form that visually impaired individuals can process through either touch or hearing. Here we will review findings from various sensory substitution systems for the processing of visual information that can be classified as what (object recognition), where (localization), and how (perception for action) processing. Different forms of sensory substitution excel at some tasks more than others. Spatial navigation brings together these different forms of information and provides a useful model for comparing sensory substitution systems, with important implications for rehabilitation, neuroanatomy, and theories of cognition.


2020
pp. 1-26
Author(s):
Louise P. Kirsch
Xavier Job
Malika Auvray

Abstract Sensory Substitution Devices (SSDs) are typically used to restore functionality of a sensory modality that has been lost, like vision for the blind, by recruiting another sensory modality such as touch or audition. Sensory substitution has given rise to many debates in psychology, neuroscience and philosophy regarding the nature of experience when using SSDs. Questions first arose as to whether the experience of sensory substitution is represented by the substituted information, the substituting information, or a multisensory combination of the two. More recently, parallels have been drawn between sensory substitution and synaesthesia, a rare condition in which individuals involuntarily experience a percept in one sensory or cognitive pathway when another one is stimulated. Here, we explore the efficacy of understanding sensory substitution as a form of ‘artificial synaesthesia’. We identify several problems with previous suggestions for a link between these two phenomena. Furthermore, we find that sensory substitution does not fulfil the essential criteria that characterise synaesthesia. We conclude that sensory substitution and synaesthesia are independent of each other and thus, the ‘artificial synaesthesia’ view of sensory substitution should be rejected.


2017
Vol 30 (6)
pp. 579-600
Author(s):
Gabriel Arnold
Jacques Pesnot-Lerousseau
Malika Auvray

Sensory substitution devices were developed in the context of perceptual rehabilitation and they aim at compensating one or several functions of a deficient sensory modality by converting stimuli that are normally accessed through this deficient sensory modality into stimuli accessible by another sensory modality. For instance, they can convert visual information into sounds or tactile stimuli. In this article, we review those studies that investigated the individual differences at the behavioural, neural, and phenomenological levels when using a sensory substitution device. We highlight how taking into account individual differences has consequences for the optimization and learning of sensory substitution devices. We also discuss the extent to which these studies allow a better understanding of the experience with sensory substitution devices, and in particular how the resulting experience is not akin to a single sensory modality. Rather, it should be conceived as a multisensory experience, involving both perceptual and cognitive processes, and emerging on each user’s pre-existing sensory and cognitive capacities.


2018
Vol 26 (3)
pp. 111-127
Author(s):
Weronika Kałwak
Magdalena Reuter
Marta Łukowska
Bartosz Majchrowicz
Michał Wierzchoń

Information that is normally accessed through a sensory modality (substituted modality, e.g., vision) is provided by sensory substitution devices (SSDs) through an alternative modality such as hearing or touch (i.e., substituting modality). SSDs usually support disabled users by replacing sensory inputs that have been lost, but they also offer a unique opportunity to study adaptation and flexibility in human perception. Current debates in sensory substitution (SS) literature focus mostly on its neural correlates and behavioural consequences. In particular, studies have demonstrated the neural plasticity of the visual brain regions that are activated by the substituting modality. Participants also adapt to using the devices for a broad spectrum of cognitive tasks that usually require sight. However, little is known about the SS experience. Also, there is no agreement on how the phenomenology of SS should be studied. Here, we offer guidelines for the methodology of studies investigating behavioural adaptation to SS and the effects of this adaptation on the subjective SS experience. We also discuss factors that may influence the results of SS studies: (1) the type of SSD, (2) the effects of training, (3) the role of sensory deprivation, (4) the role of the experimental environment, (5) the role of the tasks participants follow, and (6) the characteristics of the participants. In addition, we propose combining qualitative and quantitative methods and discuss how this should be achieved when studying the neural, behavioural, and experiential consequences of SS.


2019
Author(s):
AT Zai
S Cavé-Lopez
M Rolland
N Giret
RHR Hahnloser

Abstract: Sensory substitution is a promising therapeutic approach for replacing a missing or diseased sensory organ by translating inaccessible information into another sensory modality. Which aspects of substitution matter for subjects to accept an artificial sense, and for it to benefit their voluntary action repertoire? To obtain an evolutionary perspective on the affective valence implied in sensory substitution, we introduce an animal model of deaf songbirds. As a substitute for auditory feedback, we provide binary visual feedback. Deaf birds respond appetitively to song-contingent visual stimuli and skillfully adapt their songs to increase the rate of visual stimuli, showing that auditory feedback is not required for making targeted changes to a vocal repertoire. We find that visually instructed song learning is basal-ganglia dependent. Because hearing birds respond aversively to the same visual stimuli, sensory substitution reveals a bias for actions that elicit feedback to meet animals' manipulation drive, which has implications beyond rehabilitation.


Author(s):  
Jonardon Ganeri

The term ‘mind’ (mano) is used in a confused range of different and contradictory senses in the early Pāli canon. Buddhaghosa imposes order by distinguishing distinct cognitive modules, each with its proper domain of cognitive work. Early perception, the subliminal orienting and initial reception of a stimulus into the perceptual process, is the function of ‘mind-element’ (mano-dhātu), a low-level cognitive system. Late perception and working memory are the function of a high-level cognitive system, ‘mind-discrimination-element’ (mano-viññāṇa-dhātu). In deference to ancient Buddhist tradition, Buddhaghosa refers to six sense-modalities, the sixth being called ‘mind’ (mano). Just as each of the five types of sensory datum enters perceptual processing through a proprietary sense-door, so the objects of mind enter through a ‘mind-door’. However, this is not a sixth channel, a window onto a proprietary sort of mental object, but nothing other than the door gating projection into short-term working memory.

