Sensory Substitution and Augmentation: An Introduction

Author(s):  
Fiona Macpherson

In this essay I outline the main questions and the debates about sensory substitution and augmentation devices. I describe the two most studied modern sensory substitution devices (TVSS and the vOICe) and one sensory augmentation device (the feelSpace belt). I discuss whether use of these devices gives rise to new sensory experiences of objects or just new perceptual judgements about objects. Then, on the assumption that new sensory experiences are being had, I consider what sensory modality is operative—the substituted or the substituting one, or another altogether. I examine the evidence concerning whether the experiences had in sensory substitution are of a two- or a three-dimensional world, and about the nature of those experiences with respect to whether colour is represented in them. I consider whether there are any limits to what information or what experiences can be given via sensory substitution. And I discuss whether the results from sensory substitution experiments can be used to support certain theories of perception at the expense of rivals. Furthermore, the practical use of sensory substitution and augmentation devices is considered. Finally, I provide a brief overview of the rest of the essays that this volume contains and the host of further interesting issues that the authors consider and address.

Author(s):  
Brian Glenney

The Molyneux problem asks whether a newly sighted person could immediately identify, by sight alone, shapes previously known only by touch, such as cubes and spheres. Over three centuries ago, William Molyneux, a Fellow of the Royal Society living in Ireland, conveyed the problem in a series of letters to John Locke. Locke soon published the problem, together with Molyneux's own 'not' answer, in the second edition of his famous work, An Essay Concerning Human Understanding. Molyneux reasoned that the newly sighted person would fail because they would have no way of knowing that the newly seen shapes were like the felt shapes; the feel of a cube's corner would not at all be like the look of a cube's corner. Many philosophers have agreed with Molyneux's 'not', arguing either that each sense produces concepts unique to it or that new sensory experiences, like those of newly sighted people, are too primitive to support the identification of three-dimensional shapes. Additionally, early experiments on subjects who had had cataracts surgically removed seemed to confirm Molyneux's supposition, as the newly sighted did not immediately identify shapes known to them by touch. More recent empirical work on cataract-surgery patients and newborns, and studies using technological innovations such as sensory substitution devices, suggests support for a 'yes' answer to the question, inspiring philosophical and psychological accounts of perception that explain how the newly sighted might succeed in recognizing three-dimensional spatial features by sight.


2021
Vol 11 (1)
Author(s):  
Jacques Pesnot Lerousseau
Gabriel Arnold
Malika Auvray

Abstract: Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on whether sensory substitution is visual or auditory/tactile in nature, over the past decade the idea has emerged that it reflects a mixture of both. To investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device that translates visual images into sounds. In addition, participants' auditory abilities and their phenomenology were measured. Our study revealed that, after training, processes shared with vision were involved when participants were asked to identify sounds: their performance in sound identification was influenced by the simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.
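The abstract does not specify the conversion scheme used; a minimal sketch of a typical visual-to-auditory mapping (as popularized by the vOICe, described in the introduction to this volume) scans the image column by column over time, assigns each row a fixed pitch (top rows high), and lets pixel brightness set loudness. All parameter values below are illustrative assumptions, not settings from the study:

```python
import numpy as np

def image_to_sound(image, duration=1.0, sr=8000, f_min=200.0, f_max=2000.0):
    """Convert a grayscale image (rows x cols, values in [0, 1]) to a mono
    waveform: columns are scanned left to right over `duration` seconds,
    each row carries a fixed sine frequency (top row = highest pitch),
    and pixel brightness sets that sine's amplitude."""
    rows, cols = image.shape
    # One frequency per row, log-spaced, reversed so row 0 (top) is highest
    freqs = np.logspace(np.log2(f_min), np.log2(f_max), rows, base=2.0)[::-1]
    samples_per_col = int(sr * duration / cols)
    t = np.arange(samples_per_col) / sr
    segments = []
    for c in range(cols):
        col = image[:, c]                                # row brightnesses
        tones = np.sin(2 * np.pi * freqs[:, None] * t)   # (rows, samples)
        segments.append(col @ tones)                     # brightness-weighted mix
    wave = np.concatenate(segments)
    peak = np.max(np.abs(wave))
    return wave / peak if peak > 0 else wave
```

A single bright pixel thus becomes a short pure tone whose onset time encodes its horizontal position and whose pitch encodes its vertical position.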


2019
Vol 374 (1787)
pp. 20190029
Author(s):  
Poortata Lalwani
David Brang

In synaesthesia, stimulation of one sensory modality evokes additional experiences in another modality (e.g. sounds evoking colours). Along with these cross-sensory experiences, there are several cognitive and perceptual differences between synaesthetes and non-synaesthetes. For example, synaesthetes demonstrate enhanced imagery, increased cortical excitability and greater perceptual sensitivity in the concurrent modality. Previous models suggest that synaesthesia results from increased connectivity between corresponding sensory regions or disinhibited feedback from higher cortical areas. While these models explain how one sense can evoke qualitative experiences in another, they fail to predict the broader phenotype of differences observed in synaesthetes. Here, we propose a novel model of synaesthesia based on the principles of stochastic resonance. Specifically, we hypothesize that synaesthetes have greater neural noise in sensory regions, which allows pre-existing multisensory pathways to elicit supra-threshold activation (i.e. synaesthetic experiences). The strengths of this model are (a) it predicts the broader cognitive and perceptual differences in synaesthetes, (b) it provides a unified framework linking developmental and induced synaesthesias, and (c) it explains why synaesthetic associations are inconsistent at onset but stabilize over time. We review research consistent with this model and propose future studies to test its limits. This article is part of a discussion meeting issue ‘Bridging senses: novel insights from synaesthesia’.
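Stochastic resonance, the principle behind this model, can be illustrated with a toy simulation: a periodic signal too weak to cross a detection threshold on its own produces threshold crossings once moderate noise is added. The amplitudes, noise level, and threshold below are arbitrary illustrative choices, not parameters from the model:

```python
import numpy as np

rng = np.random.default_rng(0)

def threshold_crossings(signal_amp, noise_sd, threshold=1.0, n=10_000):
    """Count samples where a subthreshold sine plus Gaussian noise exceeds
    a fixed 'firing' threshold. With no noise the signal never crosses;
    moderate noise produces crossings clustered around the signal's peaks."""
    t = np.linspace(0.0, 20.0 * np.pi, n)            # ten signal cycles
    x = signal_amp * np.sin(t) + rng.normal(0.0, noise_sd, n)
    return int(np.sum(x > threshold))

quiet = threshold_crossings(0.8, 0.0)   # subthreshold signal alone: no crossings
noisy = threshold_crossings(0.8, 0.3)   # same signal with added noise: many crossings
```

On this view, a synaesthete's elevated sensory noise plays the role of `noise_sd`, lifting normally subthreshold cross-modal input above the activation threshold.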


2020
Vol 12 (2)
Author(s):  
Mary E Stewart
Natalie Russo
Jennifer Banks
Louisa Miller
Jacob A Burack

In this paper, we review evidence regarding differences in the types of sensory experiences of persons with ASD with respect to both unisensory and multisensory processing. We discuss self-reports and carer questionnaires, as well as perceptual processing differences found in the laboratory. Incoming information is processed through one or more of our senses, and fundamental differences in the processing of information from any sensory modality, or combination of sensory modalities, are likely to have cascading effects on the way individuals with ASD experience the world around them, effects that can have both positive and negative impacts on the quality of life of individuals with ASD.


Author(s):  
Yu. Shchehelska

This study elucidates the main communication issues that arise from audiences' interaction with different types of three-dimensional animation in augmented reality, and identifies the major varieties of 3D animation used by brands in AR and MR promotional campaigns.

The results of the study are based, in particular, on an analysis of AR cases from 27 commercial and social brands that used 3D animation for promotional purposes in 2010–2019.

It was found that promotional practice uses predefined 3D animation of the cartoon type, as well as predefined and procedural non-homomorphic photorealistic 3D animation. By contrast, procedural three-dimensional animation of the cartoon type, and photorealistic animation of people (whether predefined or procedural), was not used by any of the studied brands for promotional purposes.

The research revealed that photorealistic three-dimensional animation of people, primarily of the procedural type, is not used in promotion because it creates the most communication problems in the audience's interaction with it. Viewers' displeasure with animated people arises, first of all, from the "uncanny valley" effect, which is caused in particular by the technical difficulties of rendering human emotions and body language in 3D in real time (including proxemics in a virtual environment); of visually tracking human movements with an animated character; and of achieving naturalness and synchronicity in the speech of three-dimensional characters (above all, the content of the cues) and in the sound of their voice (its timbre, rhythm, and emotionality).

In general, from a technical point of view photorealistic non-homomorphic animation is today the most advanced type of 3D animation, which explains its popularity in the practice of promotional communications. Its predefined variety is most commonly used by automotive brands to create AR campaigns, whereas the procedural variety is used in MR campaigns, mainly for cosmetic and interior brands.

Predefined 3D animation of the cartoon type was used to promote commercial brands whose final consumers were, above all, children. However, some companies have used this kind of animation to create AR-based promotional events for adults, held in conjunction with holidays and symbolic dates. The popularity of cartoon-type 3D animation in promotion is explained, first of all, by the fact that people have a positive attitude towards cartoon characters at a subconscious level.

Key words: augmented reality (AR), mixed reality (MR), 3D animation, promotional communications.


Author(s):  
Mariacarla Memeo
Marco Jacono
Giulio Sandini
Luca Brayda

Abstract
Background: In this work, we present a novel sensory substitution system that enables users to learn three-dimensional digital information via touch when vision is unavailable. The system is based on a mouse-shaped device, designed to let a single finger jointly perceive local tactile height and inclination cues of arbitrary scalar fields. The device hosts a tactile actuator with three degrees of freedom: elevation, roll and pitch. The actuator approximates the tactile interaction with a plane tangential to the contact point between the finger and the field. Spatial information can therefore be mentally constructed by integrating local and global tactile cues: the actuator provides the local cues, whereas proprioception associated with the mouse motion provides the global cues.
Methods: The efficacy of the system was measured with a virtual/real object-matching task. Twenty-four gender- and age-matched participants (one blind and one blindfolded sighted group) matched a tactile dictionary of virtual objects with their 3D-printed solid versions. The virtual objects were explored in three conditions, i.e., with isolated or combined height and inclination cues. We investigated the performance and the mental cost of approximating virtual objects in these tactile conditions.
Results: In both groups, elevation and inclination cues alone were sufficient to recognize the tactile dictionary, but their combination worked best. The presence of elevation cues decreased a subjective estimate of mental effort. Interestingly, only visually impaired participants were aware of their performance and were able to predict it.
Conclusions: The proposed technology could facilitate the learning of science, engineering and mathematics in the absence of vision, and is also a low-cost industrial solution for making graphical user interfaces accessible to people with vision loss.
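The abstract does not give the mapping from the scalar field to the actuator's three degrees of freedom; one plausible sketch, assuming the field is sampled on a regular grid, derives elevation from the local height and the pitch/roll of the tangent plane from central finite differences of the field:

```python
import numpy as np

def tactile_cues(field, x, y, dx=1.0):
    """Return the three actuator cues at grid point (x, y) of a 2-D height
    field: elevation (local height) and the pitch/roll angles of the plane
    tangent to the surface there, estimated by central finite differences."""
    elevation = field[y, x]
    dh_dx = (field[y, x + 1] - field[y, x - 1]) / (2.0 * dx)  # slope along x
    dh_dy = (field[y + 1, x] - field[y - 1, x]) / (2.0 * dx)  # slope along y
    roll = np.arctan(dh_dx)    # tilt felt across the fingertip
    pitch = np.arctan(dh_dy)   # tilt felt along the fingertip
    return elevation, pitch, roll
```

As the mouse moves, re-evaluating these three cues at each new contact point yields the local signals that proprioception then integrates into a global shape percept.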


2021
Author(s):  
Katarzyna Ciesla
T. Wolak
A. Lorens
H. Skarżyński
A. Amedi

Abstract: Understanding speech in background noise is challenging, and wearing face masks during the COVID-19 pandemic made it even harder. We developed a multisensory setup, including a sensory substitution device (SSD) that can deliver speech simultaneously through audition and as vibrations on the fingertips. After a short training session, participants (16 out of 17) significantly improved in speech-in-noise understanding when the added vibrations corresponded to low frequencies extracted from the sentence. The level of understanding was maintained after training even when the loudness of the background noise doubled (a mean group improvement of ~10 decibels). This result indicates that our solution can be very useful for hearing-impaired patients. Even more interestingly, the improvement transferred to a post-training situation in which the touch input was removed, showing that the setup can be applied to auditory rehabilitation in cochlear-implant users. Future wearable implementations of our SSD could also be used in real-life situations, such as talking on the phone or learning a foreign language. We discuss the basic-science implications of our findings; for example, we show that even in adulthood a new pairing can be established between a neuronal computation (speech processing) and an atypical sensory modality (touch). Speech is indeed a multisensory signal, but it is learned from birth in an audio-visual context. Interestingly, adding lip-reading cues to speech in noise provides a benefit of the same or smaller magnitude than the one we report here for adding touch.
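The abstract does not detail how the low frequencies were extracted from each sentence; a minimal sketch of one standard approach (an FFT-based low-pass filter, with an assumed 300 Hz cutoff) that could drive a vibrotactile actuator:

```python
import numpy as np

def lowpass_for_vibration(audio, sr, cutoff_hz=300.0):
    """Zero out all spectral components above `cutoff_hz` and return the
    reconstructed low-frequency signal, suitable for driving a fingertip
    vibration motor alongside the unmodified auditory stream."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
    spectrum[freqs > cutoff_hz] = 0.0     # discard everything above the cutoff
    return np.fft.irfft(spectrum, n=len(audio))
```

The cutoff is a hypothetical choice here; in practice it would be tuned to capture the speech fundamental frequency while staying within the actuator's vibrotactile response range.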


Author(s):  
Michael J. Proulx
David J. Brown
Achille Pasqualotto

Vision is the default sensory modality for normal spatial navigation in humans. Touch is restricted to providing information about peripersonal space, whereas detecting and avoiding obstacles in extrapersonal space is key for efficient navigation. Hearing is restricted to the detection of objects that emit noise, yet many obstacles such as walls are silent. Sensory substitution devices provide a means of translating distal visual information into a form that visually impaired individuals can process through either touch or hearing. Here we will review findings from various sensory substitution systems for the processing of visual information that can be classified as what (object recognition), where (localization), and how (perception for action) processing. Different forms of sensory substitution excel at some tasks more than others. Spatial navigation brings together these different forms of information and provides a useful model for comparing sensory substitution systems, with important implications for rehabilitation, neuroanatomy, and theories of cognition.


2010
Vol 10 (04)
pp. 531-544
Author(s):  
Florian Dramas
Simon J. Thorpe
Christophe Jouffrais

Although artificial vision systems could potentially provide very useful input to assistive devices for blind people, such devices are rarely used outside of laboratory experiments. Many current systems attempt to reproduce the visual image via an alternative sensory modality (often auditory or somatosensory), but this dominant "scoreboard" approach is often difficult for the user to interpret. Here, we propose to offload the recognition problem onto a separate image processing system that then provides the user with just the essential information about the location of objects in the surrounding environment. Specifically, we show that a bio-inspired image processing algorithm (SpikeNet) can robustly, precisely, and rapidly recognize and locate key objects not only in the image but also in space, if the objects are in a stereoscopic field of view. In addition, the bio-inspired algorithm allows real-time calculation of optic flow. We therefore propose that this system, coupled with a restitution interface allowing localization in space (i.e. three-dimensional virtual sound synthesis), could be used to restore essential visuomotor behaviors such as grasping desired objects and navigating (finding directions, avoiding obstacles) in unknown environments.


2020
pp. 1-26
Author(s):  
Louise P. Kirsch
Xavier Job
Malika Auvray

Abstract: Sensory Substitution Devices (SSDs) are typically used to restore functionality of a sensory modality that has been lost, like vision for the blind, by recruiting another sensory modality such as touch or audition. Sensory substitution has given rise to many debates in psychology, neuroscience and philosophy regarding the nature of experience when using SSDs. Questions first arose as to whether the experience of sensory substitution is represented by the substituted information, the substituting information, or a multisensory combination of the two. More recently, parallels have been drawn between sensory substitution and synaesthesia, a rare condition in which individuals involuntarily experience a percept in one sensory or cognitive pathway when another one is stimulated. Here, we explore the efficacy of understanding sensory substitution as a form of ‘artificial synaesthesia’. We identify several problems with previous suggestions for a link between these two phenomena. Furthermore, we find that sensory substitution does not fulfil the essential criteria that characterise synaesthesia. We conclude that sensory substitution and synaesthesia are independent of each other, and thus the ‘artificial synaesthesia’ view of sensory substitution should be rejected.

