Sensory substitution reveals a manipulation bias

2019 ◽  
Author(s):  
AT Zai ◽  
S Cavé-Lopez ◽  
M Rolland ◽  
N Giret ◽  
RHR Hahnloser

Abstract: Sensory substitution is a promising therapeutic approach for replacing a missing or diseased sensory organ by translating inaccessible information into another sensory modality. What aspects of substitution determine whether subjects accept an artificial sense and whether it benefits their voluntary action repertoire? To obtain an evolutionary perspective on the affective valence implied in sensory substitution, we introduce an animal model of deaf songbirds. As a substitute for auditory feedback, we provide binary visual feedback. Deaf birds respond appetitively to song-contingent visual stimuli and skillfully adapt their songs to increase the rate of visual stimuli, showing that auditory feedback is not required for making targeted changes to a vocal repertoire. We find that visually instructed song learning is basal-ganglia dependent. Because hearing birds respond aversively to the same visual stimuli, sensory substitution reveals a bias for actions that elicit feedback and thereby satisfy animals’ manipulation drive, which has implications beyond rehabilitation.

2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Anja T. Zai ◽  
Sophie Cavé-Lopez ◽  
Manon Rolland ◽  
Nicolas Giret ◽  
Richard H. R. Hahnloser

Abstract: Sensory substitution is a promising therapeutic approach for replacing a missing or diseased sensory organ by translating inaccessible information into another sensory modality. However, many substitution systems are not well accepted by subjects. To explore the effect of sensory substitution on voluntary action repertoires and their associated affective valence, we study deaf songbirds to which we provide visual feedback as a substitute for auditory feedback. Surprisingly, deaf birds respond appetitively to song-contingent binary visual stimuli. They skillfully adapt their songs to increase the rate of visual stimuli, showing that auditory feedback is not required for making targeted changes to vocal repertoires. We find that visually instructed song learning is basal-ganglia dependent. Because hearing birds respond aversively to the same visual stimuli, sensory substitution reveals a preference for actions that elicit sensory feedback over actions that do not, suggesting that substitution systems should be designed to exploit the drive to manipulate.
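For readers who want a concrete picture of the paradigm, the sketch below shows one way a song-contingent binary visual feedback loop could be implemented. The abstract states only that the feedback is binary and contingent on song; the pitch-threshold rule, function names, and numeric values here are illustrative assumptions, not the authors' actual protocol.

```python
# Minimal sketch of a song-contingent binary feedback loop (assumed pitch rule).
import numpy as np


def syllable_pitch(waveform: np.ndarray, sr: int) -> float:
    """Crude pitch estimate: peak of the magnitude spectrum (illustrative only)."""
    spectrum = np.abs(np.fft.rfft(waveform * np.hanning(len(waveform))))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sr)
    return float(freqs[np.argmax(spectrum)])


def binary_visual_feedback(waveform: np.ndarray, sr: int,
                           pitch_threshold_hz: float = 650.0) -> bool:
    """Return True (visual stimulus on) if the rendition exceeds the threshold."""
    return syllable_pitch(waveform, sr) >= pitch_threshold_hz
```

In such a loop, each detected rendition of the target syllable is scored, and the bird can raise the rate of visual stimuli only by shifting the targeted song feature, which is the adaptive change the abstract describes.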


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jacques Pesnot Lerousseau ◽  
Gabriel Arnold ◽  
Malika Auvray

Abstract: Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on the visual versus auditory or tactile nature of sensory substitution, over the last decade the idea has emerged that it reflects a mixture of both. To investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device that translates visual images into sounds. In addition, participants' auditory abilities and their phenomenologies were measured. Our study revealed that, after training, processes shared with vision were involved when participants were asked to identify sounds, as their performance in sound identification was influenced by the simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.
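The abstract does not detail the conversion device used, but many visual-to-auditory SSDs follow a column-scan scheme in which pixel row maps to tone frequency and brightness maps to loudness. The sketch below illustrates that generic scheme only; all parameter values are assumptions.

```python
# Generic visual-to-auditory conversion sketch (column scan, row -> frequency,
# brightness -> loudness). Parameters are illustrative, not the study's device.
import numpy as np


def image_to_soundscape(image: np.ndarray, sr: int = 22050,
                        col_duration: float = 0.05,
                        f_min: float = 500.0, f_max: float = 5000.0) -> np.ndarray:
    """image: 2D array (rows x cols) of brightness values in [0, 1]."""
    n_rows, n_cols = image.shape
    t = np.arange(int(sr * col_duration)) / sr
    freqs = np.geomspace(f_max, f_min, n_rows)   # top rows map to higher pitch
    columns = []
    for c in range(n_cols):
        tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
        columns.append((image[:, c:c + 1] * tones).sum(axis=0))
    return np.concatenate(columns)
```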


2021 ◽  
Author(s):  
Judith M. Varkevisser ◽  
Ralph Simon ◽  
Ezequiel Mendoza ◽  
Martin How ◽  
Idse van Hijlkema ◽  
...  

Abstract: Bird song and human speech are learned early in life, and in both cases engagement with live social tutors generally leads to better learning outcomes than passive audio-only exposure. Real-world tutor–tutee relations are normally not uni- but multimodal, and observations suggest that visual cues related to sound production might enhance vocal learning. We tested this hypothesis by pairing appropriate, colour-realistic, high frame-rate videos of a singing adult male zebra finch tutor with song playbacks and presenting these stimuli to juvenile zebra finches (Taeniopygia guttata). Juveniles exposed to song playbacks combined with video presentation of a singing bird approached the stimulus more often and spent more time close to it than juveniles exposed to audio playback only or audio playback combined with pixelated and time-reversed videos. However, higher engagement with the realistic audio–visual stimuli was not predictive of better song learning. Thus, although multimodality increased stimulus engagement, and although biologically relevant video content was more salient than colour- and movement-equivalent videos, the higher engagement with the realistic audio–visual stimuli did not lead to enhanced vocal learning. Whether the lack of three-dimensionality of a video tutor and/or the lack of meaningful social interaction make them less suitable for facilitating song learning than audio–visual exposure to a live tutor remains to be tested.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nienke B. Debats ◽  
Herbert Heuer ◽  
Christoph Kayser

Abstract: To organize the plethora of sensory signals from our environment into a coherent percept, our brain relies on the processes of multisensory integration and sensory recalibration. We here asked how visuo-proprioceptive integration and recalibration are shaped by the presence of more than one visual stimulus, hence paving the way to study multisensory perception under more naturalistic settings with multiple signals per sensory modality. We used a cursor-control task in which proprioceptive information on the endpoint of a reaching movement was complemented by two visual stimuli providing additional information on the movement endpoint. The visual stimuli were briefly shown, one synchronously with the hand reaching the movement endpoint, the other delayed. In Experiment 1, the judgments of hand movement endpoint revealed integration and recalibration biases oriented towards the position of the synchronous stimulus and away from the delayed one. In Experiment 2 we contrasted two alternative accounts: that only the temporally more proximal visual stimulus enters integration similar to a winner-takes-all process, or that the influences of both stimuli superpose. The proprioceptive biases revealed that integration—and likely also recalibration—are shaped by the superposed contributions of multiple stimuli rather than by only the most powerful individual one.
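The two accounts contrasted in Experiment 2 can be written down in reduced form: under a winner-takes-all account only the synchronous stimulus biases the endpoint judgment, whereas under superposition the influences of both visual stimuli add. The weights below are illustrative placeholders, not the fitted values from the study.

```python
# Reduced sketch of the two candidate accounts (illustrative weights).

def winner_takes_all(prop, vis_sync, vis_delayed, w_sync=0.4):
    # Only the temporally proximal (synchronous) stimulus biases the judgment.
    return prop + w_sync * (vis_sync - prop)


def superposition(prop, vis_sync, vis_delayed, w_sync=0.4, w_delayed=0.15):
    # Influences of both visual stimuli superpose on the proprioceptive estimate.
    return prop + w_sync * (vis_sync - prop) + w_delayed * (vis_delayed - prop)
```

The reported proprioceptive biases favoured the second form, that is, superposed contributions of multiple stimuli rather than a single dominant one.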


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Jie Zang ◽  
Shenquan Liu

The anterior forebrain pathway (AFP), a basal ganglia–dorsal forebrain circuit, significantly impacts birdsong, specifically in juvenile or deaf birds. Despite many physiological experiments supporting the AFP’s role in song production, the mechanism underlying it remains poorly understood. Using a computational model of the anterior forebrain pathway and the song premotor pathway, we examined the dynamic process and exact role of the AFP during song learning and distorted auditory feedback (DAF). Our simulation suggests that the AFP can adjust the premotor pathway structure and syllables based on its delayed input to the robust nucleus of the archistriatum (RA). The model also indicates that the adjustment of synaptic conductances in the song premotor pathway has two phases: normal phases, where the adjustment decreases with an increasing number of trials, and abnormal phases, where the adjustment remains stable or even increases. These two phases alternate, and their effect on birdsong depends on the specific structure of the AFP, which may be associated with auditory feedback. Furthermore, our model captured some characteristics shown in birdsong experiments, such as syllable pitch, intensity, and duration similar to those of real birds, and the highly abnormal features of syllables during DAF.
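The abstract does not give the model equations, but the core idea, a delayed AFP signal that perturbs RA and is consolidated into the HVC-to-RA conductances when it reduces the song error, can be sketched as a trial-by-trial update. Names and constants below are assumptions for illustration, not the authors' model.

```python
# Reduced, assumed sketch of delayed-AFP-driven conductance adjustment.
import numpy as np


def update_conductance(g: np.ndarray, premotor: np.ndarray,
                       afp_bias: np.ndarray, error: float,
                       lr: float = 0.01) -> np.ndarray:
    """One trial's adjustment of HVC->RA conductances.

    g: current conductances; premotor: HVC activity for the syllable;
    afp_bias: delayed AFP-driven perturbation of RA; error: song mismatch.
    """
    # Consolidate the AFP perturbation into g in proportion to how much it
    # reduced the error (negative sign: a smaller error retains the change).
    return g - lr * error * premotor * afp_bias
```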


2000 ◽  
Vol 84 (3) ◽  
pp. 1204-1223 ◽  
Author(s):  
Todd W. Troyer ◽  
Allison J. Doupe

Birdsong learning provides an ideal model system for studying temporally complex motor behavior. Guided by the well-characterized functional anatomy of the song system, we have constructed a computational model of the sensorimotor phase of song learning. Our model uses simple Hebbian and reinforcement learning rules and demonstrates the plausibility of a detailed set of hypotheses concerning sensory-motor interactions during song learning. The model focuses on the motor nuclei HVc and robust nucleus of the archistriatum (RA) of zebra finches and incorporates the long-standing hypothesis that a series of song nuclei, the Anterior Forebrain Pathway (AFP), plays an important role in comparing the bird's own vocalizations with a previously memorized song, or “template.” This “AFP comparison hypothesis” is challenged by the significant delay that would be experienced by presumptive auditory feedback signals processed in the AFP. We propose that the AFP does not directly evaluate auditory feedback, but instead, receives an internally generated prediction of the feedback signal corresponding to each vocal gesture, or song “syllable.” This prediction, or “efference copy,” is learned in HVc by associating premotor activity in RA-projecting HVc neurons with the resulting auditory feedback registered within AFP-projecting HVc neurons. We also demonstrate how negative feedback “adaptation” can be used to separate sensory and motor signals within HVc. The model predicts that motor signals recorded in the AFP during singing carry sensory information and that the primary role for auditory feedback during song learning is to maintain an accurate efference copy. The simplicity of the model suggests that associational efference copy learning may be a common strategy for overcoming feedback delay during sensorimotor learning.
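The efference-copy rule proposed in the abstract, associating premotor activity in RA-projecting HVc neurons with the auditory feedback registered in AFP-projecting HVc neurons, is essentially a Hebbian/delta association and can be sketched as follows. Variable names and the learning rate are illustrative, not taken from the paper's implementation.

```python
# Sketch of Hebbian efference-copy learning (illustrative, not the paper's code).
import numpy as np


def efference_copy_update(W: np.ndarray, premotor: np.ndarray,
                          feedback: np.ndarray, lr: float = 0.05) -> np.ndarray:
    """W maps a premotor pattern to a predicted feedback signal (efference copy).

    premotor: activity of RA-projecting HVc neurons for one syllable
    feedback: delayed auditory feedback registered for the same syllable
    """
    prediction = W @ premotor
    # Move the prediction toward the actual feedback so the AFP can be
    # driven by the copy rather than by the delayed feedback itself.
    return W + lr * np.outer(feedback - prediction, premotor)
```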


2021 ◽  
Author(s):  
Katarzyna Ciesla ◽  
T. Wolak ◽  
A. Lorens ◽  
H. Skarżyński ◽  
A. Amedi

Abstract: Understanding speech in background noise is challenging, and wearing face masks during the COVID-19 pandemic has made it even harder. We developed a multisensory setup, including a sensory substitution device (SSD), that can deliver speech simultaneously through audition and as vibrations on the fingertips. After a short training session, 16 out of 17 participants significantly improved their speech-in-noise understanding when the added vibrations corresponded to low frequencies extracted from the sentence. The level of understanding was maintained after training even when the loudness of the background noise doubled (mean group improvement of ~10 decibels). This result indicates that our solution can be very useful for hearing-impaired patients. Even more interestingly, the improvement transferred to a post-training situation in which the touch input was removed, suggesting that the setup can be applied to auditory rehabilitation in cochlear-implant users. Future wearable implementations of our SSD could also be used in real-life situations, such as talking on the phone or learning a foreign language. We discuss the basic-science implications of our findings, such as showing that even in adulthood a new pairing can be established between a neuronal computation (speech processing) and an atypical sensory modality (touch). Speech is indeed a multisensory signal, but it is learned from birth in an audio-visual context. Interestingly, adding lip-reading cues to speech in noise provides a benefit of the same or lower magnitude than the one we report here for adding touch.
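The abstract states that the fingertip vibrations corresponded to low frequencies extracted from the sentences. One simple way to obtain such a signal is to low-pass filter the speech waveform; the filter type and cutoff below are assumptions, not the authors' actual pipeline.

```python
# Assumed low-frequency extraction step for the vibrotactile channel.
import numpy as np
from scipy.signal import butter, filtfilt


def vibrotactile_signal(speech: np.ndarray, sr: int,
                        cutoff_hz: float = 250.0) -> np.ndarray:
    """Return a low-passed copy of the speech waveform to drive the tactors."""
    b, a = butter(N=4, Wn=cutoff_hz / (sr / 2), btype="low")
    return filtfilt(b, a, speech)
```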


Author(s):  
Michael J. Proulx ◽  
David J. Brown ◽  
Achille Pasqualotto

Vision is the default sensory modality for normal spatial navigation in humans. Touch is restricted to providing information about peripersonal space, whereas detecting and avoiding obstacles in extrapersonal space is key for efficient navigation. Hearing is restricted to the detection of objects that emit noise, yet many obstacles such as walls are silent. Sensory substitution devices provide a means of translating distal visual information into a form that visually impaired individuals can process through either touch or hearing. Here we will review findings from various sensory substitution systems for the processing of visual information that can be classified as what (object recognition), where (localization), and how (perception for action) processing. Different forms of sensory substitution excel at some tasks more than others. Spatial navigation brings together these different forms of information and provides a useful model for comparing sensory substitution systems, with important implications for rehabilitation, neuroanatomy, and theories of cognition.


1967 ◽  
Vol 10 (4) ◽  
pp. 865-875 ◽  
Author(s):  
Raymond S. Karlovich ◽  
James T. Graham

Twenty female subjects tapped on a tapping key to programmed visual pacing stimuli under synchronous auditory feedback, delayed auditory feedback, and decreased sensory feedback conditions, and also to programmed auditory pacing stimuli under synchronous visual feedback, delayed visual feedback, and decreased sensory feedback conditions. Cross-modality matching procedures were employed to equate the perceptual magnitudes of the auditory and visual stimuli. Pattern duration and tapping-key displacement variables were evaluated, and it was noted that the relative perceptual magnitudes of pacing and feedback stimuli are an important factor determining the degree of alteration in key-tapping motor performance under delayed sensory feedback. The data also indicated that the increases in tapping intensity observed under delayed sensory feedback conditions were not due to the temporal distortion of the feedback but possibly to an absence of feedback at the moment of tapping.
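Delayed sensory feedback of the kind used here can be thought of as a simple delay line: each response is echoed back to the subject only after a fixed lag. Below is a minimal sketch with the lag expressed in discrete response frames; the study's actual delay values are not reproduced here.

```python
# Minimal delay-line sketch of delayed sensory feedback (illustrative).
from collections import deque


class DelayedFeedback:
    """Echo each response back only after a fixed number of frames."""

    def __init__(self, delay_frames: int):
        # Pre-fill with None so the first delay_frames responses get no feedback.
        self.buffer = deque([None] * delay_frames)

    def step(self, response):
        self.buffer.append(response)   # newest response in
        return self.buffer.popleft()   # response from delay_frames ago out
```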


2020 ◽  
pp. 1-26
Author(s):  
Louise P. Kirsch ◽  
Xavier Job ◽  
Malika Auvray

Abstract: Sensory Substitution Devices (SSDs) are typically used to restore functionality of a sensory modality that has been lost, like vision for the blind, by recruiting another sensory modality such as touch or audition. Sensory substitution has given rise to many debates in psychology, neuroscience and philosophy regarding the nature of experience when using SSDs. Questions first arose as to whether the experience of sensory substitution is represented by the substituted information, the substituting information, or a multisensory combination of the two. More recently, parallels have been drawn between sensory substitution and synaesthesia, a rare condition in which individuals involuntarily experience a percept in one sensory or cognitive pathway when another one is stimulated. Here, we explore the efficacy of understanding sensory substitution as a form of ‘artificial synaesthesia’. We identify several problems with previous suggestions for a link between these two phenomena. Furthermore, we find that sensory substitution does not fulfil the essential criteria that characterise synaesthesia. We conclude that sensory substitution and synaesthesia are independent of each other and thus, the ‘artificial synaesthesia’ view of sensory substitution should be rejected.

