Hemisphere-specific properties of the ventriloquism aftereffect

The Journal of the Acoustical Society of America ◽ 
2019 ◽ 
Vol 146 (2) ◽  
pp. EL177-EL183
Author(s):  
Norbert Kopčo ◽  
Peter Lokša ◽  
I-fan Lin ◽  
Jennifer Groh ◽  
Barbara Shinn-Cunningham

Author(s):  
Jean Vroomen ◽  
Paul Bertelson ◽  
Ilja Frissen ◽  
Beatrice De Gelder

Psychological Science ◽ 
2018 ◽ 
Vol 29 (6) ◽  
pp. 926-935 ◽  
Author(s):  
Christopher C. Berger ◽  
H. Henrik Ehrsson

Can what we imagine in our minds change how we perceive the world in the future? A continuous process of multisensory integration and recalibration is responsible for maintaining a correspondence between the senses (e.g., vision, touch, audition) and, ultimately, a stable and coherent perception of our environment. This process depends on the plasticity of our sensory systems. The so-called ventriloquism aftereffect, a shift in the perceived localization of sounds presented alone after repeated exposure to spatially mismatched auditory and visual stimuli, is a clear example of this type of plasticity in the audiovisual domain. In a series of six studies with 24 participants each, we investigated an imagery-induced ventriloquism aftereffect and found that imagining a visual stimulus elicits the same frequency-specific auditory aftereffect as actually seeing one. These results demonstrate that mental imagery can recalibrate the senses and induce the same cross-modal sensory plasticity as real sensory stimuli.
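
As a rough illustration of how such an aftereffect is typically quantified, the sketch below (illustrative numbers only, not the authors' data or analysis code) computes the pre-to-post shift in mean localization responses at an adapted versus a non-adapted sound frequency.

```python
# Illustrative sketch only (not the authors' data or analysis code):
# the ventriloquism aftereffect is quantified as the pre-to-post shift
# in mean sound-localization responses, computed separately for the
# adapted and a non-adapted sound frequency.
import numpy as np

rng = np.random.default_rng(0)
true_azimuth = 0.0      # deg; sound source straight ahead
visual_offset = 8.0     # deg; visual (or imagined) stimulus displaced rightward

# Hypothetical localization responses (deg) from 24 participants.
pre = rng.normal(true_azimuth, 3.0, size=24)
# After exposure, responses at the adapted frequency shift by a fraction
# of the audiovisual discrepancy; a non-adapted frequency does not shift.
post_adapted = rng.normal(true_azimuth + 0.4 * visual_offset, 3.0, size=24)
post_nonadapted = rng.normal(true_azimuth, 3.0, size=24)

print(f"aftereffect at adapted frequency:     {np.mean(post_adapted - pre):+.2f} deg")
print(f"aftereffect at non-adapted frequency: {np.mean(post_nonadapted - pre):+.2f} deg")
```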


2013 ◽  
Vol 2013 ◽  
pp. 1-17 ◽  
Author(s):  
Elisa Magosso ◽  
Filippo Cona ◽  
Mauro Ursino

Exposure to synchronous but spatially disparate auditory and visual stimuli produces a perceptual shift of sound location towards the visual stimulus (ventriloquism effect). After adaptation to a ventriloquism situation, an enduring sound shift is observed in the absence of the visual stimulus (ventriloquism aftereffect). Experimental studies report conflicting results on how the aftereffect generalizes across sound frequencies, ranging from an aftereffect confined to the frequency used during adaptation to one generalizing across several octaves. Here, we present an extension of a model of visual-auditory interaction we previously developed. The new model is able to simulate the ventriloquism effect and, via Hebbian learning rules, the ventriloquism aftereffect, and it can be used to investigate aftereffect generalization across frequencies. The model includes auditory neurons coding for both the spatial and spectral features of the auditory stimuli, mimicking properties of biological auditory neurons. The model suggests that different extents of aftereffect generalization across frequencies can be obtained by changing the intensity of the auditory stimulus, which induces different amounts of activation in the auditory layer. The model thus provides a coherent theoretical framework to explain the apparently contradictory results found in the literature. Model mechanisms and hypotheses are discussed in relation to neurophysiological and psychophysical data.
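
The following is a deliberately simplified sketch of this core idea, not the published network (all parameter values are arbitrary): a one-dimensional bank of frequency channels whose spatial mapping is shifted by a Hebbian-like rule, with stimulus intensity controlling how broadly activation, and hence the aftereffect, spreads across channels.

```python
# Simplified sketch of the model's core idea (not the published network;
# all parameters arbitrary): Hebbian-like updates driven by a visually
# shifted signal recalibrate the spatial mapping of a bank of frequency
# channels; stimulus intensity sets how broadly activation spreads.
import numpy as np

freqs = np.arange(10)             # frequency channels (arbitrary spectral axis)
offsets = np.zeros(len(freqs))    # learned spatial shift (deg) per channel

def adapt(train_chan, visual_shift, intensity, lr=0.2, n_trials=10):
    # Channels respond according to spectral distance from the adapting
    # sound; a more intense sound activates a broader band of channels.
    sigma_f = 0.3 + 1.5 * intensity
    activation = np.exp(-(freqs - train_chan) ** 2 / (2 * sigma_f ** 2))
    for _ in range(n_trials):
        # Hebbian-like rule: active channels are pulled toward the
        # visually signaled location.
        offsets[:] += lr * activation * (visual_shift - offsets)

adapt(train_chan=4, visual_shift=8.0, intensity=0.2)
print("low intensity :", np.round(offsets, 1))  # shift confined near channel 4
offsets[:] = 0.0
adapt(train_chan=4, visual_shift=8.0, intensity=1.0)
print("high intensity:", np.round(offsets, 1))  # shift generalizes across channels
```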


2021 ◽  
Author(s):  
Peter Lokša ◽ 
Norbert Kopčo

Background: Ventriloquism aftereffect (VAE), observed as a shift in the perceived locations of sounds after audiovisual stimulation, requires reference frame (RF) alignment, since hearing and vision encode space in different RFs (head-centered, HC, vs. eye-centered, EC). Experimental studies examining the RF of the VAE have found inconsistent results: a mixture of HC and EC RFs was observed for VAE induced in the central region, while a predominantly HC RF was observed in the periphery. Here, a computational model examines these inconsistencies, as well as a newly observed EC adaptation induced by spatially aligned ("AV-aligned") audiovisual stimuli. Methods: The model has two versions, each containing two additively combined components: a saccade-related component characterizing the adaptation in auditory-saccade responses, and an auditory space representation adapted by ventriloquism signals either in the HC RF (HC version) or in a combination of HC and EC RFs (HEC version). Results: The HEC model performed better than the HC model in the main simulation fitting all the data, while the HC model was more appropriate when only the AV-aligned adaptation data were simulated. Conclusion: Visual signals in a uniform mixed HC+EC RF are likely used to calibrate the auditory spatial representation, even after the EC-referenced auditory-saccade adaptation is accounted for.
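
A minimal sketch of the two model versions, under assumptions of my own (Gaussian spatial tuning of the adaptation, illustrative amplitudes; the function names are hypothetical): the predicted shift at a probe location is an additive combination of a constant saccade-related term and a ventriloquism term referenced in HC coordinates alone or in an HC+EC mixture.

```python
# Minimal sketch under assumptions of my own (Gaussian tuning,
# illustrative amplitudes; not the authors' implementation): the
# predicted aftereffect is a constant saccade-related term plus a
# ventriloquism term referenced head-centered (HC) or as an HC+EC mixture.
import numpy as np

def gauss(x, mu, sigma=10.0):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def predicted_shift(probe_az, fix_az, adapt_az, adapt_fix_az,
                    amp=5.0, saccade_bias=1.0, w_ec=0.0):
    """Predicted shift (deg) at probe_az while fixating fix_az, after
    adaptation at adapt_az while fixating adapt_fix_az.
    w_ec = 0 gives the HC version; 0 < w_ec < 1 gives the HEC version."""
    hc = gauss(probe_az, adapt_az)                          # head-centered tuning
    ec = gauss(probe_az - fix_az, adapt_az - adapt_fix_az)  # eye-centered tuning
    return saccade_bias + amp * ((1 - w_ec) * hc + w_ec * ec)  # additive combination

probes = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
# Adaptation at 0 deg while fixating 0 deg; test while fixating 10 deg.
print("HC :", np.round(predicted_shift(probes, 10.0, 0.0, 0.0), 2))
print("HEC:", np.round(predicted_shift(probes, 10.0, 0.0, 0.0, w_ec=0.5), 2))
```

With w_ec = 0 the predicted peak stays at the head-centered adaptation site regardless of gaze; the EC term moves part of the peak with the eyes, which is what would distinguish the two versions when fit to data.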


2021 ◽  
Author(s):  
Hame Park ◽  
Christoph Kayser

Whether two sensory cues interact during perceptual judgments depends on their immediate properties but, as suggested by Bayesian models, also on the observer's a priori belief that they originate from a common source. While in many experiments this a priori belief is considered fixed, in real life it must adapt to the momentary context or environment. To understand the adaptive nature of human multisensory perception, we investigated the context sensitivity of spatial judgments in a ventriloquism paradigm. We exposed observers to audiovisual stimuli whose discrepancy varied over either a wider (±46°) or a narrower (±26°) range, and hypothesized that exposure to the wider range of discrepancies would facilitate multisensory binding by increasing participants' a priori belief in a common source for a given discrepancy. Our data support this hypothesis by revealing an enhanced integration (ventriloquism) bias in the wider context, which was echoed in Bayesian causal inference models fit to participants' data: these assigned a stronger a priori integration tendency in the wider context. Interestingly, the immediate ventriloquism aftereffect, a multisensory response bias obtained following a multisensory test trial, was not affected by the contextual manipulation, although participants' confidence in their spatial judgments differed between contexts for both integration and recalibration trials. These results highlight the context sensitivity of multisensory binding and suggest that the immediate ventriloquism aftereffect is not a purely sensory-level consequence of the multisensory integration process.
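
A minimal sketch of such a causal-inference observer (the standard model-averaging formulation with illustrative parameters, not the authors' fitted model) shows how a larger a priori common-source probability, the quantity argued to increase in the wider context, produces a stronger ventriloquism bias:

```python
# Minimal sketch of a model-averaging causal-inference observer
# (standard formulation; parameters illustrative, not the fitted model):
# a larger a priori common-source probability p_common yields a stronger
# ventriloquism bias of the auditory estimate toward the visual cue.
import numpy as np

def norm_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def auditory_estimate(xa, xv, p_common, sa=8.0, sv=2.0, sp=30.0):
    """Auditory location estimate for cues xa (auditory) and xv (visual),
    with sensory noise sa, sv and a zero-mean spatial prior of width sp."""
    # Likelihood of the cue pair under one common source (location integrated out)
    var_c = sa**2 * sv**2 + sa**2 * sp**2 + sv**2 * sp**2
    like_c = np.exp(-((xa - xv)**2 * sp**2 + xa**2 * sv**2 + xv**2 * sa**2)
                    / (2 * var_c)) / (2 * np.pi * np.sqrt(var_c))
    # ...and under two independent sources
    like_i = (norm_pdf(xa, 0.0, np.sqrt(sa**2 + sp**2))
              * norm_pdf(xv, 0.0, np.sqrt(sv**2 + sp**2)))
    post_c = like_c * p_common / (like_c * p_common + like_i * (1 - p_common))
    # Optimal fused and segregated estimates, combined by model averaging
    s_fused = (xa / sa**2 + xv / sv**2) / (1 / sa**2 + 1 / sv**2 + 1 / sp**2)
    s_alone = (xa / sa**2) / (1 / sa**2 + 1 / sp**2)
    return post_c * s_fused + (1 - post_c) * s_alone

xa, xv = 0.0, 15.0  # sound straight ahead, light 15 deg to the right
for p in (0.3, 0.8):
    print(f"p_common = {p}: auditory estimate = {auditory_estimate(xa, xv, p):+.1f} deg")
```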


NeuroImage ◽  
2017 ◽  
Vol 162 ◽  
pp. 257-268 ◽  
Author(s):  
Björn Zierul ◽  
Brigitte Röder ◽  
Claus Tempelmann ◽  
Patrick Bruns ◽  
Toemme Noesselt

Neural Computation ◽ 
2007 ◽ 
Vol 19 (12) ◽  
pp. 3335-3355 ◽  
Author(s):  
Yoshiyuki Sato ◽  
Taro Toyoizumi ◽  
Kazuyuki Aihara

We study a computational model of audiovisual integration in which a Bayesian observer localizes visual and auditory stimuli without presuming that the audiovisual information is bound. The observer adopts the maximum a posteriori approach to estimate the physically delivered position or timing of the presented stimuli while simultaneously judging whether they come from the same source. Several experimental results on the perception of spatial unity and the ventriloquism effect can be explained comprehensively if the subjects in these experiments are regarded as Bayesian observers trying to accurately locate the stimulus. Moreover, by adaptively changing the Bayesian observer's internal representation with experience, we show that our model reproduces the perceived spatial shifts caused by audiovisual adaptation, known as the ventriloquism aftereffect.
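
The sketch below paraphrases the adaptation idea under assumptions of my own (a single scalar bias on the auditory mapping, illustrative noise widths, binding assumed during exposure; not the paper's implementation): repeatedly remapping audition toward the fused audiovisual estimate leaves a residual shift when the sound is later presented alone.

```python
# Paraphrased sketch of the adaptation mechanism under assumptions of my
# own (a single scalar bias on the auditory mapping, illustrative noise
# widths, binding assumed during exposure; not the paper's implementation).
sa, sv = 8.0, 2.0   # auditory / visual likelihood widths (deg)
bias = 0.0          # current shift of the internal auditory mapping
eta = 0.02          # adaptation rate

def perceived_auditory(xa, xv=None):
    """Auditory location estimate; fused with vision when a visual cue is present."""
    a = xa + bias
    if xv is None:
        return a
    return (a / sa**2 + xv / sv**2) / (1 / sa**2 + 1 / sv**2)

# Exposure phase: sound at 0 deg repeatedly paired with a light at +10 deg.
for _ in range(50):
    est = perceived_auditory(0.0, 10.0)
    bias += eta * (est - perceived_auditory(0.0))  # pull audition toward the fused percept

print(f"learned bias: {bias:+.2f} deg")
print(f"sound alone at 0 deg is now heard at {perceived_auditory(0.0):+.2f} deg")
```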


Experimental Brain Research ◽ 
2016 ◽ 
Vol 235 (2) ◽  
pp. 585-595 ◽  
Author(s):  
Adam K. Bosen ◽  
Justin T. Fleming ◽  
Paul D. Allen ◽  
William E. O'Neill ◽ 
Gary D. Paige
