The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals

Perception ◽  
2017 ◽  
Vol 46 (12) ◽  
pp. 1356-1370 ◽  
Author(s):  
Stefano Targher ◽  
Rocco Micciolo ◽  
Valeria Occelli ◽  
Massimiliano Zampini

Recent findings have shown that sounds improve visual detection in low-vision individuals when the audiovisual pairs of stimuli are presented simultaneously and from the same spatial position. The present study aims to investigate the temporal aspects of the previously reported audiovisual enhancement effect. Low-vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously with or before the visual stimulus (i.e., SOAs of 0, 100, 250, and 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind, the visual stimulus (i.e., SOAs of 0, ±250, and ±400 ms). The visual detection enhancement was reduced in magnitude and limited to the synchronous condition and to the condition in which the sound was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study suggests that audiovisual interaction in low-vision individuals is strongly modulated by top-down mechanisms.
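Detection performance in a yes/no task of this kind is conventionally summarized with the sensitivity index d′ from signal detection theory, so that a genuine detection enhancement can be separated from a mere shift in response bias. A minimal sketch follows; the trial counts and the per-condition comparison are hypothetical illustrations, not the study's data.

```python
# Minimal sketch: quantifying visual detection in a yes/no task with
# signal detection theory (d'), computed separately per SOA condition.
# All counts below are hypothetical placeholders.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction to avoid infinite z-scores at rates of 0 or 1."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# Hypothetical counts: visual-only vs. audiovisual (sound leading by 250 ms)
print(d_prime(hits=28, misses=22, false_alarms=6, correct_rejections=44))  # V only
print(d_prime(hits=38, misses=12, false_alarms=7, correct_rejections=43))  # AV, SOA 250 ms
```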

2012 ◽  
Vol 25 (0) ◽  
pp. 175
Author(s):  
Stefano Targher ◽  
Valeria Occelli ◽  
Massimiliano Zampini

Our recent findings have shown that sounds improve visual detection in low-vision individuals when the audiovisual pairs are presented simultaneously. The present study aims to investigate possible temporal aspects of the audiovisual enhancement effect that we have previously reported. Low-vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either in isolation or together with an auditory stimulus at different SOAs. In the first experiment, in which the sound always led the visual stimulus, there was a significant visual detection enhancement even when the visual stimulus was delayed by 400 ms. However, the visual detection improvement was reduced in the second experiment, in which the sound could randomly lead or lag the visual stimulus: a significant enhancement was found only when the audiovisual stimuli were synchronized. Taken together, the results of the present study suggest that high-level associations between modalities might modulate audiovisual interactions in low-vision individuals.


2012 ◽  
Vol 25 (0) ◽  
pp. 24
Author(s):  
Roberto Cecere ◽  
Benjamin De Haas ◽  
Harriett Cullen ◽  
Jon Driver ◽  
Vincenzo Romei

There is converging evidence that the duration of an auditory event can affect the perceived duration of a co-occurring visual event. When a brief visual stimulus is accompanied by a longer auditory stimulus, the perceived visual duration stretches. If this reflects a genuine prolongation of visual stimulus perception, it should result in enhanced perception of non-temporal visual stimulus qualities. To test this hypothesis, in a temporal two-alternative forced-choice task, 28 participants were asked to indicate whether a short (∼24 ms), peri-threshold visual stimulus was presented in the first or in the second of two consecutive displays. Each display was accompanied by a sound of equal or longer duration (36, 48, 60, 72, 84, 96, or 190 ms) than the visual stimulus. As a control condition, visual stimuli of different durations (matching the auditory stimulus durations) were presented alone. We predicted that visual detection would improve as a function of sound duration. Moreover, if the expected cross-modal effect reflects sustained visual perception, it should positively correlate with the improvement observed for genuinely longer visual stimuli. Results showed that detection sensitivity (d′) for the 24 ms visual stimulus was significantly enhanced when paired with longer auditory stimuli ranging from 60 to 96 ms in duration. The visual detection performance dropped to baseline levels with 190 ms sounds. Crucially, the enhancement for auditory durations of 60–96 ms significantly correlated with the d′ enhancement for visual stimuli lasting 60–96 ms in the control condition. We conclude that the duration of co-occurring auditory stimuli not only influences the perceived duration of visual stimuli but genuinely sustains visual perception.
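The key inferential step here is the across-participant correlation between the audiovisual d′ enhancement and the enhancement observed for genuinely longer visual stimuli. A toy sketch of that analysis, with simulated per-participant gains standing in for the real data:

```python
# Sketch of the correlation analysis described above. Per-participant
# d' gains are simulated; in the study these would come from the
# audiovisual condition and the visual-only control condition.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 28  # participants, as in the abstract
av_gain = rng.normal(0.4, 0.2, n)                  # hypothetical AV d' enhancement
ctrl_gain = 0.8 * av_gain + rng.normal(0, 0.1, n)  # hypothetical control enhancement

r, p = pearsonr(av_gain, ctrl_gain)
print(f"r = {r:.2f}, p = {p:.4f}")  # a positive r supports sustained visual perception
```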


2020 ◽  
Vol 82 (7) ◽  
pp. 3490-3506
Author(s):  
Jonathan Tong ◽  
Lux Li ◽  
Patrick Bruns ◽  
Brigitte Röder

According to the Bayesian framework of multisensory integration, audiovisual stimuli associated with a stronger prior belief that they share a common cause (i.e., causal prior) are predicted to result in a greater degree of perceptual binding and therefore greater audiovisual integration. In the present psychophysical study, we systematically manipulated the causal prior while keeping sensory evidence constant. We paired auditory and visual stimuli during an association phase to be spatiotemporally either congruent or incongruent, with the goal of driving the causal prior in opposite directions for different audiovisual pairs. Following this association phase, every pairwise combination of the auditory and visual stimuli was tested in a typical ventriloquism-effect (VE) paradigm. The size of the VE (i.e., the shift of auditory localization towards the spatially discrepant visual stimulus) indicated the degree of multisensory integration. Results showed that exposure to an audiovisual pairing as spatiotemporally congruent compared to incongruent resulted in a larger subsequent VE (Experiment 1). This effect was further confirmed in a second VE paradigm, where the congruent and the incongruent visual stimuli flanked the auditory stimulus, and a VE in the direction of the congruent visual stimulus was shown (Experiment 2). Since the unisensory reliabilities for the auditory or visual components did not change after the association phase, the observed effects are likely due to changes in multisensory binding by association learning. As suggested by Bayesian theories of multisensory processing, our findings support the existence of crossmodal causal priors that are flexibly shaped by experience in a changing world.
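The Bayesian framework the authors invoke can be made concrete with the standard causal-inference model of multisensory perception (Körding et al., 2007): the causal prior p(C = 1) weighs a fused, reliability-weighted estimate against independent unisensory estimates. The sketch below is a generic illustration with assumed parameter values, not the paper's model fit; it shows how raising the causal prior alone, with sensory evidence held fixed, enlarges the predicted ventriloquism shift.

```python
# Sketch of Bayesian causal inference with model averaging. All
# parameter values (sigmas, locations, priors) are illustrative.
import numpy as np

def ventriloquism_estimate(x_v, x_a, sigma_v, sigma_a, sigma_p, p_common):
    """Model-averaged auditory location estimate given one visual and
    one auditory sample, a zero-mean spatial prior (sigma_p), and a
    causal prior p_common = p(C=1)."""
    # Likelihood of the two samples under a common cause (C = 1)
    var1 = (sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2
            + sigma_a**2 * sigma_p**2)
    like1 = np.exp(-0.5 * ((x_v - x_a)**2 * sigma_p**2
                           + x_v**2 * sigma_a**2
                           + x_a**2 * sigma_v**2) / var1) \
            / (2 * np.pi * np.sqrt(var1))
    # Likelihood under independent causes (C = 2)
    like2 = (np.exp(-0.5 * x_v**2 / (sigma_v**2 + sigma_p**2))
             / np.sqrt(2 * np.pi * (sigma_v**2 + sigma_p**2))) \
          * (np.exp(-0.5 * x_a**2 / (sigma_a**2 + sigma_p**2))
             / np.sqrt(2 * np.pi * (sigma_a**2 + sigma_p**2)))
    post_c1 = p_common * like1 / (p_common * like1 + (1 - p_common) * like2)
    # Optimal auditory estimates under each causal structure
    s_c1 = (x_v / sigma_v**2 + x_a / sigma_a**2) \
           / (1 / sigma_v**2 + 1 / sigma_a**2 + 1 / sigma_p**2)
    s_c2 = (x_a / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_p**2)
    return post_c1 * s_c1 + (1 - post_c1) * s_c2  # model averaging

# Same sensory evidence, two causal priors: the stronger prior belief in
# a common cause yields a larger shift towards the visual location (VE).
for p in (0.8, 0.2):
    print(p, ventriloquism_estimate(x_v=10.0, x_a=0.0, sigma_v=2.0,
                                    sigma_a=8.0, sigma_p=20.0, p_common=p))
```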


2021 ◽  
Author(s):  
Niall Gavin ◽  
David McGovern ◽  
Rebecca Hirst

The sound-induced flash illusion (SIFI) occurs when a rapidly presented visual stimulus is accompanied by two auditory stimuli, creating the illusory percept of two visual stimuli. While much research has focused on how the temporal proximity of the audiovisual stimuli affects susceptibility to the illusion, comparatively little has investigated the impact of spatial manipulations. Here, we assessed whether manipulating the eccentricity of the visual flash stimuli altered the properties of the temporal binding window associated with the SIFI. Twenty participants reported whether they perceived one or two flashes presented concurrently with one or two beeps. Visual stimuli were presented at one of four retinal eccentricities (2.5, 5, 7.5, or 10 degrees below fixation), and the audiovisual stimuli were separated by one of eight stimulus onset asynchronies. In keeping with previous findings, increasing the stimulus onset asynchrony between the auditory and visual stimuli led to a marked decrease in susceptibility to the illusion, allowing us to estimate the width and amplitude of the temporal binding window. However, varying the eccentricity of the visual stimulus had no effect on either the width or the peak amplitude of the temporal binding window, with a similar pattern of results observed for both the "fission" and "fusion" variants of the illusion. Thus, spatial manipulations of the audiovisual stimuli used to elicit the SIFI appear to have a weaker effect on the integration of sensory signals than temporal manipulations, a finding with implications for neuroanatomical models of multisensory integration.
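The width and amplitude of a temporal binding window are typically estimated by fitting a Gaussian to illusion susceptibility as a function of SOA. A minimal sketch of that fit; the response rates below are hypothetical, not the study's data.

```python
# Sketch: estimate the temporal binding window by fitting a Gaussian to
# the proportion of illusory-flash reports across SOAs. Data are
# hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def gauss(soa, amplitude, mu, sigma, baseline):
    return baseline + amplitude * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

soas = np.array([-300, -200, -100, -50, 50, 100, 200, 300])  # ms
p_illusion = np.array([0.15, 0.30, 0.55, 0.70, 0.65, 0.50, 0.25, 0.10])

params, _ = curve_fit(gauss, soas, p_illusion, p0=[0.6, 0.0, 100.0, 0.1])
amplitude, mu, sigma, baseline = params
print(f"peak amplitude = {amplitude:.2f}, window width (sigma) = {sigma:.0f} ms")
```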


Author(s):  
Pavlo Bazilinskyy ◽  
Joost de Winter

Objective: This study was designed to replicate past research concerning reaction times to audiovisual stimuli with different stimulus onset asynchrony (SOA) using a large sample of crowdsourcing respondents. Background: Research has shown that reaction times are fastest when an auditory and a visual stimulus are presented simultaneously and that SOA causes an increase in reaction time, this increase being dependent on stimulus intensity. Research on audiovisual SOA has been conducted with small numbers of participants. Method: Participants (N = 1,823) each performed 176 reaction time trials consisting of 29 SOA levels and three visual intensity levels, using CrowdFlower, with a compensation of US$0.20 per participant. Results were verified with a local Web-in-lab study (N = 34). Results: The results replicated past research, with a V shape of mean reaction time as a function of SOA, the V shape being stronger for lower-intensity visual stimuli. The level of SOA affected mainly the right side of the reaction time distribution, whereas the fastest 5% was hardly affected. The variability of reaction times was higher for the crowdsourcing study than for the Web-in-lab study. Conclusion: Crowdsourcing is a promising medium for reaction time research that involves small temporal differences in stimulus presentation. The observed effects of SOA can be explained by an independent-channels mechanism and also by some participants not perceiving the auditory or visual stimulus, hardware variability, misinterpretation of the task instructions, or lapses in attention. Application: The obtained knowledge on the distribution of reaction times may benefit the design of warning systems.
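The independent-channels mechanism mentioned in the conclusion can be illustrated with a simple race simulation: each modality is processed in its own channel, and the response follows whichever channel finishes first. This reproduces the V-shaped mean RT as a function of SOA. The latency distributions below are assumptions for illustration, not fitted values from the study.

```python
# Sketch of an independent-channels (race) model: simulate per-channel
# finishing times at each SOA and take the minimum. Negative SOA means
# the sound leads; times are relative to the first stimulus onset.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
soas = np.arange(-300, 301, 50)  # ms

for soa in soas:
    rt_a = rng.normal(180, 40, n) + max(soa, 0)   # auditory channel (delayed if visual leads)
    rt_v = rng.normal(220, 50, n) + max(-soa, 0)  # visual channel (delayed if sound leads)
    rt = np.minimum(rt_a, rt_v)                   # winner of the race triggers the response
    print(f"SOA {soa:+4d} ms: mean RT = {rt.mean():.0f} ms")  # V shape, minimum near SOA 0
```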


1984 ◽  
Vol 59 (1) ◽  
pp. 212-214
Author(s):  
H. W. Craver

The reliability of an attention-focusing technique was assessed for 12 subjects over 4 sessions. Subjects' thought intrusions were counted while they were focusing on either visual or auditory stimuli. Digital temperatures were recorded and an experimental-situation questionnaire was administered. This technique provides extremely reliable self-reports across the sessions. The total number of intrusions was higher for the auditory stimulus than for the visual stimulus. The study's relevance to assessing self-monitoring techniques such as meditation is discussed.


2021 ◽  
pp. 1-12
Author(s):  
Anna Borgolte ◽  
Ahmad Bransi ◽  
Johanna Seifert ◽  
Sermin Toto ◽  
Gregor R. Szycik ◽  
...  

Synaesthesia is a multimodal phenomenon in which the activation of one sensory modality leads to an involuntary additional experience in another sensory modality. To date, normal multisensory processing has hardly been investigated in synaesthetes. In the present study, we examine processes of audiovisual separation in synaesthesia using a simultaneity judgement task. Subjects were asked to indicate whether an acoustic and a visual stimulus occurred simultaneously or not. Stimulus onset asynchronies (SOAs) as well as the temporal order of the stimuli were systematically varied. Our results demonstrate that synaesthetes are better at separating auditory and visual events than control subjects, but only when vision leads.
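A vision-leads-only effect of this kind is commonly quantified by fitting the proportion of "simultaneous" responses against SOA with separate window widths on the vision-leading and audio-leading sides. A hypothetical sketch follows (sign convention assumed here: negative SOA = vision leads); the data points are placeholders.

```python
# Sketch: fit an asymmetric simultaneity window with separate widths for
# vision-leading and audio-leading SOAs. Data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def asym_window(soa, amp, mu, sigma_vl, sigma_al):
    sigma = np.where(soa < mu, sigma_vl, sigma_al)  # vision-leads vs. audio-leads side
    return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])  # ms (V - A)
p_simul = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.60, 0.35, 0.20])

(amp, mu, s_vl, s_al), _ = curve_fit(asym_window, soas, p_simul,
                                     p0=[0.9, 0.0, 150.0, 200.0])
print(f"window width: vision leads {s_vl:.0f} ms, audio leads {s_al:.0f} ms")
```

A narrower vision-leading width for synaesthetes than for controls would correspond to the group difference described above.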


2018 ◽  
Vol 7 ◽  
pp. 172-177
Author(s):  
Łukasz Tyburcy ◽  
Małgorzata Plechawska-Wójcik

The paper describes the results of a comparison of reaction times to visual and auditory stimuli using EEG evoked potentials. Two experiments were conducted: the first explored reaction times to a visual stimulus and the second to an auditory stimulus. Analysis of the data showed that visual stimuli evoked faster reactions than auditory stimuli.
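A reaction-time comparison of this kind reduces to a paired test on per-subject mean RTs. A minimal sketch with simulated values (the direction of the simulated effect simply mirrors the paper's reported result):

```python
# Sketch: paired comparison of per-subject mean reaction times to visual
# vs. auditory stimuli. All values are simulated placeholders.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
rt_visual = rng.normal(350, 60, 30)               # hypothetical mean RTs (ms)
rt_auditory = rt_visual + rng.normal(25, 40, 30)  # auditory slower, per the abstract
t, p = ttest_rel(rt_visual, rt_auditory)
print(f"t = {t:.2f}, p = {p:.4f}")
```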


1976 ◽  
Vol 43 (2) ◽  
pp. 487-493 ◽  
Author(s):  
Robert I. Bermant ◽  
Robert B. Welch

Subjects were exposed to a visual and to an auditory stimulus that differed spatially in laterality of origin. The subjects were observed for visual biasing of auditory localization (the momentary influence of a light on the spatially perceived location of a simultaneously presented sound) and for auditory aftereffect (a change in perceived location of a sound that persists over time and is measured after termination of the visual stimulus). A significant effect of visual stimulation on auditory localization was found only with the measure of bias. Bias was tested as a function of degree of visual-auditory separation (10/20/30°), eye position (straight-ahead/visual stimulus fixation), and position of visual stimulus relative to auditory stimulus (left/right). Only eye position proved statistically significant; straight-ahead eye position induced more bias than did fixation of the visual stimulus.

