Temporal recalibration of vision

2010 ◽  
Vol 278 (1705) ◽  
pp. 535-538 ◽  
Author(s):  
Derek H. Arnold ◽  
Kielan Yarrow

Our sense of relative timing is malleable. For instance, visual signals can be made to seem synchronous with earlier sounds following prolonged exposure to an environment wherein auditory signals precede visual ones. Similarly, actions can be made to seem to precede their own consequences if an artificial delay is imposed for a period, and then removed. Here, we show that our sense of relative timing for combinations of visual changes is similarly pliant. We find that direction reversals can be made to seem synchronous with unusually early colour changes after prolonged exposure to a stimulus wherein colour changes precede direction changes. The opposite effect is induced by prolonged exposure to colour changes that lag direction changes. Our data are consistent with the proposal that our sense of timing for changes encoded by distinct sensory mechanisms can adjust, at least to some degree, to the prevailing environment. Moreover, they reveal that visual analyses of colour and motion are sufficiently independent for this to occur.

2021 ◽  
Author(s):  
Kyuto Uno ◽  
Kazuhiko Yokosawa

Audiovisual temporal recalibration refers to a shift in the point of subjective simultaneity (PSS) between audio and visual signals triggered by prolonged exposure to asynchronies between these signals. Previous research indicated that the spatial proximity of audiovisual signals can determine which pairs of signals are temporally recalibrated when multiple events compete for recalibration. Here we show that temporal recalibration is also modulated by an observer’s assumption that the audiovisual signals originate from the same unitary event (“unity assumption”). Participants were shown alternating face photos and voices of a male and a female speaker. These stimuli were presented equally spaced in time, and the voices were presented monaurally through headphones, such that no spatiotemporal grouping was implied for these stimuli. There were two conditions for the stimulus sequence in the adaptation phase: one in which a face photo always preceded its corresponding voice within each pairing of audiovisual stimuli (i.e., multiple repetitions of the sequence: female face – female voice – male face – male voice), and another in which the corresponding voice always preceded its face photo. We found that the PSS between these audiovisual signals shifted towards the temporal order in which the same-gender face–voice pairs had been presented. The results show that the unity assumption between face photos and voices affects temporal recalibration, indicating the possibility that the brain selectively recalibrates the asynchronies of audiovisual signals that are considered to originate from the same unitary event in a cluttered environment.
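A PSS of the kind measured here is usually obtained by fitting a psychometric function to judgments collected at a range of stimulus onset asynchronies (SOAs) and reading off its central point; the recalibration effect is then the shift in that fitted value between adaptation conditions. The Python sketch below illustrates one common variant (a cumulative-Gaussian fit to temporal-order judgments); the data values and function names are made up for illustration and are not from this study.

```python
# Minimal sketch: estimating a point of subjective simultaneity (PSS)
# from temporal-order judgments with a cumulative-Gaussian model.
# All data values below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa, pss, sigma):
    """P('visual first') as a function of SOA (ms); positive SOA = vision leads."""
    return norm.cdf(soa, loc=pss, scale=sigma)

soas = np.array([-240.0, -120.0, -60.0, 0.0, 60.0, 120.0, 240.0])  # ms
p_visual_first = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.98])

(pss, sigma), _ = curve_fit(cum_gauss, soas, p_visual_first, p0=[0.0, 80.0])
print(f"PSS = {pss:.1f} ms, sigma = {sigma:.1f} ms")
# Temporal recalibration appears as a shift in the fitted PSS after
# adaptation, towards the temporal order of the adapted stimulus pair.
```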


2016 ◽  
Vol 24 (2) ◽  
pp. 416-422 ◽  
Author(s):  
Melisa Menceloglu ◽  
Marcia Grabowecky ◽  
Satoru Suzuki

1973 ◽  
Vol 25 (2) ◽  
pp. 201-206 ◽  
Author(s):  
A. F. Sanders ◽  
A. H. Wertheim

Seven subjects were used in an experiment on the relation between signal modality and the effect of foreperiod (FP) duration on RT. With visual signals the usually reported systematic increase of RT as a function of FP duration (1, 5 and 15 s) was confirmed; with auditory signals no difference was found between FPs of 1 and 5 s, while the effect at 15 s was equivalent to that found at 5 s with the visual signal. The results suggest that, besides factors such as time uncertainty, the FP effect is also largely dependent on the arousing quality of the signal.


2016 ◽  
Vol 16 (12) ◽  
pp. 149 ◽  
Author(s):  
Melisa Menceloglu ◽  
Marcia Grabowecky ◽  
Satoru Suzuki

2021 ◽  
Author(s):  
Mate Aller ◽  
Heidi Solberg Økland ◽  
Lucy J MacGregor ◽  
Helen Blank ◽  
Matthew H. Davis

Speech perception in noisy environments is enhanced by seeing facial movements of communication partners. However, the neural mechanisms by which audio and visual speech are combined are not fully understood. We explored phase locking to auditory and visual signals in MEG recordings from 14 human participants (6 female) who reported words from single spoken sentences. We manipulated the acoustic clarity and the visual speech signals such that critical speech information was present in auditory, visual or both modalities. MEG coherence analysis revealed that both auditory and visual speech envelopes (auditory amplitude modulations and lip aperture changes) were phase-locked to 2–6 Hz brain responses in auditory and visual cortex, consistent with entrainment to syllable-rate components. Partial coherence analysis was used to separate neural responses to correlated audio-visual signals and showed non-zero phase locking to the auditory envelope in occipital cortex during audio-visual (AV) speech. Furthermore, phase locking to auditory signals in visual cortex was enhanced for AV speech compared to audio-only (AO) speech that was matched for intelligibility. Conversely, auditory regions of the superior temporal gyrus (STG) did not show above-chance partial coherence with visual speech signals during AV conditions, but did show partial coherence in visual-only (VO) conditions. Hence, visual speech enabled stronger phase locking to auditory signals in visual areas, whereas phase locking to visual speech in auditory regions occurred only during silent lip-reading. Differences in these cross-modal interactions between auditory and visual speech signals are interpreted in line with cross-modal predictive mechanisms during speech perception.
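Partial coherence, as used here, measures the frequency-specific coupling between a neural signal and one speech envelope after removing the component that is linearly predictable from the other envelope. The sketch below implements the standard cross-spectral formula in Python with scipy; it is an illustration of the general technique, not the authors' MEG pipeline, and all signal names and parameters are invented.

```python
# Sketch: partial coherence between a neural signal (x) and the auditory
# envelope (y), controlling for the lip-aperture signal (z).
# Standard spectral formula; illustrative synthetic data only.
import numpy as np
from scipy.signal import csd

def partial_coherence(x, y, z, fs, nperseg=1024):
    """Partial coherence of x and y given z, per frequency bin."""
    f, Sxx = csd(x, x, fs=fs, nperseg=nperseg)
    _, Syy = csd(y, y, fs=fs, nperseg=nperseg)
    _, Szz = csd(z, z, fs=fs, nperseg=nperseg)
    _, Sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, Sxz = csd(x, z, fs=fs, nperseg=nperseg)
    _, Szy = csd(z, y, fs=fs, nperseg=nperseg)
    num = np.abs(Sxy - Sxz * Szy / Szz) ** 2
    den = (np.real(Sxx) - np.abs(Sxz) ** 2 / np.real(Szz)) * \
          (np.real(Syy) - np.abs(Szy) ** 2 / np.real(Szz))
    return f, num / den

# Toy example: a "neural" signal driven by a 4 Hz auditory envelope,
# plus a lip signal that is partly correlated with that envelope.
rng = np.random.default_rng(0)
fs = 250.0
t = np.arange(0, 120, 1 / fs)
env_aud = np.sin(2 * np.pi * 4 * t)
env_lip = 0.6 * env_aud + 0.4 * rng.standard_normal(t.size)
neural = 0.5 * env_aud + rng.standard_normal(t.size)
f, pcoh = partial_coherence(neural, env_aud, env_lip, fs)
print("partial coherence near 4 Hz:", float(pcoh[np.argmin(np.abs(f - 4))]))
```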


Author(s):  
Valeria C Caruso ◽  
Daniel S Pages ◽  
Marc A. Sommer ◽  
Jennifer M Groh

Stimulus locations are detected differently by different sensory systems, but ultimately they yield similar percepts and behavioral responses. How the brain transcends initial differences to compute similar codes is unclear. We quantitatively compared the reference frames of two sensory modalities, vision and audition, across three interconnected brain areas involved in generating saccades, namely the frontal eye fields (FEF), the lateral and medial intraparietal cortex (M/LIP), and the superior colliculus (SC). We recorded from single neurons in head-restrained monkeys performing auditory- and visually-guided saccades from variable initial fixation locations, and evaluated whether their receptive fields were better described as eye-centered, head-centered, or hybrid (i.e., not anchored uniquely to head or eye orientation). We found a progression of reference frames across areas and across time, with considerable hybrid-ness and persistent differences between modalities across most epochs and brain regions. For both modalities, the SC was more eye-centered than the FEF, which in turn was more eye-centered than the predominantly hybrid M/LIP. In all three areas, and in all temporal epochs from stimulus onset to movement, visual signals were more eye-centered than auditory signals. In the SC and FEF, auditory signals became more eye-centered at the time of the saccade than they were initially after stimulus onset, but only in the SC did the auditory signals become predominantly eye-centered at the time of the saccade. The results indicate that visual and auditory signals both undergo transformations, ultimately reaching the same final reference frame but via different dynamics across brain regions and time.
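A common way to quantify whether a neuron's spatial tuning is eye-centered or head-centered is to measure its tuning from two or more fixation positions and ask whether the tuning curves align better when target locations are expressed relative to the eye or relative to the head. The Python sketch below computes a simple correlation-based index along these lines; it illustrates the general logic, not the specific statistical analysis used in this study, and all names and numbers are hypothetical.

```python
# Sketch: correlation-based reference-frame index for a single neuron.
# Compare tuning curves measured from two fixation positions after aligning
# target locations in head-centered vs eye-centered coordinates.
import numpy as np

def reference_frame_index(resp_fix_a, resp_fix_b, shift_bins):
    """resp_fix_a/b: firing rates over head-centered target-location bins,
    measured from two fixations separated by `shift_bins` bins."""
    # Head-centered alignment: compare responses at the same world location
    r_head = np.corrcoef(resp_fix_a, resp_fix_b)[0, 1]
    # Eye-centered alignment: shift by the gaze displacement before comparing
    r_eye = np.corrcoef(resp_fix_a[:-shift_bins], resp_fix_b[shift_bins:])[0, 1]
    # Positive index -> more eye-centered; negative -> more head-centered
    return (r_eye - r_head) / (abs(r_eye) + abs(r_head))

# Toy neuron whose receptive field moves with the eyes (eye-centered):
locs = np.arange(-40, 41, 5)                      # target locations (deg)
gauss = lambda center: np.exp(-(locs - center) ** 2 / (2 * 15.0 ** 2))
resp_fix_left = gauss(0)       # fixation at 0 deg, RF peak at 0 deg
resp_fix_right = gauss(20)     # fixation shifted +20 deg, RF follows the eyes
print(reference_frame_index(resp_fix_left, resp_fix_right, shift_bins=4))
```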


2015 ◽  
Vol 28 (3-4) ◽  
pp. 351-370 ◽  
Author(s):  
Hao Tam Ho ◽  
Emily Orchard-Mills ◽  
...  

Following prolonged exposure to audiovisual asynchrony, an observer’s point of subjective simultaneity (PSS) shifts in the direction of the leading modality. It has been debated whether other sensory pairings, such as vision and touch, lead to a similar temporal recalibration, and if so, whether the internal timing mechanism underlying visuotactile lag adaptation is centralised or distributed. To address these questions, we adapted observers to vision- and tactile-leading visuotactile asynchrony on either their left or right hand side in different blocks. In one test condition, participants performed a simultaneity judgment on the adapted side (unilateral) and in another they performed a simultaneity judgment on the non-adapted side (contralateral). In a third condition, participants adapted concurrently to equal and opposite asynchronies on each side and were tested randomly on either hand (bilateral opposed). Results from the first two conditions show that observers recalibrate to visuotactile asynchronies, and that the recalibration transfers to the non-adapted side. These findings suggest a centralised recalibration mechanism not linked to the adapted side and predict no recalibration for the bilateral opposed condition, assuming the adaptation effects were equal on each side. This was confirmed in the group of participants who adapted to vision- and tactile-leading asynchrony on the right and left hand side, respectively. However, the other group (vision-leading on the left and tactile-leading on the right) did show a recalibration effect, suggesting a distributed mechanism. We discuss these findings in terms of a hybrid model that assumes the co-existence of centralised and distributed timing mechanisms.


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Therese Lennert ◽  
Soheila Samiee ◽  
Sylvain Baillet

The brain naturally resolves the challenge of integrating auditory and visual signals produced by the same event despite different physical propagation speeds and neural processing latencies. Temporal recalibration manifests in human perception to realign incoming signals across the senses. Recent behavioral studies show it is a fast-acting phenomenon, relying on the most recent exposure to audiovisual asynchrony. Here we show that the physiological mechanism of rapid, context-dependent recalibration builds on interdependent pre-stimulus cortical rhythms in sensory brain regions. Using magnetoencephalography, we demonstrate that individual recalibration behavior is related to subject-specific properties of fast oscillations (>35 Hz) nested within a slower alpha rhythm (8–12 Hz) in auditory cortex. We also show that the asynchrony of a previously presented audiovisual stimulus pair alters the preferred coupling phase of these fast oscillations along the alpha cycle, with a resulting phase-shift amounting to the temporal recalibration observed behaviorally. These findings suggest that cross-frequency coupled oscillations contribute to forming unified percepts across senses.
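The coupling of fast (>35 Hz) activity to a particular phase of the alpha cycle is the kind of relationship usually quantified with phase-amplitude coupling measures. The Python sketch below shows one standard recipe: band-pass the signal in the two bands, extract alpha phase and fast-oscillation amplitude with Hilbert transforms, and take the amplitude-weighted circular mean of the alpha phase as the preferred coupling phase. It is a generic illustration with synthetic data, not the MEG pipeline used in this study, and the filter bands are assumptions.

```python
# Sketch: preferred alpha phase of fast-oscillation amplitude
# (phase-amplitude coupling) via band-pass filtering + Hilbert transforms.
# Generic recipe with a synthetic signal; not the study's analysis code.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(sig, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, sig)

def preferred_coupling_phase(sig, fs, slow=(8, 12), fast=(35, 60)):
    """Circular mean of slow-band phase, weighted by fast-band amplitude."""
    phase = np.angle(hilbert(bandpass(sig, *slow, fs)))
    amp = np.abs(hilbert(bandpass(sig, *fast, fs)))
    vector = np.sum(amp * np.exp(1j * phase)) / np.sum(amp)
    return np.angle(vector), np.abs(vector)  # preferred phase, coupling strength

# Synthetic signal: 45 Hz bursts whose amplitude peaks near the alpha trough
rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
alpha = np.cos(2 * np.pi * 10 * t)
fast_osc = (1 - alpha) * 0.3 * np.cos(2 * np.pi * 45 * t)
sig = alpha + fast_osc + 0.1 * rng.standard_normal(t.size)
phi, strength = preferred_coupling_phase(sig, fs)
print(f"preferred phase = {phi:.2f} rad, coupling strength = {strength:.2f}")
```

A shift in this preferred phase between contexts (for example, after an audio-leading versus a video-leading stimulus pair) would correspond to the phase-shift the authors relate to behavioral recalibration.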


2011 ◽  
Vol 57 (2) ◽  
pp. 197-207 ◽  
Author(s):  
Emma C. Siddall ◽  
Nicola M. Marples

Many aposematic insect species advertise their toxicity to potential predators using olfactory and auditory signals, in addition to visual signals, to produce a multimodal warning display. The olfactory signals in these displays may have interesting effects, such as eliciting innate avoidance against novel colored prey, or improving learning and memory of defended prey. However, little is known about the effects of such ancillary signals when they are auditory rather than olfactory. The few studies that have investigated this question have provided conflicting results. The current study sought to clarify and extend understanding of the effects of prey auditory signals on avian predator responses. The domestic chick Gallus gallus domesticus was used as a model avian predator to examine how the defensive buzzing sound of a bumblebee Bombus terrestris affected the chick’s innate avoidance behavior, and the learning and memory of prey avoidance. The results demonstrate that the buzzing sound had no effect on the predator’s responses to unpalatable aposematically colored crumbs, suggesting that the agitated buzzing of B. terrestris may provide no additional protection from avian predators.

