Temporal expectation weights visual signals over auditory signals

2016 ◽  
Vol 24 (2) ◽  
pp. 416-422 ◽  
Author(s):  
Melisa Menceloglu ◽  
Marcia Grabowecky ◽  
Satoru Suzuki

2016 ◽  
Vol 16 (12) ◽  
pp. 149
Author(s):  
Melisa Menceloglu ◽  
Marcia Grabowecky ◽  
Satoru Suzuki

1973 ◽  
Vol 25 (2) ◽  
pp. 201-206 ◽  
Author(s):  
A. F. Sanders ◽  
A. H. Wertheim

Seven subjects were used in an experiment on the relation between signal modality and the effect of foreperiod (FP) duration on RT. With visual signals, the usually reported systematic increase of RT as a function of FP duration (1, 5 and 15 s) was confirmed; with auditory signals, no difference was found between FPs of 1 and 5 s, while the effect at 15 s was equivalent to that found at 5 s with the visual signal. The results suggest that, besides factors such as time uncertainty, the FP effect is also largely dependent on the arousing quality of the signal.


2021 ◽  
Author(s):  
Mate Aller ◽  
Heidi Solberg Okland ◽  
Lucy J MacGregor ◽  
Helen Blank ◽  
Matthew H. Davis

Speech perception in noisy environments is enhanced by seeing facial movements of communication partners. However, the neural mechanisms by which audio and visual speech are combined are not fully understood. We explored phase locking to auditory and visual signals in MEG recordings from 14 human participants (6 female) who reported words from single spoken sentences. We manipulated the acoustic clarity and visual speech signals such that critical speech information was present in auditory, visual, or both modalities. MEG coherence analysis revealed that both auditory and visual speech envelopes (auditory amplitude modulations and lip aperture changes) were phase-locked to 2-6 Hz brain responses in auditory and visual cortex, consistent with entrainment to syllable-rate components. Partial coherence analysis was used to separate neural responses to correlated audio-visual signals and showed non-zero phase locking to the auditory envelope in occipital cortex during audio-visual (AV) speech. Furthermore, phase locking to auditory signals in visual cortex was enhanced for AV speech compared with audio-only (AO) speech that was matched for intelligibility. Conversely, auditory regions of the superior temporal gyrus (STG) did not show above-chance partial coherence with visual speech signals during AV conditions, but did show partial coherence in visual-only (VO) conditions. Hence, visual speech enabled stronger phase locking to auditory signals in visual areas, whereas phase locking to visual speech in auditory regions occurred only during silent lip-reading. Differences in these cross-modal interactions between auditory and visual speech signals are interpreted in line with cross-modal predictive mechanisms during speech perception.
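Partial coherence of this kind can be computed from the auto- and cross-spectra of the three signals (neural response, auditory envelope, lip aperture). The following is a minimal sketch assuming scipy is available; the sampling rate, segment length, signal names, and synthetic data are illustrative assumptions, not the authors' analysis pipeline.

```python
# Minimal sketch: coherence and partial coherence between an MEG source time
# course and the auditory speech envelope, partialling out the correlated
# lip-aperture signal. All parameter values here are assumptions.
import numpy as np
from scipy.signal import welch, csd

fs = 250          # assumed sampling rate (Hz)
nperseg = 2 * fs  # ~2 s Welch segments -> 0.5 Hz frequency resolution

def coherence(x, y):
    """Magnitude-squared coherence between two signals."""
    f, sxx = welch(x, fs=fs, nperseg=nperseg)
    _, syy = welch(y, fs=fs, nperseg=nperseg)
    _, sxy = csd(x, y, fs=fs, nperseg=nperseg)
    return f, np.abs(sxy) ** 2 / (sxx * syy)

def partial_coherence(x, y, z):
    """Coherence between x and y after removing the part predictable from z."""
    f, sxx = welch(x, fs=fs, nperseg=nperseg)
    _, syy = welch(y, fs=fs, nperseg=nperseg)
    _, szz = welch(z, fs=fs, nperseg=nperseg)
    _, sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, sxz = csd(x, z, fs=fs, nperseg=nperseg)
    _, syz = csd(y, z, fs=fs, nperseg=nperseg)
    num = np.abs(sxy - sxz * np.conj(syz) / szz) ** 2
    den = (sxx - np.abs(sxz) ** 2 / szz) * (syy - np.abs(syz) ** 2 / szz)
    return f, num / den

# Hypothetical example: phase locking of an occipital source to the auditory
# envelope, controlling for the correlated lip-aperture signal.
rng = np.random.default_rng(0)
occipital, aud_env, lip = rng.standard_normal((3, fs * 60))
f, pcoh = partial_coherence(occipital, aud_env, lip)
band = (f >= 2) & (f <= 6)
print("2-6 Hz partial coherence:", pcoh[band].mean())
```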


2010 ◽  
Vol 278 (1705) ◽  
pp. 535-538 ◽  
Author(s):  
Derek H. Arnold ◽  
Kielan Yarrow

Our sense of relative timing is malleable. For instance, visual signals can be made to seem synchronous with earlier sounds following prolonged exposure to an environment wherein auditory signals precede visual ones. Similarly, actions can be made to seem to precede their own consequences if an artificial delay is imposed for a period, and then removed. Here, we show that our sense of relative timing for combinations of visual changes is similarly pliant. We find that direction reversals can be made to seem synchronous with unusually early colour changes after prolonged exposure to a stimulus wherein colour changes precede direction changes. The opposite effect is induced by prolonged exposure to colour changes that lag direction changes. Our data are consistent with the proposal that our sense of timing for changes encoded by distinct sensory mechanisms can adjust, at least to some degree, to the prevailing environment. Moreover, they reveal that visual analyses of colour and motion are sufficiently independent for this to occur.
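A shift in perceived timing of this kind is commonly quantified by fitting a psychometric function to timing judgments collected before and after adaptation and comparing the fitted points of subjective simultaneity (PSS). The sketch below illustrates that logic with a cumulative-Gaussian fit; the SOAs, response proportions, and function names are hypothetical and are not data or code from the study above.

```python
# Minimal sketch: estimate the PSS shift induced by adaptation by fitting a
# cumulative Gaussian to hypothetical temporal-order judgments.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa, pss, sigma):
    """P('colour change first') as a function of SOA (positive = colour leads)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

soas = np.array([-200, -130, -70, -30, 0, 30, 70, 130, 200])  # ms

# Hypothetical proportions of 'colour first' responses in two adaptation phases.
p_baseline = np.array([0.02, 0.08, 0.20, 0.40, 0.55, 0.72, 0.88, 0.95, 0.99])
p_adapted = np.array([0.05, 0.15, 0.35, 0.60, 0.75, 0.88, 0.95, 0.98, 1.00])

popt_base, _ = curve_fit(cum_gauss, soas, p_baseline, p0=(0.0, 80.0))
popt_adapt, _ = curve_fit(cum_gauss, soas, p_adapted, p0=(0.0, 80.0))

# The change in the fitted PSS estimates how far perceived simultaneity shifted.
print("PSS shift (ms):", popt_adapt[0] - popt_base[0])
```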


Author(s):  
Valeria C Caruso ◽  
Daniel S Pages ◽  
Marc A. Sommer ◽  
Jennifer M Groh

Stimulus locations are detected differently by different sensory systems, but ultimately they yield similar percepts and behavioral responses. How the brain transcends initial differences to compute similar codes is unclear. We quantitatively compared the reference frames of two sensory modalities, vision and audition, across three interconnected brain areas involved in generating saccades, namely the frontal eye fields (FEF), lateral and medial parietal cortex (M/LIP), and superior colliculus (SC). We recorded from single neurons in head-restrained monkeys performing auditory- and visually-guided saccades from variable initial fixation locations, and evaluated whether their receptive fields were better described as eye-centered, head-centered, or hybrid (i.e., not anchored uniquely to head or eye orientation). We found a progression of reference frames across areas and across time, with a considerable degree of hybrid coding and persistent differences between modalities across most epochs and brain regions. For both modalities, the SC was more eye-centered than the FEF, which in turn was more eye-centered than the predominantly hybrid M/LIP. In all three areas and temporal epochs from stimulus onset to movement, visual signals were more eye-centered than auditory signals. In the SC and FEF, auditory signals became more eye-centered at the time of the saccade than they were initially after stimulus onset, but only in the SC at the time of the saccade did the auditory signals become predominantly eye-centered. The results indicate that visual and auditory signals both undergo transformations, ultimately reaching the same final reference frame but via different dynamics across brain regions and time.
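One common way to make the eye-centered/head-centered/hybrid comparison concrete is to ask whether tuning curves measured from different initial fixation positions line up better when target locations are expressed in head-centered or in eye-centered coordinates. The sketch below illustrates that logic; the data layout, binning, correlation threshold, and function names are assumptions for illustration, not the authors' analysis code.

```python
# Minimal sketch: classify a neuron's reference frame by comparing how well its
# tuning curves from different fixations align in head vs. eye coordinates.
import numpy as np
from itertools import combinations

def tuning_curve(rates, locations, bins):
    """Mean firing rate per target-location bin (NaN where a bin is empty)."""
    idx = np.digitize(locations, bins)
    return np.array([rates[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(bins))])

def alignment(trials, frame, bins):
    """Mean pairwise correlation of tuning curves across fixation positions.
    'head' uses target location as measured; 'eye' uses target minus eye position."""
    curves = []
    for eye_pos, (rates, targets) in trials.items():
        loc = targets - eye_pos if frame == "eye" else targets
        curves.append(tuning_curve(rates, loc, bins))
    corrs = []
    for a, b in combinations(curves, 2):
        ok = ~np.isnan(a) & ~np.isnan(b)
        corrs.append(np.corrcoef(a[ok], b[ok])[0, 1])
    return float(np.mean(corrs))

def classify(trials, bins, margin=0.2):
    """Label a cell eye-centered, head-centered, or hybrid from alignment scores."""
    eye = alignment(trials, "eye", bins)
    head = alignment(trials, "head", bins)
    if eye - head > margin:
        return "eye-centered"
    if head - eye > margin:
        return "head-centered"
    return "hybrid"

# Hypothetical usage: `trials` maps each initial eye position (deg) to arrays of
# spike rates and head-centered target locations (deg) for that fixation, e.g.
# trials = {-12.0: (rates_a, targets_a), 0.0: (rates_b, targets_b), 12.0: (rates_c, targets_c)}
# print(classify(trials, bins=np.arange(-36, 37, 6)))
```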


2011 ◽  
Vol 57 (2) ◽  
pp. 197-207 ◽  
Author(s):  
Emma C. Siddall ◽  
Nicola M. Marples

Abstract Many aposematic insect species advertise their toxicity to potential predators using olfactory and auditory signals, in addition to visual signals, to produce a multimodal warning display. The olfactory signals in these displays may have interesting effects, such as eliciting innate avoidance against novel colored prey, or improving learning and memory of defended prey. However, little is known about the effects of such ancillary signals when they are auditory rather than olfactory. The few studies that have investigated this question have provided conflicting results. The current study sought to clarify and extend understanding of the effects of prey auditory signals on avian predator responses. The domestic chick Gallus gallus domesticus was used as a model avian predator to examine how the defensive buzzing sound of a bumblebee Bombus terrestris affected the chick’s innate avoidance behavior, and the learning and memory of prey avoidance. The results demonstrate that the buzzing sound had no effect on the predator’s responses to unpalatable aposematically colored crumbs, suggesting that the agitated buzzing of B. terrestris may provide no additional protection from avian predators.


2019 ◽  
Author(s):  
Valeria C. Caruso ◽  
Daniel S. Pages ◽  
Marc A. Sommer ◽  
Jennifer M. Groh

Stimulus locations are detected differently by different sensory systems, but ultimately they yield similar percepts and behavioral responses. How the brain transcends initial differences to compute similar codes is unclear. We quantitatively compared the reference frames of two sensory modalities, vision and audition, across three interconnected brain areas involved in generating saccades, namely the frontal eye fields (FEF), lateral and medial parietal cortex (M/LIP), and superior colliculus (SC). We recorded from single neurons in head-restrained monkeys performing auditory- and visually-guided saccades from variable initial fixation locations, and evaluated whether their receptive fields were better described as eye-centered, head-centered, or hybrid (i.e., not anchored uniquely to head or eye orientation). We found a progression of reference frames across areas and across time, with a considerable degree of hybrid coding and persistent differences between modalities across most epochs and brain regions. For both modalities, the SC was more eye-centered than the FEF, which in turn was more eye-centered than the predominantly hybrid M/LIP. In all three areas and temporal epochs from stimulus onset to movement, visual signals were more eye-centered than auditory signals. In the SC and FEF, auditory signals became more eye-centered at the time of the saccade than they were initially after stimulus onset, but only in the SC at the time of the saccade did the auditory signals become predominantly eye-centered. The results indicate that visual and auditory signals both undergo transformations, ultimately reaching the same final reference frame but via different dynamics across brain regions and time.

New and Noteworthy: Models for visual-auditory integration posit that visual signals are eye-centered throughout the brain, while auditory signals are converted from head-centered to eye-centered coordinates. We show instead that both modalities largely employ hybrid reference frames: neither fully head- nor eye-centered. Across three hubs of the oculomotor network (intraparietal cortex, frontal eye field and superior colliculus), visual and auditory signals evolve from hybrid to a common eye-centered format via different dynamics across brain areas and time.


1987 ◽  
Vol 57 (1) ◽  
pp. 35-55 ◽  
Author(s):  
M. F. Jay ◽  
D. L. Sparks

Based on the findings of the preceding paper, auditory and visual signals are known to be translated into common coordinates at the level of the superior colliculus (SC) and to share a motor circuit involved in the generation of saccadic eye movements. It is not known, however, whether the translation of sensory signals into motor coordinates occurs prior to or within the SC. Nor is it known in what coordinates auditory signals observed in the SC are encoded. The present experiment tested two alternative hypotheses concerning the frame of reference of auditory signals found in the deeper layers of the SC. The hypothesis that auditory signals are encoded in head coordinates predicts that, with the head stationary, the response of auditory neurons will not be affected by variations in eye position but will be determined by the location of the sound source. The hypothesis that auditory responses encode the trajectory of the eye movement required to look to the target (motor error) predicts that the response of auditory cells will depend on both the position of the sound source and the position of the eyes in the orbit. Extracellular single-unit recordings were obtained from neurons in the SC while monkeys made delayed saccades to auditory or visual targets in a darkened room. The coordinates of auditory signals were studied by plotting auditory receptive fields while the animal fixated one of three targets placed 24 degrees apart along the horizontal plane. For 99 of 121 SC cells, the spatial location of the auditory receptive field was significantly altered by the position of the eyes in the orbit. In contrast, the responses of five sound-sensitive cells isolated in the inferior colliculus were not affected by variations in eye position. The possibility that systematic variations in the position of the pinnae associated with different fixation positions could account for these findings was controlled for by plotting auditory receptive fields while the pinnae were mechanically restrained. Under these conditions, the position of the eyes in the orbit still had a significant effect on the responsiveness of collicular neurons to auditory stimuli. The average magnitude of the shift of the auditory receptive field with changes in eye position (12.9 degrees) did not correspond to the magnitude of the shift in eye position (24 degrees). Alternative explanations for this finding were considered. One possibility is that, within the SC, there is a gradual transition from auditory signals in head coordinates to signals in motor error coordinates. (ABSTRACT TRUNCATED AT 400 WORDS)
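The eye-position dependence reported here is often summarized as the distance the auditory tuning curve moves between fixation conditions, compared with the change in eye position itself. Below is a minimal sketch of that comparison using synthetic Gaussian tuning curves; the speaker spacing, fixation offsets, and function name are illustrative assumptions rather than the original analysis.

```python
# Minimal sketch: estimate how far an auditory receptive field shifts between
# two fixation positions by finding the displacement that maximizes the
# correlation between the two tuning curves.
import numpy as np

speaker_step = 6.0  # assumed spacing of speaker locations (deg)

def rf_shift(curve_a, curve_b, max_shift_bins=8):
    """Signed displacement (deg) of curve_b's receptive field relative to
    curve_a's, taken as the shift giving the highest correlation between them."""
    best_r, best_s = -np.inf, 0
    n = len(curve_a)
    for s in range(-max_shift_bins, max_shift_bins + 1):
        if s >= 0:
            a, b = curve_a[:n - s], curve_b[s:]
        else:
            a, b = curve_a[-s:], curve_b[:n + s]
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_r, best_s = r, s
    return best_s * speaker_step

# Synthetic tuning curves: a field at 0 deg while fixating straight ahead and at
# +13 deg while fixating 24 deg to the right yields a partial (~12 deg) rather
# than complete (24 deg) shift, comparable to the 12.9 deg average shift above.
speakers = np.arange(-48, 49, speaker_step)
fix_center = np.exp(-0.5 * ((speakers - 0.0) / 12.0) ** 2)
fix_right = np.exp(-0.5 * ((speakers - 13.0) / 12.0) ** 2)
print("estimated shift (deg):", rf_shift(fix_center, fix_right))
```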

