Spatial and Cross-Modal Attention Alter Responses to Unattended Sensory Information in Early Visual and Auditory Human Cortex

2007 ◽  
Vol 98 (4) ◽  
pp. 2399-2413 ◽  
Author(s):  
Vivian M. Ciaramitaro ◽  
Giedrius T. Buračas ◽  
Geoffrey M. Boynton

Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when attending to another visual stimulus, compared with attending to an auditory stimulus. The opposite was true in more central visual area MT+, where responses to ignored visual stimuli were weaker when attending to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when attending a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), and the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or the same region of space.
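
The abstract mentions a "simple parameterization" of the data but does not spell it out here, so the following is only an illustrative sketch: it assumes a multiplicative gain model with three hypothetical factors, one for within-modality spatial attention, one for cross-modal attention, and one for cross-modal spatial attention. All parameter names and values are invented for illustration.

```python
# Illustrative sketch only: assumes the attentional effects described in the
# abstract can be captured as multiplicative gains on the response to an
# ignored stimulus. Parameter names and values are hypothetical, not taken
# from the paper.

def predicted_response(baseline,
                       g_spatial=0.8,            # attention on another location, same modality
                       g_crossmodal=0.9,         # attention on another modality
                       g_crossmodal_spatial=1.1, # attended other-modality stimulus shares the ignored stimulus's side of space
                       same_modality_attended=True,
                       same_side_attended=False):
    """Toy prediction of the response to an ignored stimulus."""
    gain = g_spatial if same_modality_attended else g_crossmodal
    if not same_modality_attended and same_side_attended:
        gain *= g_crossmodal_spatial
    return baseline * gain

# Example: ignored visual stimulus while attending an auditory stimulus on the
# same side of space.
print(predicted_response(1.0, same_modality_attended=False, same_side_attended=True))
```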

2018 ◽  
Vol 7 ◽  
pp. 172-177
Author(s):  
Łukasz Tyburcy ◽  
Małgorzata Plechawska-Wójcik

The paper describes the results of a comparison of reaction times to visual and auditory stimuli using EEG evoked potentials. Two experiments were conducted: the first explored reaction times to a visual stimulus and the second to an auditory stimulus. Analysis of the data showed that visual stimuli evoke faster reactions than auditory stimuli.


2015 ◽  
Vol 3 (1-2) ◽  
pp. 88-101 ◽  
Author(s):  
Kathleen M. Einarson ◽  
Laurel J. Trainor

Recent work examined five-year-old children’s perceptual sensitivity to musical beat alignment. In this work, children watched pairs of videos of puppets drumming to music with simple or complex metre, where one puppet’s drumming sounds (and movements) were synchronized with the beat of the music and the other drummed with incorrect tempo or phase. The videos were used to maintain children’s interest in the task. Five-year-olds were better able to detect beat misalignments in simple than complex metre music. However, adults can perform poorly when attempting to detect misalignment of sound and movement in audiovisual tasks, so it is possible that the moving stimuli actually hindered children’s performance. Here we compared children’s sensitivity to beat misalignment in conditions with dynamic visual movement versus still (static) visual images. Eighty-four five-year-old children performed either the same task as described above or a task that employed identical auditory stimuli accompanied by a motionless picture of the puppet with the drum. There was a significant main effect of metre type, replicating the finding that five-year-olds are better able to detect beat misalignment in simple metre music. There was no main effect of visual condition. These results suggest that, given identical auditory information, children’s ability to judge beat misalignment in this task is not affected by the presence or absence of dynamic visual stimuli. We conclude that at five years of age, children can tell if drumming is aligned to the musical beat when the music has simple metric structure.


2012 ◽  
Vol 25 (0) ◽  
pp. 24
Author(s):  
Roberto Cecere ◽  
Benjamin De Haas ◽  
Harriett Cullen ◽  
Jon Driver ◽  
Vincenzo Romei

There is converging evidence that the duration of an auditory event can affect the perceived duration of a co-occurring visual event. When a brief visual stimulus is accompanied by a longer auditory stimulus, the perceived visual duration stretches. If this reflects a genuine prolongation of visual stimulus perception, it should result in enhanced perception of non-temporal visual stimulus qualities. To test this hypothesis, in a temporal two-alternative forced-choice task, 28 participants were asked to indicate whether a short (∼24 ms), peri-threshold visual stimulus was presented in the first or the second of two consecutive displays. Each display was accompanied by a sound of equal or longer duration (36, 48, 60, 72, 84, 96, or 190 ms) than the visual stimulus. As a control condition, visual stimuli of different durations (matching the auditory stimulus durations) were presented alone. We predicted that visual detection would improve as a function of sound duration. Moreover, if the expected cross-modal effect reflects sustained visual perception, it should correlate positively with the improvement observed for genuinely longer visual stimuli. Results showed that detection sensitivity (d′) for the 24 ms visual stimulus was significantly enhanced when it was paired with longer auditory stimuli ranging from 60 to 96 ms in duration. Visual detection performance dropped to baseline levels with 190 ms sounds. Crucially, the enhancement for auditory durations of 60–96 ms correlated significantly with the d′ enhancement for visual stimuli lasting 60–96 ms in the control condition. We conclude that the duration of co-occurring auditory stimuli not only influences the perceived duration of visual stimuli but genuinely prolongs visual perception.
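
The detection measure reported here, d′, can be recovered from proportion correct in a two-interval forced-choice task. A minimal sketch under the standard unbiased-observer assumption d′ = √2 · z(proportion correct); this is not the authors' analysis code, and the example numbers are invented:

```python
# Minimal sketch, not the authors' analysis code: computes d' from 2AFC
# proportion correct under the standard assumption d' = sqrt(2) * z(pc).
from math import sqrt
from statistics import NormalDist

def dprime_2afc(n_correct: int, n_trials: int) -> float:
    # Clamp proportion correct away from 0 and 1 so the inverse normal stays finite.
    pc = min(max(n_correct / n_trials, 1e-3), 1 - 1e-3)
    return sqrt(2) * NormalDist().inv_cdf(pc)

# Example: 78 correct out of 100 trials in one sound-duration condition.
print(round(dprime_2afc(78, 100), 2))  # ~1.09
```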


2020 ◽  
Author(s):  
Madeline S. Cappelloni ◽  
Sabyasachi Shivkumar ◽  
Ralf M. Haefner ◽  
Ross K. Maddox

The brain combines information from multiple sensory modalities to interpret the environment. Multisensory integration is often modeled by ideal Bayesian causal inference, a model proposing that perceptual decisions arise from a statistical weighting of information from each sensory modality based on its reliability and relevance to the observer’s task. However, ideal Bayesian causal inference fails to describe human behavior in a simultaneous auditory spatial discrimination task in which spatially aligned visual stimuli improve performance despite providing no information about the correct response. This work tests the hypothesis that humans weight auditory and visual information in this task based on their relative reliabilities, even though the visual stimuli are task-uninformative, carrying no information about the correct response, and should be given zero weight. Listeners perform an auditory spatial discrimination task with relative reliabilities modulated by the stimulus durations. By comparing conditions in which task-uninformative visual stimuli are spatially aligned with auditory stimuli or centrally located (control condition), listeners are shown to have a larger multisensory effect when their auditory thresholds are worse. Even in cases in which visual stimuli are not task-informative, the brain combines sensory information that is scene-relevant, especially when the task is difficult due to unreliable auditory information.
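
The reliability weighting referred to here is, in the standard forced-fusion Bayesian model (a simpler special case of full causal inference), inverse-variance weighting of the two cues. A minimal sketch with illustrative numbers, not the authors' implementation:

```python
# Minimal sketch of standard reliability-weighted (inverse-variance) cue
# combination under forced fusion with Gaussian likelihoods. This is the
# textbook model the abstract alludes to, not the authors' full
# causal-inference implementation; the numbers are illustrative.

def fuse(mu_a: float, var_a: float, mu_v: float, var_v: float):
    """Return the fused location estimate and its variance."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # auditory weight = relative reliability
    w_v = 1 - w_a
    mu = w_a * mu_a + w_v * mu_v
    var = 1 / (1 / var_a + 1 / var_v)              # fused variance is always smaller
    return mu, var

# Example: unreliable auditory cue at +2 deg, reliable visual cue at 0 deg.
print(fuse(mu_a=2.0, var_a=4.0, mu_v=0.0, var_v=1.0))  # (0.4, 0.8)
```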


1984 ◽  
Vol 59 (1) ◽  
pp. 212-214
Author(s):  
H. W. Craver

The reliability of an attention-focusing technique was assessed for 12 subjects over 4 sessions. Subjects' thought intrusions were counted while they were focusing on either visual or auditory stimuli. Digital temperatures were recorded and an experimental-situation questionnaire was administered. This technique provides extremely reliable self-reports across the sessions. The total number of intrusions was higher for the auditory stimulus than for the visual stimulus. The study's relevance to assessing self-monitoring techniques such as meditation is discussed.


i-Perception ◽  
2018 ◽  
Vol 9 (6) ◽  
pp. 204166951881570
Author(s):  
Sachiyo Ueda ◽  
Ayane Mizuguchi ◽  
Reiko Yakushijin ◽  
Akira Ishiguchi

To overcome limitations in perceptual bandwidth, humans condense various features of the environment into summary statistics. Variance is one such statistic: it indexes the diversity within a category and also the reliability of the information about that diversity. Studies have shown that humans can efficiently perceive the variance of visual stimuli; however, to enhance perception of the environment, information about the external world can be obtained from multiple sensory modalities and integrated. Consequently, this study investigates, through two experiments, whether the precision of variance perception improves when visual information (size) and corresponding auditory information (pitch) are integrated. In Experiment 1, we measured the correspondence between visual size and auditory pitch for each participant by using adjustment measurements. The results showed a linear relationship between size and pitch: the higher the pitch, the smaller the corresponding circle. In Experiment 2, sequences of visual stimuli were presented both with and without linked auditory tones, and the precision of perceived variance in size was measured. We found that synchronized presentation of auditory and visual stimuli with the same variance improves the precision of perceived variance in size compared with visual-only presentation. This suggests that audiovisual information may be automatically integrated in variance perception.


2002 ◽  
Vol 55 (1b) ◽  
pp. 61-73 ◽  
Author(s):  
John M. Pearce ◽  
David N. George ◽  
Aydan Aydin

Rats received Pavlovian conditioning in which food was signalled by a visual stimulus, A+, an auditory stimulus, B+, and a compound composed of different visual and auditory stimuli, CD+. Test trials were then given with the compound AB. Experiments 1 and 2A revealed stronger responding during AB than during CD. In Experiment 2B, there was no evidence of a summation of responding during AB when A+ B+ training was conducted in the absence of CD+ trials. A further failure to observe abnormally strong responding during AB was found in Experiment 3, in which the training trials with A+ B+ CD+ were accompanied by trials in which C and D were separately paired with food. The results are explained in terms of a configural theory of conditioning, which assumes that responding during a compound is determined by generalization from its components, as well as from other compounds to which it is similar.
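
The configural account invoked in the last sentence is usually formalized with Pearce's similarity rule, in which the similarity between two configurations is the product of the proportions of elements they share and generalization scales with that similarity. A minimal sketch of that textbook rule (not code from this paper), with made-up associative strengths:

```python
# Minimal sketch of the similarity rule commonly used to present Pearce's
# configural theory: similarity between two configurations is the product of
# the proportions of shared elements, and responding to a test compound
# generalizes from trained configurations in proportion to that similarity.

def similarity(x: set, y: set) -> float:
    common = len(x & y)
    return (common / len(x)) * (common / len(y))

def generalized_strength(test: set, trained: dict) -> float:
    """trained maps frozenset configurations to their associative strengths."""
    return sum(similarity(test, set(cfg)) * v for cfg, v in trained.items())

# A+ and B+ training (strengths of ~1 each), then a test on the compound AB:
trained = {frozenset('A'): 1.0, frozenset('B'): 1.0}
print(generalized_strength(set('AB'), trained))  # 1.0: the average, not the sum
```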


2012 ◽  
Vol 25 (0) ◽  
pp. 169
Author(s):  
Tomoaki Nakamura ◽  
Yukio P. Gunji

Most research on audio–visual interaction has focused on spatio-temporal factors and synesthesia-like phenomena. Research on synesthesia-like phenomena, advanced in particular by Marks and colleagues, has found synesthesia-like correlations between the brightness and size of visual stimuli and the pitch of auditory stimuli (Marks, 1987). The main interest of this line of work has been the perceptual similarities and differences between synesthetes and non-synesthetes. We hypothesized that, in non-synesthetes, cross-modal phenomena at the perceptual level emerge to compensate for the absence or ambiguity of a stimulus. To test this hypothesis, we investigated audio–visual interaction using the movement (speed) of an object as the visual stimulus and sine waves as the auditory stimuli. In each trial, objects (circles) moved at a fixed speed and were masked at arbitrary positions, and an auditory stimulus (high, middle, or low pitch) began simultaneously with the disappearance of the object. Subjects reported the expected position of the object when the auditory stimulus stopped. Results showed a correlation between the reported position, i.e., the inferred movement speed, of the object and the pitch of the sound. We conjecture that cross-modal phenomena in non-synesthetes tend to occur when one of the sensory stimuli is absent or ambiguous.


2010 ◽  
Vol 22 (2) ◽  
pp. 347-361 ◽  
Author(s):  
David V. Smith ◽  
Ben Davis ◽  
Kathy Niu ◽  
Eric W. Healy ◽  
Leonardo Bonilha ◽  
...  

Neuroimaging studies suggest that a fronto-parietal network is activated when we expect visual information to appear at a specific spatial location. Here we examined whether a similar network is involved for auditory stimuli. We used sparse fMRI to infer brain activation while participants performed analogous visual and auditory tasks. On some trials, participants were asked to discriminate the elevation of a peripheral target. On other trials, participants made a nonspatial judgment. We contrasted trials where the participants expected a peripheral spatial target to those where they were cued to expect a central target. Crucially, our statistical analyses were based on trials where stimuli were anticipated but not presented, allowing us to directly infer perceptual orienting independent of perceptual processing. This is the first neuroimaging study to use an orthogonal-cuing paradigm (with cues predicting azimuth and responses involving elevation discrimination). This aspect of our paradigm is important, as behavioral cueing effects in audition are classically only observed when participants are asked to make spatial judgments. We observed similar fronto-parietal activation for both vision and audition. In a second experiment that controlled for stimulus properties and task difficulty, participants made spatial and temporal discriminations about musical instruments. We found that the pattern of brain activation for spatial selection of auditory stimuli was remarkably similar to what we found in our first experiment. Collectively, these results suggest that the neural mechanisms supporting spatial attention are largely similar across both visual and auditory modalities.


2021 ◽  
Author(s):  
Kimberly Reinhold ◽  
Arbora Resulaj ◽  
Massimo Scanziani

The behavioral state of a mammal impacts how the brain responds to visual stimuli as early as in the dorsolateral geniculate nucleus of the thalamus (dLGN), the primary relay of visual information to the cortex. A clear example of this is the markedly stronger response of dLGN neurons to higher temporal frequencies of the visual stimulus in alert as compared to quiescent animals. The dLGN receives strong feedback from the visual cortex, yet whether this feedback contributes to these state-dependent responses to visual stimuli is poorly understood. Here we show that in mice, silencing cortico-thalamic feedback abolishes state-dependent differences in the response of dLGN neurons to visual stimuli. This holds true for dLGN responses to both temporal and spatial features of the visual stimulus. These results reveal that the state-dependent shift of the response to visual stimuli in an early stage of visual processing depends on cortico-thalamic feedback.

