Cross-Modal Interactions between Audition, Touch, and Vision in Endogenous Spatial Attention: ERP Evidence on Preparatory States and Sensory Modulations

2002 ◽ Vol 14 (2) ◽ pp. 254-271
Author(s): Martin Eimer ◽ José van Velzen ◽ Jon Driver

Recent behavioral and event-related brain potential (ERP) studies have revealed cross-modal interactions in endogenous spatial attention between vision and audition, and between vision and touch. The present ERP study investigated whether these interactions reflect supramodal attentional control mechanisms, and whether similar cross-modal interactions also exist between audition and touch. Participants directed attention to the side indicated by a cue to detect infrequent auditory or tactile targets at the cued side. The relevant modality (audition or touch) was blocked. Attentional control processes were reflected in systematic ERP modulations elicited during cued shifts of attention. An anterior negativity contralateral to the cued side was followed by a contralateral positivity at posterior sites. These effects were similar whether the cue signaled which side was relevant for audition or for touch. They also resembled previously observed ERP modulations for shifts of visual attention, thus implicating supramodal mechanisms in the control of spatial attention. Following each cue, single auditory, tactile, or visual stimuli were presented at the cued or uncued side. Although stimuli in task-irrelevant modalities could be completely ignored, visual and auditory ERPs were nevertheless affected by spatial attention when touch was relevant, revealing cross-modal interactions. When audition was relevant, visual ERPs, but not tactile ERPs, were affected by spatial attention, indicating that touch can be decoupled from cross-modal attention when task-irrelevant.

2004 ◽ Vol 16 (2) ◽ pp. 272-288
Author(s): Martin Eimer ◽ José van Velzen ◽ Jon Driver

Previous ERP studies have uncovered cross-modal interactions in endogenous spatial attention. Directing attention to one side to judge stimuli from one particular modality can modulate early modality-specific ERP components not only for that modality, but also for other currently irrelevant modalities. However, past studies could not determine whether the spatial focus of attention in the task-irrelevant secondary modality was similar to the primary modality, or was instead diffuse across one hemifield. Here, auditory or visual stimuli could appear at any one of four locations (two on each side). In different blocks, subjects judged stimuli at only one of these four locations, for an auditory (Experiment 1) or visual (Experiment 2) task. Early attentional modulations of visual and auditory ERPs were found for stimuli at the currently relevant location, compared with those at the irrelevant location within the same hemifield, thus demonstrating within-hemifield tuning of spatial attention. Crucially, this was found not only for the currently relevant modality, but also for the currently irrelevant modality. Moreover, these within-hemifield attention effects were statistically equivalent regardless of the task relevance of the modality, for both the auditory and visual ERP data. These results demonstrate that within-hemifield spatial attention for one task-relevant modality can transfer cross-modally to a task-irrelevant modality, consistent with spatial selection at a multimodal level of representation.


2009 ◽ Vol 21 (12) ◽ pp. 2384-2397
Author(s): Valerio Santangelo ◽ Marta Olivetti Belardinelli ◽ Charles Spence ◽ Emiliano Macaluso

In everyday life, the allocation of spatial attention typically entails the interplay between voluntary (endogenous) and stimulus-driven (exogenous) attention. Furthermore, stimuli in different sensory modalities can jointly influence the direction of spatial attention, due to the existence of cross-sensory links in attentional control. Using fMRI, we examined the physiological basis of these interactions. We induced exogenous shifts of auditory spatial attention while participants engaged in an endogenous visuospatial cueing task. Participants discriminated visual targets in the left or right hemifield. A central visual cue preceded the visual targets, predicting the target location on 75% of the trials (endogenous visual attention). In the interval between the endogenous cue and the visual target, task-irrelevant nonpredictive auditory stimuli were briefly presented either in the left or right hemifield (exogenous auditory attention). Consistent with previous unisensory visual studies, activation of the ventral fronto-parietal attentional network was observed when the visual targets were presented at the uncued side (endogenous invalid trials, requiring visuospatial reorienting), as compared with validly cued targets. Critically, we found that the side of the task-irrelevant auditory stimulus modulated these activations, reducing spatial reorienting effects when the auditory stimulus was presented on the same side as the upcoming (invalid) visual target. These results demonstrate that multisensory mechanisms of attentional control can integrate endogenous and exogenous spatial information, jointly determining attentional orienting toward the most relevant spatial location.


2020
Author(s): Nicole Hakim ◽ Tobias Feldmann-Wüstefeld ◽ Edward Awh ◽ Edward K Vogel

Visual working memory (WM) must maintain relevant information despite the constant influx of both relevant and irrelevant information. Attentional control mechanisms help determine which of this new information gains access to our capacity-limited WM system. Previous work has treated attentional control as a monolithic process: either distractors capture attention or they are suppressed. Here, we provide evidence that attentional capture may instead be broken down into at least two distinct sub-component processes: 1) spatial capture, in which spatial attention shifts towards the location of irrelevant stimuli, and 2) item-based capture, in which item-based WM representations of irrelevant stimuli are formed. To dissociate these two sub-component processes of attentional capture, we utilized a series of EEG components that track WM maintenance (contralateral delay activity), suppression (distractor positivity), item individuation (N2pc), and spatial attention (lateralized alpha power). We show that relevant interrupters trigger both spatial and item-based capture and therefore undermine WM maintenance more severely. Irrelevant interrupters, however, trigger only spatial capture, from which ongoing WM representations can recover more easily. This fractionation of attentional capture into distinct sub-component processes provides a framework for explaining the fate of ongoing WM processes after interruption.
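The lateralized EEG components named above (contralateral delay activity, distractor positivity, N2pc) are all quantified as contralateral-minus-ipsilateral difference waves over lateral posterior electrodes. As a rough illustration only, and not the authors' actual pipeline, the following Python sketch shows that computation on hypothetical epoch arrays; the array shapes, channel indices, and function name are assumptions.

```python
import numpy as np

def lateralized_difference(epochs_left, epochs_right, left_ch, right_ch):
    """Contralateral-minus-ipsilateral difference wave (CDA/N2pc/Pd-style).

    epochs_left / epochs_right : arrays of shape (n_trials, n_channels, n_times)
        for trials with the attended or memorized items in the left or right hemifield.
    left_ch / right_ch : indices of a lateral electrode pair (e.g., PO7/PO8).
    """
    # For left-hemifield trials the right-hemisphere electrode is contralateral,
    # and vice versa for right-hemifield trials.
    contra = np.concatenate([epochs_left[:, right_ch, :],
                             epochs_right[:, left_ch, :]])
    ipsi = np.concatenate([epochs_left[:, left_ch, :],
                           epochs_right[:, right_ch, :]])
    # Trial-averaged difference wave; a sustained negativity during the retention
    # interval would correspond to a CDA-like signature.
    return (contra - ipsi).mean(axis=0)

# Toy usage with random data (100 trials, 64 channels, 500 time samples).
rng = np.random.default_rng(0)
diff_wave = lateralized_difference(rng.standard_normal((100, 64, 500)),
                                   rng.standard_normal((100, 64, 500)),
                                   left_ch=25, right_ch=62)
```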


2012 ◽ Vol 24 (7) ◽ pp. 1596-1609
Author(s): Tobias Katus ◽ Søren K. Andersen ◽ Matthias M. Müller

The focus of attention can be flexibly altered in mnemonic representations of past sensory events. We investigated the neural mechanisms of selection in tactile STM by applying vibrotactile sample stimuli of different intensities to both hands, followed by a symmetrically shaped visual retro-cue. The retro-cue indicated whether the weak or strong sample was relevant for subsequent comparison with a single tactile test stimulus. Locations of tactile stimuli were randomized, and the required response did not depend upon the spatial relation between cued sample and test stimulus. Selection between spatially segregated items in tactile STM was mirrored in lateralized activity following visual retro-cues (N2pc) and influenced encoding of task-irrelevant tactile probe stimuli (N140). Our findings support four major conclusions. First, retrospective selection results in transient shifts of spatial attention. Second, retrospective selection is functionally dissociable from attention-based rehearsal of locations. Third, selection mechanisms are linked across processing stages, as attention shifts in STM influence encoding of sensory signals. Fourth, selection in tactile STM recruits attentional control mechanisms that are, at least partially, supramodal.


2021 ◽ Vol 7 (9) ◽ pp. 191
Author(s): Nurit Gronau

Associative relations among words, concepts and percepts are the core building blocks of high-level cognition. When viewing the world ‘at a glance’, the associative relations between objects in a scene, or between an object and its visual background, are extracted rapidly. The extent to which such relational processing requires attentional capacity, however, has been heavily disputed over the years. In the present manuscript, I review studies investigating scene–object and object–object associative processing. I then present a series of studies in which I assessed the necessity of spatial attention to various types of visual–semantic relations within a scene. Importantly, in all studies, the spatial and temporal aspects of visual attention were tightly controlled in an attempt to minimize unintentional attention shifts from ‘attended’ to ‘unattended’ regions. Pairs of stimuli—either objects, scenes or a scene and an object—were briefly presented on each trial, while participants were asked to detect a pre-defined target category (e.g., an animal, a nonsense shape). Response times (RTs) to the target detection task were registered when visual attention spanned both stimuli in a pair vs. when attention was focused on only one of two stimuli. Among non-prioritized stimuli that were not defined as to-be-detected targets, findings consistently demonstrated rapid associative processing when stimuli were fully attended, i.e., shorter RTs to associated than unassociated pairs. Focusing attention on a single stimulus only, however, largely impaired this relational processing. Notably, prioritized targets continued to affect performance even when positioned at an unattended location, and their associative relations with the attended items were well processed and analyzed. Our findings portray an important dissociation between unattended task-irrelevant and task-relevant items: while the former require spatial attentional resources in order to be linked to stimuli positioned inside the attentional focus, the latter may influence high-level recognition and associative processes via feature-based attentional mechanisms that are largely independent of spatial attention.


2021 ◽ Vol 35 (1) ◽ pp. 15-22
Author(s): Kohei Fuseda ◽ Jun’ichi Katayama

Interest is a positive emotion related to attention. The event-related brain potential (ERP) probe technique is a useful method for evaluating the level of interest in dynamic stimuli. However, even in the irrelevant probe technique, the probe is a physical stimulus that consumes the observer's attentional resources, although no overt response is required. The probe might therefore become a problematic distractor that prevents participants from becoming deeply immersed. The heartbeat-evoked brain potential (HEP) is brain activity time-locked to a cardiac event, and no probe is required to obtain HEP data. We therefore investigated whether the HEP can be used to evaluate the level of interest. Twenty-four participants (12 males and 12 females) watched attractive and unattractive individuals of the opposite sex in interesting and uninteresting videos (7 min each), respectively. We applied two techniques to both the interesting and the uninteresting videos: the ERP probe technique and the HEP technique. In the former, somatosensory stimuli were presented as task-irrelevant probes while participants watched the videos: frequent (80%) and infrequent (20%) stimuli were presented at each wrist in random order. In the latter, participants watched the videos without any probe. The P2 amplitude in response to the somatosensory probe was smaller, and the positive wave amplitudes of the HEP were larger, while participants watched videos of attractive individuals than while they watched videos of unattractive ones. These results indicate that the HEP technique is a useful method for evaluating the level of interest without an external probe stimulus.
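Because the HEP requires no external probe, it is obtained simply by epoching the ongoing EEG around each detected cardiac R-peak and averaging. The short Python sketch below illustrates that idea only; it is not the authors' analysis, and the epoch window, baseline choice, and function name are assumptions (R-peak detection from the ECG is assumed to be done elsewhere).

```python
import numpy as np

def heartbeat_evoked_potential(eeg, r_peaks, sfreq, tmin=-0.1, tmax=0.6):
    """Average EEG epochs time-locked to cardiac R-peaks.

    eeg       : (n_channels, n_samples) continuous EEG recording
    r_peaks   : sample indices of R-peaks detected in a simultaneous ECG channel
    sfreq     : sampling rate in Hz
    tmin/tmax : epoch window around each R-peak, in seconds
    """
    pre = int(round(-tmin * sfreq))
    post = int(round(tmax * sfreq))
    epochs = []
    for peak in r_peaks:
        if peak - pre < 0 or peak + post > eeg.shape[1]:
            continue  # skip heartbeats too close to the recording edges
        segment = eeg[:, peak - pre:peak + post]
        # Baseline-correct each epoch using the interval before the R-peak.
        epochs.append(segment - segment[:, :pre].mean(axis=1, keepdims=True))
    # (n_channels, n_times) HEP waveform; its positive deflection is the kind of
    # amplitude compared across video conditions in the study.
    return np.mean(epochs, axis=0)
```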


2001 ◽ Vol 15 (1) ◽ pp. 22-34
Author(s): D.H. de Koning ◽ J.C. Woestenburg ◽ M. Elton

Migraineurs with and without aura (MWAs and MWOAs) as well as controls were measured twice, 7 days apart. For the migraineurs, the first session of recordings and tests was held about 7 hours after a migraine attack. We hypothesized that electrophysiological changes in the posterior cerebral cortex related to visual spatial attention are influenced by the level of arousal in migraineurs with aura, and that this varies over the course of time. ERPs in the active visual attention task revealed significant differences between controls and both types of migraine sufferers for the N200, suggesting a common pathophysiological mechanism for migraineurs. Furthermore, migraineurs without aura (MWOAs) showed a significant enhancement of the N200 at the second session, indicating the relevance of the time of measurement in migraine studies. Finally, migraineurs with aura (MWAs) showed significantly enhanced P240 and P300 components at central and parietal cortical sites compared to MWOAs and controls; these enhancements appeared to be maintained over both sessions and could be indicative of increased noradrenergic activity in MWAs.


2019
Author(s): Paola Perone ◽ David Vaughn Becker ◽ Joshua M. Tybur

Multiple studies report that disgust-eliciting stimuli are perceived as salient and subsequently capture selective attention. In the current study, we aimed to better understand the nature of temporal attentional biases toward disgust-eliciting stimuli and to investigate the extent to which these biases are sensitive to contextual and trait-level pathogen avoidance motives. Participants (N=116) performed an Emotional Attentional Blink (EAB) task in which task-irrelevant disgust-eliciting, fear-eliciting, or neutral images preceded a target by 200, 500, or 800 milliseconds (i.e., lags two, five, and eight, respectively). They did so twice, once while not exposed to an odor and once while exposed to either an odor that elicited disgust or an odor that did not, and they also completed a measure of disgust sensitivity. Results indicate that disgust-eliciting visual stimuli produced a greater attentional blink than neutral visual stimuli at lag two, and a greater attentional blink than fear-eliciting visual stimuli at both lag two and lag five. Neither the odor manipulations nor the individual-difference measures moderated this effect. We propose that visual attention is engaged for a longer period of time following disgust-eliciting stimuli because covert processes automatically initiate the evaluation of pathogen threats. The fact that state and trait pathogen avoidance do not influence this temporal attentional bias suggests that early attentional processing of pathogen cues is initiated independently of the context in which such cues are perceived.


2016 ◽ Vol 6 (1)
Author(s): Taolin Chen ◽ Keith M. Kendrick ◽ Chunliang Feng ◽ Shiyue Sun ◽ Xun Yang ◽ ...
