Effects of Feature-selective and Spatial Attention at Different Stages of Visual Processing

2011 · Vol 23 (1) · pp. 238-246
Author(s): Søren K. Andersen, Sandra Fuchs, Matthias M. Müller

We investigated mechanisms of concurrent attentional selection of location and color using electrophysiological measures in human subjects. Two completely overlapping random dot kinematograms (RDKs) of two different colors were presented on either side of a central fixation cross. On each trial, participants attended one of these four RDKs, defined by its specific combination of color and location, in order to detect coherent motion targets. Sustained attentional selection while monitoring for targets was measured by means of steady-state visual evoked potentials (SSVEPs) elicited by the frequency-tagged RDKs. Attentional selection of transient targets and distractors was assessed by behavioral responses and by recording event-related potentials to these stimuli. Spatial attention and attention to color had independent and largely additive effects on the amplitudes of SSVEPs elicited in early visual areas. In contrast, behavioral false alarms and feature-selective modulation of P3 amplitudes to targets and distractors were limited to the attended location. These results suggest that feature-selective attention produces an early, global facilitation of stimuli having the attended feature throughout the visual field, whereas the discrimination of target events takes place at a later stage of processing that is only applied to stimuli at the attended position.
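
The frequency-tagging logic underlying this kind of SSVEP measurement can be illustrated with a short sketch: each RDK flickers at its own rate, and the response attributed to that RDK is read out as the EEG amplitude at its tagging frequency. The snippet below is a minimal illustration in Python/NumPy, not the authors' analysis pipeline; the sampling rate, tagging frequencies, and epoch length are assumed values.

```python
import numpy as np

# Minimal SSVEP frequency-tagging sketch (illustrative values, not from the study).
fs = 500.0                        # sampling rate in Hz (assumed)
tag_freqs = {"left_color1": 10.0, # each RDK flickers at its own frequency (assumed values)
             "left_color2": 12.0,
             "right_color1": 15.0,
             "right_color2": 17.14}

def ssvep_amplitude(eeg_epoch, freq, fs):
    """Amplitude of the EEG at a tagging frequency, read from the FFT spectrum.

    eeg_epoch: 1-D array (single channel, condition-average epoch).
    """
    n = eeg_epoch.size
    spectrum = np.abs(np.fft.rfft(eeg_epoch)) / n * 2.0
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Example: average epochs within an attention condition first, then read out each tag.
rng = np.random.default_rng(0)
avg_epoch = rng.standard_normal(int(4 * fs))   # placeholder for a 4-s averaged epoch
amps = {name: ssvep_amplitude(avg_epoch, f, fs) for name, f in tag_freqs.items()}
print(amps)
```

Under this readout, independent and additive effects of spatial and feature-based attention would appear as two main effects on the tagged amplitudes without an interaction term.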

2014 · Vol 27 (2) · pp. 139-160
Author(s): Pia Ley, Brigitte Röder

The present study investigated whether the effects of movement preparation and visual spatial attention on visual processing can be dissociated. Movement preparation and visual spatial attention were manipulated orthogonally in a dual-task design. Ten participants covertly prepared unimanual lateral arm movements to one hemifield while attending to visual stimuli presented either in the same hemifield or in the hemifield opposite to the movement goal. Event-related potentials (ERPs) to task-irrelevant visual stimuli were analysed. Both joint and distinct modulations of visual ERPs by visual spatial attention and movement preparation were observed: the latencies of all analysed peaks (P1, N1, P2) were shorter for matching (in terms of direction of attention and movement) than for non-matching sensory–motor conditions. The P1 amplitude also depended on sensory–motor matching: it was larger for non-matching than for matching conditions. By contrast, the N1 amplitude showed additive effects of sensory attention and movement preparation: the N1 was largest when both attention and movement preparation were directed towards the visual stimulus and smallest when both were directed away from it. P2 amplitudes were modulated only by sensory attention. The present data show that movement preparation and sensory spatial attention are tightly linked and interrelated, exerting joint modulations throughout stimulus processing. At the same time, however, our data argue against the idea that the two systems are identical. Instead, sensory spatial attention and movement preparation appear to be processed at least partially independently, while still exerting a combined influence on visual stimulus processing.
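
The peak measures reported above (P1, N1, P2 latencies and amplitudes) are typically scored by locating the polarity-appropriate extremum within a component-specific time window. The sketch below is a minimal, hypothetical example of such scoring; the window boundaries, sampling rate, and simulated data are assumptions, not values from the study.

```python
import numpy as np

# Minimal ERP peak-scoring sketch (window boundaries are assumptions, not the paper's values).
fs = 500.0
times = np.arange(-0.1, 0.5, 1.0 / fs)          # epoch from -100 to 500 ms
windows = {"P1": (0.08, 0.13, +1),              # (start s, end s, expected polarity)
           "N1": (0.13, 0.20, -1),
           "P2": (0.20, 0.30, +1)}

def peak_measures(evoked, times, window):
    """Return (latency, amplitude) of the polarity-appropriate peak within a window."""
    start, end, polarity = window
    mask = (times >= start) & (times <= end)
    segment = evoked[mask] * polarity            # flip sign so the peak is always a maximum
    idx = np.argmax(segment)
    return times[mask][idx], evoked[mask][idx]   # amplitude keeps its original sign

rng = np.random.default_rng(1)
evoked = rng.standard_normal(times.size)         # placeholder for a condition-average ERP
for name, win in windows.items():
    lat, amp = peak_measures(evoked, times, win)
    print(f"{name}: latency {lat * 1000:.0f} ms, amplitude {amp:.2f} µV")
```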


2010 · Vol 24 (3) · pp. 161-172
Author(s): Edmund Wascher, C. Beste

Spatial selection of relevant information has been proposed to reflect an emergent feature of stimulus processing within an integrated network of perceptual areas. Stimulus-based and intention-based sources of information might converge in a common stage when spatial maps are generated. This approach appears to be inconsistent with the assumption of distinct mechanisms for stimulus-driven and top-down controlled attention. In two experiments, the common ground of stimulus-driven and intention-based attention was tested by means of event-related potentials (ERPs) in the human EEG. In both experiments, the processing of a single transient was compared to the selection of a physically comparable stimulus among distractors. While single transients evoked a spatially sensitive N1, the extraction of relevant information out of a more complex display was reflected in an N2pc. The high similarity of the spatial portion of these two components (Experiment 1), and the replication of this finding for the vertical axis (Experiment 2) indicate that these two ERP components might both reflect the spatial representation of relevant information as derived from the organization of perceptual maps, just at different points in time.
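
The N2pc referred to here is conventionally computed as a contralateral-minus-ipsilateral difference wave at posterior electrode pairs. A minimal sketch of that computation follows; the electrode pair (PO7/PO8), time window, and placeholder data are assumptions for illustration, not the authors' exact choices.

```python
import numpy as np

# Minimal N2pc-style difference-wave sketch (electrode pairing and window are assumed).
# For left-hemifield targets, PO8 (right hemisphere) is contralateral and PO7 ipsilateral;
# for right-hemifield targets, PO7 is contralateral and PO8 ipsilateral.
fs = 500.0
times = np.arange(-0.1, 0.5, 1.0 / fs)
rng = np.random.default_rng(2)

# Placeholder condition-average waveforms for left- and right-target trials.
po7_left_target = rng.standard_normal(times.size)
po8_left_target = rng.standard_normal(times.size)
po7_right_target = rng.standard_normal(times.size)
po8_right_target = rng.standard_normal(times.size)

contra = (po8_left_target + po7_right_target) / 2.0   # electrode opposite the target side
ipsi = (po7_left_target + po8_right_target) / 2.0     # electrode on the target side
n2pc = contra - ipsi

window = (times >= 0.20) & (times <= 0.30)             # assumed N2pc window
print("mean N2pc amplitude:", n2pc[window].mean())
```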


2006 · Vol 18 (6) · pp. 880-888
Author(s): Markus Conci, Klaus Gramann, Hermann J. Müller, Mark A. Elliott

Illusory figure completion demonstrates the ability of the visual system to integrate information across gaps. Mechanisms that underlie figural emergence support the interpolation of contours and the filling-in of form information [Grossberg, S., & Mingolla, E. Neural dynamics of form perception: Boundary completion, illusory figures and neon colour spreading. Psychological Review, 92, 173–211, 1985]. Although both processes contribute to figure formation, visual search for an illusory target configuration has been shown to be susceptible to interfering form, but not contour, information [Conci, M., Müller, H. J., & Elliott, M. A. The contrasting impact of global and local object attributes on Kanizsa figure detection. Submitted]. Here, the physiological basis of form interference was investigated by recording event-related potentials elicited from contour- and surface-based distracter interactions with detection of a target Kanizsa figure. The results replicated the finding of form interference and revealed selection of the target and successful suppression of the irrelevant distracter to be reflected by amplitude differences in the N2pc component (240–340 msec). In conclusion, the observed component variations reflect processes of target selection on the basis of integrated form information resulting from figural completion processes.
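
The amplitude differences reported for the N2pc component (240–340 msec) are usually quantified as mean voltages within that window and compared across conditions at the subject level. The sketch below shows one way such a comparison could look; the sample size, condition labels, and simulated data are assumptions, not the study's values.

```python
import numpy as np
from scipy.stats import ttest_rel

# Minimal mean-amplitude comparison in the reported N2pc window (240-340 ms).
# Subject-level values are simulated placeholders; only the scoring logic is illustrated.
fs = 500.0
times = np.arange(-0.1, 0.6, 1.0 / fs)
window = (times >= 0.24) & (times <= 0.34)

rng = np.random.default_rng(3)
n_subjects = 12                                    # assumed sample size
# (subject, time) difference waves for two hypothetical distracter conditions.
surface_distracter = rng.standard_normal((n_subjects, times.size))
contour_distracter = rng.standard_normal((n_subjects, times.size))

amp_surface = surface_distracter[:, window].mean(axis=1)
amp_contour = contour_distracter[:, window].mean(axis=1)
t, p = ttest_rel(amp_surface, amp_contour)
print(f"paired t-test on N2pc mean amplitude: t = {t:.2f}, p = {p:.3f}")
```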


2020 · Vol 14
Author(s): Luiza Kirasirova, Vladimir Bulanov, Alexei Ossadtchi, Alexander Kolsanov, Vasily Pyatin, ...

A P300 brain-computer interface (BCI) is a paradigm in which text characters are decoded from event-related potentials (ERPs). In a popular implementation, called the P300 speller, a subject looks at a display of flashing characters and selects one by attending to it. The selection is recognized as the item with the strongest ERP. The speller performs well when cortical responses to target and non-target stimuli are sufficiently different. Although many strategies have been proposed for improving BCI spelling, a relatively simple one has received insufficient attention in the literature: restricting the visual field to diminish the contribution from non-target stimuli. Previously, this idea was implemented in a single-stimulus switch that issued an urgent command, such as stopping a robot. To explore this approach further, we ran a pilot experiment in which ten subjects operated a traditional P300 speller either as is or while wearing a binocular aperture that confined their sight to the central visual field. As intended, visual field restriction replaced non-target ERPs with EEG rhythms asynchronous to the stimulus periodicity. Changes in target ERPs were found in half of the subjects and were individually variable. Although classification accuracy was slightly better for the aperture condition (84.3 ± 2.9%, mean ± standard error) than for the no-aperture condition (81.0 ± 2.6%), this difference was not statistically significant for the entire sample of subjects (N = 10). For both conditions, classification accuracy improved over 4 days of training, more so for the aperture condition (from 72.0 ± 6.3% to 87.0 ± 3.9% for the no-aperture condition and from 72.0 ± 5.6% to 97.0 ± 2.2% for the aperture condition). Although BCI performance was not substantially altered in this study, we suggest that with further refinement this approach could speed up BCI operation and reduce user fatigue. Additionally, instead of wearing an aperture, non-targets could be removed algorithmically or with a hybrid interface that utilizes an eye tracker. We further discuss how a P300 speller could be improved by taking advantage of the different physiological properties of central and peripheral vision. Finally, we suggest that the proposed experimental approach could be used in basic research on the mechanisms of visual processing.
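
The selection rule described above (flash each item, average the post-flash epochs per item, and pick the item with the strongest response) can be written compactly. The example below is a hedged illustration rather than the authors' implementation: it scores each item's averaged epoch by correlation with an idealized P300 template, whereas practical spellers typically use a trained classifier such as LDA.

```python
import numpy as np

# Minimal P300-speller selection sketch: average epochs per flashed item, score each
# average, and choose the item with the highest score (template correlation here;
# a trained classifier would normally take this role).
fs = 250.0
times = np.arange(0.0, 0.8, 1.0 / fs)

def select_item(epochs_by_item, template):
    """epochs_by_item: dict item -> (n_repetitions, n_samples) array of post-flash epochs."""
    scores = {}
    for item, epochs in epochs_by_item.items():
        avg = epochs.mean(axis=0)
        scores[item] = float(np.corrcoef(avg, template)[0, 1])
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(4)
p300_template = np.exp(-((times - 0.3) ** 2) / (2 * 0.05 ** 2))   # idealized P300 shape
epochs_by_item = {ch: rng.standard_normal((10, times.size)) for ch in "ABCDE"}
epochs_by_item["C"] += p300_template                              # make "C" the attended item
chosen, scores = select_item(epochs_by_item, p300_template)
print("decoded item:", chosen)
```

In this scheme, restricting the visual field with an aperture would mainly change the non-target epochs entering the averages, which is the effect the pilot experiment set out to test.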


2020 · Vol 25 (5) · pp. 237-248
Author(s): Maojin Liang, Jiahao Liu, Yuexin Cai, Fei Zhao, Suijun Chen, ...

Objective: The present study investigated the characteristics of visual processing in the auditory-associated cortex of adults with hearing loss using event-related potentials. Methods: Ten subjects with bilateral postlingual hearing loss were recruited, along with ten age- and sex-matched normal-hearing controls. Visual evoked potentials to "sound" and "non-sound" photos were recorded. The P170 response over the occipital area and the N1 and N2 responses at FC3 and FC4 were analyzed. Results: Adults with hearing loss had higher P170 amplitudes, significantly higher N2 amplitudes, and shorter N2 latencies in response to "sound" and "non-sound" photo stimuli at both FC3 and FC4, with the exception of the N2 amplitude in response to "sound" photo stimuli at FC3. Topographic mapping further revealed that patients showed a large difference between responses to "sound" and "non-sound" photos over the right frontotemporal area from approximately 200 to 400 ms. Source localization placed this difference in the middle frontal gyrus (BA10) at around 266 ms. Conclusions: The significantly stronger responses to visual stimuli indicate enhanced visual processing in the auditory-associated cortex of adults with hearing loss, which may be attributed to cortical visual reorganization involving the right frontotemporal cortex.
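
The "sound" versus "non-sound" contrast reported over 200–400 ms is, in essence, a difference of condition-average waveforms evaluated per channel within that window. The sketch below illustrates this with placeholder data; the channel subset and simulated values are assumptions, not the recorded montage.

```python
import numpy as np

# Minimal "sound" minus "non-sound" difference measure over 200-400 ms.
# Channel names and data are placeholders; only the differencing and windowing are shown.
fs = 500.0
times = np.arange(-0.1, 0.6, 1.0 / fs)
channels = ["FC3", "FC4", "Fz", "Cz"]            # assumed subset of the montage

rng = np.random.default_rng(5)
evoked_sound = rng.standard_normal((len(channels), times.size))      # (channel, time) averages
evoked_nonsound = rng.standard_normal((len(channels), times.size))

window = (times >= 0.2) & (times <= 0.4)
difference = evoked_sound - evoked_nonsound
mean_diff = difference[:, window].mean(axis=1)
for ch, val in zip(channels, mean_diff):
    print(f"{ch}: mean 'sound' - 'non-sound' difference {val:.2f} µV")
```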


2001 · Vol 311 (3) · pp. 198-202
Author(s): Shu Omoto, Yoshiyuki Kuroiwa, Mei Li, Hiroshi Doi, Megumi Shimamura, ...
