Spatial Attention and the Visual World: Brain Systems Underlying Space-Based and Object-Based Selective Attention

1999
Author(s):
Catherine A. Neely
Stephen R. Rao
Andrew R. Mayer
Thomas H. Carr


2004
Vol 16 (6)
pp. 1085-1097
Author(s):
Xun He
Silu Fan
Ke Zhou
Lin Chen

In a previous study, Egly, Driver, and Rafal (1994) observed both space-based and object-based components of visual selective attention. However, the mechanisms underlying these two components and the relationship between them are not well understood. In the present research, these issues were addressed with a similar paradigm by manipulating cue validity. Behavioral results indicated the presence of both space-based and object-based components under high cue validity, similar to the results of Egly et al.'s study. Under low cue validity, however, the space-based component was absent, whereas the object-based component was maintained. Further event-related potential results demonstrated an object-based effect at a sensory level over posterior areas of the brain, and a space-based effect over the anterior region. The present data suggest that the space-based and object-based components reflect mainly voluntary and reflexive mechanisms, respectively.


2000
Vol 12 (supplement 2)
pp. 106-117
Author(s):
Catherine M. Arrington
Thomas H. Carr
Andrew R. Mayer
Stephen M. Rao

Objects play an important role in guiding spatial attention through a cluttered visual environment. We used event-related functional magnetic resonance imaging (ER-fMRI) to measure brain activity during cued discrimination tasks requiring subjects to orient attention either to a region bounded by an object (object-based spatial attention) or to an unbounded region of space (location-based spatial attention) in anticipation of an upcoming target. Comparison between the two tasks revealed greater activation when attention selected a region bounded by an object. This activation was strongly lateralized to the left hemisphere and formed a widely distributed network including (a) attentional structures in parietal and temporal cortex and thalamus, (b) ventral-stream object processing structures in occipital, inferior-temporal, and parahippocampal cortex, and (c) control structures in medial and dorsolateral prefrontal cortex. These results suggest that object-based spatial selection is achieved by imposing additional constraints over and above those processes already operating to achieve selection of an unbounded region. In addition, ER-fMRI methodology allowed a comparison of validly versus invalidly cued trials, thereby delineating brain structures involved in the reorientation of attention after its initial deployment proved incorrect. All areas of activation that differentiated between these two trial types resulted from greater activity during the invalid trials. This outcome suggests that all brain areas involved in attentional orienting and task performance in response to valid cues are also involved on invalid trials. During invalid trials, additional brain regions are recruited when a perceiver recovers from invalid cueing and reorients attention to a target appearing at an uncued location. Activated brain areas specific to attentional reorientation were strongly right-lateralized and included posterior temporal and inferior parietal regions previously implicated in visual attention processes, as well as prefrontal regions that likely subserve control processes, particularly related to inhibition of inappropriate responding.


2020
Vol 34 (04)
pp. 3684-3692
Author(s):
Eric Crawford
Joelle Pineau

The ability to detect and track objects in the visual world is a crucial skill for any intelligent agent, as it is a necessary precursor to any object-level reasoning process. Moreover, it is important that agents learn to track objects without supervision (i.e. without access to annotated training videos) since this will allow agents to begin operating in new environments with minimal human assistance. The task of learning to discover and track objects in videos, which we call unsupervised object tracking, has grown in prominence in recent years; however, most architectures that address it still struggle to deal with large scenes containing many objects. In the current work, we propose an architecture that scales well to the large-scene, many-object setting by employing spatially invariant computations (convolutions and spatial attention) and representations (a spatially local object specification scheme). In a series of experiments, we demonstrate a number of attractive features of our architecture; most notably, that it outperforms competing methods at tracking objects in cluttered scenes with many objects, and that it can generalize well to videos that are larger and/or contain more objects than videos encountered during training.
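
The abstract describes the architecture only at a high level, so a minimal sketch may help illustrate what spatially invariant computations and a spatially local object specification can look like in practice: a convolutional backbone whose weights are shared across the image, with each cell of the resulting feature grid making a purely local object prediction. The sketch below is an illustrative assumption in a PyTorch-style formulation, not the authors' implementation; the class name SpatiallyLocalObjectHead, the layer sizes, and the five-parameter per-cell object specification are invented for exposition, and the per-object spatial-attention glimpse used for appearance modelling is omitted.

import torch
import torch.nn as nn

class SpatiallyLocalObjectHead(nn.Module):
    """Sketch of spatially invariant object discovery: every grid cell
    predicts whether an object is centred nearby, plus its local offset
    and size. All parameters are shared across spatial locations."""

    def __init__(self, in_channels: int = 3, hidden: int = 64):
        super().__init__()
        # Translation-equivariant feature extraction (weights shared everywhere).
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Per-cell object parameters: 1 presence logit, 2 position offsets, 2 sizes.
        self.obj_params = nn.Conv2d(hidden, 5, kernel_size=1)

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)            # (B, hidden, H', W') feature grid
        params = self.obj_params(feats)          # (B, 5, H', W') local predictions
        presence = torch.sigmoid(params[:, 0])   # "is an object centred in this cell?"
        offsets = torch.tanh(params[:, 1:3])     # object position relative to the cell
        sizes = torch.sigmoid(params[:, 3:5])    # normalised object width/height
        return presence, offsets, sizes

# A 128x128 input yields a 32x32 grid of local object hypotheses; a larger
# image simply yields a larger grid, with no change to the weights.
model = SpatiallyLocalObjectHead()
presence, offsets, sizes = model(torch.rand(1, 3, 128, 128))

Because the same convolutional weights are applied at every grid cell, feeding in a larger image or a scene with more objects produces a proportionally larger set of per-cell predictions, which is the kind of property the abstract credits for generalization beyond the training videos.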


2019
Vol 30 (4)
pp. 526-540
Author(s):
Nicole Hakim
Kirsten C. S. Adam
Eren Gunseli
Edward Awh
Edward K. Vogel

Complex cognition relies on both on-line representations in working memory (WM), said to reside in the focus of attention, and passive off-line representations of related information. Here, we dissected the focus of attention by showing that distinct neural signals index the on-line storage of objects and sustained spatial attention. We recorded electroencephalogram (EEG) activity during two tasks that employed identical stimulus displays but varied the relative demands for object storage and spatial attention. We found distinct delay-period signatures for an attention task (which required only spatial attention) and a WM task (which invoked both spatial attention and object storage). Although both tasks required active maintenance of spatial information, only the WM task elicited robust contralateral delay activity that was sensitive to mnemonic load. Thus, we argue that the focus of attention is maintained via a collaboration between distinct processes for covert spatial orienting and object-based storage.


1994
Vol 5 (6)
pp. 380-383
Author(s):
Robert Egly
Robert Rafal
Jon Driver
Yves Starrveveld

Theories in cognitive science have debated whether visual selective attention is a space-based or object-based process. To investigate this issue, we applied a new experimental paradigm that permits the simultaneous measurement of both space-based and object-based attention to a split-brain patient with disconnected cerebral hemispheres. The data demonstrate both space-based and object-based components to the allocation of attention, and reveal that the two processes have different neural substrates. These findings are related to previous research on split-brain and unilateral parietal patients.


2019
Author(s):
Daria Kvasova
Salvador Soto-Faraco

Recent studies show that cross-modal semantic congruence plays a role in spatial attention orienting and visual search. However, the extent to which these cross-modal semantic relationships attract attention automatically is still unclear, and the outcomes of different studies have been inconsistent. Variations in the task relevance of the cross-modal stimuli (from explicitly needed to completely irrelevant) and in the amount of perceptual load may account for the mixed results of previous experiments. In the present study, we addressed the effects of audio-visual semantic congruence on visuo-spatial attention across variations in task relevance and perceptual load. We used visual search amongst images of common objects paired with characteristic object sounds (e.g., a guitar image and a chord sound). We found that audio-visual semantic congruence speeded visual search when the cross-modal objects were task-relevant, or when they were irrelevant but presented under low perceptual load. In contrast, when perceptual load was high, sounds failed to attract attention towards the congruent visual images. These results lead us to conclude that object-based cross-modal congruence does not attract attention automatically and requires some top-down processing.


Author(s):  
Anna C. (Kia) Nobre
M-Marsel Mesulam

Selective attention is essential for all aspects of cognition. Using the paradigmatic case of visual spatial attention, this chapter presents a theoretical account proposing flexible control of attention through coordinated activity across a large-scale network of brain areas. It reviews evidence supporting top-down control of visual spatial attention by a distributed network and describes principles emerging from a network approach. Stepping beyond the paradigm of visual spatial attention, the chapter considers attentional control mechanisms more broadly. It suggests that top-down biasing mechanisms originate from multiple sources and can be of several types, carrying information about receptive-field properties such as spatial locations or features of items, but also about properties that are not easily mapped onto receptive fields, such as the meanings or timings of items. The chapter also considers how selective biases can operate on multiple slates of information processing, not restricted to the immediate sensory-motor stream but also operating within internalized short-term and long-term memory representations. Selective attention appears to be a general property of information-processing systems rather than an independent domain within our cognitive make-up.


Author(s):  
Martin Eimer

Event-related brain potential (ERP) measures have made important contributions to our understanding of the mechanisms of selective attention. This chapter provides a selective and non-technical review of some of these contributions. It will concentrate mainly on research that has studied spatially selective attentional processing in vision, although research on crossmodal links in spatial attention will also be discussed. The main purpose of this chapter is to illustrate how ERP methods have helped to provide answers to major theoretical questions that have shaped research on selective attention in the past 40 years.


1999
Vol 22 (3)
pp. 377-377
Author(s):  
Howard Egeth

Pylyshyn's argument is very similar to one made in the 1960s to the effect that vision may be influenced by spatial selective attention being directed to distinctive stimulus features, but not by mental set for meaning or membership in an ill-defined category. More recent work points to a special role for spatial attention in determining the contents of perception.

