Object-based target templates guide attention during visual search.

2018 ◽  
Vol 44 (9) ◽  
pp. 1368-1382 ◽  
Author(s):  
Nick Berggren ◽  
Martin Eimer

2012 ◽  
Vol 12 (9) ◽  
pp. 1156-1156
Author(s):  
A. Greenberg ◽  
M. Rosen ◽  
K. Zamora ◽  
E. Cutrone ◽  
M. Behrmann
2020 ◽  
Vol 82 (6) ◽  
pp. 2909-2923 ◽  
Author(s):  
Bo-Yeong Won ◽  
Jason Haberman ◽  
Eliza Bliss-Moreau ◽  
Joy J. Geng

2011 ◽  
Vol 23 (9) ◽  
pp. 2231-2239 ◽  
Author(s):  
Carsten N. Boehler ◽  
Mircea A. Schoenfeld ◽  
Hans-Jochen Heinze ◽  
Jens-Max Hopf

Attention to one feature of an object can bias the processing of unattended features of that object. Here we demonstrate with ERPs in visual search that this object-based bias for an irrelevant feature also appears in an unattended object when it shares that feature with the target object. Specifically, we show that the ERP response elicited by a distractor object in one visual field is modulated as a function of whether a task-irrelevant color of that distractor is also present in the target object presented in the opposite visual field. Importantly, we find that this modulation arises with a delay of approximately 80 msec relative to the N2pc—a component of the ERP response that reflects the focusing of attention onto the target. In a second experiment, we demonstrate that this modulation reflects enhanced neural processing in the unattended object. Together, these observations lead to the surprising conclusion that the object-based selection of irrelevant features is spatially global even after attention has selected the target object.


2004 ◽  
Vol 17 (5-6) ◽  
pp. 873-897 ◽  
Author(s):  
Linda J. Lanyon ◽  
Susan L. Denham

2021 ◽  
Author(s):  
Franziska Regnath ◽  
Sebastiaan Mathôt

Abstract
The adaptive gain theory (AGT) posits that activity in the locus coeruleus (LC) is linked to two behavioral modes: exploitation, characterized by focused attention on a single task; and exploration, characterized by a lack of focused attention and frequent switching between tasks. Furthermore, pupil size correlates with LC activity, such that large pupils indicate increased LC firing, and by extension also exploration behavior. Most evidence for this correlation in humans comes from complex behavior in game-like tasks. However, predictions of the AGT naturally extend to a very basic form of behavior: eye movements. To test this, we used a visual-search task. Participants searched for a target among many distractors, while we measured their pupil diameter and eye movements. The display was divided into four randomly generated regions of different colors. Although these regions were irrelevant to the task, participants were sensitive to their boundaries, and dwelled within regions for longer than expected by chance. Crucially, pupil size increased before eye movements that carried gaze from one region to another. We propose that eye movements that stay within regions (or objects) correspond to exploitation behavior, whereas eye movements that switch between regions (or objects) correspond to exploration behavior.

Public Significance Statement
When people experience increased arousal, their pupils dilate. The adaptive gain theory proposes that pupil size reflects neural activity in the locus coeruleus (LC), which in turn is associated with two behavioral modes: a vigilant, distractible mode (“exploration”), and a calm, focused mode (“exploitation”). During exploration, pupils are larger and LC activity is higher than during exploitation. Here we show that the predictions of this theory generalize to eye movements: smaller pupils coincide with eye movements indicative of exploitation, while pupils dilate slightly just before eye movements that are indicative of exploration.


2019 ◽  
Author(s):  
Daria Kvasova ◽  
Salvador Soto-Faraco

Abstract
Recent studies show that cross-modal semantic congruence plays a role in spatial attention orienting and visual search. However, the extent to which these cross-modal semantic relationships attract attention automatically is still unclear, and the outcomes of different studies have been inconsistent. Variations in the task-relevance of the cross-modal stimuli (from explicitly needed to completely irrelevant) and in the amount of perceptual load may account for the mixed results of previous experiments. In the present study, we addressed the effects of audio-visual semantic congruence on visuo-spatial attention across variations in task relevance and perceptual load. We used visual search amongst images of common objects paired with characteristic object sounds (e.g., a guitar image and a chord sound). We found that audio-visual semantic congruence speeded visual search times when the cross-modal objects were task relevant, or when they were irrelevant but presented under low perceptual load. Instead, when perceptual load was high, sounds failed to attract attention towards the congruent visual images. These results lead us to conclude that object-based cross-modal congruence does not attract attention automatically and requires some top-down processing.

