An EEG study of the combined effects of top-down and bottom-up attentional selection under varying task difficulty

2021 ◽  
Author(s):  
Einat Rashal ◽  
Mehdi Senoussi ◽  
Elisa Santandrea ◽  
Suliann Ben Hamed ◽  
Emiliano Macaluso ◽  
...  

This work investigates the combined effects of top-down and bottom-up sources of attentional control, using attention-related EEG components thought to reflect target selection (N2pc) and distractor suppression (PD), in easy and difficult visual search tasks.

2012 ◽  
Vol 25 (0) ◽  
pp. 158
Author(s):  
Pawel J. Matusz ◽  
Martin Eimer

We investigated whether top-down attentional control settings can specify task-relevant features in different sensory modalities (vision and audition). In two audiovisual search tasks, a spatially uninformative visual singleton cue preceded a target search array. In different blocks, participants searched for a visual target (defined by colour or shape in Experiments 1 and 2, respectively) or for a target defined by a combination of visual and auditory features (e.g., a red target accompanied by a high-pitch tone). Spatial cueing effects indicative of attentional capture by target-matching visual singleton cues in the unimodal visual search task were reduced or completely eliminated when targets were audiovisually defined. The N2pc component (an index of attentional target selection in vision) triggered by these cues was reduced and delayed during search for audiovisual as compared with unimodal visual targets. These results provide novel evidence that the top-down control settings which guide attentional selectivity can include perceptual features from different sensory modalities.


2005 ◽  
Vol 93 (1) ◽  
pp. 337-351 ◽  
Author(s):  
Kirk G. Thompson ◽  
Narcisse P. Bichot ◽  
Takashi R. Sato

We investigated the saccade decision process by examining activity recorded in the frontal eye field (FEF) of monkeys performing two separate visual search experiments in which there were errors in saccade target choice. In the first experiment, the difficulty of a singleton search task was manipulated by varying the similarity between the target and distractors; errors were made more often when the distractors were similar to the target. On catch trials in which the target was absent, the monkeys occasionally made false-alarm errors by shifting gaze to one of the distractors. The second experiment was a pop-out color visual search task in which the target and distractor colors switched unpredictably across trials. Errors occurred most frequently on the first trial after the switch and less often on subsequent trials. In both experiments, FEF neurons selected the saccade goal on error trials, not the singleton target of the search array. Although saccades were made to the same stimulus locations, presaccadic activation and the magnitude of selection differed across trial conditions. The variation in presaccadic selective activity was accounted for by the variation in saccade probability across the stimulus–response conditions, but not by variations in saccade metrics. These results suggest that FEF serves as a saccade probability map derived from the combination of bottom-up and top-down influences. Peaks on this map represent the behavioral relevance of each item in the visual field rather than just reflecting saccade preparation. This map in FEF may correspond to the theoretical salience map of many models of attention and saccade target selection.


2013 ◽  
Vol 25 (5) ◽  
pp. 719-729 ◽  
Author(s):  
Rachel Wu ◽  
Gaia Scerif ◽  
Richard N. Aslin ◽  
Tim J. Smith ◽  
Rebecca Nako ◽  
...  

Visual search is often guided by top–down attentional templates that specify target-defining features. But search can also occur at the level of object categories. We measured the N2pc component, a marker of attentional target selection, in two visual search experiments in which targets were defined by a prime stimulus either categorically (e.g., any letter) or at the item level (e.g., the letter C). In both experiments, an N2pc was elicited during category search, both in familiar and novel contexts (Experiment 1) and with symbolic primes (Experiment 2), indicating that, even when targets are defined only at the category level, they are selected at early sensory-perceptual stages. However, the N2pc emerged earlier and was larger during item-based search than during category-based search, demonstrating the superiority of attentional guidance by item-specific templates. We discuss the implications of these findings for attentional control and category learning.


2019 ◽  
Author(s):  
Arnab Biswas ◽  
Devpriya Kumar

Searching for things is an essential part of everyday life, and the way we search offers clues about how our cognitive processes operate. Scientists have used the visual search task to study attention, perception, and memory. Visual search performance depends on a combination of stimulus-driven (bottom-up) information, goal-oriented (top-down) information, and selection-history biases. These factors are difficult to separate because they interact closely. The current study presents a paradigm for isolating the effects of top-down factors in visual search. In our experiments, subjects performed two different search tasks. A subset of trials in each task contained the same bottom-up information; that is, the same target, distractors, and target-distractor arrangement. We controlled for selection-history bias by using equivalent proportions of target types across tasks and randomizing trial order for each subject. We then compared mean response times for the critical trials, which shared identical bottom-up information across the two pairs of tasks. The results showed a significant difference in mean response times for critical trials in both experiments. This paradigm therefore allows the comparison of top-down guidance while controlling for bottom-up factors. Pairwise comparison of top-down guidance for different features, given the same bottom-up information, lets us ask questions such as: for which features can visual search guidance be easily increased by top-down processes, and for which can it not? Answers to these questions can further shed light on the ecological and evolutionary importance of such features in perception.


2010 ◽  
Vol 22 (5) ◽  
pp. 848-859 ◽  
Author(s):  
Roshan Cools ◽  
Robert Rogers ◽  
Roger A. Barker ◽  
Trevor W. Robbins

Cognitive dysfunction in Parkinson's disease (PD) has been hypothesized to reflect a failure of cortical control. In keeping with this hypothesis, some of the cognitive deficits in PD resemble those seen in patients with lesions of the lateral pFC, which has been associated with top–down attentional control. However, there is no direct evidence for a failure of top–down control mechanisms in PD. Here we fill this gap by demonstrating disproportionate control by bottom–up attention to dimensional salience during attentional set shifting. Patients needed significantly more trials to criterion than did controls when shifting to a low-salience dimension while, remarkably, needing significantly fewer trials to criterion than did controls when shifting to a high-salience dimension. Thus, attention was captured by salient information to a greater extent in patients than in controls. The results provide a striking reinterpretation of prior set-shifting data and the first direct evidence for a failure of top–down attentional control, resembling that seen after catecholamine depletion in the pFC.


10.2741/a503 ◽  
2000 ◽  
Vol 5 (3) ◽  
pp. d169-193 ◽  
Author(s):  
K. Sathian
Keyword(s):  
Top Down ◽  

2007 ◽  
Vol 60 (1) ◽  
pp. 120-136 ◽  
Author(s):  
Nick Donnelly ◽  
Kyle Cave ◽  
Rebecca Greenway ◽  
Julie A. Hadwin ◽  
Jim Stevenson ◽  
...  
Keyword(s):  
Top Down ◽  

1999 ◽  
Vol 61 (6) ◽  
pp. 1009-1023 ◽  
Author(s):  
Min-Shik Kim ◽  
Kyle R. Cave
