Modeling the Temporal Dynamics of IT Neurons in Visual Search: A Mechanism for Top-Down Selective Attention

1996 ◽  
Vol 8 (4) ◽  
pp. 311-327 ◽  
Author(s):  
Marius Usher ◽  
Ernst Niebur

We propose a neural model for object-oriented attention in which various visual stimuli (shapes, colors, letters, etc.) are represented by competing, mutually inhibitory, cell assemblies. The model's response to a sequence of cue and target stimuli mimics the neural responses in inferotemporal (IT) visual cortex of monkeys performing a visual search task: enhanced response during the display of the stimulus, which decays but remains above a spontaneous rate after the cue disappears. When, subsequently, a display consisting of the target and several distractors is presented, the activity of all stimulus-driven cells is initially enhanced. After a short period of time, however, the activity of the cell assembly representing the cue stimulus is enhanced while the activity of the distractors decays because of mutual competition and a small top-down “expectational” input. The model fits the measured delayed activity in IT cortex, recently reported by Chelazzi, Miller, Duncan, and Desimone (1993a), and we suggest that such a process, which is largely independent of the number of distractors, may be used by the visual system for selecting an expected target (appearing at an uncertain location) among distractors.
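As a rough illustration of the competition mechanism this abstract describes, the sketch below simulates mutually inhibitory, rate-based cell assemblies that all receive the same bottom-up drive, with a small top-down "expectational" bias toward the target. This is our own simplification, not the authors' model or parameters; the function name and all values are illustrative assumptions.

```python
# Minimal sketch of biased competition among cell assemblies (illustrative
# only; not the published model's equations or parameters).
import numpy as np

def simulate(n_items=4, target=0, steps=300, dt=0.01,
             tau=0.1, w_inh=1.2, drive=1.0, top_down=0.05):
    """Simulate firing rates of n_items stimulus-driven assemblies.

    All assemblies receive the same bottom-up drive; only `target`
    receives the extra top-down "expectational" bias.
    """
    r = np.zeros(n_items)          # firing rates of the assemblies
    history = []
    for _ in range(steps):
        inhibition = w_inh * (r.sum() - r)   # inhibition from the other assemblies
        bias = np.zeros(n_items)
        bias[target] = top_down              # small top-down expectational input
        net_input = drive + bias - inhibition
        drdt = (-r + np.maximum(net_input, 0)) / tau
        r = r + dt * drdt
        history.append(r.copy())
    return np.array(history)

rates = simulate()
# Early on all assemblies are driven to similar levels; once the competition
# settles, only the target assembly retains elevated activity.
print(rates[10].round(3), rates[-1].round(3))
```

Because the mutual inhibition makes the symmetric state unstable, even a small expectational bias is enough to decide which assembly survives the competition, in the spirit of the delayed target selection described above.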

2017 ◽  
Vol 117 (1) ◽  
pp. 388-402 ◽  
Author(s):  
Michael A. Cohen ◽  
George A. Alvarez ◽  
Ken Nakayama ◽  
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways, at the level of both macroscale sectors and smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.

NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
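The brain/behavior analysis described here can be sketched as follows: build a neural dissimilarity value for every category pair and rank-correlate it with search times for the same pairs. The code below uses simulated data (eight categories give the 28 pairs mentioned above); the variable names and the simulated relationship are our own assumptions, not the authors' data or pipeline.

```python
# Illustrative representational-similarity sketch with simulated data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_categories, n_voxels = 8, 100   # 8 categories -> C(8, 2) = 28 pairs

# Mean response pattern per category in some visual region (simulated here).
patterns = rng.normal(size=(n_categories, n_voxels))

# Neural dissimilarity for every category pair (1 - Pearson correlation).
neural_rdm = pdist(patterns, metric="correlation")

# Mean search time per target/distractor pair, simulated so that more
# similar (less dissimilar) categories yield slower search.
search_rt = 1.5 - 0.5 * neural_rdm + rng.normal(scale=0.05, size=neural_rdm.size)

# Brain/behavior correlation: under this simulation, more dissimilar neural
# patterns go with faster search, i.e., a negative rank correlation.
rho, p = spearmanr(neural_rdm, search_rt)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

Repeating this correlation for each region of interest is what allows the comparison of macroscale sectors and mesoscale regions described in the abstract.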


PLoS ONE ◽  
2017 ◽  
Vol 12 (9) ◽  
pp. e0184960 ◽  
Author(s):  
Marwen Belkaid ◽  
Nicolas Cuperlier ◽  
Philippe Gaussier

2010 ◽  
Vol 5 (8) ◽  
pp. 449-449
Author(s):  
B. R. Beutter ◽  
J. Toscano ◽  
L. S. Stone

2020 ◽  
pp. 174702182096626
Author(s):  
Lingxia Fan ◽  
Lin Zhang ◽  
Liuting Diao ◽  
Mengsi Xu ◽  
Ruiyang Chen ◽  
...  

Recent studies have demonstrated that in visual working memory (VWM), only items in an active state can guide attention. Further evidence has revealed that items with higher perceptual salience or items prioritised by a valid retro-cue in VWM tend to be in an active state. However, it is unclear which factor (perceptual salience or retro-cues) is more important for influencing the item state in VWM or whether the factors can act concurrently. Experiment 1 examined the role of perceptual salience by asking participants to hold two features with relatively different perceptual salience (colour vs. shape) in VWM while completing a visual search task. Guidance effects were found when either colour or both colour and shape in VWM matched one of the search distractors, but not when shape alone matched. This demonstrated that the more salient feature in VWM can actively guide attention, while the less salient feature cannot. However, when shape in VWM was cued to be more relevant (prioritised) in Experiment 2, we found guidance effects in both colour-match and shape-match conditions. That is, both the more salient but non-cued colour and the less salient but cued shape could be active in VWM, such that attentional selection was affected by the matching colour or shape in the visual search task. This suggests that bottom-up perceptual salience and top-down retro-cues can jointly determine the active state in VWM.


2021 ◽  
pp. 174702182098635
Author(s):  
Hana Yabuki ◽  
Stephanie C Goodhew

Visual search is a psychological function integral to most people’s daily lives. The extent to which visual search efficiency, and in particular the ability to use top-down attention in visual search, changes across the lifespan has been the focus of ongoing research. Here we sought to understand how the ability to frequently and dynamically change the target in a conjunction search task was affected by ageing. To do this, we compared visual search performance of a group of younger and older adults under conditions in which the target type was determined by a cue and could change on a trial-to-trial basis (Intermixed), versus when the target type was fixed for a block of trials (Blocked). Although older adults were overall slower at the conjunction visual search task, and both groups were slower in the Intermixed than in the Blocked condition, older adults were not disproportionately affected by the Intermixed relative to the Blocked condition. These results indicate that the ability to frequently change the target of visual search is preserved in older adults. This conclusion is consistent with an emerging consensus that many aspects of visual search and top-down contributions to it are preserved across the lifespan. It is also consistent with a growing body of work which challenges the neurocognitive theories of ageing that predict sweeping deficits in complex top-down components of cognition.


2006 ◽  
Vol 17 (5) ◽  
pp. 387-392 ◽  
Author(s):  
Joan Y. Chiao ◽  
Hannah E. Heck ◽  
Ken Nakayama ◽  
Nalini Ambady

We examined whether priming racial identity would influence Black-White biracial individuals' ability to visually search for White and Black faces. Black, White, and biracial participants performed a visual search task in which the targets were Black or White faces. Before the task, the biracial participants were primed with either their Black or their White racial identity. All participant groups detected Black faces faster than White faces. Critically, the results also showed a racial-prime effect in biracial individuals: the magnitude of the search asymmetry differed significantly between those primed with their White identity and those primed with their Black identity. These findings suggest that top-down factors such as one's racial identity can influence mechanisms underlying the visual search for faces of different races.


2010 ◽  
Vol 22 (4) ◽  
pp. 640-654 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Anna Schubö

Two mechanisms are said to be responsible for guiding focal attention in visual selection: bottom–up, saliency-driven capture and top–down control. These mechanisms were examined with a paradigm that combined a visual search task with postdisplay probe detection. Two stimulus onset asynchronies (SOAs) between the search display and probe onsets were introduced to investigate how attention was allocated to particular items at different points in time. The dynamic interplay between bottom–up and top–down mechanisms was investigated with ERP methodology. ERPs locked to the search displays showed that top–down control needed time to develop: the N2pc indicated allocation of attention to the target item and not to the irrelevant singleton. ERPs locked to probes revealed modulations in the P1 component reflecting top–down control of focal attention at the long SOA. Early bottom–up effects were observed in the error rates at the short SOA. Taken together, the present results show that the top–down mechanism takes time to guide focal attention to the relevant target item and that it is potent enough to limit bottom–up attentional capture.

