Visual search for object categories is predicted by the representational architecture of high-level visual cortex

Journal of Neurophysiology ◽ 
2017 ◽ 
Vol 117 (1) ◽  
pp. 388-402 ◽  
Author(s):  
Michael A. Cohen ◽  
George A. Alvarez ◽  
Ken Nakayama ◽  
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on the similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, a body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways, when considering both macroscale sectors and smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.

NEW & NOTEWORTHY Here, we ask which neural regions have response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
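The core analysis here is representational similarity analysis (RSA): construct a pairwise neural dissimilarity matrix over the object categories, construct the matching matrix of behavioral search times for each category pair, and correlate the two. Below is a minimal Python sketch of that logic with simulated stand-in data (the array sizes, correlation-distance metric, and Spearman statistic are illustrative assumptions, not the authors' exact pipeline; note that 8 categories yield the 28 pairwise conditions mentioned above):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Stand-in data: 8 object categories x 200 voxels of neural response
# patterns, measured while items are presented in isolation.
n_categories, n_voxels = 8, 200
neural_patterns = rng.normal(size=(n_categories, n_voxels))

# Neural representational dissimilarity matrix (RDM): correlation distance
# (1 - Pearson r) between the response patterns of every pair of categories.
neural_rdm = squareform(pdist(neural_patterns, metric="correlation"))

# Behavioral matrix: mean search time for each target/distractor category
# pair (random here; measured per condition in the study).
n_pairs = n_categories * (n_categories - 1) // 2
behavior_rt = squareform(rng.uniform(0.4, 1.2, size=n_pairs))

# Brain/behavior correlation: rank-correlate the unique off-diagonal cells.
iu = np.triu_indices(n_categories, k=1)
rho, p = spearmanr(neural_rdm[iu], behavior_rt[iu])
print(f"brain/behavior Spearman rho = {rho:.2f} (p = {p:.3f})")
```

Under this logic, a region whose representational structure predicts behavior should show a reliable correlation, typically negative in sign, since more dissimilar category pairs support faster search and thus shorter search times.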

SLEEP ◽  
2018 ◽  
Vol 41 (suppl_1) ◽  
pp. A252-A252 ◽ 
Author(s):  
E Giora ◽  
A Galbiati ◽  
M Zucconi ◽  
L Ferini-Strambi

Psychological Science ◽ 
2005 ◽ 
Vol 16 (4) ◽  
pp. 275-281 ◽  
Author(s):  
Steven L. Franconeri ◽  
Andrew Hollingworth ◽  
Daniel J. Simons

The visual system relies on several heuristics to direct attention to important locations and objects. One of these mechanisms directs attention to sudden changes in the environment. Although a substantial body of research suggests that this capture of attention occurs only for the abrupt appearance of a new perceptual object, more recent evidence shows that some luminance-based transients (e.g., motion and looming) and some types of brightness change also capture attention. These findings show that new objects are not necessary for attention capture. The present study tested whether they are even sufficient. That is, does a new object attract attention because the visual system is sensitive to new objects or because it is sensitive to the transients that new objects create? In two experiments using a visual search task, new objects did not capture attention unless they created a strong local luminance transient.


2021 ◽  
Author(s):  
Jie Zhang ◽  
Xiaocang Zhu ◽  
Shanshan Wang ◽  
Hossein Esteky ◽  
Yonghong Tian ◽  
...  

Visual search depends on both the foveal and peripheral visual systems, yet the mechanisms of foveal attention remain poorly understood. We simultaneously recorded foveal and peripheral activity in V4, IT, and LPFC while monkeys performed a category-based visual search task. Feature attention enhanced the responses of face-selective, house-selective, and non-selective foveal cells in visual cortex. Foveal attention effects emerged regardless of peripheral attention effects, whereas attending to the foveal stimulus dissipated peripheral feature-based attention effects and delayed peripheral spatial attention effects. When target features appeared both foveally and peripherally, feature attention effects occurred predominantly at the fovea, at odds with the common view that feature-based attention is distributed across the visual field. These results suggest that attentive processing operates in parallel during distractor fixations, whereas a serial process predominates during target fixations in visual search.
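A common way to quantify the feature-attention enhancement reported here is an attentional modulation index (AMI) contrasting a cell's response when its preferred category is the search target versus when it is not. A toy sketch with simulated spike counts (the Poisson rates and the AMI convention are assumptions for illustration, not the authors' measure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated trial-wise spike counts for one foveal face-selective cell:
# the monkey searches for faces (preferred category attended) vs. houses
# (preferred category unattended). Rates are made up for illustration.
attended = rng.poisson(lam=22, size=120)
unattended = rng.poisson(lam=15, size=120)

# Attentional modulation index: (att - unatt) / (att + unatt);
# positive values indicate response enhancement by feature attention.
ra, ru = attended.mean(), unattended.mean()
ami = (ra - ru) / (ra + ru)
print(f"attended={ra:.1f}, unattended={ru:.1f} spikes/trial, AMI={ami:.2f}")
```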


Quarterly Journal of Experimental Psychology ◽ 
2012 ◽ 
Vol 65 (6) ◽  
pp. 1068-1085 ◽  
Author(s):  
Gary Lupyan ◽  
Daniel Swingley

People often talk to themselves, yet very little is known about the functions of this self-directed speech. We explore the effects of self-directed speech on visual processing using a visual search task. According to the label feedback hypothesis (Lupyan, 2007a), verbal labels can change ongoing perceptual processing: for example, actually hearing "chair", compared to simply thinking about a chair, can temporarily make the visual system a better "chair detector". Participants searched for common objects while sometimes being asked to speak the target's name aloud. Speaking facilitated search, particularly when there was a strong association between the name and the visual target. As the discrepancy between the name and the target increased, speaking began to impair performance. Together, these results speak to the power of words to modulate ongoing visual processing.


Journal of Cognitive Neuroscience ◽ 
1996 ◽ 
Vol 8 (4) ◽  
pp. 311-327 ◽  
Author(s):  
Marius Usher ◽  
Ernst Niebur

We propose a neural model for object-oriented attention in which various visual stimuli (shapes, colors, letters, etc.) are represented by competing, mutually inhibitory cell assemblies. The model's response to a sequence of cue and target stimuli mimics the neural responses in inferotemporal (IT) visual cortex of monkeys performing a visual search task: an enhanced response during the display of the stimulus, which decays but remains above the spontaneous rate after the cue disappears. When a display consisting of the target and several distractors is subsequently presented, the activity of all stimulus-driven cells is initially enhanced. After a short period of time, however, the activity of the cell assembly representing the cue stimulus is enhanced, while the activity of the distractors decays because of mutual competition and a small top-down "expectational" input. The model fits the delayed activity measured in IT cortex, as reported by Chelazzi, Miller, Duncan, and Desimone (1993a), and we suggest that such a process, which is largely independent of the number of distractors, may be used by the visual system for selecting an expected target (appearing at an uncertain location) among distractors.
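The competition mechanism can be sketched as a handful of rate units with self-excitation, mutual inhibition, and a small top-down bias toward the cued assembly. The following is a deliberately simplified discretization with made-up parameters, not the model's published equations:

```python
import numpy as np

# Toy rate model of the competition stage: 4 mutually inhibitory cell
# assemblies (index 0 represents the cued target, the rest distractors).
n_units, dt, tau = 4, 1.0, 20.0              # time step and membrane constant (ms)
rates = np.zeros(n_units)
stimulus = np.ones(n_units)                  # all assemblies driven at display onset
top_down = np.array([0.15, 0.0, 0.0, 0.0])   # small "expectational" input to the cue
w_self, w_inhib = 0.9, 0.6                   # self-excitation / mutual inhibition

for _ in range(300):                         # ~300 ms of the target+distractor display
    inhibition = w_inhib * (rates.sum() - rates)  # input from all other assemblies
    drive = stimulus + top_down + w_self * rates - inhibition
    rates += (dt / tau) * (-rates + np.maximum(drive, 0.0))

# The cued assembly wins the competition while distractor activity decays,
# mimicking the delayed target selection seen in IT.
print(np.round(rates, 2))
```

Because the cued assembly's small extra input compounds through the mutual inhibition, the rate difference grows over time and the network settles into a winner-take-all state largely independent of the number of distractors.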


2019 ◽  
Author(s):  
Taylor R. Hayes ◽  
John M. Henderson

During scene viewing, is attention primarily guided by low-level image salience or by high-level semantics? Recent evidence suggests that overt attention in scenes is primarily guided by semantic features. Here we examined whether the attentional priority given to meaningful scene regions is involuntary. Participants completed a scene-independent visual search task in which they searched for superimposed letter targets whose locations were orthogonal to both the underlying scene semantics and image salience. Critically, the analyzed scenes contained no targets, and participants were unaware of this manipulation. We then directly compared how well the distribution of semantic features and image salience accounted for the overall distribution of overt attention. The results showed that even when the task was completely independent of the scene semantics and image salience, semantics explained significantly more variance in attention than image salience and more than expected by chance. This suggests that salient image features were effectively suppressed in favor of task goals, but semantic features were not suppressed. The semantic bias was present from the very first fixation and increased non-monotonically over the course of viewing. These findings suggest that overt attention in scenes is involuntarily guided by scene semantics.
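The central comparison reduces to asking how much variance in the fixation-density map each feature map explains. A toy sketch with synthetic maps (the map sizes, the plain linear R² statistic, and the built-in correlation between meaning and salience are assumptions for illustration, not the authors' exact statistics):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 32x32 maps for one scene: a semantic "meaning map", an image-salience
# map, and an observed fixation-density map built to depend mostly on meaning.
meaning = rng.random((32, 32))
salience = 0.5 * meaning + 0.5 * rng.random((32, 32))   # the two maps correlate in real scenes
fixations = 0.8 * meaning + 0.2 * rng.random((32, 32))

def r2(feature_map, fixation_map):
    """Squared linear correlation between two flattened spatial maps."""
    return np.corrcoef(feature_map.ravel(), fixation_map.ravel())[0, 1] ** 2

print(f"meaning  R^2 = {r2(meaning, fixations):.2f}")
print(f"salience R^2 = {r2(salience, fixations):.2f}")
```

Because meaning and salience maps are themselves correlated, a fuller analysis would also have to partial out their shared variance before crediting either feature.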


Author(s):  
Kirsten C.S. Adam ◽  
John T. Serences

To find important objects, we must focus on our goals, ignore distractions, and take our changing environment into account. This is formalized in models of visual search whereby goal-driven, stimulus-driven, and history-driven factors are integrated into a priority map that guides attention. History is invoked to explain behavioral effects that are neither wholly goal-driven nor stimulus-driven, but whether history likewise alters goal-driven and/or stimulus-driven signatures of neural priority is unknown. We measured fMRI responses in human visual cortex during a visual search task where trial history was manipulated (colors switched unpredictably or repeated). History had a near-constant impact on responses to singleton distractors, but not targets, from V1 through parietal cortex. In contrast, history-independent target enhancement was absent in V1 but increased across regions. Our data suggest that history does not alter goal-driven search templates, but rather modulates canonically stimulus-driven sensory responses to create a temporally integrated representation of priority.
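The priority-map framework invoked here combines the three factors into a single map whose peak determines where attention goes. A schematic sketch (the weights, map contents, and the gain formulation are hypothetical; the data above argue only that history acts on the stimulus-driven component rather than on the goal-driven template):

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (16, 16)                      # a coarse retinotopic grid

goal_map = rng.random(shape)          # goal-driven: match to the search template
stimulus_map = rng.random(shape)      # stimulus-driven: e.g., singleton salience
history_map = rng.random(shape)       # history-driven: e.g., color repetition

# Textbook additive combination of the three factors (hypothetical weights).
w_goal, w_stim, w_hist = 1.0, 0.6, 0.4
priority_additive = w_goal * goal_map + w_stim * stimulus_map + w_hist * history_map

# The pattern reported above instead suggests history acting as a gain on the
# stimulus-driven component, leaving the goal-driven template untouched.
priority_gain = w_goal * goal_map + (w_stim * stimulus_map) * (1.0 + history_map)

# Attention is directed to the peak of the resulting priority map.
print("additive model peak:", np.unravel_index(priority_additive.argmax(), shape))
print("gain model peak:    ", np.unravel_index(priority_gain.argmax(), shape))
```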


2021 ◽  
Author(s):  
Sushrut Thorat ◽  
Marius V. Peelen

Feature-based attention supports the selection of goal-relevant stimuli by enhancing the visual processing of attended features. A defining property of feature-based attention is that it modulates visual processing beyond the focus of spatial attention. Previous work has reported such spatially-global effects for low-level features such as color and orientation, as well as for faces. Here, using fMRI, we provide evidence for spatially-global attentional modulation for human bodies. Participants were cued to search for one of six object categories in two vertically-aligned images. Two additional, horizontally-aligned, images were simultaneously presented but were never task-relevant across three experimental sessions. Analyses time-locked to the objects presented in these task-irrelevant images revealed that responses evoked by body silhouettes were modulated by the participants' top-down attentional set, becoming more body-selective when participants searched for bodies in the task-relevant images. These effects were observed both in univariate analyses of the body-selective cortex and in multivariate analyses of the object-selective visual cortex. Additional analyses showed that this modulation reflected response gain rather than a bias induced by the cues, and that it reflected enhancement of body responses rather than suppression of non-body responses. Finally, the features of early layers of a convolutional neural network trained for object recognition could not be used to accurately categorize body silhouettes, indicating that the fMRI results were unlikely to reflect selection based on low-level features. These findings provide the first evidence for spatially-global feature-based attention for human bodies, linking this modulation to body representations in high-level visual cortex.
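The control analysis at the end amounts to extracting features from an early convolutional layer of an ImageNet-trained network and testing whether a linear read-out can categorize the silhouettes. A hedged sketch using PyTorch and scikit-learn (the network, layer cut, image size, and random placeholder stimuli are all illustrative assumptions, not the authors' pipeline):

```python
import numpy as np
import torch
from torchvision.models import alexnet, AlexNet_Weights
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data: random images standing in for body vs. non-body
# silhouettes (the real analysis would load the experiment's stimuli, so
# accuracy here should hover near chance by construction).
gen = torch.Generator().manual_seed(4)
images = torch.rand((40, 3, 64, 64), generator=gen)
labels = np.array([0] * 20 + [1] * 20)   # 0 = non-body, 1 = body

# Early-layer features from an ImageNet-trained AlexNet; "early" here means
# the first conv block (conv1 -> ReLU -> max-pool), an illustrative cut.
model = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1).eval()
early_layers = model.features[:3]

with torch.no_grad():
    feats = early_layers(images).flatten(start_dim=1).numpy()

# Linear read-out: if early features sufficed to separate the categories,
# cross-validated accuracy would be well above chance (0.5).
scores = cross_val_score(LogisticRegression(max_iter=1000), feats, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

With the real silhouette images, chance-level accuracy from early layers would support the authors' conclusion that the attentional modulation was not driven by low-level features.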

