Scene semantics involuntarily guide attention during visual search

2019 · Author(s): Taylor R. Hayes, John M. Henderson

During scene viewing, is attention primarily guided by low-level image salience or by high-level semantics? Recent evidence suggests that overt attention in scenes is primarily guided by semantic features. Here we examined whether the attentional priority given to meaningful scene regions is involuntary. Participants completed a scene-independent visual search task in which they searched for superimposed letter targets whose locations were orthogonal to both the underlying scene semantics and image salience. Critically, the analyzed scenes contained no targets, and participants were unaware of this manipulation. We then directly compared how well the distribution of semantic features and image salience accounted for the overall distribution of overt attention. The results showed that even when the task was completely independent of the scene semantics and image salience, semantics explained significantly more variance in attention than image salience and more than expected by chance. This suggests that salient image features were effectively suppressed in favor of task goals, but semantic features were not suppressed. The semantic bias was present from the very first fixation and increased non-monotonically over the course of viewing. These findings suggest that overt attention in scenes is involuntarily guided by scene semantics.
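
As a rough illustration of the comparison described above, the sketch below correlates a fixation-density map with a semantic "meaning map" and an image-salience map and compares the variance each predictor explains. The arrays, their shapes, and the random values are hypothetical placeholders, not the authors' data or code; it is a minimal sketch of the variance-explained comparison, not the study's full analysis.

```python
# Minimal sketch: how much variance in an observed fixation-density map is
# explained by a semantic meaning map versus an image-salience map.
# All arrays below are hypothetical placeholders with matching 2D shapes.
import numpy as np

rng = np.random.default_rng(0)
meaning_map = rng.random((48, 64))       # hypothetical semantic-feature map
saliency_map = rng.random((48, 64))      # hypothetical image-salience map
fixation_density = rng.random((48, 64))  # hypothetical observed attention map

def r_squared(predictor, target):
    """Squared linear correlation between two maps, computed over all cells."""
    r = np.corrcoef(predictor.ravel(), target.ravel())[0, 1]
    return r ** 2

print("meaning  R^2:", r_squared(meaning_map, fixation_density))
print("salience R^2:", r_squared(saliency_map, fixation_density))
```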

2017 · Vol 117 (1) · pp. 388-402 · Author(s): Michael A. Cohen, George A. Alvarez, Ken Nakayama, Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on the similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, a body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways, at the level of both macroscale sectors and smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.

NEW & NOTEWORTHY Here, we ask which brain regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
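
The representational similarity logic described above can be sketched as follows: neural response patterns for each object category yield a pairwise dissimilarity structure, which is then correlated with behavioral search times for the corresponding category pairs. The category names, voxel count, and all values below are hypothetical placeholders, not the study's data or pipeline.

```python
# Minimal sketch of a representational similarity analysis relating neural
# pattern dissimilarity to visual search times. All data are hypothetical.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr
from scipy.spatial.distance import correlation as corr_dist

rng = np.random.default_rng(0)
categories = ["faces", "bodies", "cars", "hammers", "scenes", "chairs", "cats", "phones"]
n_voxels = 200
# Hypothetical mean response pattern per category in one visual region.
neural_patterns = {c: rng.random(n_voxels) for c in categories}
pairs = list(combinations(categories, 2))  # 28 category pairs, as in the study design
# Hypothetical mean reaction time for each target/distractor pairing (seconds).
search_times = {pair: rng.uniform(0.4, 1.2) for pair in pairs}

# Correlation distance (1 - Pearson r) between the two patterns of each pair.
neural_dissim = [corr_dist(neural_patterns[a], neural_patterns[b]) for a, b in pairs]
behavior = [search_times[p] for p in pairs]

# More dissimilar neural patterns are predicted to yield faster (shorter) search times.
rho, p = spearmanr(neural_dissim, behavior)
print(f"brain/behavior correlation: rho = {rho:.2f}, p = {p:.3f}")
```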


2017 · Author(s): Johannes J. Fahrenfort, Jonathan Van Leeuwen, Joshua J. Foster, Edward Awh, Christian N.L. Olivers

Working memory is the function by which we temporarily maintain information to achieve current task goals. Models of working memory typically debate where this information is stored, rather than how it is stored. Here we ask instead what neural mechanisms are involved in storage, and how these mechanisms change as a function of task goals. Participants either had to reproduce the orientation of a memorized bar (continuous recall task) or identify the memorized bar in a search array (visual search task). The sensory input and retention interval were identical in both tasks. Next, we used decoding and forward modeling on the multivariate electroencephalogram (EEG) and on time-frequency-decomposed EEG to investigate which neural signals carried informational content during the retention interval. In the continuous recall task, working memory content was preferentially carried by induced oscillatory alpha-band power, while in the visual search task it was more strongly carried by the distribution of evoked (consistently elevated and non-oscillatory) EEG activity. To show the independence of these two signals, we were able to remove informational content from one signal without affecting informational content in the other. Finally, we show that the tuning characteristics of both signals change in opposite directions depending on the current task goal. We propose that these signals reflect oscillatory and elevated firing-rate mechanisms that respectively support location-based and object-based maintenance. Together, these data challenge current models of working memory that place storage in particular regions, and instead emphasize the importance of different distributed maintenance signals depending on task goals.

Significance statement: Without realizing it, we are constantly moving things in and out of our mind's eye, an ability also referred to as 'working memory'. Where did I put my screwdriver? Do we still have milk in the fridge? A central question in working memory research is how the brain maintains this information temporarily. Here we show that different neural mechanisms are involved in working memory depending on what the memory is used for. For example, remembering what a bottle of milk looks like invokes a different neural mechanism from remembering how much milk it contains: the first is primarily involved in being able to find the object, while the second involves spatial position, such as the level of milk in the bottle.
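
A minimal sketch of the contrast described above: decoding the memorized item from alpha-band power versus from the evoked (non-oscillatory) voltage distribution during the retention interval. The epoch array, orientation labels, and the simple cross-validated classifier are hypothetical stand-ins; the study itself used decoding and forward (encoding) models rather than the shortcut shown here.

```python
# Minimal sketch: compare two delay-period EEG feature sets as carriers of
# working-memory content. All data below are hypothetical placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times, sfreq = 200, 32, 500, 250.0
epochs = rng.standard_normal((n_trials, n_channels, n_times))  # hypothetical delay-period EEG
orientation_bin = rng.integers(0, 8, n_trials)                 # hypothetical memorized orientation (8 bins)

# Feature set 1: alpha-band (8-12 Hz) power per channel, averaged over the delay.
b, a = butter(4, [8, 12], btype="bandpass", fs=sfreq)
alpha_power = np.abs(hilbert(filtfilt(b, a, epochs, axis=-1), axis=-1)).mean(axis=-1)

# Feature set 2: evoked activity, i.e. the mean raw voltage per channel over the delay.
evoked_voltage = epochs.mean(axis=-1)

clf = LogisticRegression(max_iter=1000)
for name, features in [("alpha power", alpha_power), ("evoked voltage", evoked_voltage)]:
    acc = cross_val_score(clf, features, orientation_bin, cv=5).mean()
    print(f"{name:>14}: decoding accuracy = {acc:.2f} (chance = 0.125)")
```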


2006 · Vol 44 (8) · pp. 1137-1145 · Author(s): Oren Kaplan, Reuven Dar, Lirona Rosenthal, Haggai Hermesh, Mendel Fux, ...

2003 · Vol 41 (10) · pp. 1365-1386 · Author(s): Steven S. Shimozaki, Mary M. Hayhoe, Gregory J. Zelinsky, Amy Weinstein, William H. Merigan, ...
