The Effect of Scene Context on Visual Search in Real‐World Scenes

Author(s):  
Samia Hussein

The present study examined the effect of scene context on the guidance of attention during visual search in real-world scenes. Prior research has demonstrated that when searching for an object, attention is usually guided to the region of a scene most likely to contain that target object. This study examined two possible mechanisms of attention that underlie efficient search: enhancement of attention (facilitation) and suppression of attention (inhibition). Participants (N = 20) were shown an object name and then searched scenes for the target while their eye movements were tracked. Scenes were divided into target-relevant contextual regions (upper, middle, lower), and participants searched the same scene repeatedly for different targets located either in the same region or in different regions. Comparing repeated searches within the same scene, we expect visual search to be faster and more efficient (facilitation of attention) in regions where attention was previously deployed. At the same time, we expect searches to be slower and less efficient (inhibition of attention) in regions that were previously ignored. Results from this study will help clarify how mechanisms of visual attention operate within scene contexts during visual search.
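
The facilitation/inhibition contrast described in this abstract can be illustrated with a minimal analysis sketch. The trial records, field names, and response times below are invented for illustration and are not the authors' data or analysis code.

```python
# Minimal sketch (not the authors' analysis) of comparing repeated searches in the
# same scene across contextual regions. Trial records and values are hypothetical.
from statistics import mean

# Hypothetical trial log: one entry per search, in presentation order per scene.
trials = [
    {"scene": "kitchen_01", "region": "upper", "rt_ms": 1420},
    {"scene": "kitchen_01", "region": "upper", "rt_ms": 980},   # repeat, same region
    {"scene": "kitchen_01", "region": "lower", "rt_ms": 1610},  # repeat, different region
    {"scene": "office_03", "region": "middle", "rt_ms": 1250},
    {"scene": "office_03", "region": "middle", "rt_ms": 1010},
]

same_region_rts, diff_region_rts = [], []
last_region_by_scene = {}

for t in trials:
    prev = last_region_by_scene.get(t["scene"])
    if prev is not None:  # only repeated searches within a scene enter the contrast
        (same_region_rts if t["region"] == prev else diff_region_rts).append(t["rt_ms"])
    last_region_by_scene[t["scene"]] = t["region"]

# Facilitation predicts faster same-region repeats; inhibition predicts slower
# different-region repeats, relative to one another.
print("same-region repeat RT (ms):", mean(same_region_rts))
print("different-region repeat RT (ms):", mean(diff_region_rts))
```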

2020
Author(s):
Amir H. Meghdadi
Barry Giesbrecht
Miguel P. Eckstein

Abstract
The use of scene context is a powerful way by which biological organisms guide and facilitate visual search. Although many studies have shown enhancements of target-related electroencephalographic (EEG) activity with synthetic cues, fewer studies have demonstrated such enhancements during search with scene context and objects in real-world scenes. Here, observers covertly searched for a target in images of real scenes while we used EEG to measure the steady-state visual evoked response to objects flickering at different frequencies. The target appeared in its typical contextual location or out of context while we controlled for low-level properties of the image, including target saliency against the background and retinal eccentricity. A pattern classifier using EEG activity at the relevant modulated frequencies showed that target detection accuracy increased when the target was in a contextually appropriate location. A control condition, in which observers searched the same images for a different target orthogonal to the contextual manipulation, resulted in no effects of scene context on classifier performance, confirming that image properties cannot explain the contextual modulations of neural activity. Pattern classifier decisions for individual images were also related to the aggregated observer behavioral decisions for those images. Together, these findings demonstrate that target-related neural responses are modulated by scene context during visual search with real-world scenes and can be related to behavioral search decisions.
Significance Statement
Contextual relationships among objects are fundamental for humans to find objects in real-world scenes. Although there is a larger literature on the brain mechanisms engaged when a target appears at a location indicated by a synthetic cue such as an arrow or box, less is known about how scene context modulates target-related neural activity. Here we show how neural activity predictive of the presence of a searched object in cluttered real scenes increases when the target object appears at a contextual location and diminishes when it appears at a place that is out of context. The results increase our understanding of how the brain processes real scenes and how context modulates object processing.
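
The frequency-tagging analysis described in this abstract, a pattern classifier applied to EEG activity at the flicker frequencies of tagged objects, can be sketched roughly as follows. The sampling rate, flicker frequencies, synthetic data, and choice of a logistic-regression classifier are assumptions for illustration, not the authors' pipeline.

```python
# Illustrative sketch only: classify trials from spectral power at the tagged
# ("SSVEP") flicker frequencies. Parameters and data below are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 256                       # assumed sampling rate (Hz)
flicker_freqs = [12.0, 15.0]   # assumed tagging frequencies of flickering objects
n_trials, n_channels, n_samples = 200, 8, fs * 2   # 2-second epochs

rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_trials, n_channels, n_samples))  # placeholder EEG epochs
labels = rng.integers(0, 2, n_trials)                         # hypothetical condition labels

def tagged_power(epochs, freqs, sample_rate):
    """Spectral power at each tagged frequency, flattened to trials x (channels*freqs)."""
    spectrum = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
    fft_freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / sample_rate)
    idx = [np.argmin(np.abs(fft_freqs - f)) for f in freqs]
    return spectrum[..., idx].reshape(epochs.shape[0], -1)

X = tagged_power(eeg, flicker_freqs, fs)
clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```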


Author(s):
Gwendolyn Rehrig
Reese A. Cullimore
John M. Henderson
Fernanda Ferreira

Abstract
According to the Gricean Maxim of Quantity, speakers provide the amount of information listeners require to correctly interpret an utterance, and no more (Grice in Logic and conversation, 1975). However, speakers often violate the Maxim of Quantity, especially when the redundant information improves reference precision (Degen et al. in Psychol Rev 127(4):591–621, 2020). Redundant (non-contrastive) information may facilitate real-world search if it narrows the spatial scope under consideration or improves target template specificity. The current study investigated whether non-contrastive modifiers that improve reference precision facilitate visual search in real-world scenes. In two visual search experiments, we compared search performance when perceptually relevant but non-contrastive modifiers were included in the search instruction. Participants (N = 48 in each experiment) searched for a unique target object following a search instruction that contained either no modifier, a location modifier (Experiment 1: on the top left, Experiment 2: on the shelf), or a color modifier (the black lamp). In Experiment 1 only, the target was located faster when the verbal instruction included either modifier, and there was an overall benefit of color modifiers in a combined analysis for scenes and conditions common to both experiments. The results suggest that violations of the Maxim of Quantity can facilitate search when the violations include task-relevant information that either augments the target template or constrains the search space, and when at least one modifier provides a highly reliable cue. Consistent with Degen et al. (2020), we conclude that listeners benefit from non-contrastive information that improves reference precision and engage in rational reference comprehension.
Significance statement
This study investigated whether providing more information than someone needs to find an object in a photograph helps them find that object more easily, even though it means they need to interpret a more complicated sentence. Before searching a scene, participants were either given information about where the object would be located in the scene, told what color the object was, or only told what object to search for. The results showed that providing additional information helped participants locate an object in an image more easily only when at least one piece of information communicated what part of the scene the object was in, which suggests that more information can be beneficial as long as that information is specific and helps the recipient achieve a goal. We conclude that people will pay attention to redundant information when it supports their task. In practice, our results suggest that instructions in other contexts (e.g., real-world navigation, using a smartphone app, prescription instructions) can benefit from the inclusion of what appears to be redundant information.


2007
Vol 47 (19)
pp. 2521–2530
Author(s):
Chloé Prado
Matthieu Dubois
Sylviane Valdois

2009
Vol 21 (2)
pp. 246–258
Author(s):
Jonathan S. A. Carriere
Daniel Eaton
Michael G. Reynolds
Mike J. Dixon
Daniel Smilek

For individuals with grapheme–color synesthesia, achromatic letters and digits elicit vivid perceptual experiences of color. We report two experiments that evaluate whether synesthesia influences overt visual attention. In these experiments, two grapheme–color synesthetes viewed colored letters while their eye movements were monitored. Letters were presented in colors that were either congruent or incongruent with the synesthetes' colors. Eye tracking analysis showed that synesthetes exhibited a color congruity bias—a propensity to fixate congruently colored letters more often and for longer durations than incongruently colored letters—in a naturalistic free-viewing task. In a more structured visual search task, this congruity bias caused synesthetes to rapidly fixate and identify congruently colored target letters, but led to problems in identifying incongruently colored target letters. The results are discussed in terms of their implications for perception in synesthesia.
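
The congruity-bias measure described in this abstract, how often and how long synesthetes fixate congruently versus incongruently colored letters, can be sketched with a small contrast like the one below. The fixation records and labels are hypothetical, not the authors' data or code.

```python
# Hypothetical sketch of a congruity-bias contrast: fixation counts and mean
# fixation durations for congruently vs. incongruently colored letters.
from statistics import mean

# (congruency of fixated letter, fixation duration in ms) for one free-viewing trial
fixations = [
    ("congruent", 310), ("incongruent", 190), ("congruent", 280),
    ("congruent", 340), ("incongruent", 210),
]

for condition in ("congruent", "incongruent"):
    durations = [d for label, d in fixations if label == condition]
    print(f"{condition}: {len(durations)} fixations, "
          f"mean duration {mean(durations):.0f} ms")
```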


Author(s):
Megan H. Papesh
Michael C. Hout
Juan D. Guevara Pinto
Arryn Robbins
Alexis Lopez

Abstract
Domain-specific expertise changes the way people perceive, process, and remember information from that domain. This is often observed in visual domains involving skilled search, such as athletics referees or professional visual searchers (e.g., security and medical screeners). Although existing research has compared expert to novice performance in visual search, little work has directly documented how accumulating experience changes behavior. A longitudinal approach to studying visual search performance may permit a finer-grained understanding of experience-dependent changes in visual scanning, and the extent to which various cognitive processes are affected by experience. In this study, participants acquired experience by taking part in many experimental sessions over the course of an academic semester. Searchers looked for 20 categories of targets simultaneously (which appeared with unequal frequency), in displays with 0–3 targets present, while having their eye movements recorded. With experience, accuracy increased and response times decreased. Fixation probabilities and durations decreased with increasing experience, but saccade amplitudes and visual span increased. These findings suggest that the behavioral benefits endowed by expertise emerge from oculomotor behaviors that reflect enhanced reliance on memory to guide attention and the ability to process more of the visual field within individual fixations.
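
Two of the oculomotor measures reported in this abstract, fixation duration and saccade amplitude, can be computed from a fixation log as in the sketch below; visual span would additionally require the display's item layout. The fixation format and pixel-to-degree conversion are assumptions for illustration, not the authors' definitions or code.

```python
# Hypothetical sketch: mean fixation duration and mean saccade amplitude from a
# list of fixations. Display geometry and fixation values are invented.
import math

PIX_PER_DEG = 35.0   # assumed pixels per degree of visual angle

# Hypothetical fixation log: (x_px, y_px, duration_ms)
fixations = [(512, 384, 220), (640, 300, 180), (820, 410, 250), (700, 500, 200)]

durations = [d for _, _, d in fixations]
saccade_amplitudes_deg = [
    math.dist((x1, y1), (x2, y2)) / PIX_PER_DEG
    for (x1, y1, _), (x2, y2, _) in zip(fixations, fixations[1:])
]

print("mean fixation duration (ms):", sum(durations) / len(durations))
print("mean saccade amplitude (deg):",
      sum(saccade_amplitudes_deg) / len(saccade_amplitudes_deg))
```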


2009
Vol 62 (8)
pp. 1457–1506
Author(s):  
Keith Rayner

Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading is somewhat more advanced than research on eye movements in scene perception and visual search, and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with “real-world” tasks and research utilizing the visual-world paradigm are also briefly discussed.

