EEG signatures of contextual influences on visual search with real scenes

2020 ◽  
Author(s):  
Amir H. Meghdadi ◽  
Barry Giesbrecht ◽  
Miguel P Eckstein

Abstract The use of scene context is a powerful way by which biological organisms guide and facilitate visual search. Although many studies have shown enhancements of target-related electroencephalographic (EEG) activity with synthetic cues, fewer studies have demonstrated such enhancements during search with scene context and objects in real-world scenes. Here, observers covertly searched for a target in images of real scenes while we used EEG to measure the steady-state visual evoked response to objects flickering at different frequencies. The target appeared in its typical contextual location or out of context while we controlled for low-level properties of the image, including target saliency against the background and retinal eccentricity. A pattern classifier using EEG activity at the relevant modulated frequencies showed that target detection accuracy increased when the target was in a contextually appropriate location. A control condition, in which observers searched the same images for a different target orthogonal to the contextual manipulation, resulted in no effects of scene context on classifier performance, confirming that image properties cannot explain the contextual modulations of neural activity. Pattern classifier decisions for individual images were also related to the aggregated observer behavioral decisions for individual images. Together, these findings demonstrate that target-related neural responses are modulated by scene context during visual search with real-world scenes and can be related to behavioral search decisions.
Significance Statement Contextual relationships among objects are fundamental to how humans find objects in real-world scenes. Although a large literature addresses the brain mechanisms engaged when a target appears at a location indicated by a synthetic cue such as an arrow or box, less is known about how scene context modulates target-related neural activity. Here we show that neural activity predictive of the presence of a searched object in cluttered real scenes increases when the target object appears at a contextually expected location and diminishes when it appears out of context. The results increase our understanding of how the brain processes real scenes and how context modulates object processing.
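To illustrate the frequency-tagging logic the abstract describes, the sketch below uses synthetic data (all parameters assumed, not the authors' pipeline): objects flicker at distinct frequencies, and spectral power at each tagged frequency indexes processing of the corresponding object; a pattern classifier would then operate on such features.

```python
import numpy as np

def ssvep_power(eeg, fs, freq):
    """Spectral power of a 1-D EEG trace at a tagged frequency (nearest FFT bin)."""
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))] ** 2

fs = 250                        # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)     # one 4 s trial
rng = np.random.default_rng(0)

# Simulated trial: target tagged at 12 Hz, distractor at 15 Hz.
# An attended (in-context) target yields a larger response at its tag.
eeg = 2.0 * np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 15 * t)
eeg += rng.normal(0.0, 1.0, t.size)   # additive noise

p12 = ssvep_power(eeg, fs, 12.0)
p15 = ssvep_power(eeg, fs, 15.0)
print(p12 > p15)   # power at the target's tag dominates
```

In the study itself the classifier used EEG activity at the modulated frequencies across electrodes; this sketch only shows how frequency-tagged power is isolated from a single trace.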

Author(s):  
Samia Hussein

The present study examined the effect of scene context on the guidance of attention during visual search in real-world scenes. Prior research has demonstrated that when searching for an object, attention is usually guided to the region of a scene most likely to contain that target object. This study examined two possible attentional mechanisms underlying efficient search: enhancement of attention (facilitation) and suppression of attention (inhibition). Participants (N=20) were shown an object name and then required to search through scenes for the target while their eye movements were tracked. Scenes were divided into target-relevant contextual regions (upper, middle, lower), and participants searched repeatedly in the same scene for different targets located either in the same region or in different regions. Comparing repeated searches within the same scene, we expect visual search to be faster and more efficient (facilitation of attention) in regions of a scene where attention was previously deployed, and slower and less efficient (inhibition of attention) in regions that were previously ignored. Results from this study help to better understand how mechanisms of visual attention operate within scene contexts during visual search.
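The predicted facilitation and inhibition effects reduce to simple contrasts of mean search times. The sketch below uses entirely hypothetical reaction times (invented numbers, not data from this study) to show how each effect would be computed.

```python
import statistics

# Hypothetical search times in ms (illustrative values only):
baseline = [820, 790, 805, 840]       # first search in a scene
same_region = [700, 690, 720, 705]    # repeated search, previously attended region
diff_region = [900, 880, 910, 895]    # repeated search, previously ignored region

# Facilitation: speed-up for a previously attended region.
facilitation = statistics.mean(baseline) - statistics.mean(same_region)
# Inhibition: slow-down for a previously ignored region.
inhibition = statistics.mean(diff_region) - statistics.mean(baseline)

print(facilitation > 0, inhibition > 0)   # both predicted positive
```

Positive values of both contrasts would correspond to the predicted facilitation-plus-inhibition pattern.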


Author(s):  
Gwendolyn Rehrig ◽  
Reese A. Cullimore ◽  
John M. Henderson ◽  
Fernanda Ferreira

Abstract According to the Gricean Maxim of Quantity, speakers provide the amount of information listeners require to correctly interpret an utterance, and no more (Grice in Logic and conversation, 1975). However, speakers often violate the Maxim of Quantity, especially when the redundant information improves reference precision (Degen et al. in Psychol Rev 127(4):591–621, 2020). Redundant (non-contrastive) information may facilitate real-world search if it narrows the spatial scope under consideration or improves target template specificity. The current study investigated whether non-contrastive modifiers that improve reference precision facilitate visual search in real-world scenes. In two visual search experiments, we compared search performance when perceptually relevant but non-contrastive modifiers were included in the search instruction. Participants (NExp. 1 = 48, NExp. 2 = 48) searched for a unique target object following a search instruction that contained either no modifier, a location modifier (Experiment 1: on the top left; Experiment 2: on the shelf), or a color modifier (the black lamp). In Experiment 1 only, the target was located faster when the verbal instruction included either modifier, and there was an overall benefit of color modifiers in a combined analysis of scenes and conditions common to both experiments. The results suggest that violations of the Maxim of Quantity can facilitate search when the violations include task-relevant information that either augments the target template or constrains the search space, and when at least one modifier provides a highly reliable cue. Consistent with Degen et al. (2020), we conclude that listeners benefit from non-contrastive information that improves reference precision, and engage in rational reference comprehension.
Significance statement This study investigated whether providing more information than someone needs to find an object in a photograph helps them to find that object more easily, even though it means they need to interpret a more complicated sentence. Before searching a scene, participants were either given information about where the object would be located in the scene, what color the object was, or were only told what object to search for. The results showed that providing additional information helped participants locate an object in an image more easily only when at least one piece of information communicated what part of the scene the object was in, which suggests that more information can be beneficial as long as that information is specific and helps the recipient achieve a goal. We conclude that people will pay attention to redundant information when it supports their task. In practice, our results suggest that instructions in other contexts (e.g., real-world navigation, using a smartphone app, prescription instructions, etc.) can benefit from the inclusion of what appears to be redundant information.


2018 ◽  
Author(s):  
Anouk M. van Loon ◽  
Katya Olmos Solis ◽  
Johannes J. Fahrenfort ◽  
Christian N. L. Olivers

Abstract Adaptive behavior requires the separation of current from future goals in working memory. We used fMRI of object-selective cortex to determine the representational (dis)similarities of memory representations serving current and prospective perceptual tasks. Participants remembered an object drawn from three possible categories as the target for one of two consecutive visual search tasks. A cue indicated whether the target object should be looked for first (currently relevant), second (prospectively relevant), or if it could be forgotten (irrelevant). Prior to the first search, representations of current, prospective and irrelevant objects were similar, with strongest decoding for current representations compared to prospective (Experiment 1) and irrelevant (Experiment 2). Remarkably, during the first search, prospective representations could also be decoded, but revealed anti-correlated voxel patterns compared to currently relevant representations of the same category. We propose that the brain separates current from prospective memories within the same neuronal ensembles through opposite representational patterns.
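The anti-correlation finding can be pictured as the sign of a Pearson correlation between voxel response vectors. The sketch below uses simulated voxel patterns (all values invented, not the authors' data) to show what "anti-correlated patterns within the same ensemble" means computationally.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 200

# A category-specific voxel pattern (template) and two noisy states of it:
template = rng.normal(size=n_voxels)
current = template + rng.normal(scale=0.5, size=n_voxels)       # currently relevant
prospective = -template + rng.normal(scale=0.5, size=n_voxels)  # sign-flipped pattern

# Pattern similarity as Pearson correlation with the category template.
r_current = np.corrcoef(template, current)[0, 1]
r_prospective = np.corrcoef(template, prospective)[0, 1]

print(r_current > 0, r_prospective < 0)   # same ensemble, opposite patterns
```

A positive correlation for the current item and a negative one for the prospective item of the same category is the signature the abstract describes.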


2011 ◽  
Vol 11 (11) ◽  
pp. 1320-1320
Author(s):  
E. Pereira ◽  
M. Castelhano

2015 ◽  
Vol 27 (5) ◽  
pp. 902-912 ◽  
Author(s):  
Rebecca Nako ◽  
Tim J. Smith ◽  
Martin Eimer

Visual search is controlled by representations of target objects (attentional templates). Such templates are often activated in response to verbal descriptions of search targets, but it is unclear whether search can be guided effectively by such verbal cues. We measured ERPs to track the activation of attentional templates for new target objects defined by word cues. On each trial run, a word cue was followed by three search displays that contained the cued target object among three distractors. Targets were detected more slowly in the first display of each trial run, and the N2pc component (an ERP marker of attentional target selection) was attenuated and delayed for the first relative to the two successive presentations of a particular target object, demonstrating limitations in the ability of word cues to activate effective attentional templates. N2pc components to target objects in the first display were strongly affected by differences in object imageability (i.e., the ability of word cues to activate a target-matching visual representation). These differences were no longer present for the second presentation of the same target objects, indicating that a single perceptual encounter is sufficient to activate a precise attentional template. Our results demonstrate the superiority of visual over verbal target specifications in the control of visual search, highlight the fact that verbal descriptions are more effective for some objects than others, and suggest that the attentional templates that guide search for particular real-world target objects are analog visual representations.
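The N2pc referred to above is conventionally quantified as the contralateral-minus-ipsilateral ERP difference at posterior electrodes (e.g., PO7/PO8), averaged over roughly 200-300 ms post-stimulus. The sketch below uses synthetic average waveforms (assumed window and amplitude, not data from this study) to show that computation.

```python
import numpy as np

fs = 500                                  # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch from -100 to 500 ms
n2pc_window = (t >= 0.2) & (t <= 0.3)     # typical N2pc measurement window

# Simulated average ERPs for a lateralized target (microvolts):
ipsi = np.zeros_like(t)
contra = np.zeros_like(t)
contra[n2pc_window] = -2.0                # extra negativity contralateral to target

# N2pc = contralateral minus ipsilateral difference wave.
diff_wave = contra - ipsi
n2pc_amplitude = diff_wave[n2pc_window].mean()
print(n2pc_amplitude)                     # negative value indicates target selection
```

A smaller (less negative) or later N2pc for the first display of a trial run is the attenuation/delay pattern reported in the abstract.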


eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Anouk Mariette van Loon ◽  
Katya Olmos-Solis ◽  
Johannes Jacobus Fahrenfort ◽  
Christian NL Olivers



2020 ◽  
Author(s):  
Gwendolyn L Rehrig ◽  
Reese A. Cullimore ◽  
John M. Henderson ◽  
Fernanda Ferreira


