When more is more: redundant modifiers can facilitate visual search

Author(s):  
Gwendolyn Rehrig ◽  
Reese A. Cullimore ◽  
John M. Henderson ◽  
Fernanda Ferreira

Abstract
According to the Gricean Maxim of Quantity, speakers provide the amount of information listeners require to correctly interpret an utterance, and no more (Grice in Logic and conversation, 1975). However, speakers do tend to violate the Maxim of Quantity often, especially when the redundant information improves reference precision (Degen et al. in Psychol Rev 127(4):591–621, 2020). Redundant (non-contrastive) information may facilitate real-world search if it narrows the spatial scope under consideration, or improves target template specificity. The current study investigated whether non-contrastive modifiers that improve reference precision facilitate visual search in real-world scenes. In two visual search experiments, we compared search performance when perceptually relevant, but non-contrastive modifiers were included in the search instruction. Participants (NExp. 1 = 48, NExp. 2 = 48) searched for a unique target object following a search instruction that contained either no modifier, a location modifier (Experiment 1: on the top left, Experiment 2: on the shelf), or a color modifier (the black lamp). In Experiment 1 only, the target was located faster when the verbal instruction included either modifier, and there was an overall benefit of color modifiers in a combined analysis for scenes and conditions common to both experiments. The results suggest that violations of the Maxim of Quantity can facilitate search when the violations include task-relevant information that either augments the target template or constrains the search space, and when at least one modifier provides a highly reliable cue. Consistent with Degen et al. (2020), we conclude that listeners benefit from non-contrastive information that improves reference precision, and engage in rational reference comprehension.

Significance statement
This study investigated whether providing more information than someone needs to find an object in a photograph helps them to find that object more easily, even though it means they need to interpret a more complicated sentence. Before searching a scene, participants were either given information about where the object would be located in the scene, what color the object was, or were only told what object to search for. The results showed that providing additional information helped participants locate an object in an image more easily only when at least one piece of information communicated what part of the scene the object was in, which suggests that more information can be beneficial as long as that information is specific and helps the recipient achieve a goal. We conclude that people will pay attention to redundant information when it supports their task. In practice, our results suggest that instructions in other contexts (e.g., real-world navigation, using a smartphone app, prescription instructions, etc.) can benefit from the inclusion of what appears to be redundant information.


i-Perception ◽  
2018 ◽  
Vol 9 (4) ◽  
pp. 204166951879624 ◽  
Author(s):  
Siyi Chen ◽  
Lucas Schnabl ◽  
Hermann J. Müller ◽  
Markus Conci

When searching for a target object in cluttered environments, our visual system appears to complete missing parts of occluded objects—a mechanism known as “amodal completion.” This study investigated how different variants of completion influence visual search for an occluded target object. In two experiments, participants searched for a target among distractors in displays that either presented composite objects (notched shapes abutting an occluding square) or corresponding simple objects. The results showed enhanced search performance when composite objects were interpreted in terms of a globally completed whole. This search benefit for global completions was found to be dependent on the availability of a coherent, informative simple-object context. Overall, these findings suggest that attentional guidance in visual search may be based on a target “template” that represents a globally completed image of the occluded (target) object in accordance with prior experience.


2021 ◽  
Author(s):  
Xinger Yu ◽  
Joy J. Geng

Theories of attention hypothesize the existence of an "attentional" or "target" template that contains task-relevant information in memory when searching for an object. The target template contributes to visual search by directing visual attention towards potential targets and serving as a decisional boundary for target identification. However, debate still exists regarding how template information is stored in the human brain. Here, we conducted a pattern-based fMRI study to assess how template information is encoded to optimize target-match decisions during visual search. To ensure that match decisions reflect visual search demands, we used a visual search paradigm in which all distractors were linearly separable but highly similar to the target and were known to shift the target representation away from the distractor features (Yu & Geng, 2019). In a separate match-to-sample probe task, we measured the target representation used for match decisions across two resting state networks that have long been hypothesized to maintain and control target information: the frontoparietal control network (FPCN) and the visual network (VisN). Our results showed that lateral prefrontal cortex in FPCN maintained the context-dependent "off-veridical" template; in contrast, VisN encoded a veridical copy of the target feature during match decisions. By using behavioral drift diffusion modeling, we verified that the decision criterion during visual search and the probe task relied on a common biased target template. Taken together, our results suggest that sensory-veridical information is transformed in lateral prefrontal cortex into an adaptive code of target-relevant information that optimizes decision processes during visual search.
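
The behavioral drift diffusion modeling mentioned above can be made concrete with a small simulation. The sketch below (Python) shows how a biased, "off-veridical" template could enter such a model, here simply as a higher drift rate for probes that match the shifted representation; all parameter values and the specific way the bias is implemented are illustrative assumptions, not values or analyses from the study.

```python
import numpy as np

def simulate_ddm(drift, boundary=1.0, start=0.0, noise=1.0, dt=0.001, max_t=3.0, rng=None):
    """Simulate one drift-diffusion trial between two decision boundaries.

    Returns (choice, reaction_time): choice is 1 if the upper boundary
    ("target match") is reached first, 0 for the lower boundary ("non-match")."""
    rng = rng or np.random.default_rng()
    x, t = start, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= boundary else 0), t

rng = np.random.default_rng(0)
# Hypothetical drift rates: a probe matching a template that has shifted away from
# distractor features accumulates match evidence faster than a veridical probe.
for label, drift in [("veridical probe", 1.0), ("shifted (off-veridical) probe", 1.5)]:
    trials = [simulate_ddm(drift, rng=rng) for _ in range(500)]
    p_match = np.mean([choice for choice, _ in trials])
    mean_rt = np.mean([rt for _, rt in trials])
    print(f"{label}: P(match) = {p_match:.2f}, mean RT = {mean_rt:.2f} s")
```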


2020 ◽  
Author(s):  
Nir Shalev ◽  
Sage Boettcher ◽  
Hannah Wilkinson ◽  
Gaia Scerif ◽  
Anna C. Nobre

It is believed that children have difficulty guiding attention in the face of distraction. However, developmental accounts of spatial attention rely on traditional search designs using static displays. In real life, dynamic environments can embed regularities that afford anticipation and benefit performance. We developed a dynamic visual-search task to test the ability of children to benefit from spatio-temporal regularities to detect goal-relevant targets appearing within an extended dynamic context amidst irrelevant distracting stimuli. We compared children and adults in detecting predictable vs. unpredictable targets fading in and out among competing distracting stimuli. While overall search performance was poorer in children, both groups detected more predictable targets. This effect was confined to task-relevant information. Additionally, we report how predictions are related to individual differences in attention. Altogether, our results indicate a striking capacity for prediction-led guidance towards task-relevant information in dynamic environments, refining traditional views about poor goal-driven attention in childhood.


Author(s):  
Samia Hussein

The present study examined the effect of scene context on guidance of attention during visual search in real‐world scenes. Prior research has demonstrated that when searching for an object, attention is usually guided to the region of a scene that most likely contains that target object. This study examined two possible mechanisms of attention that underlie efficient search: enhancement of attention (facilitation) and a deficiency of attention (inhibition). In this study, participants (N=20) were shown an object name and then required to search through scenes for the target while their eye movements were tracked. Scenes were divided into target‐relevant contextual regions (upper, middle, lower) and participants searched repeatedly in the same scene for different targets either in the same region or in different regions. Comparing repeated searches within the same scene across different regions, we expect to find that visual search is faster and more efficient (facilitation of attention) in regions of a scene where attention was previously deployed. At the same time, when searching across different regions, we expect searches to be slower and less efficient (inhibition of attention) because those regions were previously ignored. Results from this study help to better understand how mechanisms of visual attention operate within scene contexts during visual search. 


2018 ◽  
Author(s):  
Xinger Yu ◽  
Joy Geng

Theories of attention hypothesize the existence of an “attentional template” that contains target features in working or long-term memory. It is often assumed that the template contents are veridical, but recent studies have found that this is not true when the distractor set is linearly separable from the target (e.g., all distractors are “yellower” than an orange-colored target). In such cases, the target representation in memory shifts away from distractor features (Navalpakkam & Itti, 2007) and develops a sharper boundary with distractors (Geng, DiQuattro & Helm, 2017). These changes in the target template are presumed to increase the target-to-distractor psychological distinctiveness and lead to better attentional selection, but it remains unclear what characteristics of the distractor context produce shifting vs. sharpening. Here, we test the hypothesis that the template representation shifts whenever the distractor set (i.e., all of the distractors) is linearly separable from the target, but that asymmetrical sharpening only occurs when linearly separable distractors are highly target-similar. Our results were consistent with this hypothesis, suggesting that template shifting and asymmetrical sharpening are two mechanisms that increase the representational distinctiveness of targets from expected distractors and improve visual search performance.
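
As a rough illustration of the two mechanisms contrasted here, the sketch below (Python) models the template as a tuning curve over a single feature dimension and compares a veridical template with a shifted one and an asymmetrically sharpened one, scoring each by how well it separates the target from linearly separable, target-similar distractors. The one-dimensional feature space, the Gaussian tuning, and every numeric value are assumptions made for exposition, not the materials or analyses of the study.

```python
import numpy as np

feature = np.linspace(0, 10, 1001)    # hypothetical 1-D feature axis (e.g., hue)
target_value = 5.0
distractor_values = [5.8, 6.2, 6.6]   # linearly separable: all "yellower" than the target

def gaussian_template(center, width_left, width_right):
    """Tuning curve with possibly different widths on each side (asymmetric sharpening)."""
    width = np.where(feature < center, width_left, width_right)
    return np.exp(-0.5 * ((feature - center) / width) ** 2)

def response(template, value):
    """Template response to a stimulus at the given feature value."""
    return np.interp(value, feature, template)

def distinctiveness(template):
    """Target response minus the strongest distractor response (higher = easier selection)."""
    return response(template, target_value) - max(response(template, d) for d in distractor_values)

templates = {
    "veridical":                gaussian_template(5.0, 1.0, 1.0),
    "shifted away":             gaussian_template(4.4, 1.0, 1.0),
    "asymmetrically sharpened": gaussian_template(5.0, 1.0, 0.5),
}
for name, template in templates.items():
    print(f"{name:25s} distinctiveness = {distinctiveness(template):.3f}")
```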


2019 ◽  
Vol 5 (1) ◽  
Author(s):  
Alejandro Lleras ◽  
Zhiyuan Wang ◽  
Anna Madison ◽  
Simona Buetti

Recently, Wang, Buetti and Lleras (2017) developed an equation to predict search performance in heterogeneous visual search scenes (i.e., multiple types of non-target objects simultaneously present) based on parameters observed when participants perform search in homogeneous scenes (i.e., when all non-target objects are identical to one another). The equation was based on a computational model where every item in the display is processed with unlimited capacity and independently of one another, with the goal of determining whether the item is likely to be a target or not. The model was tested in two experiments using real-world objects. Here, we extend those findings by testing the predictive power of the equation on simpler objects. Further, we compare the model’s performance under two stimulus arrangements: spatially-intermixed (items randomly placed around the scene) and spatially-segregated displays (identical items presented near each other). This comparison allowed us to isolate and quantify the facilitatory effect of processing displays that contain identical items (homogeneity facilitation), a factor that improves performance in visual search above and beyond target-distractor dissimilarity. The results suggest that homogeneity facilitation effects in search arise from local item-to-item interaction (rather than from rejecting items as “groups”) and that the strength of those interactions might be determined by stimulus complexity (with simpler stimuli producing stronger interactions and thus stronger homogeneity facilitation effects).
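
The equation itself is not reproduced in the abstract, but its general logic can be sketched: estimate a logarithmic slope for each distractor type from homogeneous displays, then predict the reaction time for a mixed display by combining those contributions. Below is a minimal sketch in Python; the additive-logarithmic form, the function and parameter names, and all numeric values are assumptions made for illustration, not the published model.

```python
import math

# Hypothetical logarithmic slopes (ms per log-unit), one per distractor type,
# estimated from homogeneous displays; larger slopes correspond to distractors
# that are more similar to the target.
homogeneous_slopes = {"orange_square": 32.0, "blue_circle": 8.0, "yellow_triangle": 15.0}
baseline_rt_ms = 450.0  # hypothetical intercept (e.g., target-only display)

def predict_heterogeneous_rt(counts, slopes, baseline):
    """Predict reaction time for a mixed display by summing each distractor
    type's logarithmic contribution, as estimated from homogeneous search."""
    return baseline + sum(slopes[kind] * math.log(n + 1) for kind, n in counts.items())

mixed_display = {"orange_square": 4, "blue_circle": 8, "yellow_triangle": 4}
rt = predict_heterogeneous_rt(mixed_display, homogeneous_slopes, baseline_rt_ms)
print(f"Predicted RT for the mixed display: {rt:.0f} ms")
```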


2017 ◽  
Vol 3 (1) ◽  
Author(s):  
Zhiyuan Wang ◽  
Simona Buetti ◽  
Alejandro Lleras

Previous work in our lab has demonstrated that efficient visual search with a fixed target has a reaction-time-by-set-size function that is best characterized by logarithmic curves. Further, the steepness of these logarithmic curves is determined by the similarity between target and distractor items (Buetti et al., 2016). A theoretical account of these findings was proposed, namely that a parallel, unlimited capacity, exhaustive processing architecture underlies such data. Here, we conducted two experiments to extend these findings to a set of real-world stimuli, in both homogeneous and heterogeneous search displays. We used computational simulations of this architecture to identify a way to predict RT performance in heterogeneous search using parameters estimated from homogeneous search data. Moreover, by examining the systematic deviation from our predictions in the observed data, we found evidence that early visual processing for individual items is not independent. Instead, items in homogeneous displays seemed to facilitate each other’s processing by a multiplicative factor. These results challenge previous accounts of heterogeneity effects in visual search, and demonstrate the explanatory and predictive power of an approach that combines computational simulations and behavioral data to better understand performance in visual search.
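
A parallel, unlimited-capacity, exhaustive architecture of the kind described above produces logarithmic set-size curves almost for free: if each distractor is rejected after an independent waiting time and the display is resolved only when the slowest rejection finishes, the expected finishing time grows roughly logarithmically with the number of items. The Monte Carlo sketch below (Python) illustrates this; the exponential waiting-time assumption and the rate values are illustrative choices, not the authors' simulations.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_search_time(set_size, rejection_rate, n_trials=20000):
    """Exhaustive parallel search: every distractor is rejected after an
    independent exponential waiting time; the trial ends when the slowest
    rejection finishes."""
    times = rng.exponential(1.0 / rejection_rate, size=(n_trials, set_size))
    return times.max(axis=1).mean()

# Easy (target-dissimilar) distractors are rejected quickly; hard (similar) ones slowly.
for label, rate in [("dissimilar distractors", 20.0), ("similar distractors", 5.0)]:
    mean_rts = [mean_search_time(n, rate) for n in (1, 4, 8, 16, 32)]
    print(label, [f"{t * 1000:.0f} ms" for t in mean_rts])

# The expected maximum of N independent exponentials is H_N / rate, which grows
# approximately like log(N), so slower-to-reject distractors yield steeper
# logarithmic reaction-time curves.
```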


2019 ◽  
Author(s):  
Bria Long ◽  
Mariko Moher ◽  
Susan Carey ◽  
Talia Konkle

By adulthood, animacy and object size jointly structure neural responses in visual cortex and influence perceptual similarity computations. Here, we take a first step in asking about the development of these aspects of cognitive architecture by probing whether animacy and object size are reflected in perceptual similarity computations by the preschool years. We used visual search performance as an index of perceptual similarity, as research with adults suggests search is slower when distractors are perceptually similar to the target. Preschoolers found target pictures more quickly when targets differed from distractor pictures in either animacy (Experiment 1) or real-world size (Experiment 2; the pictures themselves were all the same size) than when they did not. Taken together, these results suggest that the visual system has abstracted perceptual features for animates vs. inanimates and big vs. small objects as classes by the preschool years and call for further research exploring the development of these perceptual representations and their consequences for neural organization in childhood.

