Visual search guidance uses coarser template information than target-match decisions

2021 ◽  
Author(s):  
Xinger Yu ◽  
Joy Geng

When searching for an object, we use a target template in memory that contains task-relevant information to guide visual attention to potential targets and to determine the identity of attended objects. These processes in visual search have typically been assumed to rely on a common source of template information. However, our recent work (Yu et al., in press) argued that attentional guidance and target-match decisions rely on different information during search, with guidance using a "fuzzier" version of the template than decisions. That work, however, was based on the special case of search for a target amongst linearly separable distractors (e.g., search for an orange target amongst yellower distractors). Real-world search targets are rarely linearly separable from distractors, and it remains unclear whether the difference in precision between the template information used for guidance and that used for target decisions also holds under more typical conditions. In four experiments, we tested this question by varying distractor similarity during visual search and measuring the likelihood of attentional guidance to distractors and of target misidentifications. We found that early attentional guidance is indeed less precise than subsequent match decisions across varying exposure durations and distractor set sizes. These results suggest that attentional guidance operates on a coarser code than decisions, perhaps because guidance is constrained by the lower acuity of peripheral vision or by the need to rapidly explore a wide region of space, whereas decisions about selected objects are more precise to optimize decision accuracy.
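The guidance/decision asymmetry can be made concrete with a small signal-detection toy model. This is an illustration of the idea only, not the authors' analysis; the hue values, tuning widths, noise level, and criterion below are all assumptions chosen for the example:

```python
# Illustrative sketch (not the authors' model): attentional guidance uses a
# broadly tuned template, while the target-match decision uses a narrowly
# tuned one. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

TARGET_HUE = 30.0      # e.g., an "orange" target, in arbitrary hue units
DISTRACTOR_HUE = 45.0  # a similar, "yellower" distractor

def template_response(stimulus_hue, tuning_sd):
    """Template match strength: a Gaussian tuning curve around the target hue."""
    return np.exp(-0.5 * ((stimulus_hue - TARGET_HUE) / tuning_sd) ** 2)

def distractor_pass_rate(tuning_sd, n_trials=100_000, noise_sd=0.1, criterion=0.5):
    """Proportion of distractors whose noisy match signal exceeds the criterion."""
    signal = template_response(DISTRACTOR_HUE, tuning_sd)
    noisy = signal + rng.normal(0.0, noise_sd, n_trials)
    return np.mean(noisy > criterion)

# Coarse (wide) guidance template: similar distractors often attract attention.
print("distractor captures with coarse guidance template:",
      distractor_pass_rate(tuning_sd=20.0))
# Precise (narrow) decision template: the same distractors are rarely misidentified.
print("distractor misidentifications with precise decision template:",
      distractor_pass_rate(tuning_sd=5.0))
```

With these numbers, a similar distractor almost always exceeds the criterion under the broadly tuned guidance template but almost never under the narrowly tuned decision template, mirroring the reported dissociation.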

Author(s):  
Gwendolyn Rehrig ◽  
Reese A. Cullimore ◽  
John M. Henderson ◽  
Fernanda Ferreira

According to the Gricean Maxim of Quantity, speakers provide the amount of information listeners require to correctly interpret an utterance, and no more (Grice in Logic and conversation, 1975). However, speakers often violate the Maxim of Quantity, especially when the redundant information improves reference precision (Degen et al. in Psychol Rev 127(4):591–621, 2020). Redundant (non-contrastive) information may facilitate real-world search if it narrows the spatial scope under consideration or improves target template specificity. The current study investigated whether non-contrastive modifiers that improve reference precision facilitate visual search in real-world scenes. In two visual search experiments, we compared search performance when perceptually relevant but non-contrastive modifiers were included in the search instruction. Participants (N = 48 in each experiment) searched for a unique target object following a search instruction that contained either no modifier, a location modifier (Experiment 1: on the top left, Experiment 2: on the shelf), or a color modifier (the black lamp). In Experiment 1 only, the target was located faster when the verbal instruction included either modifier, and there was an overall benefit of color modifiers in a combined analysis for scenes and conditions common to both experiments. The results suggest that violations of the Maxim of Quantity can facilitate search when the violations include task-relevant information that either augments the target template or constrains the search space, and when at least one modifier provides a highly reliable cue. Consistent with Degen et al. (2020), we conclude that listeners benefit from non-contrastive information that improves reference precision, and engage in rational reference comprehension.

Significance statement: This study investigated whether providing more information than someone needs to find an object in a photograph helps them find that object more easily, even though it means they need to interpret a more complicated sentence. Before searching a scene, participants were either given information about where the object would be located in the scene, told what color the object was, or only told what object to search for. The results showed that the additional information helped participants locate the object more easily only when at least one piece of information communicated what part of the scene the object was in, which suggests that more information can be beneficial as long as that information is specific and helps the recipient achieve a goal. We conclude that people will pay attention to redundant information when it supports their task. In practice, our results suggest that instructions in other contexts (e.g., real-world navigation, using a smartphone app, prescription instructions) can benefit from the inclusion of what appears to be redundant information.
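A back-of-envelope way to see why a spatial modifier can speed search (hypothetical region counts, not an analysis from the paper): if a scene is inspected region by region in random order, restricting the candidate regions cuts the expected number of inspections roughly in proportion.

```python
# Hypothetical illustration of how a location modifier ("on the shelf")
# shrinks the candidate region set and thus the expected number of
# inspections before the target is found. Region counts are assumptions.
def expected_inspections(n_candidate_regions):
    # With one target uniformly placed among n regions inspected in random
    # order without revisits, the expected number of inspections is (n + 1) / 2.
    return (n_candidate_regions + 1) / 2

print(expected_inspections(20))  # no modifier: whole scene, e.g., 20 regions -> 10.5
print(expected_inspections(5))   # "on the shelf": e.g., 5 regions -> 3.0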


2021 ◽  
Author(s):  
Xinger Yu ◽  
Joy J. Geng

Theories of attention hypothesize the existence of an "attentional" or "target" template, a memory representation of task-relevant information used when searching for an object. The target template contributes to visual search by directing visual attention towards potential targets and serving as a decisional boundary for target identification. However, debate still exists regarding how template information is stored in the human brain. Here, we conducted a pattern-based fMRI study to assess how template information is encoded to optimize target-match decisions during visual search. To ensure that match decisions reflected visual search demands, we used a visual search paradigm in which all distractors were linearly separable from, but highly similar to, the target and were known to shift the target representation away from the distractor features (Yu & Geng, 2019). In a separate match-to-sample probe task, we measured the target representation used for match decisions across two resting-state networks that have long been hypothesized to maintain and control target information: the frontoparietal control network (FPCN) and the visual network (VisN). Our results showed that lateral prefrontal cortex in FPCN maintained the context-dependent "off-veridical" template; in contrast, VisN encoded a veridical copy of the target feature during match decisions. Using behavioral drift-diffusion modeling, we verified that the decision criterion during visual search and the probe task relied on a common biased target template. Taken together, our results suggest that sensory-veridical information is transformed in lateral prefrontal cortex into an adaptive code of target-relevant information that optimizes decision processes during visual search.
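For readers unfamiliar with the behavioral modeling mentioned above, the sketch below simulates a basic two-boundary drift-diffusion process of the general kind used in such analyses. It is a minimal sketch only; the drift rate, boundary, noise, and non-decision time are illustrative assumptions, not the paper's fitted values:

```python
# Minimal two-boundary drift-diffusion simulation (Euler-Maruyama scheme).
# Parameter values are illustrative assumptions, not fitted estimates.
import numpy as np

rng = np.random.default_rng(1)

def ddm_trial(drift, boundary=1.0, noise=1.0, dt=0.001, t_nondecision=0.3):
    """Return (choice, reaction_time): +1 = 'target', -1 = 'distractor'."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x >= boundary else -1), t + t_nondecision

# A biased ("off-veridical") template can be cast as a change in drift rate:
# stimuli matching the shifted template accumulate evidence faster.
choices, rts = zip(*(ddm_trial(drift=1.5) for _ in range(2000)))
print("p('target'):", np.mean(np.array(choices) == 1))
print("mean RT (s):", np.mean(rts))
```

Fitting such a model to choices and reaction times across tasks is what allows a common decision criterion to be tested, as the authors report.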


i-Perception ◽  
2018 ◽  
Vol 9 (4) ◽  
pp. 204166951879624 ◽  
Author(s):  
Siyi Chen ◽  
Lucas Schnabl ◽  
Hermann J. Müller ◽  
Markus Conci

When searching for a target object in cluttered environments, our visual system appears to complete missing parts of occluded objects—a mechanism known as “amodal completion.” This study investigated how different variants of completion influence visual search for an occluded target object. In two experiments, participants searched for a target among distractors in displays that presented either composite objects (notched shapes abutting an occluding square) or corresponding simple objects. The results showed enhanced search performance when composite objects were interpreted in terms of a globally completed whole. This search benefit for global completions depended on the availability of a coherent, informative simple-object context. Overall, these findings suggest that attentional guidance in visual search may be based on a target “template” that represents a globally completed image of the occluded (target) object in accordance with prior experience.


2020 ◽  
Author(s):  
Gwendolyn L Rehrig ◽  
Reese A. Cullimore ◽  
John M. Henderson ◽  
Fernanda Ferreira

According to the Gricean Maxim of Quantity, speakers provide the amount of information listeners require to correctly interpret an utterance, and no more (Grice, 1975). However, speakers often violate the Maxim of Quantity, especially when the redundant information improves reference precision (Degen et al., 2020). Redundant (non-contrastive) information may facilitate real-world search if it narrows the spatial scope under consideration or improves target template specificity. The current study investigated whether non-contrastive modifiers that improve reference precision facilitate visual search in real-world scenes. In two visual search experiments, we compared search performance when perceptually relevant but non-contrastive modifiers were included in the search instruction. Participants (N = 48 in each experiment) searched for a unique target object following a search instruction that contained either no modifier, a location modifier (Experiment 1: on the top left, Experiment 2: on the shelf), or a color modifier (the black lamp). In Experiment 1 only, the target was located faster when the verbal instruction included either modifier, and there was an overall benefit of color modifiers in a combined analysis for scenes and conditions common to both experiments. The results suggest that violations of the Maxim of Quantity can facilitate search when the violations include task-relevant information that either augments the target template or constrains the search space, and when at least one modifier provides a highly reliable cue. Consistent with Degen et al. (2020), we conclude that listeners benefit from non-contrastive information that improves reference precision, and engage in rational reference comprehension.


2020 ◽  
Author(s):  
Xinger Yu ◽  
Timothy D. Hanks ◽  
Joy Geng

When searching for a target object (e.g., a friend at a party), we engage in a continuous “look-identify” cycle in which we use known features (e.g., hair color) to guide attention and eye gaze towards potential targets and then decide whether an attended object is indeed the target. Theories of attention refer to the information about the target in memory as the “target” or “attentional” template and typically characterize it as a single, fixed source of information. However, this notion is challenged by a recent debate over how the target template is adjusted in response to linearly separable distractors (e.g., all distractors are “yellower” than an orange target). While there is agreement that the target representation is shifted away from distractors, some have argued that the shift is “relational” (Becker, 2010) while others have argued it is “optimal” (Navalpakkam & Itti, 2007; Yu & Geng, 2019). Here, we propose a novel resolution to this debate based on evidence that the initial guidance of attention uses a coarse code based on “relational” information, but subsequent decisions use an “optimal” representation that maximizes target-to-distractor distinctiveness. We suggest that template information differs in precision when guiding sensory selection and when making identity decisions during visual search (Wolfe, 2020a, 2020b).
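One way to make the relational/optimal contrast concrete is the toy formulation below (our illustration, not the authors' implementation; the feature values and noise width are assumptions). A relational code keeps only the direction of the target-distractor difference, whereas an optimal code places the decision boundary where the target and distractor distributions are best separated:

```python
# Toy contrast between a "relational" and an "optimal" target code for an
# orange target among yellower distractors. Values are assumptions.
import numpy as np

TARGET = 30.0      # target hue (arbitrary units)
DISTRACTOR = 45.0  # distractor hue (yellower: larger value)
SD = 5.0           # assumed equal perceptual noise for both

# Relational code: keep only the direction of the difference -- anything
# sufficiently "redder than the distractors" gets high attentional priority.
def relational_priority(x):
    return DISTRACTOR - x  # larger = redder = more target-like

# Optimal code: with equal-variance Gaussians, the best decision boundary
# lies midway between the target and distractor means.
optimal_boundary = (TARGET + DISTRACTOR) / 2

rng = np.random.default_rng(2)
target_percepts = rng.normal(TARGET, SD, 10_000)  # noisy views of the target
very_red = TARGET - 10.0                          # exaggerated, non-target hue

# Relational guidance ranks the exaggerated hue above the true target...
print(relational_priority(very_red) > relational_priority(TARGET))  # True
# ...while the optimal decision still classifies noisy target percepts well.
print(np.mean(target_percepts < optimal_boundary))  # ~0.93 correct
```

The sketch shows why a coarse relational code is serviceable for guidance (it always favors the target-ward direction) while an optimal boundary is better suited to accurate identity decisions.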


2021 ◽  
pp. 095679762110322
Author(s):  
Xinger Yu ◽  
Timothy D. Hanks ◽  
Joy J. Geng

When searching for a target object, we engage in a continuous “look-identify” cycle in which we use known features of the target to guide attention toward potential targets and then to decide whether the selected object is indeed the target. Target information in memory (the target template or attentional template) is typically characterized as having a single, fixed source. However, debate has recently emerged over whether flexibility in the target template is relational or optimal. On the basis of evidence from two experiments using college students (Ns = 30 and 70, respectively), we propose that initial guidance of attention uses a coarse relational code, but subsequent decisions use an optimal code. Our results offer a novel perspective that the precision of template information differs when guiding sensory selection and when making identity decisions during visual search.


2004 ◽  
Author(s):  
Timothy J. Vickery ◽  
Livia L. King ◽  
Yuhong Jiang

Aerospace ◽  
2021 ◽  
Vol 8 (7) ◽  
pp. 170
Author(s):  
Ricardo Palma Fraga ◽  
Ziho Kang ◽  
Jerry M. Crutchfield ◽  
Saptarshi Mandal

The role of the en route air traffic control specialist (ATCS) is vital to maintaining safety and efficiency within the National Airspace System (NAS). ATCSs must vigilantly scan the airspace under their control and adjacent airspaces using an En Route Automation Modernization (ERAM) radar display. The intent of this research is to provide an understanding of expert controllers' visual search and aircraft conflict mitigation strategies, which could be used as scaffolding methods during ATCS training. Interviews and experiments were conducted to elicit visual scanning and conflict mitigation strategies from retired controllers employed as air traffic control instructors. The interview results were characterized and classified using various heuristics. In particular, representative visual scanpaths were identified, which accord with the interview results on visual search strategies. The highlights of our findings include: (1) participants used systematic search patterns, such as circular, spiral, linear, or quadrant-based scans, to extract operation-relevant information; (2) participants applied an information hierarchy when aircraft information was cognitively processed (altitude → direction → speed); (3) altitude or direction changes were generally preferred over speed changes when mitigating imminent potential conflicts. These findings could potentially be incorporated into the training curriculum for ATCS candidates.
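Finding (3) amounts to a preference ordering over resolution maneuvers. A minimal sketch of such an ordering is below; the function and maneuver names are hypothetical, since the paper reports the preference itself, not an algorithm:

```python
# Hypothetical sketch of the preference ordering in finding (3): altitude or
# direction changes are considered before speed changes when mitigating a
# potential conflict. Names and the feasibility predicate are assumptions.
from typing import Callable, Optional

MANEUVERS = ["altitude_change", "direction_change", "speed_change"]

def choose_maneuver(is_feasible: Callable[[str], bool]) -> Optional[str]:
    """Return the first feasible maneuver in order of reported preference."""
    for maneuver in MANEUVERS:
        if is_feasible(maneuver):
            return maneuver
    return None

# Example: altitude change is ruled out (e.g., traffic above and below), so a
# direction change is selected before any speed change is considered.
print(choose_maneuver(lambda m: m != "altitude_change"))  # direction_change
```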


Author(s):  
Fallon Branch ◽  
Allison JoAnna Lewis ◽  
Isabella Noel Santana ◽  
Jay Hegdé

Camouflage-breaking is a special case of visual search in which an object of interest, or target, can be hard to distinguish from the background even when in plain view. We have previously shown that naive, non-professional subjects can be trained using a deep learning paradigm to accurately perform a camouflage-breaking task in which they report whether or not a given camouflage scene contains a target. But it remains unclear whether such expert subjects can actually detect the target in this task, or whether they just vaguely sense that the two classes of images are somehow different without being able to find the target per se. Here, we show that when subjects break camouflage, they can also localize the camouflaged target accurately, even though they had received no specific training in localizing the target. Localization was significantly more accurate than chance even when subjects viewed the scene as briefly as 50 ms, and more accurate still when they could view the scenes freely. The accuracy and precision of target localization by expert subjects in the camouflage-breaking task were statistically indistinguishable from the accuracy and precision of target localization by naive subjects during a conventional visual search in which the target ‘pops out’, i.e., is readily visible to the untrained eye. Together, these results indicate that when expert camouflage-breakers detect a camouflaged target, they can also localize it accurately.

