Setting Up the Target Template in Visual Search

2004 ◽  
Author(s):  
Timothy J. Vickery ◽  
Livia L. King ◽  
Yuhong Jiang

Author(s):  
Gwendolyn Rehrig ◽  
Reese A. Cullimore ◽  
John M. Henderson ◽  
Fernanda Ferreira

Abstract According to the Gricean Maxim of Quantity, speakers provide the amount of information listeners require to correctly interpret an utterance, and no more (Grice in Logic and conversation, 1975). However, speakers often violate the Maxim of Quantity, especially when the redundant information improves reference precision (Degen et al. in Psychol Rev 127(4):591–621, 2020). Redundant (non-contrastive) information may facilitate real-world search if it narrows the spatial scope under consideration or improves target template specificity. The current study investigated whether non-contrastive modifiers that improve reference precision facilitate visual search in real-world scenes. In two visual search experiments, we compared search performance when perceptually relevant but non-contrastive modifiers were included in the search instruction. Participants (NExp. 1 = 48, NExp. 2 = 48) searched for a unique target object following a search instruction that contained either no modifier, a location modifier (Experiment 1: on the top left; Experiment 2: on the shelf), or a color modifier (the black lamp). In Experiment 1 only, the target was located faster when the verbal instruction included either modifier, and there was an overall benefit of color modifiers in a combined analysis of the scenes and conditions common to both experiments. The results suggest that violations of the Maxim of Quantity can facilitate search when the violations include task-relevant information that either augments the target template or constrains the search space, and when at least one modifier provides a highly reliable cue. Consistent with Degen et al. (2020), we conclude that listeners benefit from non-contrastive information that improves reference precision and engage in rational reference comprehension.
Significance statement This study investigated whether providing more information than someone needs to find an object in a photograph helps them find that object more easily, even though it means they must interpret a more complicated sentence. Before searching a scene, participants were given information about where the object would be located in the scene, what color the object was, or only what object to search for. The results showed that the additional information helped participants locate an object in an image more easily only when at least one piece of information communicated what part of the scene the object was in, which suggests that more information can be beneficial as long as that information is specific and helps the recipient achieve a goal. We conclude that people will pay attention to redundant information when it supports their task. In practice, our results suggest that instructions in other contexts (e.g., real-world navigation, using a smartphone app, prescription instructions) can benefit from the inclusion of what appears to be redundant information.


2017 ◽  
Vol 17 (1) ◽  
pp. 36 ◽  
Author(s):  
Li Z. Sha ◽  
Roger W. Remington ◽  
Yuhong V. Jiang

2021 ◽  
Author(s):  
Xinger Yu ◽  
Joy J. Geng

Theories of attention hypothesize the existence of an "attentional" or "target" template that contains task-relevant information in memory when searching for an object. The target template contributes to visual search by directing visual attention towards potential targets and serving as a decisional boundary for target identification. However, debate still exists regarding how template information is stored in the human brain. Here, we conducted a pattern-based fMRI study to assess how template information is encoded to optimize target-match decisions during visual search. To ensure that match decisions reflect visual search demands, we used a visual search paradigm in which all distractors were linearly separable but highly similar to the target and were known to shift the target representation away from the distractor features (Yu & Geng, 2019). In a separate match-to-sample probe task, we measured the target representation used for match decisions across two resting state networks that have long been hypothesized to maintain and control target information: the frontoparietal control network (FPCN) and the visual network (VisN). Our results showed that lateral prefrontal cortex in FPCN maintained the context-dependent "off-veridical" template; in contrast, VisN encoded a veridical copy of the target feature during match decisions. By using behavioral drift diffusion modeling, we verified that the decision criterion during visual search and the probe task relied on a common biased target template. Taken together, our results suggest that sensory-veridical information is transformed in lateral prefrontal cortex into an adaptive code of target-relevant information that optimizes decision processes during visual search.
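The abstract above verifies its conclusions with behavioral drift diffusion modeling, in which a decision is modeled as noisy evidence accumulating toward a response boundary. As a minimal sketch of that technique in general (not the authors' fitted model; all parameter values here are illustrative), a single trial can be simulated with an Euler approximation of the diffusion process:

```python
import numpy as np

def simulate_ddm(drift, boundary, noise=1.0, dt=0.001, max_t=3.0, rng=None):
    """Simulate one drift-diffusion trial with symmetric boundaries at
    +/- boundary; returns (choice, rt), where choice 1 = upper boundary."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0  # evidence starts at the unbiased midpoint
    while abs(x) < boundary and t < max_t:
        # Euler step: deterministic drift plus Gaussian diffusion noise
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= boundary else 0), t
```

A stronger drift rate (e.g., evidence from a better-matching target template) yields faster and more accurate responses; a biased decision criterion of the kind the abstract describes could be modeled by shifting the starting point or boundary placement.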


2013 ◽  
Vol 13 (9) ◽  
pp. 698-698 ◽  
Author(s):  
J. Chang ◽  
J.-S. Hyun

2021 ◽  
Vol 7 (1) ◽  
Author(s):  
John T. Wixted ◽  
Edward Vul ◽  
Laura Mickes ◽  
Brent M. Wilson

The simultaneous six-pack photo lineup is a standard eyewitness identification procedure, consisting of one police suspect plus five physically similar fillers. The photo lineup is either a target-present array (the suspect is guilty) or a target-absent array (the suspect is innocent). The eyewitness is asked to search the six photos in the array with respect to a target template stored in memory (namely, the memory of the perpetrator's face). If the witness determines that the perpetrator is in fact in the lineup (detection), then the next step is to specify the position of the perpetrator's face in the lineup (localization). The witness may also determine that the perpetrator is not present and reject the lineup. In other words, a police lineup is a detection-plus-localization visual search task. Signal detection concepts that have long guided thinking about visual search have recently had a significant impact on our understanding of police lineups. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
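The review above frames the lineup as a detection-plus-localization search task guided by signal detection concepts. As a minimal illustration of one such concept (the hit and false-alarm rates below are hypothetical, not data from the review), the classic sensitivity index d′ is the difference between the z-transformed hit and false-alarm rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(H) - z(F), via the inverse normal CDF.
    Rates must lie strictly between 0 and 1 (correct 0 or 1 before use)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# e.g., 80% hits and 20% false alarms give d' of about 1.68
```

Higher d′ indicates better discrimination between target-present and target-absent lineups; d′ = 0 means hits and false alarms are equally likely, i.e., no memory-based discrimination.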

