Covert Visual Search

Author(s):  
Ulrich Engelke ◽  
Andreas Duenser ◽  
Anthony Zeater

Selective attention is an important cognitive resource to account for when designing effective human-machine interaction and cognitive computing systems. Much of our knowledge about attention processing stems from search tasks that are usually framed around Treisman's feature integration theory and Wolfe's Guided Search. However, search performance in these tasks has mainly been investigated using an overt attention paradigm; covert attention has hardly been investigated in this context. To gain a more thorough understanding of human attentional processing, and especially of covert search performance, the authors experimentally investigated the relationship between overt and covert visual search for targets under a variety of target/distractor combinations. The overt search results presented in this work agree well with the Guided Search studies by Wolfe et al. The authors show that response times are considerably more influenced by the target/distractor combination than by the attentional search paradigm deployed. While response times are similar between the overt and covert search conditions, error rates are considerably higher in covert search. They further show that response times between participants become more strongly correlated as search task complexity increases. The authors discuss their findings and put them into the context of earlier research on visual search.

Author(s):  
Bartholomew Elias

Since the auditory system is not spatially restricted like the visual system, spatial auditory cues can provide information regarding object position, velocity, and trajectory beyond the field of view. A laboratory experiment was conducted to demonstrate that visual displays can be augmented with dynamic spatial auditory cues that provide information regarding the motion characteristics of unseen objects. In this study, dynamic spatial auditory cues presented through headphones conveyed preview information regarding target position, velocity, and trajectory beyond the field of view in a dynamic visual search task in which subjects acquired and identified moving visual targets that traversed a display cluttered with varying numbers of moving distractors. The provision of spatial auditory preview significantly reduced response times to acquire and identify the visual targets and significantly reduced error rates, especially in cases when the visual display load was high. These findings demonstrate that providing dynamic spatial auditory preview cues is a viable mechanism for augmenting visual search performance in dynamic task environments.



Vision ◽  
2019 ◽  
Vol 3 (3) ◽  
pp. 46
Author(s):  
Alasdair D. F. Clarke ◽  
Anna Nowakowska ◽  
Amelia R. Hunt

Visual search is a popular tool for studying a range of questions about perception and attention, thanks to the ease with which the basic paradigm can be controlled and manipulated. While often thought of as a sub-field of vision science, search tasks are significantly more complex than most other perceptual tasks, with strategy and decision playing an essential, but neglected, role. In this review, we briefly describe some of the important theoretical advances about perception and attention that have been gained from studying visual search within the signal detection and guided search frameworks. Under most circumstances, search also involves executing a series of eye movements. We argue that understanding the contribution of biases, routines and strategies to visual search performance over multiple fixations will lead to new insights about these decision-related processes and how they interact with perception and attention. We also highlight the neglected potential for variability, both within and between searchers, to contribute to our understanding of visual search. The exciting challenge will be to account for variations in search performance caused by these numerous factors and their interactions. We conclude the review with some recommendations for ways future research can tackle these challenges to move the field forward.


Author(s):  
Megan H. Papesh ◽  
Michael C. Hout ◽  
Juan D. Guevara Pinto ◽  
Arryn Robbins ◽  
Alexis Lopez

Domain-specific expertise changes the way people perceive, process, and remember information from that domain. This is often observed in visual domains involving skilled searches, such as athletics referees, or professional visual searchers (e.g., security and medical screeners). Although existing research has compared expert to novice performance in visual search, little work has directly documented how accumulating experiences change behavior. A longitudinal approach to studying visual search performance may permit a finer-grained understanding of experience-dependent changes in visual scanning, and the extent to which various cognitive processes are affected by experience. In this study, participants acquired experience by taking part in many experimental sessions over the course of an academic semester. Searchers looked for 20 categories of targets simultaneously (which appeared with unequal frequency), in displays with 0–3 targets present, while having their eye movements recorded. With experience, accuracy increased and response times decreased. Fixation probabilities and durations decreased with increasing experience, but saccade amplitudes and visual span increased. These findings suggest that the behavioral benefits endowed by expertise emerge from oculomotor behaviors that reflect enhanced reliance on memory to guide attention and the ability to process more of the visual field within individual fixations.


Perception ◽  
10.1068/p2933 ◽  
2000 ◽  
Vol 29 (2) ◽  
pp. 241-250 ◽  
Author(s):  
Jiye Shen ◽  
Eyal M Reingold ◽  
Marc Pomplun

We examined the flexibility of guidance in a conjunctive search task by manipulating the ratios between different types of distractors. Participants were asked to decide whether a target was present or absent among distractors sharing either colour or shape. Results indicated a strong effect of distractor ratio on search performance. Shorter latency to move, faster manual response, and fewer fixations per trial were observed at extreme distractor ratios. The distribution of saccadic endpoints also varied flexibly as a function of distractor ratio. When there were very few same-colour distractors, the saccadic selectivity was biased towards the colour dimension. In contrast, when most of the distractors shared colour with the target, the saccadic selectivity was biased towards the shape dimension. Results are discussed within the framework of the guided search model.
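One minimal way to express the flexibility reported above is a guidance rule that restricts search to whichever shared-feature subset of distractors is smaller. The Python sketch below is an illustrative reading of the guided search idea, not the model's actual implementation; the function names and the serial-scan assumption are mine.

```python
def choose_guidance(n_same_colour, n_same_shape):
    """Guide saccades by the dimension whose candidate subset
    (distractors sharing that feature with the target) is smaller."""
    return "colour" if n_same_colour <= n_same_shape else "shape"

def expected_inspections(n_same_colour, n_same_shape):
    """Expected items inspected under a serial self-terminating scan
    of the smaller candidate subset (target included)."""
    subset = min(n_same_colour, n_same_shape) + 1  # shared-feature distractors + target
    return (subset + 1) / 2

# Few same-colour distractors: guidance shifts to the colour dimension,
# and only a handful of items need inspecting, on average.
dim = choose_guidance(n_same_colour=3, n_same_shape=27)
cost = expected_inspections(n_same_colour=3, n_same_shape=27)
```

Under this rule the predicted bias flips with the distractor ratio, as observed: saccadic selectivity favours colour when same-colour distractors are rare, and shape when they are common.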


2019 ◽  
Vol 121 (4) ◽  
pp. 1300-1314 ◽  
Author(s):  
Mathieu Servant ◽  
Gabriel Tillman ◽  
Jeffrey D. Schall ◽  
Gordon D. Logan ◽  
Thomas J. Palmeri

Stochastic accumulator models account for response times and errors in perceptual decision making by assuming a noisy accumulation of perceptual evidence to a threshold. Previously, we explained saccade visual search decision making by macaque monkeys with a stochastic multiaccumulator model in which accumulation was driven by a gated feed-forward integration to threshold of spike trains from visually responsive neurons in frontal eye field that signal stimulus salience. This neurally constrained model quantitatively accounted for response times and errors in visual search for a target among varying numbers of distractors and replicated the dynamics of presaccadic movement neurons hypothesized to instantiate evidence accumulation. This modeling framework suggested strategic control over gate or over threshold as two potential mechanisms to accomplish speed-accuracy tradeoff (SAT). Here, we show that our gated accumulator model framework can account for visual search performance under SAT instructions observed in a milestone neurophysiological study of frontal eye field. This framework captured key elements of saccade search performance, through observed modulations of neural input, as well as flexible combinations of gate and threshold parameters necessary to explain differences in SAT strategy across monkeys. However, the trajectories of the model accumulators deviated from the dynamics of most presaccadic movement neurons. These findings demonstrate that traditional theoretical accounts of SAT are incomplete descriptions of the underlying neural adjustments that accomplish SAT, offer a novel mechanistic account of decision-making mechanisms during speed-accuracy tradeoff, and highlight questions regarding the identity of model and neural accumulators.

NEW & NOTEWORTHY: A gated accumulator model is used to elucidate neurocomputational mechanisms of speed-accuracy tradeoff. Whereas canonical stochastic accumulators adjust strategy only through variation of an accumulation threshold, we demonstrate that strategic adjustments are accomplished by flexible combinations of both modulation of the evidence representation and adaptation of accumulator gate and threshold. The results indicate how model-based cognitive neuroscience can translate between abstract cognitive models of performance and neural mechanisms of speed-accuracy tradeoff.
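The gated feed-forward accumulation described above can be sketched as a minimal race model: each accumulator integrates rectified input, reduced by a gate, until one crosses threshold. All parameter values below are illustrative assumptions, not values fitted to the neurophysiological data.

```python
import random

def gated_race(salience, gate=0.2, threshold=30.0, noise=1.0,
               dt=1.0, max_steps=2000, rng=random.Random(0)):
    """Minimal gated race: each accumulator integrates its salience input
    minus a gate, rectified at zero, until one crosses threshold.
    Returns (index of winning accumulator, response time in steps)."""
    acc = [0.0] * len(salience)
    for t in range(1, max_steps + 1):
        for i, s in enumerate(salience):
            drive = max(s - gate + rng.gauss(0.0, noise), 0.0)  # gated, rectified input
            acc[i] += drive * dt
            if acc[i] >= threshold:
                return i, t
    return None, max_steps

# Target (index 0) is more salient than three distractors.
choice, rt = gated_race([0.6, 0.3, 0.3, 0.3])

# Raising the threshold (an "accuracy" emphasis) lengthens response times,
# one of the two strategic adjustments considered in the framework.
_, rt_accuracy = gated_race([0.6, 0.3, 0.3, 0.3], threshold=60.0)
```

In this sketch, speed emphasis would lower the threshold or gate and accuracy emphasis would raise them; the study's point is that observed SAT behavior required flexible combinations of both parameters plus modulation of the input itself.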


Vision ◽  
2020 ◽  
Vol 4 (2) ◽  
pp. 28
Author(s):  
W. Joseph MacInnes ◽  
Ómar I. Jóhannesson ◽  
Andrey Chetverikov ◽  
Árni Kristjánsson

We move our eyes roughly three times every second while searching complex scenes, but covert attention helps to guide where we allocate those overt fixations. Covert attention may be allocated reflexively or voluntarily, and speeds the rate of information processing at the attended location. Reducing access to covert attention hinders performance, but it is not known to what degree the locus of covert attention is tied to the current gaze position. We compared visual search performance in a traditional gaze-contingent display with a second task where a similarly sized contingent window is controlled with a mouse, allowing a covert aperture to be controlled independently of overt gaze. Larger apertures improved performance for both the mouse- and gaze-contingent trials, suggesting that covert attention was beneficial regardless of control type. We also found evidence that participants used the mouse-controlled aperture somewhat independently of gaze position, suggesting that participants attempted to untether their covert and overt attention when possible. This untethering manipulation, however, resulted in an overall cost to search performance, a result at odds with previous results in a change blindness paradigm. Untethering covert and overt attention may therefore have costs or benefits depending on the task demands in each case.


2019 ◽  
Vol 31 (1) ◽  
pp. 31-42
Author(s):  
Jeff Moher

Task-irrelevant objects can sometimes capture attention and increase the time it takes an observer to find a target. However, less is known about how these distractors impact visual search strategies. Here, I found that salient distractors reduced rather than increased response times on target-absent trials (Experiment 1; N = 200). Combined with higher error rates on target-present trials, these results indicate that distractors can induce observers to quit search earlier than they otherwise would. These effects were replicated when target prevalence was low (Experiment 2; N = 200) and with different stimuli that elicited shallower search slopes (Experiment 3; N = 75). These results demonstrate that salient distractors can produce at least two consequences in visual search: They can capture attention, and they can cause observers to quit searching early. This novel finding has implications both for understanding visual attention and for examining distraction in real-world domains where targets are often absent, such as medical image screening.
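One way to make the early-quitting interpretation concrete is a serial self-terminating search with a quitting rule. The Monte Carlo sketch below is an illustrative assumption, not the author's model; the quitting rule and timing values are invented for demonstration. Giving up after fewer items shortens target-absent responses and produces misses on target-present trials, matching the pattern of results described above.

```python
import random

def search_trial(set_size, target_present, quit_after, t_item=50,
                 rng=random.Random(1)):
    """Serial self-terminating search with an early-quitting rule: the
    observer inspects items one at a time and gives up after examining
    `quit_after` items. Returns (response, response time in ms)."""
    order = list(range(set_size))
    rng.shuffle(order)
    target = 0 if target_present else None
    for n, item in enumerate(order, start=1):
        if item == target:
            return "present", n * t_item   # hit
        if n >= quit_after:
            return "absent", n * t_item    # quit (a miss if the target remains)
    return "absent", set_size * t_item

def summarize(quit_after, trials=2000, set_size=12):
    """Miss rate on target-present trials and mean target-absent RT."""
    rng = random.Random(2)
    miss = absent_rt = 0
    for _ in range(trials):
        resp, _ = search_trial(set_size, True, quit_after, rng=rng)
        miss += resp == "absent"
        _, rt = search_trial(set_size, False, quit_after, rng=rng)
        absent_rt += rt
    return miss / trials, absent_rt / trials

# Quitting early (after 6 of 12 items) vs. searching exhaustively:
early = summarize(quit_after=6)
exhaustive = summarize(quit_after=12)
```

In this toy model, lowering the quitting threshold simultaneously speeds target-absent responses and raises target-present error rates, the two signatures reported in the experiments.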


1989 ◽  
Vol 33 (2) ◽  
pp. 91-95 ◽  
Author(s):  
Maxwell J. Wells ◽  
Michael Venturino

Ten subjects performed a task on a head-coupled simulator using various field-of-view (FOV) sizes. The task required them to visually acquire, remember the location of, monitor, and shoot 3 or 6 objects. In addition, they were required to perform a secondary tracking task. Performance at monitoring and shooting the objects decreased with decreasing FOV size and increasing number of objects. Secondary task performance also decreased with decreasing FOV. The ability to recall the location of objects was unaffected by changes in FOV size. However, tracking performance was degraded while subjects used smaller FOVs to find and learn the location of objects. The results indicate that although visual search performance can be maintained with small FOVs, it is done in a manner which may compromise performance at other tasks.


2019 ◽  
Author(s):  
Elizabeth J. Halfen ◽  
John F. Magnotti ◽  
Md. Shoaibur Rahman ◽  
Jeffrey M. Yau

Although we experience complex patterns over our entire body, how we selectively perceive multi-site touch over our bodies remains poorly understood. Here, we characterized tactile search behavior over the body using a tactile analog of the classic visual search task. Participants judged whether a target stimulus (e.g., 10-Hz vibration) was present or absent on the upper or lower limbs. When present, the target stimulus could occur alone or with distractor stimuli (e.g., 30-Hz vibrations) on other body locations. We varied the number and spatial configurations of the distractors as well as the target and distractor frequencies and measured the impact of these factors on search response times. First, we found that response times were faster on target-present trials compared to target-absent trials. Second, response times increased with the number of stimulated sites, suggesting a serial search process. Third, search performance differed depending on stimulus frequencies. This frequency-dependent behavior may be related to perceptual grouping effects based on timing cues. We constructed models to explore how the locations of the tactile cues influenced search behavior. Our modeling results reveal that, in isolation, cues on the index fingers make relatively greater contributions to search performance compared to stimulation experienced on other body sites. Additionally, co-stimulation of sites within the same limb or simply on the same body side preferentially influence search behavior. Our collective findings identify some principles of attentional search that are common to vision and touch, but others that highlight key differences that may be unique to body-based spatial perception.

New & Noteworthy: Little is known about how we selectively experience multi-site touch over the body. Using a tactile analog of the classic visual search paradigm, we show that tactile search behavior for flutter cues is generally consistent with a serial search process. Modeling results reveal the preferential contributions of index finger stimulation and two-site interactions involving ipsilateral and within-limb patterns. Our results offer initial evidence for spatial and temporal principles underlying tactile search behavior over the body.
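The serial-search signature described above (response times growing with the number of stimulated sites, and present trials faster than absent trials) follows from a standard serial self-terminating model, in which absent trials inspect every site while present trials inspect about half on average. The sketch below uses illustrative timing parameters, not values fitted to the tactile data.

```python
def serial_rt(n_sites, target_present, t_item=60.0, base=400.0):
    """Expected response time (ms) under serial self-terminating search:
    absent trials inspect all n sites; present trials inspect
    (n + 1) / 2 sites on average before finding the target."""
    inspected = (n_sites + 1) / 2 if target_present else n_sites
    return base + t_item * inspected

# The classic diagnostic: the present-trial set-size slope is roughly
# half the absent-trial slope.
present = [serial_rt(n, True) for n in (2, 4, 6)]
absent = [serial_rt(n, False) for n in (2, 4, 6)]
```

This 2:1 slope ratio is the conventional marker of serial self-terminating search, which the response-time pattern in the tactile task is consistent with.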

