Eye movements reflect expertise development in hybrid search

Author(s):  
Megan H. Papesh ◽  
Michael C. Hout ◽  
Juan D. Guevara Pinto ◽  
Arryn Robbins ◽  
Alexis Lopez

Domain-specific expertise changes the way people perceive, process, and remember information from that domain. This is often observed in visual domains involving skilled searches, such as athletics referees or professional visual searchers (e.g., security and medical screeners). Although existing research has compared expert to novice performance in visual search, little work has directly documented how accumulating experiences change behavior. A longitudinal approach to studying visual search performance may permit a finer-grained understanding of experience-dependent changes in visual scanning, and the extent to which various cognitive processes are affected by experience. In this study, participants acquired experience by taking part in many experimental sessions over the course of an academic semester. Searchers looked for 20 categories of targets simultaneously (which appeared with unequal frequency), in displays with 0–3 targets present, while having their eye movements recorded. With experience, accuracy increased and response times decreased. Fixation probabilities and durations decreased with increasing experience, but saccade amplitudes and visual span increased. These findings suggest that the behavioral benefits endowed by expertise emerge from oculomotor behaviors that reflect enhanced reliance on memory to guide attention and the ability to process more of the visual field within individual fixations.

Vision ◽  
2019 ◽  
Vol 3 (3) ◽  
pp. 46
Author(s):  
Alasdair D. F. Clarke ◽  
Anna Nowakowska ◽  
Amelia R. Hunt

Visual search is a popular tool for studying a range of questions about perception and attention, thanks to the ease with which the basic paradigm can be controlled and manipulated. While often thought of as a sub-field of vision science, search tasks are significantly more complex than most other perceptual tasks, with strategy and decision playing an essential, but neglected, role. In this review, we briefly describe some of the important theoretical advances about perception and attention that have been gained from studying visual search within the signal detection and guided search frameworks. Under most circumstances, search also involves executing a series of eye movements. We argue that understanding the contribution of biases, routines and strategies to visual search performance over multiple fixations will lead to new insights about these decision-related processes and how they interact with perception and attention. We also highlight the neglected potential for variability, both within and between searchers, to contribute to our understanding of visual search. The exciting challenge will be to account for variations in search performance caused by these numerous factors and their interactions. We conclude the review with some recommendations for ways future research can tackle these challenges to move the field forward.
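
To make the signal-detection framing above concrete, here is a minimal sketch (ours, not the review's; the d', criterion, and trial counts are illustrative assumptions) of a max-rule observer: each of N display items yields a noisy strength value, the target adds d', and the observer reports "present" if any value exceeds a criterion. False alarms accumulate with set size, reproducing the classic set-size cost.

```python
# Minimal max-rule signal-detection observer for visual search (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def max_rule_accuracy(set_size, d_prime=2.0, criterion=1.5, n_trials=20_000):
    """Hit and correct-rejection rates for a max-rule observer."""
    noise = rng.normal(size=(n_trials, set_size))      # target-absent displays
    present = noise.copy()
    present[:, 0] += d_prime                           # one item carries the signal
    hits = (present.max(axis=1) > criterion).mean()
    correct_rejections = (noise.max(axis=1) <= criterion).mean()
    return hits, correct_rejections

for n in (4, 8, 16, 32):
    h, cr = max_rule_accuracy(n)
    print(f"set size {n:2d}: hits={h:.3f}, correct rejections={cr:.3f}")
```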


Author(s):  
Samia Hussein

The present study examined the effect of scene context on guidance of attention during visual search in real-world scenes. Prior research has demonstrated that when searching for an object, attention is usually guided to the region of a scene most likely to contain that target object. This study examined two possible mechanisms of attention that underlie efficient search: enhancement of attention (facilitation) and deficiency of attention (inhibition). Participants (N=20) were shown an object name and then required to search through scenes for the target while their eye movements were tracked. Scenes were divided into target-relevant contextual regions (upper, middle, lower), and participants searched repeatedly in the same scene for different targets located either in the same region or in different regions. When repeated searches target the same region of a scene, we expect search to be faster and more efficient (facilitation of attention), because attention was previously deployed to that region. When searches move across different regions, we expect search to be slower and less efficient (inhibition of attention), because those regions were previously ignored. Results from this study help to better understand how mechanisms of visual attention operate within scene contexts during visual search.


Author(s):  
Min-Ju Liao ◽  
Stacy Granada ◽  
Walter W. Johnson

Several experiments were conducted to examine the effect of brightness highlighting on search for a target aircraft among distractor aircraft within a cockpit display of traffic information (CDTI). The present experiment partially replicated the design of one of these experiments, adding an examination of eye movements. The display presented homogeneous all-bright, all-dim, or mixed bright-and-dim aircraft. Within the mixed display, the target aircraft was either bright or dim, and its brightness was not predictive. Results showed that with the mixed display, participants were slower to detect the target, made more eye fixations, and searched along longer scan paths, compared to the homogeneous all-bright or all-dim displays. Fixation durations and eye-movement speeds did not differ between the homogeneous and mixed displays. The present detection-time analysis did not replicate previous experimental results, likely because fewer trials were given in the current experiment. The present results demonstrate how using highlighting to segregate information domains may impose costs on visual search performance in the early stages of a search task.


Author(s):  
Ulrich Engelke ◽  
Andreas Duenser ◽  
Anthony Zeater

Selective attention is an important cognitive resource to account for when designing effective human-machine interaction and cognitive computing systems. Much of our knowledge about attention processing stems from search tasks that are usually framed around Treisman's feature integration theory and Wolfe's Guided Search. However, search performance in these tasks has mainly been investigated using an overt attention paradigm. Covert attention, on the other hand, has hardly been investigated in this context. To gain a more thorough understanding of human attentional processing, and especially of covert search performance, the authors experimentally investigated the relationship between overt and covert visual search for targets under a variety of target/distractor combinations. The overt search results presented in this work agree well with the Guided Search studies by Wolfe et al. The authors show that response times are considerably more influenced by the target/distractor combination than by the attentional search paradigm deployed. While response times are similar between the overt and covert search conditions, error rates are considerably higher in covert search. The authors further show that response times between participants are more strongly correlated as search task complexity increases. The authors discuss their findings and put them into the context of earlier research on visual search.


Author(s):  
Nathan Messmer ◽  
Nathan Leggett ◽  
Melissa Prince ◽  
Jason S. McCarley

Gaze linking allows team members in a collaborative visual task to scan separate computer monitors simultaneously while their eye movements are tracked and projected onto each other's displays. The present study explored the benefits of gaze linking for performance in unguided and guided visual search tasks. Participants completed either an unguided or a guided serial search task, both as independent and as gaze-linked searchers. Although it produced shorter mean response times than independent search, gaze-linked search was highly inefficient, and gaze linking did not differentially affect performance in the guided and unguided groups. Results suggest that gaze linking is likely to be of little value in improving applied visual search.


Perception ◽  
10.1068/p2933 ◽  
2000 ◽  
Vol 29 (2) ◽  
pp. 241-250 ◽  
Author(s):  
Jiye Shen ◽  
Eyal M Reingold ◽  
Marc Pomplun

We examined the flexibility of guidance in a conjunctive search task by manipulating the ratios between different types of distractors. Participants were asked to decide whether a target was present or absent among distractors sharing either colour or shape. Results indicated a strong effect of distractor ratio on search performance. Shorter latency to move, faster manual response, and fewer fixations per trial were observed at extreme distractor ratios. The distribution of saccadic endpoints also varied flexibly as a function of distractor ratio. When there were very few same-colour distractors, the saccadic selectivity was biased towards the colour dimension. In contrast, when most of the distractors shared colour with the target, the saccadic selectivity was biased towards the shape dimension. Results are discussed within the framework of the guided search model.
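
The intuition behind the distractor-ratio effect can be captured with a toy calculation (our sketch, not the authors' model; the display size of 48 and the uniform random scan of the guided subset are assumptions): a searcher who guides by whichever feature defines the smaller distractor subset has fewer candidates to inspect at extreme ratios, where one subset is very small.

```python
# Toy model: guide search to the smaller distractor subset (illustrative).
def expected_inspections(n_distractors, n_same_colour):
    """Expected items inspected when guiding by the rarer feature subset."""
    n_same_shape = n_distractors - n_same_colour
    candidates = min(n_same_colour, n_same_shape) + 1   # smaller subset + target
    return (candidates + 1) / 2   # mean position of the target in a random scan

for k in range(0, 49, 8):
    print(f"{k:2d} same-colour distractors -> "
          f"{expected_inspections(48, k):4.1f} expected inspections")
```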


2020 ◽  
Author(s):  
Han Zhang

Mind-wandering (MW) is ubiquitous and is associated with reduced performance across a wide range of tasks. Recent studies have shown that MW can be related to changes in gaze parameters. In this dissertation, I explored the link between eye movements and MW in three different contexts that involve complex cognitive processing: visual search, scene perception, and reading comprehension. Study 1 examined how MW affects visual search performance, particularly the ability to suppress salient but irrelevant distractors during visual search. Study 2 used a scene-encoding task to examine how MW alters the way eye movements change over time and their relationship with scene content. Study 3 examined how MW affects readers' ability to detect semantic incongruities in the text and make necessary revisions of their understanding as they read jokes. All three studies showed that MW was associated with decreased task performance at the behavioral level (e.g., response time, recognition, and recall). Eye tracking further showed that these behavioral costs can be traced to deficits in specific cognitive processes. The final chapter of this dissertation explored whether there are context-independent eye-movement features of MW. MW manifests itself in different ways depending on task characteristics. In tasks that require extensive sampling of the stimuli (e.g., reading and scene viewing), MW was related to a global reduction in visual processing. But this was not the case for the search task, which involved speeded, simple visual processing; there, MW was instead related to increased looking time on the target after it had already been located. MW affects the coupling between cognitive effort and task demands, but the nature of this decoupling depends on the specific features of particular tasks.


2019 ◽  
Vol 121 (4) ◽  
pp. 1300-1314 ◽  
Author(s):  
Mathieu Servant ◽  
Gabriel Tillman ◽  
Jeffrey D. Schall ◽  
Gordon D. Logan ◽  
Thomas J. Palmeri

Stochastic accumulator models account for response times and errors in perceptual decision making by assuming a noisy accumulation of perceptual evidence to a threshold. Previously, we explained saccade visual search decision making by macaque monkeys with a stochastic multiaccumulator model in which accumulation was driven by a gated feed-forward integration to threshold of spike trains from visually responsive neurons in frontal eye field that signal stimulus salience. This neurally constrained model quantitatively accounted for response times and errors in visual search for a target among varying numbers of distractors and replicated the dynamics of presaccadic movement neurons hypothesized to instantiate evidence accumulation. This modeling framework suggested strategic control over gate or over threshold as two potential mechanisms to accomplish speed-accuracy tradeoff (SAT). Here, we show that our gated accumulator model framework can account for visual search performance under SAT instructions observed in a milestone neurophysiological study of frontal eye field. This framework captured key elements of saccade search performance, through observed modulations of neural input, as well as flexible combinations of gate and threshold parameters necessary to explain differences in SAT strategy across monkeys. However, the trajectories of the model accumulators deviated from the dynamics of most presaccadic movement neurons. These findings demonstrate that traditional theoretical accounts of SAT are incomplete descriptions of the underlying neural adjustments that accomplish SAT, offer a novel mechanistic account of decision-making mechanisms during speed-accuracy tradeoff, and highlight questions regarding the identity of model and neural accumulators. NEW & NOTEWORTHY A gated accumulator model is used to elucidate neurocomputational mechanisms of speed-accuracy tradeoff. Whereas canonical stochastic accumulators adjust strategy only through variation of an accumulation threshold, we demonstrate that strategic adjustments are accomplished by flexible combinations of both modulation of the evidence representation and adaptation of accumulator gate and threshold. The results indicate how model-based cognitive neuroscience can translate between abstract cognitive models of performance and neural mechanisms of speed-accuracy tradeoff.
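
As a rough illustration of the model class described above, the sketch below races two gated accumulators to a common threshold; the drive rates, gate, threshold, and noise level are illustrative assumptions, not the authors' fitted values. Raising the gate filters out weak, noise-dominated input, which slows responses but improves accuracy, one of the SAT mechanisms the abstract discusses.

```python
# Schematic gated race model (illustrative parameters, not fitted values).
import numpy as np

rng = np.random.default_rng(1)

def gated_race(drive_target=1.0, drive_distractor=0.7, gate=0.5,
               threshold=30.0, noise_sd=1.0, dt=1.0, max_t=2000):
    """Simulate one trial; return (response time, response was correct)."""
    acc = np.zeros(2)                 # accumulators for target and distractor
    for t in range(1, max_t + 1):
        drives = np.array([drive_target, drive_distractor])
        inputs = drives + rng.normal(0.0, noise_sd, size=2)
        acc += np.maximum(inputs - gate, 0.0) * dt   # gate blocks weak input
        if acc.max() >= threshold:
            return t, acc.argmax() == 0
    return max_t, False

# A higher gate trades speed for accuracy:
for g in (0.2, 0.5, 0.8):
    rts, correct = zip(*(gated_race(gate=g) for _ in range(2000)))
    print(f"gate={g}: mean RT={np.mean(rts):6.1f}, accuracy={np.mean(correct):.3f}")
```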


2019 ◽  
Author(s):  
Yunhui Zhou ◽  
Yuguo Yu

Humans perform sequences of eye movements to search for a target in complex environments, but the efficiency of human search strategy is still controversial. Previous studies showed that humans can optimally integrate information across fixations and determine the next fixation location. However, their models ignored the temporal control of eye movements and the limited capacity of human memory, and the model predictions did not agree well with the details of human eye-movement metrics. Here, we measured the temporal course of the human visibility map and recorded the eye movements of human subjects performing a visual search task. We further built a continuous-time eye movement model that considered saccadic inaccuracy, saccadic bias, and memory constraints in the visual system. This model agreed with many spatial and temporal properties of human eye movements, and showed several similar statistical dependencies between successive eye movements. In addition, our model predicted that human saccade decisions are shaped by a memory capacity of around 8 recent fixations. These results suggest that human visual search strategy is not strictly optimal in the sense of fully utilizing the visibility map, but instead balances search performance against the costs of performing the task.

Author Summary: During visual search, how humans determine when and where to make eye movements is an important unsolved issue. Previous studies suggested that humans can optimally use the visibility map to determine fixation locations, but we found that such models did not agree with the details of human eye-movement metrics because they ignored several realistic biological limitations of the human brain and could not explain the temporal control of eye movements. Instead, we showed that considering the temporal course of visual processing and several constraints of the visual system greatly improved predictions of the spatiotemporal properties of human eye movements, while only slightly affecting search performance in terms of median fixation numbers. Therefore, humans may not use the visibility map in a strictly optimal sense, but instead try to balance search performance against the costs of performing the task.
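
A minimal sketch of the kind of memory-limited fixation rule described above (our illustration; the random priority map, the landing-noise model, and all parameter values are assumptions): candidate locations are scored by a priority map, the last ~8 fixated locations are inhibited, and each saccade lands with some motor error.

```python
# Memory-limited fixation selection with saccadic landing noise (illustrative).
from collections import deque
import numpy as np

rng = np.random.default_rng(2)

def simulate_scanpath(n_locations=100, n_fixations=30, memory_capacity=8,
                      landing_noise=1.0):
    priority = rng.random(n_locations)        # stand-in for a visibility map
    memory = deque(maxlen=memory_capacity)    # only recent fixations are stored
    path = []
    for _ in range(n_fixations):
        scores = priority.copy()
        scores[list(memory)] = -np.inf        # inhibit remembered locations
        intended = int(np.argmax(scores))
        noisy = intended + rng.normal(0.0, landing_noise)   # saccadic inaccuracy
        landed = int(np.clip(round(noisy), 0, n_locations - 1))
        memory.append(landed)
        path.append(landed)
    return path

print(simulate_scanpath()[:10])
```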


2021 ◽  
Author(s):  
Thomas L. Botch ◽  
Brenda D. Garcia ◽  
Yeo Bi Choi ◽  
Caroline E. Robertson

Visual search is a universal human activity in naturalistic environments. Traditionally, visual search is investigated under tightly controlled conditions, where head-restricted participants locate a minimalistic target in a cluttered array presented on a computer screen. Do classic findings of visual search extend to naturalistic settings, where participants actively explore complex, real-world scenes? Here, we leverage advances in virtual reality (VR) technology to relate individual differences in classic visual search paradigms to naturalistic search behavior. In a naturalistic visual search task, participants looked for an object within their environment via a combination of head turns and eye movements, using a head-mounted display. Then, in a classic visual search task, participants searched for a target within a simple array of colored letters using only eye movements. We tested how set size, a property known to limit visual search within computer displays, predicts the efficiency of search behavior inside immersive, real-world scenes that vary in levels of visual clutter. We found that participants' search performance was impacted by the level of visual clutter within real-world scenes. Critically, we also observed that individual differences in visual search efficiency in classic search predicted efficiency in real-world search, but only when the comparison was limited to the forward-facing field of view for real-world search. These results demonstrate that set size is a reliable predictor of individual performance across computer-based and active, real-world visual search behavior.

