Human Visual Search Follows Suboptimal Bayesian Strategy Revealed by a Spatiotemporal Computational Model

2019 ◽  
Author(s):  
Yunhui Zhou ◽  
Yuguo Yu

Abstract
Humans perform sequences of eye movements to search for a target in complex environments, but the efficiency of the human search strategy is still controversial. Previous studies showed that humans can optimally integrate information across fixations and determine the next fixation location. However, these models ignored the temporal control of eye movements and the limited capacity of human memory, and their predictions did not agree well with the details of human eye movement metrics. Here, we measured the temporal course of the human visibility map and recorded the eye movements of human subjects performing a visual search task. We further built a continuous-time eye movement model that considers saccadic inaccuracy, saccadic bias, and memory constraints in the visual system. This model agreed with many spatial and temporal properties of human eye movements and reproduced several statistical dependencies between successive eye movements. In addition, our model predicted that the human saccade decision is shaped by a memory capacity of around eight recent fixations. These results suggest that the human visual search strategy is not strictly optimal in the sense of fully utilizing the visibility map, but instead balances search performance against the costs of performing the task.
Author Summary
How humans determine when and where to make eye movements during visual search is an important unsolved question. Previous studies suggested that humans can optimally use the visibility map to determine fixation locations, but we found that such models did not agree with the details of human eye movement metrics because they ignored several realistic biological limitations of brain function and could not explain the temporal control of eye movements. Instead, we show that considering the temporal course of visual processing and several constraints of the visual system greatly improves the prediction of the spatiotemporal properties of human eye movements while only slightly affecting search performance in terms of median fixation numbers. Therefore, humans may not use the visibility map in a strictly optimal sense, but instead balance search performance against the costs of performing the task.
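As a rough illustration of the memory constraint described above, the following sketch (our own construction, not the authors' model; the grid size, evidence values, and fixation budget are all hypothetical) implements a greedy searcher that always fixates the most probable location but can only remember, and avoid revisiting, its last eight fixations:

```python
import numpy as np
from collections import deque

MEMORY_CAPACITY = 8  # assumed capacity, matching the ~8 fixations estimated above

def search(posterior, target, max_fixations=100):
    """Greedy fixation sequence with a limited inhibition-of-return memory."""
    recent = deque(maxlen=MEMORY_CAPACITY)  # only the last 8 fixations are remembered
    for n in range(1, max_fixations + 1):
        masked = posterior.copy()
        masked[list(recent)] = -np.inf      # avoid only the remembered locations
        fix = int(np.argmax(masked))
        if fix == target:
            return n                        # fixations needed to reach the target
        recent.append(fix)
    return max_fixations                    # budget exhausted without finding it

rng = np.random.default_rng(0)
posterior = rng.random(20)                  # hypothetical per-location target evidence
ranked = np.argsort(posterior)              # ascending order of evidence

print(search(posterior, target=int(ranked[-5])))  # 5th-likeliest spot: found on fixation 5
print(search(posterior, target=int(ranked[0])))   # least likely spot: the searcher cycles
                                                  # among the top 9 locations and never gets there
```

Note the suboptimality this produces: once more candidate locations exist than the memory can hold, forgotten locations get revisited and low-evidence locations may never be examined, trading search performance for a cheaper memory.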


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Yunhui Zhou ◽  
Yuguo Yu

Abstract
There is conflicting evidence regarding whether humans can make spatially optimal eye movements during visual search. Some studies have shown that humans can optimally integrate information across fixations and determine the next fixation location; however, these models have generally ignored the control of fixation duration and memory limitations, and the model results do not agree well with the details of human eye movement metrics. Here, we measured the temporal course of the human visibility map and performed a visual search experiment. We further built a continuous-time eye movement model that considers saccadic inaccuracy, saccadic bias, and memory constraints. We show that this model agrees better with the spatial and temporal properties of human eye movements and predicts that humans have a memory capacity of around eight previous fixations. The model results reveal that humans employ a suboptimal eye movement strategy to find a target, which may minimize costs while still achieving sufficiently high search performance.



2019 ◽  
Author(s):  
Michelle Ramey ◽  
Andrew P. Yonelinas ◽  
John M. Henderson

A hotly debated question is whether memory influences attention through conscious or unconscious processes. To address this controversy, we measured eye movements while participants searched repeated real-world scenes for embedded targets, and we assessed memory for each scene using confidence-based methods to isolate different states of subjective memory awareness. We found that memory-informed eye movements during visual search were predicted both by conscious recollection, which led to a highly precise first eye movement toward the remembered location, and by unconscious memory, which increased search efficiency by gradually directing the eyes toward the target throughout the search trial. In contrast, these eye movement measures were not influenced by familiarity-based memory (i.e., changes in subjective reports of memory strength). The results indicate that conscious recollection and unconscious memory can each play distinct and complementary roles in guiding attention to facilitate efficient extraction of visual information.



Author(s):  
Rachel J. Cunio ◽  
David Dommett ◽  
Joseph Houpt

Maintaining spatial awareness is a primary concern for operators, but relying only on visual displays can overload the visual system and lead to performance decrements. Our study examined the benefits of providing spatialized auditory cues for maintaining visual awareness as a method of combating visual system overload. We examined the visual search performance of seven participants in an immersive, dynamic (moving), three-dimensional virtual reality environment under three conditions: no cues, non-masked spatialized auditory cues, and masked spatialized auditory cues. Results indicated a significant reduction in visual search time from the no-cue condition when either auditory cue type was presented, with the masked auditory condition slower. The results of this study can inform attempts to improve visual search performance in operational environments, such as determining appropriate display types for providing spatial information.



2019 ◽  
Vol 116 (6) ◽  
pp. 2027-2032 ◽  
Author(s):  
Jasper H. Fabius ◽  
Alessio Fracasso ◽  
Tanja C. W. Nijboer ◽  
Stefan Van der Stigchel

Humans move their eyes several times per second, yet we perceive the outside world as continuous despite the sudden disruptions created by each eye movement. To date, the mechanism that the brain employs to achieve visual continuity across eye movements remains unclear. While it has been proposed that the oculomotor system quickly updates and informs the visual system about the upcoming eye movement, behavioral studies investigating the time course of this updating suggest the involvement of a slow mechanism, estimated to take more than 500 ms to operate effectively. This is a surprisingly slow estimate, because both the visual system and the oculomotor system process information faster. If spatiotopic updating is indeed this slow, it cannot contribute to perceptual continuity, because it is outside the temporal regime of typical oculomotor behavior. Here, we argue that the behavioral paradigms that have been used previously are suboptimal to measure the speed of spatiotopic updating. In this study, we used a fast gaze-contingent paradigm, using high phi as a continuous stimulus across eye movements. We observed fast spatiotopic updating within 150 ms after stimulus onset. The results suggest the involvement of a fast updating mechanism that predictively influences visual perception after an eye movement. The temporal characteristics of this mechanism are compatible with the rate at which saccadic eye movements are typically observed in natural viewing.



2019 ◽  
Vol 18 (03) ◽  
pp. 1950012 ◽  
Author(s):  
Hedieh Alipour ◽  
Farzad Towhidkhah ◽  
Sajad Jafari ◽  
Avinash Menon ◽  
Hamidreza Namazi

Human eye movement is a key concept in the field of vision science. It has already been established that human eye movement responds to external stimuli; hence, investigating how human eye movement reacts to various types of external stimuli is important in this field. Much research on human eye movement has been done previously, but this is the first study to show a relation between the complex structure of human eye movement and the complex structure of a static visual stimulus. We applied fractal theory and showed that the fractal dynamics of human eye movement are related to the fractal structure of the visual target used as the stimulus. The outcome of this research provides a new platform for scientists to further investigate the relation between eye movement and other applied stimuli.
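The abstract does not specify which fractal estimator the authors used; as a hedged illustration of the general idea, one common way to estimate the fractal dimension of a one-dimensional trace (such as horizontal eye position over time) is Higuchi's method. The signals below are synthetic stand-ins, not eye-tracking data:

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Estimate the fractal dimension of a 1-D signal with Higuchi's method."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, k_max + 1)
    mean_lengths = []
    for k in ks:
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)               # subsampled curve: offset m, step k
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)  # length normalization factor
            lengths.append(dist * norm / k)
        mean_lengths.append(np.mean(lengths))
    # the fractal dimension is the slope of log L(k) against log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(mean_lengths), 1)
    return slope

print(higuchi_fd(np.random.default_rng(0).standard_normal(2000)))  # near 2 for white noise
print(higuchi_fd(np.linspace(0.0, 1.0, 2000)))                     # near 1 for a straight line
```

A rougher trace yields a dimension closer to 2, a smoother one closer to 1, which is the kind of quantity that can then be compared between the gaze trace and the stimulus image.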



1993 ◽  
Vol 2 (1) ◽  
pp. 44-53 ◽  
Author(s):  
Kristinn R. Thorisson

The most common visual feedback technique in teleoperation is in the form of monoscopic video displays. As robotic autonomy increases and the human operator takes on the role of a supervisor, three-dimensional information is effectively presented by multiple, televised, two-dimensional (2-D) projections showing the same scene from different angles. To analyze how people go about using such segmented information for estimations about three-dimensional (3-D) space, 18 subjects were asked to determine the position of a stationary pointer in space; eye movements and reaction times (RTs) were recorded during a period when either two or three 2-D views were presented simultaneously, each showing the same scene from a different angle. The results revealed that subjects estimated 3-D space by using a simple algorithm of feature search. Eye movement analysis supported the conclusion that people can efficiently use multiple 2-D projections to make estimations about 3-D space without reconstructing the scene mentally in three dimensions. The major limiting factor on RT in such situations is the subjects' visual search performance, giving in this experiment a mean of 2270 msec (SD = 468; N = 18). This conclusion was supported by predictions of the Model Human Processor (Card, Moran, & Newell, 1983), which predicted a mean RT of 1820 msec given the general eye movement patterns observed. Single-subject analysis of the experimental data suggested further that in some cases people may base their judgments on a more elaborate 3-D mental model reconstructed from the available 2-D views. In such situations, RTs and visual search patterns closely resemble those found in the mental rotation paradigm (Just & Carpenter, 1976), giving RTs in the range of 5-10 sec.



2021 ◽  
Vol 12 ◽  
Author(s):  
Miles Tallon ◽  
Mark W. Greenlee ◽  
Ernst Wagner ◽  
Katrin Rakoczy ◽  
Ulrich Frick

The results of two experiments are analyzed to find out how artistic expertise influences visual search. Experiment I comprised survey data of 1,065 students on self-reported visual memory skills and their ability to find three targets in four images of artwork. Experiment II comprised eye movement data of 50 Visual Literacy (VL) experts and non-experts, whose eye movements during visual search were analyzed for nine images of artwork as an external validation of the assessment tasks performed in Sample I. No time constraint was set for completion of the visual search task. A latent profile analysis revealed four typical solution patterns among the students in Sample I, depending on task completion time and on the probability of finding all three targets: a mainstream group, a group that completed easy images quickly and difficult images slowly, a fast but error-prone group, and a slow-working group. Eidetic memory, performance in art education, and visual imagination, as self-reported visual skills, had a significant impact on latent class membership probability. We present a hidden Markov model (HMM) approach to uncover the underlying regions of attraction that result from visual search eye movement behavior in Experiment II. VL experts and non-experts did not differ significantly in task time or number of targets found, but they did differ in their visual search process: compared to non-experts, experts showed greater precision in fixating specific prime and target regions, assessed through hidden-state fixation overlap. Exploratory analysis of the HMMs revealed differences between experts and non-experts in the image locations of attraction (HMM states): experts focused their attention on smaller image parts, whereas non-experts used wider parts of the image during their search. Differences between experts and non-experts depend on the relative saliency of the targets embedded in the images. HMMs can determine the effect of expertise on exploratory eye movements executed during visual search tasks. Further research on HMMs and art expertise is required to confirm these exploratory results.
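The study fits HMMs to fixation data; as a minimal hedged sketch of the underlying machinery (the states, regions, and probabilities below are all invented for illustration, not fitted to the study's data), a discrete two-state HMM can be scored on a sequence of fixation regions with the forward algorithm. An "expert-like" model whose states concentrate emission probability on prime and target regions assigns a higher likelihood to a target-focused scanpath than a diffuse "non-expert" model:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of obs under a discrete HMM."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]       # predict the hidden state, weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

pi = np.array([0.5, 0.5])                   # two hidden "regions of attraction"
A = np.array([[0.7, 0.3],                   # sticky transitions between hidden states
              [0.3, 0.7]])

# emissions over discretized fixation regions: 0 = target, 1 = prime, 2 = background
B_expert = np.array([[0.80, 0.15, 0.05],    # expert-like states concentrate on target/prime
                     [0.10, 0.80, 0.10]])
B_novice = np.array([[0.40, 0.30, 0.30],    # non-expert states are more diffuse
                     [0.30, 0.40, 0.30]])

scanpath = [0, 0, 1, 0, 1, 0]               # hypothetical target-focused fixation sequence
print(forward_loglik(scanpath, pi, A, B_expert) >
      forward_loglik(scanpath, pi, A, B_novice))   # True: the expert model fits better
```

In practice the models are fitted to continuous fixation coordinates (e.g., with Gaussian emissions) rather than hand-set as here, but the likelihood comparison works the same way.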



2018 ◽  
Vol 21 (3) ◽  
pp. 45-36
Author(s):  
A. K. Volkov ◽  
V. V. Ionov

The professional training of X-ray screening system operators is based on computer-based training (CBT), which uses adaptive training algorithms. In existing computer simulators, these algorithms include feedback mechanisms based on training performance indicators such as the frequency of detecting dangerous objects, the frequency of false alarms, and detection time. Further enhancement of the effectiveness of operators' simulator training is associated with integrating psychophysiological mechanisms for monitoring their functional state. Based on an analysis of the particularities of X-ray screening operators' professional training, which centers on forming competence in the visual search for dangerous objects, the most promising method is eye-tracking technology. Studies of eye movement characteristics during professional tasks in training are actively developing in various areas abroad, but, in contrast to foreign work, domestic studies of visual search peculiarities are lacking. This research considers the use of eye-tracking technology in the training of X-ray screening system operators. In an experimental study using a mobile eye tracker (SensoMotoric Instruments Eye Tracking Glasses 2.0), statistical data on the eye movement parameters of two groups of subjects with different levels of training were obtained. Cluster and discriminant analyses identified general classes of these parameters and yielded discriminant functions for each group under examination. The theoretical significance of studying operators' eye movements lies in identifying the patterns of visual search for prohibited items.
The practical importance of implementing eye-tracking technology and statistical analysis methods lies in increasing the reliability of assessing X-ray screening operators' competence in visual search, as well as in developing a potential system for monitoring operators' state and assessing their visual fatigue.
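The abstract does not report the discriminant functions themselves; as a hedged sketch of the general technique, a two-class Fisher linear discriminant can separate trained from untrained operators using simple eye movement features. The feature values below (mean fixation duration and fixation count per trial) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical features per trial: [mean fixation duration (msec), fixation count]
untrained = rng.normal([320.0, 45.0], [30.0, 6.0], size=(40, 2))
trained = rng.normal([250.0, 30.0], [30.0, 6.0], size=(40, 2))

mu_u, mu_t = untrained.mean(0), trained.mean(0)
# pooled within-class scatter matrix
Sw = np.cov(untrained.T) * (len(untrained) - 1) + np.cov(trained.T) * (len(trained) - 1)
w = np.linalg.solve(Sw, mu_u - mu_t)        # Fisher discriminant direction
threshold = w @ (mu_u + mu_t) / 2.0         # midpoint decision boundary

def classify(features):
    """Label a feature vector by which side of the boundary it projects onto."""
    return "untrained" if w @ features > threshold else "trained"

print(classify(np.array([330.0, 48.0])))    # long, frequent fixations -> "untrained"
print(classify(np.array([240.0, 28.0])))    # short, few fixations -> "trained"
```

The projection weights `w` play the role of a discriminant function: a single scalar score per subject that can be thresholded to assess the level of formed competence.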



Vision ◽  
2019 ◽  
Vol 3 (3) ◽  
pp. 46
Author(s):  
Alasdair D. F. Clarke ◽  
Anna Nowakowska ◽  
Amelia R. Hunt

Visual search is a popular tool for studying a range of questions about perception and attention, thanks to the ease with which the basic paradigm can be controlled and manipulated. While often thought of as a sub-field of vision science, search tasks are significantly more complex than most other perceptual tasks, with strategy and decision playing an essential, but neglected, role. In this review, we briefly describe some of the important theoretical advances about perception and attention that have been gained from studying visual search within the signal detection and guided search frameworks. Under most circumstances, search also involves executing a series of eye movements. We argue that understanding the contribution of biases, routines and strategies to visual search performance over multiple fixations will lead to new insights about these decision-related processes and how they interact with perception and attention. We also highlight the neglected potential for variability, both within and between searchers, to contribute to our understanding of visual search. The exciting challenge will be to account for variations in search performance caused by these numerous factors and their interactions. We conclude the review with some recommendations for ways future research can tackle these challenges to move the field forward.



Author(s):  
Karl F. Van Orden ◽  
Joseph DiVita

Previous research has demonstrated that search times are reduced when flicker is used to highlight color coded symbols, but that flicker is not distracting when subjects must search for non-highlighted symbols. This prompted an examination of flicker and other stimulus dimensions in a conjunctive search paradigm. In all experiments, at least 15 subjects completed a minimum of 330 trials in which they indicated the presence or absence of target stimuli on a CRT display that contained either 8, 16 or 32 items. In Experiment 1, subjects searched for blue-steady or red-flickering (5.6 Hz) circular targets among blue-flickering and red-steady distractors. Blue-steady targets produced a more efficient search rate (11.6 msec/item) than red-flickering targets (19.3 msec/item). In Experiment 2, a conjunction of flicker and size (large and small filled circles) yielded the opposite results; the search performance for large-flickering targets was unequivocally parallel. In Experiment 3, conjunctions of form and flicker yielded highly serial search performance. The findings are consistent with the response properties of parvo and magnocellular channels of the early visual system, and suggest that search is most efficient when one of these channels can be filtered completely.
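The search rates quoted above (msec/item) are slopes of mean reaction time regressed on display size (8, 16, or 32 items). A minimal sketch of that computation, with invented RT values, is:

```python
import numpy as np

set_sizes = np.array([8, 16, 32])              # display sizes used in the experiments
mean_rt = np.array([610.0, 703.0, 888.0])      # hypothetical mean RTs in msec

# slope of the RT-by-set-size line is the search rate; a flat slope means
# parallel search, a steep one means serial, item-by-item search
slope, intercept = np.polyfit(set_sizes, mean_rt, 1)
print(f"search rate: {slope:.1f} msec/item")   # ~11.6 msec/item for these invented numbers
```

Rates near zero (as for the large-flickering targets in Experiment 2) indicate parallel search, while rates like the 19.3 msec/item found for red-flickering targets indicate progressively slower, more serial search.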


