Eye Movements in Search and Target Acquisition

Author(s):  
Deborah P. Birkmire ◽  
Robert Karsh ◽  
B. Diane Barnette ◽  
Ramakrishna Pillalamarri ◽  
Samantha DiBastiani

The frequency distribution of eye fixations and fixation durations during a search and target acquisition task was examined to determine if the allocation of visual attention was related to target, scene, and/or observer characteristics. Ninety computer-generated scenes simulating infrared imagery and containing different levels of clutter and zero, one, two, or three targets were produced. Targets were embedded in these scenes, counterbalanced for range and position. Global and local clutter were measured using both statistical variance and probability-of-edge metrics. Thirty-three aviators, tankers, and infantry soldiers were shown still-video images of the 90 scenes and were instructed to search for targets. Results of multiple regression analyses of global clutter, local clutter, range, number of targets, target dimensions, target complexity, and group membership on eye fixations and fixation durations are given and discussed in terms of search strategies.
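
A minimal sketch of the two clutter metrics the abstract names (statistical variance and probability of edge), computed globally and over local blocks; the block size and edge threshold here are illustrative assumptions, not values from the study.

```python
# Sketch of global/local clutter metrics: intensity variance and
# probability of edge. Parameters are illustrative guesses.
import numpy as np
from scipy import ndimage

def clutter_metrics(image, block=32, edge_thresh=0.1):
    """Return global variance, global probability of edge, and the
    per-block (local) versions of both metrics."""
    img = image.astype(float) / image.max()          # normalize to [0, 1]
    global_var = img.var()                           # statistical-variance metric
    # Probability-of-edge metric: fraction of pixels whose gradient
    # magnitude exceeds a threshold.
    gx, gy = ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0)
    edges = np.hypot(gx, gy) > edge_thresh
    global_poe = edges.mean()
    # Local versions: the same statistics inside non-overlapping blocks.
    h, w = img.shape
    local_var, local_poe = [], []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            local_var.append(img[r:r + block, c:c + block].var())
            local_poe.append(edges[r:r + block, c:c + block].mean())
    return global_var, global_poe, np.array(local_var), np.array(local_poe)
```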

1992 ◽  
Vol 36 (18) ◽  
pp. 1425-1429 ◽  
Author(s):  
Deborah P. Birkmire ◽  
Robert Karsh ◽  
B. Diane Barnette ◽  
Ramakrishna Pillalamarri

The relationship of human target acquisition times and detection probabilities to electronically measured visual clutter was investigated. Ninety computer-generated scenes simulating infrared imagery and containing different levels of clutter and zero, one, two, or three targets were produced. Targets were embedded in these scenes, counterbalanced for range and position. Global and local clutter were measured using both statistical variance and probability-of-edge metrics. Thirty-three aviators, tankers, and infantry soldiers were shown still-video images of the 90 scenes and were instructed to search for targets. Analyses indicate differences between the aviators and tankers in search times and types of errors. Results of multiple regression analyses of global clutter, local clutter, range, target dimension, target complexity, number of targets, and experience on search times are given and discussed in terms of search strategies.
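
A hedged sketch of the kind of multiple regression the abstract reports, with search time regressed on clutter, range, target properties, and group; the file and column names are hypothetical placeholders, not the study's data.

```python
# Multiple regression of search time on clutter and target predictors.
# "search_trials.csv" and its columns are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("search_trials.csv")
model = smf.ols(
    "search_time ~ global_clutter + local_clutter + target_range"
    " + target_dimension + target_complexity + n_targets + C(group)",
    data=trials,
).fit()
print(model.summary())   # coefficients show each predictor's contribution
```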


2010 ◽  
Vol 3 (2) ◽  
Author(s):  
Thomas Couronné ◽  
Anne Guérin-Dugué ◽  
Michel Dubois ◽  
Pauline Faye ◽  
Christian Marendaz

When people gaze at real scenes, their visual attention is driven both by bottom-up processes arising from the signal properties of the scene and by top-down effects such as the task, the affective state, prior knowledge, or the semantic context. The context of this study is the assessment of manufactured objects (here, a car cabin interior). Within this dedicated context, this work describes a set of methods for analyzing eye movements during visual scene evaluation, though the methods can be adapted to more general contexts. We define a statistical model to explain the eye fixations measured experimentally by eye-tracking, even when the signal-to-noise ratio is poor or raw data are scarce. One of the novelties of the approach is the use of complementary experimental data obtained with the “Bubbles” paradigm. The proposed model is an additive mixture of several a priori spatial density distributions of factors guiding visual attention. The “Bubbles” paradigm is adapted here to reveal the semantic density distribution, which represents the cumulative effects of the top-down factors. The contribution of each factor is then compared across products and tasks, in order to highlight the properties of visual attention and cognitive activity in each situation.
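
A minimal sketch of the additive-mixture idea, assuming the empirical fixation density is modeled as a non-negative weighted sum of a priori density maps (e.g., a saliency map, a centre bias, and a “Bubbles”-derived semantic map); the fitting method (non-negative least squares) and the map names are assumptions, not the authors' procedure.

```python
# Fit non-negative mixture weights over a priori spatial density maps
# so their weighted sum approximates the empirical fixation density.
import numpy as np
from scipy.optimize import nnls

def fit_mixture_weights(fixation_density, component_maps):
    """Weights w >= 0 minimizing ||density - sum_k w_k * map_k||."""
    y = fixation_density.ravel()
    X = np.stack([m.ravel() / m.sum() for m in component_maps], axis=1)
    w, _ = nnls(X, y / y.sum())
    return w / w.sum()            # normalize so weights sum to 1

# Usage (all maps hypothetical):
# weights = fit_mixture_weights(density, [saliency, centre_bias, semantic])
```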


Perception ◽  
10.1068/p5052 ◽  
2003 ◽  
Vol 32 (6) ◽  
pp. 681-698 ◽  
Author(s):  
Junji Ito ◽  
Andrey R Nikolaev ◽  
Marjolein Luman ◽  
Maartje F Aukes ◽  
Chie Nakatani ◽  
...  

According to a widely cited finding by Ellis and Stark (1978, Perception, 7, 575–581), the duration of eye fixations is longer at the instant of perceptual reversal of an ambiguous figure than before or after the reversal. However, long fixations are more likely to include samples of an independent random event than are short fixations. This sampling bias would produce the same pattern of results even when no correlation exists between fixation duration and perceptual reversals. When an appropriate correction is applied to the measurement of fixation durations, the effect disappears. In fact, there are fewer actual button-presses during the long intervals than would be expected by chance. Moving-window analyses performed on eye-fixation data reveal that no unique eye event is associated with switching behaviour. However, several indicators, such as blink frequency, saccade frequency, and the direction of the saccade, are each differentially sensitive to perceptual and response-related aspects of the switching process. The time course of these indicators depicts switching behaviour as a process of cascaded stages.
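
The sampling bias is easy to demonstrate by simulation. The sketch below generates fixation durations and button presses that are statistically independent, yet the fixations that happen to contain a press are longer on average, purely because long fixations cover more time; all distribution parameters are illustrative.

```python
# Monte Carlo demonstration of the length-biased sampling artifact.
import numpy as np

rng = np.random.default_rng(0)
durations = rng.gamma(shape=2.0, scale=150.0, size=100_000)  # fixation durations (ms)
onsets = np.concatenate(([0.0], np.cumsum(durations)[:-1]))
total = onsets[-1] + durations[-1]
presses = rng.uniform(0, total, size=2_000)                  # random, independent events
# A fixation "contains" a press if the press time falls in its interval.
idx = np.searchsorted(onsets, presses, side="right") - 1
contains = np.zeros(durations.size, dtype=bool)
contains[idx] = True
print(f"mean duration, all fixations:      {durations.mean():.0f} ms")
print(f"mean duration, fixations w/ press: {durations[contains].mean():.0f} ms")
# The second mean is reliably larger, with no causal link at all.
```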


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. In these analyses, eye movement data and the saliency map are presented to analysts either as separate views or as merged views. However, analysts become frustrated when they must memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data with the saliency features to interpret the visual attention. Finally, we analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps us understand the visual attention of an observer.
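
A minimal sketch of the overlay idea, with a simple gradient-magnitude map standing in for a real saliency model; the file names and the fixation array layout are hypothetical assumptions, not the authors' pipeline.

```python
# Overlay fixations on a crude saliency proxy (smoothed gradient magnitude).
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage

image = plt.imread("stimulus.png")                 # hypothetical stimulus image
gray = image[..., :3].mean(axis=2)
saliency = ndimage.gaussian_filter(
    np.hypot(ndimage.sobel(gray, 0), ndimage.sobel(gray, 1)), sigma=8)

fixations = np.load("fixations.npy")               # hypothetical rows: x, y, duration
plt.imshow(saliency, cmap="hot", alpha=0.8)
plt.scatter(fixations[:, 0], fixations[:, 1],
            s=fixations[:, 2] * 0.2,               # marker size ~ fixation duration
            facecolors="none", edgecolors="cyan")
plt.axis("off")
plt.show()
```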


2019 ◽  
Vol 72 (7) ◽  
pp. 1863-1875 ◽  
Author(s):  
Martin R Vasilev ◽  
Fabrice BR Parmentier ◽  
Bernhard Angele ◽  
Julie A Kirkby

Oddball studies have shown that sounds unexpectedly deviating from an otherwise repeated sequence capture attention away from the task at hand. Although such distraction is regarded as potentially important in everyday life, previous work has not examined how deviant sounds affect performance on more complex everyday tasks. In this study, we developed a new method to examine whether deviant sounds can disrupt reading performance by recording participants’ eye movements. Participants read single sentences in silence and while listening to task-irrelevant sounds. In the latter condition, a 50-ms sound was played contingent on the fixation of five target words in the sentence. On most occasions, the same tone was presented (standard sound), whereas on rare and unexpected occasions it was replaced by white noise (deviant sound). The deviant sound resulted in significantly longer fixation durations on the target words relative to the standard sound. A time-course analysis showed that the deviant sound began to affect fixation durations around 180 ms after fixation onset. Furthermore, deviance distraction was not modulated by the lexical frequency of target words. In summary, fixation durations on the target words were longer immediately after the presentation of the deviant sound, but there was no evidence that it interfered with the lexical processing of these words. The present results are in line with the recent proposition that deviant sounds yield a temporary motor suppression and suggest that deviant sounds likely inhibit the programming of the next saccade.
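
The time-course analysis can be illustrated with a survival-curve sketch: the proportion of fixations still ongoing at each moment after fixation onset, computed separately for standard- and deviant-sound trials. The input files and the divergence criterion below are hypothetical stand-ins for the paper's actual procedure.

```python
# Survival-curve comparison of fixation durations across sound conditions.
import numpy as np

def survival_curve(durations_ms, t_max=600):
    """Proportion of fixations still ongoing at each ms after onset."""
    t = np.arange(t_max)
    return (durations_ms[:, None] > t[None, :]).mean(axis=0)

standard = np.load("standard_durations.npy")   # hypothetical duration arrays
deviant = np.load("deviant_durations.npy")
s_std, s_dev = survival_curve(standard), survival_curve(deviant)
# First time the curves clearly separate: a crude proxy for a
# divergence-point estimate (the paper reports roughly 180 ms).
divergence = np.argmax(np.abs(s_dev - s_std) > 0.02)
print(f"curves diverge around {divergence} ms after fixation onset")
```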


2013 ◽  
Vol 368 (1628) ◽  
pp. 20130056 ◽  
Author(s):  
Matteo Toscani ◽  
Matteo Valsecchi ◽  
Karl R. Gegenfurtner

When judging the lightness of objects, the visual system has to take into account many factors, such as shading, scene geometry, occlusions, or transparency. The problem then is to estimate global lightness based on a number of local samples that differ in luminance. Here, we show that eye fixations play a prominent role in this selection process. We explored a special case of transparency, for which the visual system separates surface reflectance from interfering conditions to generate a layered image representation. Eye movements were recorded while observers matched the lightness of the layered stimulus. We found that observers focused their fixations on the target layer, and this sampling strategy affected their lightness perception. The effect of image segmentation on perceived lightness was highly correlated with the fixation strategy and was strongly affected when we manipulated the strategy using a gaze-contingent display. Finally, we disrupted the segmentation process, showing that it causally drives the selection strategy. Selection through eye fixations can thus serve as a simple heuristic to estimate the target reflectance.
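
A minimal sketch of the correlational analysis the abstract suggests, relating each observer's tendency to fixate the target layer to their lightness matches; the input arrays are hypothetical placeholders, not the study's data.

```python
# Correlate per-observer fixation strategy with perceived lightness.
import numpy as np
from scipy.stats import pearsonr

on_target = np.load("prop_fixations_on_target.npy")  # one value per observer
matches = np.load("matched_lightness.npy")           # mean match per observer
r, p = pearsonr(on_target, matches)
print(f"fixation strategy vs perceived lightness: r = {r:.2f}, p = {p:.3f}")
```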


Autism ◽  
2019 ◽  
Vol 24 (3) ◽  
pp. 730-743 ◽  
Author(s):  
Emma Gowen ◽  
Andrius Vabalas ◽  
Alexander J Casson ◽  
Ellen Poliakoff

This study investigated whether reduced visual attention to an observed action might account for altered imitation in autistic adults. A total of 22 autistic and 22 non-autistic adults observed and then imitated videos of a hand producing sequences of movements that differed in vertical elevation while their hand and eye movements were recorded. Participants first performed a block of imitation trials with general instructions to imitate the action. They then performed a second block with explicit instructions to attend closely to the characteristics of the movement. Imitation was quantified according to how much participants modulated their movement between the different heights of the observed movements. In the general instruction condition, the autistic group modulated their movements significantly less compared to the non-autistic group. However, following instructions to attend to the movement, the autistic group showed equivalent imitation modulation to the non-autistic group. Eye movement recording showed that the autistic group spent significantly less time looking at the hand movement for both instruction conditions. These findings show that visual attention contributes to altered voluntary imitation in autistic individuals and have implications for therapies involving imitation as well as for autistic people’s ability to understand the actions of others.
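
A hedged sketch of the modulation measure, reduced here to the difference in mean peak vertical amplitude between imitations of high- and low-elevation observed movements; the data layout and the two-level contrast are simplifying assumptions, not the authors' exact analysis.

```python
# Quantify imitation modulation per participant from hypothetical trial data
# with columns: participant, observed_height ("high"/"low"), peak_z.
import pandas as pd

trials = pd.read_csv("imitation_trials.csv")
means = (trials.groupby(["participant", "observed_height"])["peak_z"]
               .mean()
               .unstack("observed_height"))
means["modulation"] = means["high"] - means["low"]
print(means["modulation"].describe())   # larger values = stronger modulation
```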


2020 ◽  
Author(s):  
Šimon Kucharský ◽  
Daan Roelof van Renswoude ◽  
Maartje Eusebia Josefa Raijmakers ◽  
Ingmar Visser

Describing, analyzing, and explaining patterns in eye movement behavior is crucial for understanding visual perception, and eye movements are increasingly used to inform cognitive process models. In this article, we start by reviewing basic characteristics and desiderata for models of eye movements. Specifically, we argue that there is a need for models combining spatial and temporal aspects of eye-tracking data (i.e., fixation durations and fixation locations), that formal models derived from concrete theoretical assumptions are needed to inform our empirical research, and that custom statistical models are useful for detecting the specific empirical phenomena that are to be explained by said theory. We then develop a conceptual model of eye movements, or more specifically of fixation durations and fixation locations, and from it derive a formal statistical model, meeting our goal of crafting a model useful in both the theoretical and the empirical research cycle. We demonstrate the use of the model on an example of infant natural scene viewing, showing that the model can explain different features of the eye movement data and showcasing how to identify when the model needs to be adapted because it does not agree with the data. We conclude with a discussion of potential future avenues for formal eye movement models.
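
An illustrative stand-in for the kind of joint statistical model described, with gamma-distributed fixation durations and a Gaussian-mixture spatial component, plus a simple simulation-based check of model fit; this is a sketch under those assumptions, not the authors' actual model.

```python
# Joint model sketch: gamma durations + 2-D Gaussian-mixture locations.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

durations = np.load("fix_durations.npy")        # hypothetical data files
locations = np.load("fix_locations.npy")        # shape (n, 2): x, y

# Temporal component: maximum-likelihood gamma fit to durations.
shape, loc, scale = stats.gamma.fit(durations, floc=0)
# Spatial component: mixture of Gaussians over fixation locations.
gmm = GaussianMixture(n_components=3, random_state=0).fit(locations)

# Model checking: compare simulated against observed data; clear misfit
# here is the signal that the model needs to be adapted.
sim_dur = stats.gamma.rvs(shape, loc=loc, scale=scale, size=durations.size)
print(stats.ks_2samp(durations, sim_dur))
```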

