Imaging Time Series of Eye Tracking Data to Classify Attentional States

2021 ◽  
Vol 15 ◽  
Author(s):  
Lisa-Marie Vortmann ◽  
Jannes Knychalla ◽  
Sonja Annerer-Walcher ◽  
Mathias Benedek ◽  
Felix Putze

Several previous studies have shown that conclusions about the human mental state can be drawn from eye gaze behavior. For this reason, eye tracking recordings are suitable as input data for attentional state classifiers. In current state-of-the-art studies, the extracted eye tracking feature set usually consists of descriptive statistics about specific eye movement characteristics (i.e., fixations, saccades, blinks, vergence, and pupil dilation). We suggest an Imaging Time Series approach for eye tracking data, followed by classification using a convolutional neural net, to improve classification accuracy. We compared multiple algorithms that used the one-dimensional statistical summary feature set as input with two different implementations of the newly suggested method on three data sets that target different aspects of attention. The results show that our two-dimensional image features with the convolutional neural net outperform the classical classifiers in most analyses, especially regarding generalization over participants and tasks. We conclude that current attentional state classifiers based on eye tracking can be optimized by adjusting the feature set while requiring less feature engineering. Our future work will focus on a more detailed investigation of this approach's suitability for other scenarios and data sets.
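The abstract does not detail the encoding; as a concrete illustration, here is a minimal sketch of one widely used imaging-time-series transform, the Gramian Angular Summation Field (Wang & Oates, 2015), applied to a gaze signal. The placeholder signal and all parameter choices are assumptions, and the paper's actual encoding may differ.

```python
import numpy as np

def gramian_angular_field(x):
    """Encode a 1-D time series as a Gramian Angular Summation Field image.

    A common "imaging time series" encoding (Wang & Oates, 2015), shown
    here only as an illustration of the general approach.
    """
    x = np.asarray(x, dtype=float)
    # Rescale to [-1, 1] so each value can be read as cos(phi)
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1
    phi = np.arccos(np.clip(x, -1, 1))      # polar-coordinate angle
    # GASF entry (i, j) = cos(phi_i + phi_j)
    return np.cos(phi[:, None] + phi[None, :])

# Example: turn a gaze x-coordinate trace into a 2-D image for a CNN
gaze_x = np.random.rand(64)                 # placeholder eye tracking signal
image = gramian_angular_field(gaze_x)       # shape (64, 64)
```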

PLoS ONE ◽  
2021 ◽  
Vol 16 (5) ◽  
pp. e0251674 ◽
Author(s):  
Thomas A. Busey ◽  
Nicholas Heise ◽  
R. Austin Hicklin ◽  
Bradford T. Ulery ◽  
JoAnn Buscaglia

Latent fingerprint examiners sometimes come to different conclusions when comparing fingerprints, and eye-gaze behavior may help explain these outcomes. Missed identifications (missed IDs) are inconclusive, exclusion, or No Value determinations reached when the consensus of other examiners is an identification. To determine the relation between examiner behavior and missed IDs, we collected eye-gaze data from 121 latent print examiners as they completed a total of 1,444 difficult (latent-exemplar) comparisons. We extracted metrics from the gaze data that serve as proxies for underlying perceptual and cognitive capacities. We used these metrics to characterize potential mechanisms of missed IDs: Cursory Comparison and Mislocalization. We find that missed IDs are associated with shorter comparison times, fewer regions visited, and fewer attempted correspondences between the compared images. Latent print comparisons resulting in erroneous exclusions (a subset of missed IDs) are also more likely to have fixations in different regions and less accurate correspondence attempts than comparisons resulting in identifications. We also use our derived metrics to describe one atypical examiner who made six erroneous identifications, four of which were on comparisons intended to be straightforward exclusions. The present work helps identify the degree to which missed IDs can be explained using eye-gaze behavior, and the extent to which they depend on cognitive and decision-making factors outside the domain of eye-tracking methodologies.
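To make the notion of gaze-derived metrics concrete, the sketch below computes two hypothetical proxies, total comparison time and the number of distinct image regions visited, from fixation data. The input format, the 50-pixel region grid, and the metric definitions are illustrative assumptions, not the authors' operationalizations.

```python
import numpy as np

def gaze_metrics(fixations, grid_size=50):
    """Compute illustrative proxies from fixation data.

    `fixations` is assumed to be an array of (x, y, duration_ms) rows;
    these definitions are hypothetical stand-ins for the paper's metrics.
    """
    fixations = np.asarray(fixations, dtype=float)
    comparison_time_ms = fixations[:, 2].sum()
    # Count distinct regions visited on a coarse pixel grid
    cells = set(map(tuple, (fixations[:, :2] // grid_size).astype(int)))
    return {"comparison_time_ms": comparison_time_ms,
            "regions_visited": len(cells)}
```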


Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 226 ◽
Author(s):  
Lisa-Marie Vortmann ◽  
Leonid Schwenke ◽  
Felix Putze

Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether it can be determined whether this target is real or virtual by using machine learning techniques to classify electroencephalographic (EEG) and eye tracking data collected in augmented reality scenarios. A shallow convolutional neural net classified 3-second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70%, even when the testing data and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late fusion approach that included the recorded eye tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain–computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
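Late fusion of this kind is typically implemented by combining the per-class probabilities of the unimodal classifiers. The sketch below shows a simple convex combination of EEG and eye tracking outputs; the equal default weighting is an assumption, not necessarily the scheme used in the study.

```python
import numpy as np

def late_fusion(p_eeg, p_eye, w_eeg=0.5):
    """Fuse per-class probabilities from an EEG and an eye tracking classifier.

    A generic late-fusion sketch: a weighted average of the two probability
    distributions followed by an argmax decision per window.
    """
    p = w_eeg * np.asarray(p_eeg) + (1 - w_eeg) * np.asarray(p_eye)
    return p.argmax(axis=-1)

# Example: fuse predictions for two 3-second windows (real vs. virtual target)
print(late_fusion([[0.6, 0.4], [0.3, 0.7]], [[0.2, 0.8], [0.4, 0.6]]))  # [1 1]
```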


2019 ◽  
Vol 25 (1) ◽  
pp. 87-97 ◽  
Author(s):  
Prithiviraj K. Muthumanickam ◽  
Katerina Vrotsou ◽  
Aida Nordman ◽  
Jimmy Johansson ◽  
Matthew Cooper

2019 ◽  
Vol 12 (6) ◽  
Author(s):  
Tanja Munz ◽  
Lewis L. Chuang ◽  
Sebastian Pannasch ◽  
Daniel Weiskopf

This work presents a visual analytics approach to explore microsaccade distributions in high-frequency eye tracking data. Research studies often differ in the filter algorithms and parameter values they apply for microsaccade detection; even when the same algorithms are employed, different parameter values might be adopted across studies. In this paper, we present a visual analytics system (VisME) to promote reproducibility in the data analysis of microsaccades. It allows users to interactively vary the parameter values of microsaccade filters and evaluate the resulting influence on microsaccade behavior for individuals and at the group level. In particular, we exploit brushing-and-linking techniques that allow the microsaccadic properties of space, time, and movement direction to be extracted, visualized, and compared across multiple views. In a case study, we demonstrate the use of our visual analytics system on data sets collected from natural scene viewing, and a qualitative usability study shows the usefulness of this approach for eye tracking researchers. We believe that interactive tools such as VisME will promote greater transparency in eye movement research by enabling researchers to easily understand complex eye tracking data sets; such tools can also serve as teaching systems. VisME is provided as open source software.
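For context on the kinds of parameters VisME exposes, the sketch below implements a simplified velocity-threshold microsaccade filter in the spirit of Engbert and Kliegl (2003), with the threshold multiplier `lam` and minimum event duration `min_dur` as the tunable values. The velocity estimate is simplified, and the defaults are common choices rather than those of any particular study.

```python
import numpy as np

def detect_microsaccades(gaze, fs, lam=6.0, min_dur=3):
    """Simplified velocity-threshold microsaccade detection.

    `gaze` is an (N, 2) array of x/y positions sampled at `fs` Hz.
    `lam` and `min_dur` are the kind of filter parameters whose influence
    a tool like VisME lets users explore interactively.
    """
    gaze = np.asarray(gaze, dtype=float)
    # Central-difference velocity over a 5-sample window (simplified)
    v = (gaze[4:] - gaze[:-4]) * (fs / 6.0)
    # Robust, median-based velocity spread per axis
    sigma = np.sqrt(np.median(v**2, axis=0) - np.median(v, axis=0)**2)
    radius = lam * sigma
    above = ((v / radius)**2).sum(axis=1) > 1   # outside elliptic threshold
    # Keep runs of at least `min_dur` consecutive supra-threshold samples
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_dur:
                events.append((start, i - 1))
            start = None
    if start is not None and len(above) - start >= min_dur:
        events.append((start, len(above) - 1))
    return events  # list of (onset, offset) sample indices
```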


Psychometrika ◽  
2020 ◽  
Vol 85 (1) ◽  
pp. 154-184 ◽  
Author(s):  
Sun-Joo Cho ◽  
Sarah Brown-Schmidt ◽  
Paul De Boeck ◽  
Jianhong Shen

2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Zenghai Chen ◽  
Hong Fu ◽  
Wai-Lun Lo ◽  
Zheru Chi

Strabismus is one of the most common vision disorders; it can cause amblyopia and even permanent vision loss. Timely diagnosis is crucial for treating strabismus effectively. In contrast to manual diagnosis, automatic recognition can significantly reduce labor costs and increase diagnostic efficiency. In this paper, we propose to recognize strabismus using eye-tracking data and convolutional neural networks. In particular, an eye tracker is first used to record a subject's eye movements. A gaze deviation (GaDe) image is then constructed to characterize the subject's eye-tracking data according to the accuracies of gaze points. The GaDe image is fed to a convolutional neural network (CNN) that has been trained on the large image database ImageNet. The outputs of the CNN's fully connected layers are used as the GaDe image's features for strabismus recognition. A dataset containing eye-tracking data of both strabismic and normal subjects was established for the experiments. Experimental results demonstrate that natural image features transfer well to eye-tracking data, and that strabismus can be effectively recognized by the proposed method.
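A minimal sketch of this transfer-learning setup, assuming a torchvision VGG16 backbone (the abstract does not name the exact ImageNet-pretrained architecture) and taking fully connected layer activations as the GaDe image's feature vector:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# ImageNet-pretrained CNN used as a fixed feature extractor (VGG16 assumed)
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
cnn.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def gade_features(gade_image):
    """Extract fully connected layer activations for an RGB GaDe image (PIL)."""
    x = preprocess(gade_image).unsqueeze(0)   # (1, 3, 224, 224)
    with torch.no_grad():
        x = cnn.features(x)
        x = cnn.avgpool(x).flatten(1)
        x = cnn.classifier[:4](x)             # up to the second FC layer
    return x.squeeze(0)                       # 4096-d feature vector
```

These features can then be passed to any downstream classifier for strabismus recognition.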


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8205 ◽
Author(s):  
Lisa-Marie Vortmann ◽  
Felix Putze

Statistical measurements of eye movement-specific properties, such as fixations, saccades, blinks, or pupil dilation, are frequently used as input features for machine learning algorithms applied to eye tracking recordings. These characteristics are intended to be interpretable aspects of eye gaze behavior. However, prior research has demonstrated that neural networks trained on implicit representations of raw eye tracking data outperform these traditional techniques. To leverage the strengths and information of both feature sets, this work integrates implicit and explicit eye tracking features in one classification approach. A neural network was adapted to process the heterogeneous input and predict the internally and externally directed attention of 154 participants. We compared the accuracies reached by the implicit and combined features for different window lengths and evaluated the approaches in terms of person- and task-independence. The results indicate that combining implicit and explicit feature extraction techniques for eye tracking data significantly improves classification results for attentional state detection. The attentional state was correctly classified during new tasks with an accuracy better than chance, and person-independent classification even outperformed person-dependently trained classifiers in some settings. For future experiments and applications that require eye tracking data classification, we suggest considering implicit data representations in addition to interpretable explicit features.
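One way to realize such a heterogeneous-input network is with two branches whose outputs are concatenated before the classification head, as sketched below; all layer sizes are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class HybridAttentionNet(nn.Module):
    """Illustrative two-branch net: raw (implicit) gaze windows plus
    hand-crafted (explicit) statistics, fused before classification."""

    def __init__(self, n_explicit, n_classes=2):
        super().__init__()
        # Implicit branch: 1-D convolution over the raw gaze signal (x, y)
        self.conv = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),
        )
        # Explicit branch: dense layer over the summary statistics
        self.dense = nn.Sequential(nn.Linear(n_explicit, 16), nn.ReLU())
        self.head = nn.Linear(16 * 8 + 16, n_classes)

    def forward(self, raw, stats):
        # raw: (batch, 2, window_len), stats: (batch, n_explicit)
        return self.head(torch.cat([self.conv(raw), self.dense(stats)], dim=1))

# Example: a batch of four 3-second windows at 60 Hz with 10 explicit features
net = HybridAttentionNet(n_explicit=10)
logits = net(torch.randn(4, 2, 180), torch.randn(4, 10))  # shape (4, 2)
```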

