High-Accuracy Gaze Estimation for Interpolation-Based Eye-Tracking Methods

Vision ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 41
Author(s):  
Fabricio Batista Narcizo ◽  
Fernando Eustáquio Dantas dos Santos ◽  
Dan Witzner Hansen

This study investigates the influence of the eye-camera location on the accuracy and precision of interpolation-based eye-tracking methods. Several factors can negatively influence gaze estimation methods when building a commercial or off-the-shelf eye tracker device, including the eye-camera location in uncalibrated setups. Our experiments show that the eye-camera location, combined with the non-coplanarity of the eye plane, deforms the eye feature distribution when the eye-camera is far from the eye’s optical axis. This paper proposes geometric transformation methods that reshape the eye feature distribution based on a virtual alignment of the eye-camera with the center of the eye’s optical axis. The data analysis uses eye-tracking data from a simulated environment and an experiment with 83 volunteer participants (55 males and 28 females). We evaluate the improvements achieved with the proposed methods using Gaussian analysis, which defines a range for high-accuracy gaze estimation between −0.5° and 0.5°. Compared to traditional polynomial-based and homography-based gaze estimation methods, the proposed methods increase the number of gaze estimations in the high-accuracy range.
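
The abstract does not reproduce the proposed reshaping transformations, but the polynomial-based interpolation baseline it compares against is standard. Below is a minimal sketch of a second-order polynomial gaze mapping fitted by least squares; the function and variable names are illustrative, not the paper's implementation.

```python
import numpy as np

def poly_features(px, py):
    """Second-order polynomial terms of a pupil-glint vector (px, py)."""
    return np.stack([np.ones_like(px), px, py, px * py, px**2, py**2], axis=-1)

def fit_gaze_mapping(pupil_xy, screen_xy):
    """Least-squares fit of the classic 2nd-order interpolation mapping.

    pupil_xy:  (N, 2) pupil-glint vectors from the calibration targets
    screen_xy: (N, 2) known on-screen target positions
    Returns a (6, 2) coefficient matrix.
    """
    A = poly_features(pupil_xy[:, 0], pupil_xy[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return coeffs

def estimate_gaze(coeffs, pupil_xy):
    """Map new eye-feature samples to screen coordinates."""
    A = poly_features(pupil_xy[:, 0], pupil_xy[:, 1])
    return A @ coeffs
```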

Vision ◽  
2018 ◽  
Vol 2 (3) ◽  
pp. 35
Author(s):  
Braiden Brousseau ◽  
Jonathan Rose ◽  
Moshe Eizenman

The most accurate remote Point of Gaze (PoG) estimation methods that allow free head movements use infrared light sources and cameras together with gaze estimation models. Current gaze estimation models were developed for desktop eye-tracking systems and assume that the relative roll between the system and the subjects’ eyes (the “R-Roll”) is roughly constant during use. This assumption does not hold for hand-held, mobile-device-based eye-tracking systems. We present an analysis showing that the accuracy of estimating the PoG on screens of hand-held mobile devices depends on the magnitude of the R-Roll angle and on the angular offset between the visual and optical axes of the individual viewer. We also describe a new method to determine the PoG that compensates for the effects of R-Roll on PoG accuracy. Experimental results on a prototype infrared smartphone show that for an R-Roll angle of 90°, the new method achieves an accuracy of approximately 1°, whereas a gaze estimation method that assumes a constant R-Roll angle achieves an accuracy of 3.5°. The manner in which the experimental PoG estimation errors increased with the R-Roll angle was consistent with the analysis. The method presented in this paper can significantly improve the performance of eye-tracking systems on hand-held mobile devices.
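
The abstract does not give the compensation model itself. The sketch below illustrates only the underlying idea in 2D, assuming a per-subject visual/optical-axis offset calibrated at zero roll and rotated by the measured R-Roll angle; all names and the simplified geometry are assumptions, not the authors' method.

```python
import numpy as np

def compensate_r_roll(optical_axis_pog, kappa_offset, r_roll_deg):
    """Rotate the subject-specific visual/optical-axis offset by the
    measured R-Roll angle before adding it to the optical-axis PoG.

    optical_axis_pog: (2,) point of gaze from the optical axis, screen coords
    kappa_offset:     (2,) per-subject offset calibrated at R-Roll = 0 (assumed)
    r_roll_deg:       relative roll between device and eyes, in degrees
    """
    theta = np.deg2rad(r_roll_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return optical_axis_pog + rot @ kappa_offset
```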


Author(s):  
Arantxa Villanueva ◽  
Rafael Cabeza ◽  
Javier San Agustin

The main objective of gaze trackers is to provide an accurate estimate of the user’s gaze from eye-tracking information. Gaze, in its most general form, can be described either as the line of sight or line of gaze, i.e., imaginary 3D lines with respect to the camera, or as the point of regard (also termed the point of gaze). This chapter introduces different gaze estimation techniques, including geometry-based methods and interpolation methods. Issues related to both remote and head-mounted trackers are discussed, and different fixation estimation methods are briefly introduced. It is assumed that the reader is familiar with basic 3D geometry concepts as well as advanced mathematics, such as matrix manipulation and vector calculus.
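
As one concrete instance of the interpolation family this chapter covers, a homography-based mapping can be fitted from four or more calibration correspondences. A minimal Direct Linear Transform sketch follows; it is illustrative only, and the function names are assumptions.

```python
import numpy as np

def fit_homography(src, dst):
    """DLT: homography mapping src -> dst from N >= 4 point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def apply_homography(H, pts):
    """Map eye-feature points to screen coordinates (divide by w)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```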


2021 ◽  
pp. 1-16
Author(s):  
Leigha A. MacNeill ◽  
Xiaoxue Fu ◽  
Kristin A. Buss ◽  
Koraly Pérez-Edgar

Temperamental behavioral inhibition (BI) is a robust endophenotype for anxiety characterized by increased sensitivity to novelty. Controlling parenting can reinforce children's wariness by rewarding signs of distress. Fine-grained, dynamic measures are needed to better understand both how children perceive their parent's behaviors and the mechanisms supporting evident relations between parenting and socioemotional functioning. The current study examined dyadic attractor patterns (average mean durations) with state space grids, using children's attention patterns (captured via mobile eye tracking) and parental behavior (positive reinforcement, teaching, directives, intrusion), as functions of child BI and parent anxiety. Forty 5- to 7-year-old children and their primary caregivers completed a set of challenging puzzles, during which the child wore a head-mounted eye tracker. Child BI was positively correlated with the proportion of the parent's time spent teaching. Child age was negatively related, and parent anxiety level was positively related, to parent-focused/controlling parenting attractor strength. There was a significant interaction between parent anxiety level and child age predicting parent-focused/controlling parenting attractor strength. This study is a first step toward examining the co-occurrence of parenting behavior and child attention in the context of child BI and parental anxiety levels.
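
The study's attractor-strength index (average mean duration on a state space grid) can be illustrated with a small sketch. The event coding, fixed sampling interval, and function names below are assumptions for illustration, not the study's actual pipeline.

```python
def average_mean_duration(dyadic_states, attractor, dt=1.0):
    """Average time the dyad stays in an attractor cell per visit.

    dyadic_states: sequence of (child_attention, parent_behavior) codes,
                   sampled at a fixed interval dt (seconds) -- assumed format
    attractor:     the (child, parent) cell of the state space grid
    Returns the mean visit duration, a common attractor-strength index.
    """
    durations, run = [], 0
    for state in dyadic_states:
        if state == attractor:
            run += 1
        elif run:
            durations.append(run * dt)
            run = 0
    if run:
        durations.append(run * dt)
    return sum(durations) / len(durations) if durations else 0.0
```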


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 26
Author(s):  
David González-Ortega ◽  
Francisco Javier Díaz-Pernas ◽  
Mario Martínez-Zarzuela ◽  
Míriam Antón-Rodríguez

Driver’s gaze information can be crucial in driving research because of its relation to driver attention. In particular, the inclusion of gaze data in driving simulators broadens the scope of research studies, as drivers’ gaze patterns can be related to their features and performance. In this paper, we present two gaze region estimation modules integrated in a driving simulator: one uses the 3D Kinect device and the other the virtual reality Oculus Rift device. The modules detect which of the seven regions into which the driving scene was divided the driver is gazing at in every processed frame of the route. Four gaze estimation methods, which learn the relation between gaze displacement and head movement, were implemented and compared: two are simpler and based on points that try to capture this relation, and two are based on classifiers, an MLP and an SVM. Experiments were carried out with 12 users who drove the same scenario twice, each time with a different visualization display: first a big screen and later the Oculus Rift. Overall, the Oculus Rift outperformed the Kinect as hardware for gaze estimation. The Oculus-based gaze region estimation method with the highest performance achieved an accuracy of 97.94%. The information provided by the Oculus Rift module enriches the driving simulator data and makes a multimodal driving performance analysis possible, in addition to the immersion and realism obtained with the virtual reality experience.
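
A hedged sketch of the classifier-based variant follows, using scikit-learn. The feature layout and file names are hypothetical placeholders; the paper's exact features relating gaze displacement to head movement are not specified in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical feature layout: per-frame head pose and gaze displacement
# relative to the head, labeled with one of the 7 scene regions.
X = np.load("driving_features.npy")   # shape (n_frames, n_features), assumed
y = np.load("gaze_regions.npy")       # region labels 0..6, assumed

for name, clf in [("MLP", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)),
                  ("SVM", SVC(kernel="rbf"))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```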


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2234
Author(s):  
Sebastian Kapp ◽  
Michael Barz ◽  
Sergey Mukhametov ◽  
Daniel Sonntag ◽  
Jochen Kuhn

Currently, an increasing number of head-mounted displays (HMDs) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is of interest for applied eye-tracking research in the cognitive or educational sciences, for example. While some research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye tracking toolkit for reliable gaze data acquisition in AR based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n = 21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is at rest, which is on par with state-of-the-art mobile eye trackers.
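
The reported accuracy and precision follow common eye-tracking definitions: accuracy as the mean angular offset between gaze and target, precision as the RMS of sample-to-sample angular differences. A minimal sketch computing both, assuming gaze and target directions are given as unit 3D vectors:

```python
import numpy as np

def angular_error_deg(dirs_a, dirs_b):
    """Per-sample angle (degrees) between two sets of unit direction vectors."""
    cos = np.clip(np.sum(dirs_a * dirs_b, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def accuracy_deg(gaze_dirs, target_dir):
    """Mean angular offset from the fixation target: spatial accuracy."""
    targets = np.broadcast_to(target_dir, gaze_dirs.shape)
    return angular_error_deg(gaze_dirs, targets).mean()

def precision_rms_deg(gaze_dirs):
    """RMS of successive inter-sample angles: spatial precision."""
    diffs = angular_error_deg(gaze_dirs[1:], gaze_dirs[:-1])
    return np.sqrt(np.mean(diffs**2))
```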


Author(s):  
Ana Guerberof Arenas ◽  
Joss Moorkens ◽  
Sharon O’Brien

This paper presents results on the effect of different translation modalities on users working with the Microsoft Word user interface. An experimental study was set up with 84 Japanese, German, Spanish, and English native speakers working with Microsoft Word in three modalities: the published translated version, a machine-translated (MT) version (with unedited MT strings incorporated into the MS Word interface), and the published English version. An eye tracker measured cognitive load, and usability was assessed according to the ISO/TR 16982 guidelines, i.e., effectiveness, efficiency, and satisfaction, followed by a retrospective think-aloud protocol. The results show that users’ effectiveness (number of tasks completed) does not differ significantly across translation modalities. However, their efficiency (time for task completion) and self-reported satisfaction are significantly higher when working with the released product as opposed to the unedited MT version, especially for less experienced participants. The eye-tracking results show that users experience a higher cognitive load when working with the MT and human-translated versions than with the English original. The results suggest that language and translation modality play a significant role in the usability of software products, whether or not users complete the given tasks, and even if they are unaware that MT was used to translate the interface.


Healthcare ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 10
Author(s):  
Chong-Bin Tsai ◽  
Wei-Yu Hung ◽  
Wei-Yen Hsu

Optokinetic nystagmus (OKN) is an involuntary eye movement induced by the motion of a large proportion of the visual field. It consists of a “slow phase (SP),” with eye movements in the same direction as the movement of the pattern, and a “fast phase (FP),” with saccadic eye movements in the opposite direction. The study of OKN can reveal valuable information in ophthalmology, neurology, and psychology. However, currently available commercial high-resolution, research-grade eye trackers are usually expensive. Methods & Results: We developed a novel, fast, and effective system, combined with a low-cost eye-tracking device, to accurately and quantitatively measure OKN eye movement. Conclusions: The experimental results indicate that the proposed method achieves fast and promising results in comparison with several traditional approaches.
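
The abstract does not detail the measurement algorithm. A common baseline, sketched below, segments the eye-position trace by velocity: samples exceeding a saccadic threshold are labeled fast phase, the rest slow phase, from which the slow-phase velocity (a standard OKN measure) follows. The threshold and sampling rate here are assumptions.

```python
import numpy as np

def segment_okn(positions_deg, fs=60.0, saccade_thresh=30.0):
    """Mark fast (saccadic) phases in a horizontal eye-position trace.

    positions_deg:  1D eye position in degrees, sampled at fs Hz
    saccade_thresh: velocity threshold in deg/s (assumed value)
    Returns a boolean array, True where the trace is in a fast phase.
    """
    velocity = np.gradient(positions_deg) * fs   # deg/s
    return np.abs(velocity) > saccade_thresh

def slow_phase_velocity(positions_deg, fs=60.0, saccade_thresh=30.0):
    """Mean eye velocity during slow phases, a standard OKN measure."""
    velocity = np.gradient(positions_deg) * fs
    slow = np.abs(velocity) <= saccade_thresh
    return velocity[slow].mean() if slow.any() else 0.0
```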


Robotics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 54
Author(s):  
Lorenzo Scalera ◽  
Stefano Seriani ◽  
Paolo Gallina ◽  
Mattia Lentini ◽  
Alessandro Gasparetto

In this paper, the authors present a novel architecture for controlling an industrial robot via an eye-tracking interface for artistic purposes. Humans and robots interact thanks to an acquisition system based on an eye tracker device that allows the user to control the motion of a robotic manipulator with their gaze. The feasibility of the robotic system is evaluated with experimental tests in which the robot is teleoperated to draw artistic images. The tool can be used by artists to investigate novel forms of art, and by amputees or people with movement disorders or muscular paralysis as an assistive technology for artistic drawing and painting, since in these cases eye motion is usually preserved.


2020 ◽  
Vol 57 (12) ◽  
pp. 1392-1401
Author(s):  
Mark P. Pressler ◽  
Emily L. Geisler ◽  
Rami R. Hallac ◽  
James R. Seaward ◽  
Alex A. Kane

Introduction and Objectives: Surgical treatment for trigonocephaly aims to eliminate a stigmatizing deformity, yet the severity that captures unwanted attention is unknown. Surgeons intervene at different points of severity, eliciting controversy. This study used eye tracking to investigate when the deformity is perceived. Material and Methods: Three-dimensional photogrammetric images of a normal child and a child with trigonocephaly were mathematically deformed, in 10% increments, to create a spectrum of 11 images. These images were shown to participants using an eye tracker. Participants’ gaze patterns were analyzed, and participants were asked whether each image looked “normal” or “abnormal.” Results: Sixty-six graduate students were recruited. Average dwell time toward pathologic areas of interest (AOIs) increased proportionally, from 0.77 ± 0.33 seconds at 0% deformity to 1.08 ± 0.75 seconds at 100% deformity (P < .0001). A majority of participants did not agree that an image looked “abnormal” until 90% deformity, from any angle. Conclusion: Eye tracking can be used as a proxy for the attention threshold toward orbitofrontal deformity. The amount of attention toward orbitofrontal AOIs increased proportionally with severity. Participants did not generally agree that there was “abnormality” until the deformity was severe. This study supports the assertion that surgical intervention may be best reserved for more severe deformity.
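
Dwell time toward an AOI, the study's key measure, is typically accumulated from gaze samples falling inside the AOI. A minimal sketch assuming rectangular AOIs and a fixed sampling interval (both assumptions for illustration):

```python
def dwell_time(gaze_samples, aoi, dt):
    """Total dwell time (seconds) of gaze samples inside a rectangular AOI.

    gaze_samples: iterable of (x, y) screen coordinates, sampled every dt s
    aoi:          (x_min, y_min, x_max, y_max) area of interest
    """
    x0, y0, x1, y1 = aoi
    hits = sum(1 for x, y in gaze_samples if x0 <= x <= x1 and y0 <= y <= y1)
    return hits * dt
```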

