The Gaze Tracking System with Natural Head Motion Compensation

Informatica ◽  
2012 ◽  
Vol 23 (1) ◽  
pp. 105-124 ◽  
Author(s):  
Vidas Raudonis ◽  
Agnė Paulauskaitė-Tarasevičienė ◽  
Laura Kižauskienė


Author(s):  
Sinh Huynh ◽  
Rajesh Krishna Balan ◽  
JeongGil Ko

Gaze tracking is a key building block used in many mobile applications, including entertainment, personal productivity, accessibility, medical diagnosis, and visual attention monitoring. In this paper, we present iMon, an appearance-based gaze tracking system that is designed for use on mobile phones and achieves significantly greater accuracy than prior state-of-the-art solutions. iMon does this by comprehensively considering the gaze estimation pipeline and overcoming three different sources of error. First, instead of assuming that the user's gaze is fixed to a single 2D coordinate, we represent each gaze label as a probabilistic 2D heatmap, overcoming errors caused by microsaccades, which make the exact gaze point uncertain. Second, we design an image enhancement model to refine visual details and remove motion-blur effects from the input eye images. Finally, we apply a calibration scheme to correct for differences between the perceived and actual gaze points caused by individual differences in the kappa angle. With all these improvements, iMon achieves a person-independent per-frame tracking error of 1.49 cm (on smartphones) and 1.94 cm (on tablets) when tested with the GazeCapture dataset, and 2.01 cm with the TabletGaze dataset, outperforming the previous state-of-the-art solutions by ~22% to 28%. By averaging multiple per-frame estimations that belong to the same fixation point and applying personal calibration, the tracking error is further reduced to 1.11 cm (smartphones) and 1.59 cm (tablets). Finally, we built an implementation that runs on an iPhone 12 Pro and show that it can run at up to 60 frames per second, making gaze-based control of applications possible.
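To make the heatmap label concrete, here is a minimal sketch, assuming a Gaussian centered on the nominal gaze point; the grid resolution and spread (sigma_cm) are illustrative choices, not values from the paper:

```python
import numpy as np

def gaze_heatmap(gx, gy, screen_w_cm, screen_h_cm, grid=64, sigma_cm=0.5):
    """Build a probabilistic 2D heatmap label for a nominal gaze point.

    Instead of a single (gx, gy) coordinate, probability mass is
    spread over nearby cells, modeling the uncertainty that
    microsaccades introduce into the true gaze point.
    """
    xs = np.linspace(0.0, screen_w_cm, grid)
    ys = np.linspace(0.0, screen_h_cm, grid)
    xx, yy = np.meshgrid(xs, ys)
    heat = np.exp(-((xx - gx) ** 2 + (yy - gy) ** 2) / (2.0 * sigma_cm ** 2))
    return heat / heat.sum()  # normalize so the heatmap sums to 1
```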


2018 ◽  
Vol 11 (4) ◽  
Author(s):  
Feng Xiao ◽  
Dandan Zheng ◽  
Kejie Huang ◽  
Yue Qiu ◽  
Haibin Shen

Gaze tracking is a human-computer interaction technology that has been widely studied in academia and industry. However, constrained by the performance of specific sensors and algorithms, it has not yet reached widespread use. This paper proposes a single-camera gaze tracking system that works under natural light, with versatility in mind. The iris center and the anchor point are the most crucial factors for the accuracy of the system. The iris center is detected accurately by a simple active-contour snakuscule, initialized using prior knowledge of eye anatomical dimensions. After that, a novel anchor point is computed from stable facial landmarks. Next, second-order mapping functions use the eye vectors and the head pose to estimate the points of regard. Finally, gaze errors are reduced by applying a weight coefficient to the points of regard of the left and right eyes. Iris center localization achieves an accuracy of 98.87% on the GI4E database at a normalized error below 0.05, and the gaze tracking method outperforms state-of-the-art appearance-based and feature-based methods on the EYEDIAP database.
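As a rough sketch of the second-order mapping and eye-weighting steps, the snippet below fits quadratic functions from eye vectors to screen coordinates by least squares and blends the two eyes with a weight coefficient. The feature terms and the weight are assumptions for illustration (head-pose terms are omitted), not the authors' exact formulation:

```python
import numpy as np

def features(e):
    """Second-order polynomial terms of an eye vector e = (ex, ey)."""
    ex, ey = e
    return np.array([1.0, ex, ey, ex * ey, ex ** 2, ey ** 2])

def fit_mapping(eye_vecs, targets):
    """Least-squares fit of a second-order mapping from calibration
    eye vectors (N, 2) to known screen points of regard (N, 2)."""
    A = np.array([features(e) for e in eye_vecs])
    coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return coeffs  # shape (6, 2)

def point_of_regard(coeffs_l, coeffs_r, eye_l, eye_r, w=0.5):
    """Blend the left- and right-eye estimates with a weight coefficient."""
    return w * (features(eye_l) @ coeffs_l) + (1.0 - w) * (features(eye_r) @ coeffs_r)
```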


Author(s):  
ARANTXA VILLANUEVA ◽  
RAFAEL CABEZA ◽  
SONIA PORTA

In recent years, research in eye tracking development and applications has attracted much attention, and the possibility of interacting with a computer using only gaze information is becoming more and more feasible. Efforts in eye tracking cover a broad spectrum of fields, and mathematical modeling of the system is an important aspect of this research. Expressions relating the various elements and variables of the gaze tracker make it possible to establish geometric relations and to identify symmetric behaviors of the human eye when looking at a screen. To this end, a deep knowledge of projective geometry as well as of eye physiology and kinematics is essential. This paper presents a model for a bright-pupil tracker fully based on realistic parameters describing the system elements. The system so modeled is superior to one based on generic linear or quadratic expressions. Moreover, knowledge of the model's symmetry leads to more effective and simpler calibration strategies: just two calibration points are needed to fit the optical axis, and only three to adjust the visual axis. By considerably reducing the time spent by systems that employ more calibration points, this renders the model more attractive.
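The extra calibration point needed for the visual axis reflects the fact that it deviates from the optical axis by subject-specific kappa angles, one horizontal and one vertical, which must themselves be estimated. A schematic form of this relation (not the paper's exact parameterization) is:

```latex
% Visual axis v obtained by rotating the optical axis o through the
% per-subject kappa angles (alpha: horizontal, beta: vertical).
\vec{v} = R_y(\alpha_\kappa)\, R_x(\beta_\kappa)\, \vec{o}
```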


2012 ◽  
Vol 24 (03) ◽  
pp. 217-227 ◽  
Author(s):  
Xiao-Hui Yang ◽  
Jian-De Sun ◽  
Ju Liu ◽  
Xin-Chao Li ◽  
Cai-Xia Yang ◽  
...  

Gaze tracking has drawn increasing attention and has been applied widely in areas such as aids for the disabled and medical diagnosis. In this paper, a remote gaze tracking system is proposed. The system is video-based, with video captured under the illumination of near-infrared light sources. Only one camera is employed, which keeps the equipment portable for users. The corneal glints and the pupil center, whose extraction accuracy determines the performance of the system, are obtained from the gray-level distribution of each video frame. The positions of the screen points that the user fixates are then estimated by a gaze tracking algorithm based on the invariance of the cross-ratio. Additionally, a calibration procedure is required to eliminate the error produced by the deviation between the optical and visual axes. The proposed remote gaze tracking system has low computational complexity and high robustness, and experimental results indicate that it is tolerant of head movement and works well even for users wearing glasses. The angular error of the system is 0.40 degrees for subjects without glasses and 0.48 degrees for subjects with glasses, which is comparable to most existing commercial systems and promising for most potential practical applications.
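The projective idea behind cross-ratio-based estimation can be sketched briefly: with near-infrared LEDs placed at the four corners of the screen, their corneal glints and the screen corners are related by a plane projective transform under which the cross-ratio is invariant, so the pupil center can be mapped to the screen through a homography. The following simplification is illustrative, not the paper's algorithm:

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: homography mapping four (or more)
    src points onto the corresponding dst points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)

def gaze_point(glints, screen_corners, pupil):
    """Map the pupil center through the glint-to-screen homography.
    glints: image positions of the four corneal reflections;
    screen_corners: the matching screen coordinates."""
    H = homography(glints, screen_corners)
    p = H @ np.array([pupil[0], pupil[1], 1.0])
    return p[:2] / p[2]
```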


2020 ◽  
Vol 30 (07) ◽  
pp. 2050025 ◽  
Author(s):  
Javier De Lope ◽  
Manuel Graña

Noninvasive behavior observation techniques allow more natural human behavior assessment experiments with higher ecological validity. We propose the use of gaze ethograms, in the context of user interaction with a computer display, to characterize the user's behavioral activity. A gaze ethogram is a time sequence of the screen regions the user is looking at, and it can be used for behavioral modeling of the user. Given a rough partition of the display space, we are able to extract gaze ethograms that allow discrimination of three common user activities: reading a text, viewing a video clip, and writing a text. A gaze tracking system is used to build the gaze ethogram, and user activity is modeled by a classifier of gaze ethograms that can recognize the activity after training. Conventional commercial gaze trackers for research in neuroscience and psychology are expensive and intrusive, and sometimes require wearing uncomfortable appliances. For the purposes of our behavioral research, we have developed an open-source gaze tracking system that runs on conventional laptop computers using their low-quality cameras. Some elements of the gaze tracking pipeline have been borrowed from the open-source community; however, we have developed innovative solutions to some of the key issues that arise in the gaze tracker. Specifically, we have proposed texture-based eye features that are quite robust to low-quality images. These features are the input to a classifier that predicts the screen target area the user is looking at. We report comparative results for several classifier architectures, evaluated in order to select the classifier used to extract the gaze ethograms for our behavioral research, and we perform another classifier selection at the level of ethogram classification. Finally, we report encouraging results from user activity recognition experiments carried out on an in-house dataset.
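As a minimal illustration of how a gaze ethogram can be turned into classifier input, the sketch below encodes the sequence of fixated screen regions as normalized region-to-region transition frequencies; this feature choice is an assumption for the example, not necessarily the one used by the authors:

```python
import numpy as np

N_REGIONS = 9  # e.g., a 3x3 partition of the display (illustrative)

def ethogram_features(region_seq, n_regions=N_REGIONS):
    """Convert a gaze ethogram (a sequence of region indices) into a
    flattened matrix of normalized region-to-region transitions."""
    counts = np.zeros((n_regions, n_regions))
    for a, b in zip(region_seq[:-1], region_seq[1:]):
        counts[a, b] += 1
    total = counts.sum()
    return (counts / total).ravel() if total else counts.ravel()
```

The resulting vector can then be fed to any standard classifier, such as an SVM or a small neural network.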


2012 ◽  
Vol 263-266 ◽  
pp. 2399-2402
Author(s):  
Chi Wu Huang ◽  
Zong Sian Jiang ◽  
Wei Fan Kao ◽  
Yen Lin Huang

This paper presents the development of a low-cost eye-tracking system built by modifying a commercial off-the-shelf camera and integrating it with properly tuned open-source drivers and user-defined application programs. The system configuration is proposed, and gaze tracking approximated by least-squares polynomial mapping is described. Comparisons with other low-cost systems as well as with a commercial system are provided. Our system achieved the highest image capture rate, 180 frames per second, and the ISO 9241-Part 9 test favored our system in terms of response time and correct response rate. We are currently working on an application for evaluating gaze-tracking accuracy; real-time gaze tracking and head movement estimation remain issues for future work.
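For reference, ISO 9241-Part 9 point-and-select evaluations of this kind are commonly summarized, alongside response time and correct response rate, by Fitts'-law throughput:

```latex
% Throughput TP: effective index of difficulty ID_e over movement
% time MT, with target distance D and effective target width W_e.
TP = \frac{ID_e}{MT}, \qquad ID_e = \log_2\!\left(\frac{D}{W_e} + 1\right)
```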


2010 ◽  
Vol 36 (8) ◽  
pp. 1051-1061 ◽  
Author(s):  
Chuang ZHANG ◽  
Jian-Nan CHI ◽  
Zhao-Hui ZHANG ◽  
Zhi-Liang WANG

2021 ◽  
Vol 11 (2) ◽  
pp. 851
Author(s):  
Wei-Liang Ou ◽  
Tzu-Ling Kuo ◽  
Chin-Chieh Chang ◽  
Chih-Peng Fan

In this study, a pupil tracking methodology based on deep learning is developed for visible-light wearable eye trackers. By applying deep-learning object detection based on the You Only Look Once (YOLO) model, the proposed method can effectively estimate and predict the center of the pupil in visible light. Using the developed YOLOv3-tiny-based model, the detection accuracy reaches 80%, and the recall rate is close to 83%. In addition, the average visible-light pupil tracking errors of the proposed YOLO-based design are smaller than 2 pixels in the training mode and 5 pixels in the cross-person test, much smaller than those of a previous ellipse-fitting design without deep learning under the same visible-light conditions. When combined with the calibration process, the average gaze tracking errors of the proposed YOLOv3-tiny-based pupil tracking models are smaller than 2.9 and 3.5 degrees in the training and testing modes, respectively, and the proposed visible-light wearable gaze tracking system runs at up to 20 frames per second (FPS) on a GPU-based embedded software platform.
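A minimal sketch of the pupil-center step with a YOLOv3-tiny detector is shown below, using OpenCV's DNN module. The model file names are placeholders, and the post-processing is simplified to picking the single highest-confidence detection:

```python
import cv2
import numpy as np

# Placeholder file names for a YOLOv3-tiny model trained on pupil images.
net = cv2.dnn.readNetFromDarknet("yolov3-tiny-pupil.cfg",
                                 "yolov3-tiny-pupil.weights")

def pupil_center(eye_img, conf_thresh=0.5):
    """Return the center of the highest-confidence pupil detection,
    or None if nothing clears the confidence threshold."""
    h, w = eye_img.shape[:2]
    blob = cv2.dnn.blobFromImage(eye_img, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    outs = net.forward(net.getUnconnectedOutLayersNames())
    best, best_conf = None, conf_thresh
    for out in outs:
        for det in out:  # det = [cx, cy, bw, bh, objectness, class scores...]
            conf = det[4] * det[5:].max()
            if conf > best_conf:
                best_conf = conf
                best = (det[0] * w, det[1] * h)  # box center in pixels
    return best
```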

