3D Eye Model-Based Gaze Tracking System with a Consumer Depth Camera

Author(s): Liming Xu, Jiannan Chi


1997 · Vol 06 (02) · pp. 193-209
Author(s): Rainer Stiefelhagen, Jie Yang, Alex Waibel

In this paper we present a non-intrusive model-based gaze tracking system. The system estimates the 3-D pose of a user's head by tracking as few as six facial feature points. It first locates the face using a statistical color model and then finds and tracks facial features such as the eyes, nostrils and lip corners. A full perspective model is employed to map these feature points onto the 3-D pose. Several techniques have been developed to track the feature points and to recover from tracking failures. We currently achieve a frame rate of more than 15 frames per second using an HP 9000 workstation with a framegrabber and a Canon VC-C1 camera. The system has been demonstrated with a gaze-driven panorama image viewer. Potential applications include multimodal interfaces, virtual reality and video-teleconferencing.
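Mapping a handful of tracked 2-D feature points onto a 3-D head pose with a full perspective model is, in modern terms, a perspective-n-point (PnP) problem. The following minimal sketch shows one way such a mapping could be set up with OpenCV; the 3-D head-model coordinates, focal length, and the zero-distortion assumption are illustrative and not taken from the paper.

    # Minimal sketch: recovering the 3-D head pose from six tracked 2-D facial
    # feature points with a full perspective (PnP) model. The 3-D model
    # coordinates (mm) and camera intrinsics are illustrative assumptions.
    import numpy as np
    import cv2

    MODEL_POINTS = np.array([           # generic rigid head model
        [-30.0,  35.0,  0.0],           # left eye
        [ 30.0,  35.0,  0.0],           # right eye
        [-12.0,   0.0, 10.0],           # left nostril
        [ 12.0,   0.0, 10.0],           # right nostril
        [-25.0, -30.0,  5.0],           # left lip corner
        [ 25.0, -30.0,  5.0],           # right lip corner
    ], dtype=np.float64)

    def estimate_head_pose(image_points, focal_length, image_size):
        """image_points: 6x2 float array of tracked feature locations (pixels)."""
        cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
        camera_matrix = np.array([[focal_length, 0.0, cx],
                                  [0.0, focal_length, cy],
                                  [0.0, 0.0, 1.0]])
        dist_coeffs = np.zeros(4)       # assume negligible lens distortion
        ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                      camera_matrix, dist_coeffs,
                                      flags=cv2.SOLVEPNP_ITERATIVE)
        R, _ = cv2.Rodrigues(rvec)      # head rotation matrix
        return ok, R, tvec              # head pose in the camera frame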


2018 · Vol 8 (10) · pp. 1769
Author(s): Zijing Wan, Xiangjun Wang, Lei Yin, Kai Zhou

This paper proposes a 3D point-of-regard estimation method based on a 3D eye model, together with a corresponding head-mounted gaze tracking device. Firstly, the head-mounted gaze tracking system is described: the device uses two pairs of stereo cameras to capture the left and right eye images, respectively, and a pair of scene cameras to capture the scene images. Secondly, a 3D eye model and its calibration process are established, and common eye features are used to estimate the eye model parameters. Thirdly, a 3D point-of-regard estimation algorithm is proposed. Its three main parts are summarized as follows: (1) the spatial coordinates of the eye features are calculated directly from the stereo cameras; (2) the pupil center normal is used as the initial value for estimating the optical axis; (3) the pair of scene cameras is used to determine the actual positions of the objects being watched during the calibration process, so that calibration of the proposed eye model does not require the assistance of a light source. Experimental results show that the proposed method outputs the coordinates of the 3D point-of-regard more accurately. A sketch of the first two steps follows.
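The sketch below illustrates, under stated assumptions, how a calibrated stereo pair could triangulate an eye feature in 3-D and how an initial optical-axis direction could be formed from two such points. The projection matrices are assumed to come from a prior stereo calibration, and modelling the optical axis as the ray from an estimated eyeball center to the pupil center is a common simplification, not the authors' exact formulation.

    # Sketch: triangulating an eye feature (e.g., the pupil center) from one
    # calibrated stereo pair, and forming an initial optical-axis direction.
    # P_left / P_right are 3x4 projection matrices assumed to come from a
    # prior stereo calibration; all names and values are illustrative.
    import numpy as np
    import cv2

    def triangulate_feature(P_left, P_right, pt_left, pt_right):
        """pt_left / pt_right: (u, v) pixel coordinates of the same feature."""
        pl = np.asarray(pt_left, dtype=np.float64).reshape(2, 1)
        pr = np.asarray(pt_right, dtype=np.float64).reshape(2, 1)
        X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)   # homogeneous 4x1
        return (X_h[:3] / X_h[3]).ravel()                      # 3-D point

    def initial_optical_axis(eyeball_center_3d, pupil_center_3d):
        """Unit vector from an estimated eyeball center to the pupil center,
        used here as the initial estimate of the optical axis (assumption)."""
        d = np.asarray(pupil_center_3d) - np.asarray(eyeball_center_3d)
        return d / np.linalg.norm(d)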


2010 · Vol 36 (8) · pp. 1051-1061
Author(s): Chuang ZHANG, Jian-Nan CHI, Zhao-Hui ZHANG, Zhi-Liang WANG

2021 · Vol 11 (2) · pp. 851
Author(s): Wei-Liang Ou, Tzu-Ling Kuo, Chin-Chieh Chang, Chih-Peng Fan

In this study, a pupil tracking methodology based on deep learning is developed for visible-light wearable eye trackers. By applying deep-learning object detection based on the You Only Look Once (YOLO) model, the proposed method effectively estimates and predicts the pupil center under visible-light conditions. When the developed YOLOv3-tiny-based model is used to test pupil tracking performance, the detection accuracy reaches 80% and the recall rate is close to 83%. In addition, the average visible-light pupil tracking errors of the proposed YOLO-based design are smaller than 2 pixels in the training mode and 5 pixels in the cross-person test, which are much smaller than those of a previous ellipse-fitting design that does not use deep learning under the same visible-light conditions. After combination with the calibration process, the average gaze tracking errors of the proposed YOLOv3-tiny-based pupil tracking models are smaller than 2.9 and 3.5 degrees in the training and testing modes, respectively, and the proposed visible-light wearable gaze tracking system runs at up to 20 frames per second (FPS) on a GPU-based embedded software platform.
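A minimal sketch of one step in such a pipeline, turning YOLO-style detections into a pupil-center estimate, is given below. The detection format (x, y, w, h, confidence) and the confidence threshold are assumptions for illustration; the detector itself (e.g., a YOLOv3-tiny network) and the gaze calibration stage are not shown.

    # Sketch: deriving a pupil-center estimate from YOLO-style detections.
    # Each detection is assumed to be (x, y, w, h, confidence) in pixels,
    # with (x, y) the top-left corner of the bounding box.
    def pupil_center_from_detections(detections, min_conf=0.5):
        """Return (cx, cy) of the highest-confidence pupil box, or None."""
        best = None
        for (x, y, w, h, conf) in detections:
            if conf >= min_conf and (best is None or conf > best[4]):
                best = (x, y, w, h, conf)
        if best is None:
            return None                       # no pupil detected in this frame
        x, y, w, h, _ = best
        return (x + w / 2.0, y + h / 2.0)     # box center approximates the pupil center

    # Example: two candidate boxes; the higher-confidence one wins.
    print(pupil_center_from_detections([(100, 80, 30, 28, 0.62),
                                        (104, 82, 26, 24, 0.91)]))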


2009 · Vol 30 (12) · pp. 1144-1150
Author(s): Diego Torricelli, Michela Goffredo, Silvia Conforto, Maurizio Schmid

2020 · Vol 83 (1) · pp. 23
Author(s): Byoung Keon D. Park, Jian Wan, Ksenia Kozak, Matthew P. Reed
