Gaze Tracking Using an Unmodified Web Camera and Convolutional Neural Network

2021 ◽  
Vol 11 (19) ◽  
pp. 9068
Author(s):  
Mohd Faizan Ansari ◽  
Pawel Kasprowski ◽  
Marcin Obetkal

Gaze estimation plays a significant role in understanding human behavior and in human–computer interaction. Many methods are currently available for gaze estimation; however, most require additional hardware for data acquisition, which adds cost to gaze tracking. Classic gaze tracking approaches usually require systematic prior knowledge or expertise for practical operation. Moreover, they are fundamentally based on characteristics of the eye region, using infrared light and iris glints to track the gaze point, which requires high-quality images captured under particular environmental conditions with an additional light source. Recent studies on appearance-based gaze estimation have demonstrated the capability of neural networks, especially convolutional neural networks (CNNs), to decode gaze information present in eye images, achieving significantly simplified gaze estimation. In this paper, a gaze estimation method that uses a CNN and can be applied to various platforms without additional hardware is presented. An easy and fast data collection method is used to collect face and eye images from an unmodified desktop camera. The proposed method achieved good results, demonstrating that it is possible to predict the gaze with reasonable accuracy without any additional tools.
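The "easy and fast data collection" step in such a webcam pipeline typically has the user fixate a grid of on-screen targets while frames are captured. The paper does not publish its schedule, so the following is a minimal illustrative sketch (all names and the 4×4 grid are assumptions, not taken from the paper):

```python
# Hypothetical calibration-target schedule for collecting labeled face/eye
# frames from an unmodified webcam: the user fixates each target in turn.
def calibration_targets(screen_w, screen_h, cols=4, rows=4, margin=0.05):
    """Return an evenly spaced grid of (x, y) gaze targets in pixels,
    inset from the screen edges by a fractional margin."""
    xs = [margin * screen_w + i * (1 - 2 * margin) * screen_w / (cols - 1)
          for i in range(cols)]
    ys = [margin * screen_h + j * (1 - 2 * margin) * screen_h / (rows - 1)
          for j in range(rows)]
    return [(round(x), round(y)) for y in ys for x in xs]

targets = calibration_targets(1920, 1080)  # 16 targets on a Full HD screen
```

Each frame captured while the user fixates a target is then labeled with that target's coordinates and used as a CNN training sample.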

Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2292 ◽  
Author(s):  
Zijing Wan ◽  
Xiangjun Wang ◽  
Kai Zhou ◽  
Xiaoyun Chen ◽  
Xiaoqing Wang

In this paper, a novel 3D gaze estimation method for a wearable gaze tracking device is proposed. The method is based on the pupillary accommodation reflex of human vision. Firstly, a 3D gaze measurement model is built. By combining the line-of-sight convergence point with the size of the pupil, this model can measure the 3D Point-of-Regard in free space. Secondly, a gaze tracking device is described. Using four cameras and semi-transparent mirrors, the device can accurately extract the spatial coordinates of the pupil and eye corner from images. Thirdly, a simple calibration process for the measuring system is proposed. It can be sketched as follows: (1) each eye is imaged by a pair of binocular stereo cameras, and the arrangement of semi-transparent mirrors provides a wider field of view; (2) the spatial coordinates of the pupil center and the inner eye corner are extracted from the stereo camera images, and the pupil size is calculated from the features used by the gaze estimation method; (3) the pupil size and the line-of-sight convergence point when watching the calibration target at different distances are computed, and the parameters of the gaze estimation model are determined. Fourthly, an algorithm for searching for the line-of-sight convergence point is proposed, and the 3D Point-of-Regard is estimated using the obtained line-of-sight measurement model. Three groups of experiments were conducted to prove the effectiveness of the proposed method. This approach enables people to obtain the spatial coordinates of the Point-of-Regard in free space, which has great potential for wearable device applications.
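The core geometric step, finding where the two lines of sight converge, is the classic closest-point-between-two-rays problem: since the two measured rays rarely intersect exactly, one takes the midpoint of their shortest connecting segment. This is a minimal sketch of that computation, not the authors' search algorithm:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def convergence_point(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two gaze rays p_i + t_i * d_i.
    p1, p2: eye positions; d1, d2: gaze direction vectors (3D)."""
    w = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel lines of sight
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = [p + t1 * u for p, u in zip(p1, d1)]  # closest point on ray 1
    q2 = [p + t2 * u for p, u in zip(p2, d2)]  # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(q1, q2)]
```

With both eyes aimed at a common target, the returned midpoint recovers the 3D Point-of-Regard; the convergence distance also correlates with the pupil-size cue the model exploits.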


2020 ◽  
Vol 10 (24) ◽  
pp. 9079
Author(s):  
Kaiqing Luo ◽  
Xuan Jia ◽  
Hua Xiao ◽  
Dongmei Liu ◽  
Li Peng ◽  
...  

In recent years, the gaze estimation system, as a new type of human-computer interaction technology, has received extensive attention. The gaze estimation model is one of the main research topics for such systems, and its quality directly affects the accuracy of the entire system. To achieve higher accuracy even with simple devices, this paper proposes an improved mapping equation model based on homography transformation. In the experiments, the model uses Zhang's calibration method to obtain the internal and external parameters of the camera and correct its distortion, and uses the Levenberg–Marquardt (LM) algorithm to solve for the unknown parameters of the mapping equation. Once all parameters are determined, the gaze point is calculated. Several comparative experiments were designed to verify the accuracy and fitting quality of this mapping equation. The results show that the method achieves high accuracy, with the basic accuracy kept within 0.6°. Overall, the mapping method based on homography transformation shows higher accuracy, a better fitting effect, and stronger stability.
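To illustrate the homography-based mapping idea, the sketch below fits a 3×3 homography from calibration correspondences with a linear direct linear transform (DLT) and applies it to a new point. This is a simplified stand-in: the paper solves its mapping equation with the nonlinear LM algorithm, and all names here are illustrative.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography H mapping src -> dst points (DLT, h33 = 1).
    Needs at least 4 point correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def map_point(H, p):
    """Apply the homography to a 2D point (homogeneous normalization)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return (q[0] / q[2], q[1] / q[2])
```

In a gaze setting, `src` would be eye-feature coordinates recorded while fixating known calibration targets (`dst` on the screen); `map_point` then maps new eye features to an estimated gaze point.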


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3650 ◽  
Author(s):  
Muhammad Syaiful Amri bin Suhaimi ◽  
Kojiro Matsushita ◽  
Minoru Sasaki ◽  
Waweru Njeri

This paper sought to improve the precision of the Alternating Current Electro-Oculography (AC-EOG) gaze estimation method. The method consists of two core techniques: estimating eyeball movement from EOG signals, and converting the eyeball movement signals to a gaze position. In conventional research, the estimation is computed from two EOG signals corresponding to vertical and horizontal movements. The conversion is based on an affine transformation whose parameters are computed from 24-point gazing data at calibration. However, the transformation is not applied to all 24 points at once but to four spatially separated subsets (the quadrant method), and each subset's result has different characteristics. Thus, we proposed a conversion method that handles the 24-point gazing data simultaneously: assume an imaginary center (i.e., a 25th point) on the gaze coordinates derived from the 24-point data, and apply a single affine transformation to all 24 points. We then conducted a comparative investigation between the conventional and the proposed method. The average eye angle error for the cross-shaped electrode attachment is x = 2.27° ± 0.46° and y = 1.83° ± 0.34°. In contrast, for the plus-shaped electrode attachment, the average eye angle error is x = 0.94° ± 0.19° and y = 1.48° ± 0.27°. We concluded that the proposed method offers simpler and more precise EOG gaze estimation than the conventional method.
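The signal-to-gaze conversion step amounts to fitting a 2D affine transformation from EOG-derived coordinates to screen gaze coordinates over the calibration points. A minimal least-squares sketch of that fit (illustrative names; not the authors' quadrant or imaginary-center variants):

```python
import numpy as np

def fit_affine(signal_pts, gaze_pts):
    """Least-squares affine map: [sx, sy, 1] @ M -> (gx, gy),
    fitted from calibration pairs (EOG-derived point, known gaze target)."""
    S = np.column_stack([np.asarray(signal_pts, float),
                         np.ones(len(signal_pts))])
    G = np.asarray(gaze_pts, float)
    M, *_ = np.linalg.lstsq(S, G, rcond=None)
    return M  # 3x2 parameter matrix

def apply_affine(M, p):
    """Convert an EOG-derived point to an estimated gaze position."""
    return tuple(np.array([p[0], p[1], 1.0]) @ M)
```

The proposed method's improvement lies in fitting one such transformation to all 24 calibration points (plus the assumed imaginary center) rather than fitting four separate ones per quadrant.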


2006 ◽  
Vol 5 (3) ◽  
pp. 41-45 ◽  
Author(s):  
Yong-Moo Kwon ◽  
Kyeong-Won Jeon ◽  
Jeongseok Ki ◽  
Qonita M. Shahab ◽  
Sangwoo Jo ◽  
...  

Several studies have addressed 2D gaze tracking on 2D screens for human-computer interaction. However, gaze-based interaction with stereo images or 3D content has not been reported. Stereo display techniques are now emerging for reality services, and 3D interaction techniques are needed in 3D content service environments. This paper presents a 3D gaze estimation technique and its application to gaze-based interaction on a parallax barrier stereo display.


2013 ◽  
Vol 655-657 ◽  
pp. 1066-1076 ◽  
Author(s):  
Bo Zhu ◽  
Peng Yun Zhang ◽  
Jian Nan Chi ◽  
Tian Xia Zhang

A new gaze tracking method for a single-camera gaze tracking system is proposed. The method comprises face and eye localization, eye feature detection with gaze parameter extraction, and ELM-based gaze point estimation. For face and eye localization, a detection method combining a skin color model with the Adaboost method enables fast face detection. For feature and parameter extraction, several image processing methods detect eye features such as the iris center and the inner eye corner; the gaze parameter is then the vector from the iris center to the eye corner. Finally, an ELM-based method estimates the gaze point on the screen by establishing the mapping between the gaze parameter and the gaze point. The experimental results show that the method performs gaze estimation effectively in a single-camera gaze tracking system.
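An extreme learning machine (ELM) maps inputs through a fixed random hidden layer and solves only the output weights by least squares, which makes the gaze-parameter-to-screen mapping fast to train. The following is a generic ELM sketch with synthetic data, not the authors' trained model; all dimensions and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, hidden=30):
    """ELM training: random input weights, tanh hidden layer,
    output weights solved in closed form by least squares."""
    W = rng.standard_normal((X.shape[1], hidden))  # fixed random weights
    b = rng.standard_normal(hidden)                # fixed random biases
    H = np.tanh(X @ W + b)                         # hidden activations
    B, *_ = np.linalg.lstsq(H, T, rcond=None)      # only B is learned
    return W, b, B

def elm_predict(X, W, b, B):
    return np.tanh(X @ W + b) @ B

# Synthetic calibration data: iris-center-to-eye-corner vectors mapped to
# screen coordinates by an assumed (illustrative) linear relation.
X = rng.uniform(-1.0, 1.0, (20, 2))
T = X @ np.array([[100.0, 0.0], [0.0, 80.0]]) + np.array([960.0, 540.0])
W, b, B = elm_train(X, T)
```

Because only the output layer is solved (one least-squares problem), calibration for a new user is nearly instantaneous compared with iteratively trained networks.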


2012 ◽  
Vol 24 (03) ◽  
pp. 217-227 ◽  
Author(s):  
Xiao-Hui Yang ◽  
Jian-De Sun ◽  
Ju Liu ◽  
Xin-Chao Li ◽  
Cai-Xia Yang ◽  
...  

Gaze tracking has drawn increasing attention and is applied widely in areas such as aids for the disabled and medical diagnosis. In this paper, a remote gaze tracking system is proposed. The system is video-based, with video captured under the illumination of near-infrared light sources. Only one camera is employed, which keeps the equipment portable for users. The corneal glints and the pupil center, whose extraction accuracy determines the performance of the system, are obtained from the gray-level distribution of the video frame. The positions of the points on the screen that the user is fixating are then estimated by a gaze tracking algorithm based on the cross-ratio invariant. Additionally, a calibration procedure is needed to eliminate the error produced by the deviation between the optical and visual axes. The proposed remote gaze tracking system has low computational complexity and high robustness; experimental results indicate that it is tolerant of head movement and still works well for users wearing glasses. The angular error of the system is 0.40 degrees for subjects without glasses and 0.48 degrees for subjects with glasses, which is comparable to most existing commercial systems and promising for most potential practical applications.
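The cross-ratio method rests on the fact that the cross-ratio of four collinear points is preserved under any projective transformation, so a ratio measured among glints in the image equals the corresponding ratio on the screen. A minimal demonstration of the invariant itself (the projective map below is arbitrary, chosen only for illustration):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given by
    1-D coordinates along their common line."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def proj(x):
    """An arbitrary 1-D projective transform, for demonstration only."""
    return (2 * x + 1) / (x + 3)
```

In the full system, the four corneal glints (reflections of near-infrared sources at known screen positions) and the pupil center supply the point configuration whose invariant ratios let the fixated screen point be recovered without knowing the camera pose.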


2019 ◽  
Vol 6 ◽  
pp. 176-191
Author(s):  
David Gil de Gómez Pérez ◽  
Roman Bednarik

Pupil center and pupil contour are two of the most important features in the eye images used for video-based eye tracking. Well-annotated databases are needed to allow benchmarking of available and new pupil detection and gaze estimation algorithms. Unfortunately, creating such a data set is costly and labor-intensive, requiring substantial manual work from annotators. In addition, the reliability of manual annotations is hard to establish with a low number of annotators. To facilitate progress in gaze tracking algorithm research, we created an online pupil annotation tool that engages many users through gamification and harnesses the power of the crowd to create reliable annotations. We describe the tool and the mechanisms employed, and report results on the annotation of a publicly available data set. Finally, we demonstrate an example use of the new high-quality annotations in a comparison of two state-of-the-art pupil center algorithms.
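Turning many annotators' clicks into one reliable label requires a robust aggregation rule; a coordinate-wise median is a common choice because a few careless clicks do not shift it. This is a generic sketch of that idea, not the aggregation the tool actually uses:

```python
from statistics import median

def aggregate_clicks(clicks):
    """Robust pupil-center estimate from many annotators' (x, y) clicks:
    the coordinate-wise median resists outlier annotations."""
    xs, ys = zip(*clicks)
    return (median(xs), median(ys))
```

For pupil contours, a similar per-vertex or distance-based consensus can be applied across annotators.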


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1917
Author(s):  
Ko-Feng Lee ◽  
Yen-Lin Chen ◽  
Chao-Wei Yu ◽  
Kai-Yi Chin ◽  
Chen-Han Wu

In this study, a head-mounted device was developed to track the gaze of the eyes and estimate the gaze point on the user’s visual plane. To provide a cost-effective vision tracking solution, this head-mounted device combines a small endoscope camera, infrared light, and a mobile phone; the parts are also fabricated via 3D printing to reduce costs. Based on the proposed image pre-processing techniques, the system can efficiently extract and estimate the pupil ellipse from the camera module. A 3D eye model was also developed to effectively locate eye gaze points from the extracted eye images. In the experimental results, the average accuracy, precision, and recall rates of the proposed system all exceed 97%, demonstrating its efficiency. This study can be widely applied in the Internet of Things, virtual reality, assistive devices, and human-computer interaction applications.
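Pupil extraction pipelines of this kind usually begin by isolating the dark pupil blob before fitting an ellipse to its boundary. The sketch below shows only that first, simplest step (a threshold-and-centroid estimate on a synthetic image); the paper's actual pre-processing and ellipse fitting are more elaborate, and the threshold value is an assumption.

```python
import numpy as np

def pupil_centroid(gray, thresh=50):
    """Rough pupil localization: centroid of pixels darker than `thresh`.
    A real pipeline would refine this with ellipse fitting on the blob."""
    ys, xs = np.nonzero(gray < thresh)
    if xs.size == 0:
        return None  # no sufficiently dark region found
    return float(xs.mean()), float(ys.mean())

# Synthetic IR eye image: bright background with a dark pupil disk.
img = np.full((100, 100), 255, dtype=np.uint8)
yy, xx = np.mgrid[0:100, 0:100]
img[(xx - 40) ** 2 + (yy - 60) ** 2 <= 64] = 10
```

The centroid then seeds the ellipse fit whose parameters feed the 3D eye model that maps pupil pose to a gaze point.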

