A Novel Method for Estimating Free Space 3D Point-of-Regard Using Pupillary Reflex and Line-of-Sight Convergence Points

Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2292 ◽  
Author(s):  
Zijing Wan ◽  
Xiangjun Wang ◽  
Kai Zhou ◽  
Xiaoyun Chen ◽  
Xiaoqing Wang

In this paper, a novel 3D gaze estimation method for a wearable gaze tracking device is proposed. The method is based on the pupillary accommodation reflex of human vision. Firstly, a 3D gaze measurement model is built. By combining the line-of-sight convergence point with the pupil size, this model can measure the 3D Point-of-Regard in free space. Secondly, a gaze tracking device is described. Using four cameras and semi-transparent mirrors, the device can accurately extract the spatial coordinates of the pupil and the eye corner from images. Thirdly, a simple calibration process for the measuring system is proposed. The method can be sketched as follows: (1) each eye is imaged by a pair of binocular stereo cameras, and the arrangement of semi-transparent mirrors provides a wider field of view; (2) the spatial coordinates of the pupil center and the inner eye corner are extracted from the stereo camera images, and the pupil size is calculated from these features; (3) the pupil size and the line-of-sight convergence point are computed while the subject watches a calibration target at different distances, and the parameters of the gaze estimation model are determined from them. Fourthly, an algorithm for searching for the line-of-sight convergence point is proposed, and the 3D Point-of-Regard is estimated using the obtained line-of-sight measurement model. Three groups of experiments were conducted to demonstrate the effectiveness of the proposed method. The approach makes it possible to obtain the spatial coordinates of the Point-of-Regard in free space, which has great potential for wearable-device applications.
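The line-of-sight convergence point is where the two eyes' visual axes (nearly) intersect in space. As a minimal sketch of that geometric step, assuming each gaze ray is given by an eye-center origin and a direction vector (the paper's own search algorithm is not reproduced here), the midpoint of the shortest segment between the two rays can serve as the convergence point:

```python
import numpy as np

def convergence_point(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two 3D gaze rays.

    o1, o2: 3D origins of the left/right lines of sight.
    d1, d2: 3D direction vectors of the two rays.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # ~0 when the rays are (near-)parallel
    if abs(denom) < 1e-9:
        raise ValueError("gaze rays are nearly parallel; no convergence point")
    s = (b * e - c * d) / denom    # parameter along ray 1
    t = (a * e - b * d) / denom    # parameter along ray 2
    p1, p2 = o1 + s * d1, o2 + t * d2
    return (p1 + p2) / 2.0
```

Because the two measured rays rarely intersect exactly, taking the midpoint of the common perpendicular is the usual least-squares choice for the convergence point.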

2018 ◽  
Vol 8 (10) ◽  
pp. 1769
Author(s):  
Zijing Wan ◽  
Xiangjun Wang ◽  
Lei Yin ◽  
Kai Zhou

This paper proposes a 3D point-of-regard estimation method based on a 3D eye model, together with a corresponding head-mounted gaze tracking device. Firstly, a head-mounted gaze tracking system is presented. The device uses two pairs of stereo cameras to capture the left and right eye images, respectively, plus a pair of scene cameras to capture the scene images. Secondly, a 3D eye model and its calibration process are established, with common eye features used to estimate the eye model parameters. Thirdly, a 3D point-of-regard estimation algorithm is proposed. Its three main parts can be summarized as follows: (1) the spatial coordinates of the eye features are calculated directly using the stereo cameras; (2) the pupil center normal is used as the initial value for estimating the optical axis; (3) the pair of scene cameras is used to determine the actual positions of the objects being watched during calibration, so the calibration of the proposed eye model does not require the assistance of a light source. Experimental results show that the proposed method outputs the coordinates of the 3D point-of-regard with improved accuracy.
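As an illustration of step (1), the following is a hedged sketch of triangulating one eye feature (e.g., the pupil center) from a stereo pair with OpenCV. The projection matrices are placeholders standing in for a real stereo calibration, not the authors' device parameters:

```python
import cv2
import numpy as np

# Placeholder 3x4 projection matrices that a prior stereo calibration
# (e.g., cv2.stereoCalibrate) would normally provide; ~60 mm baseline assumed.
P_left = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float64)
R, t = np.eye(3), np.array([[-60.0], [0.0], [0.0]])
P_right = np.hstack([R, t]).astype(np.float64)

def triangulate_feature(pt_left, pt_right):
    """3D position of one eye feature from its two image observations."""
    pl = np.asarray(pt_left, dtype=np.float64).reshape(2, 1)
    pr = np.asarray(pt_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)  # 4x1 homogeneous
    return (X_h[:3] / X_h[3]).ravel()                     # Euclidean (x, y, z)
```

With real calibrated matrices, the same call yields the metric coordinates of the pupil center and eye corner used downstream by the eye model.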


2021 ◽  
Vol 11 (19) ◽  
pp. 9068
Author(s):  
Mohd Faizan Ansari ◽  
Pawel Kasprowski ◽  
Marcin Obetkal

Gaze estimation plays a significant role in understanding human behavior and in human–computer interaction. Many methods are currently available for gaze estimation. However, most approaches need additional hardware for data acquisition, which adds extra cost to gaze tracking. Classic gaze tracking approaches usually require systematic prior knowledge or expertise for practical operation. Moreover, they are fundamentally based on the characteristics of the eye region, utilizing infrared light and the iris glint to track the gaze point, which requires high-quality images captured under particular environmental conditions and an additional light source. Recent studies on appearance-based gaze estimation have demonstrated the capability of neural networks, especially convolutional neural networks (CNNs), to decode the gaze information present in eye images, achieving significantly simplified gaze estimation. In this paper, a CNN-based gaze estimation method that can be applied to various platforms without additional hardware is presented. An easy and fast data collection method is used to collect face and eye images from an unmodified desktop camera. The proposed method achieved good results, showing that it is possible to predict the gaze with reasonable accuracy without any additional tools.
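As a hedged illustration of the appearance-based approach, here is a minimal CNN regressor in PyTorch that maps a grayscale eye crop to a 2D gaze point. The architecture and the 36×60 input size are assumptions for illustration, not the network used in the paper:

```python
import torch
import torch.nn as nn

class GazeCNN(nn.Module):
    """Minimal appearance-based gaze regressor: eye image -> (x, y) gaze point."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 4 * 4, 128), nn.ReLU(), nn.Linear(128, 2)
        )

    def forward(self, x):  # x: (N, 1, H, W) grayscale eye crops
        return self.head(self.features(x))

model = GazeCNN()
pred = model(torch.randn(8, 1, 36, 60))              # batch of 36x60 eye patches
loss = nn.functional.mse_loss(pred, torch.zeros(8, 2))  # regression to gaze targets
```

Training such a regressor only needs (eye image, gaze target) pairs collected from an ordinary webcam, which is what makes the approach hardware-free.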


2020 ◽  
Vol 10 (24) ◽  
pp. 9079
Author(s):  
Kaiqing Luo ◽  
Xuan Jia ◽  
Hua Xiao ◽  
Dongmei Liu ◽  
Li Peng ◽  
...  

In recent years, gaze estimation systems, as a new type of human-computer interaction technology, have received extensive attention. The gaze estimation model is one of the main research topics of such a system, and its quality directly affects the accuracy of the entire system. To achieve higher accuracy even with simple devices, this paper proposes an improved mapping equation model based on homography transformation. In the experiments, the model uses Zhang's calibration method to obtain the internal and external parameters of the camera and correct its distortion, and uses the Levenberg-Marquardt (LM) algorithm to solve for the unknown parameters of the mapping equation. Once all parameters of the equation are determined, the gaze point is calculated. Several comparative experiments were designed to verify the accuracy and fitting quality of this mapping equation. The results show that the method achieves high accuracy, with a basic accuracy within 0.6°. The overall trend shows that the mapping method based on homography transformation has higher accuracy, a better fitting effect, and stronger stability.
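As a sketch of the parameter-solving step, the following fits a 2D homography from eye-feature coordinates to screen points with SciPy's Levenberg-Marquardt solver. The plain 8-parameter homography and identity initialization are illustrative assumptions; the paper's improved mapping equation may include additional correction terms:

```python
import numpy as np
from scipy.optimize import least_squares

def apply_h(h, pts):
    """Map 2D points through a homography parameterized by 8 values (h33 = 1)."""
    H = np.append(h, 1.0).reshape(3, 3)
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

def fit_mapping(eye_pts, screen_pts):
    """LM fit of the eye-feature -> screen-point homography.

    eye_pts, screen_pts: (N, 2) corresponding calibration points, N >= 4
    (e.g., a 3x3 grid of nine on-screen targets).
    """
    resid = lambda h: (apply_h(h, eye_pts) - screen_pts).ravel()
    h0 = np.array([1, 0, 0, 0, 1, 0, 0, 0], float)  # start from the identity map
    return least_squares(resid, h0, method="lm").x
```

After the fit, `apply_h(h, new_eye_pts)` yields the estimated gaze points; camera intrinsics and distortion would be handled beforehand with a Zhang-style calibration (e.g., `cv2.calibrateCamera`).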


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3650 ◽  
Author(s):  
Muhammad Syaiful Amri bin Suhaimi ◽  
Kojiro Matsushita ◽  
Minoru Sasaki ◽  
Waweru Njeri

This paper sought to improve the precision of the alternating current electro-oculography (AC-EOG) gaze estimation method. The method consists of two core techniques: estimating eyeball movement from EOG signals, and converting the eyeball movement signals to the gaze position. In conventional research, the estimation is computed with two EOG signals corresponding to vertical and horizontal movements, and the conversion is based on an affine transformation whose parameters are computed from 24-point gazing data at calibration. However, the transformation is not applied to all 24 points at once, but to four spatially separated subsets (the quadrant method), and each result has different characteristics. Thus, we proposed a conversion method that handles the 24-point gazing data at the same time: an imaginary center (i.e., a 25th point) on the gaze coordinates is assumed from the 24-point gazing data, and a single affine transformation is applied to all 24 points. We then conducted a comparative investigation between the conventional method and the proposed method. From the results, the average eye angle error for the cross-shaped electrode attachment is x = 2.27° ± 0.46° and y = 1.83° ± 0.34°. In contrast, for the plus-shaped electrode attachment, the average eye angle error is x = 0.94° ± 0.19° and y = 1.48° ± 0.27°. We conclude that the proposed method offers simpler and more precise EOG gaze estimation than the conventional method.
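As a hedged sketch of the proposed single-step conversion, the following fits one affine transformation to all calibration points at once by linear least squares; appending the centroid of the points as an assumed imaginary center (25th point) mimics, in simplified form, the idea described above:

```python
import numpy as np

def fit_affine(eog_xy, gaze_xy):
    """Least-squares affine map from EOG-derived 2D points to gaze coordinates.

    eog_xy, gaze_xy: (N, 2) arrays of corresponding calibration points.
    Returns a (3, 2) parameter matrix M so that [x, y, 1] @ M = [gx, gy].
    """
    A = np.c_[eog_xy, np.ones(len(eog_xy))]          # (N, 3): [x, y, 1] rows
    M, *_ = np.linalg.lstsq(A, gaze_xy, rcond=None)  # solves all points jointly
    return M

def apply_affine(M, eog_xy):
    return np.c_[eog_xy, np.ones(len(eog_xy))] @ M

# Assumed imaginary 25th point: the centroid of the 24 calibration targets.
# eog25  = np.vstack([eog24,  eog24.mean(axis=0)])
# gaze25 = np.vstack([gaze24, gaze24.mean(axis=0)])
# M = fit_affine(eog25, gaze25)
```

A single jointly-fitted transformation avoids the discontinuities the quadrant method introduces at the boundaries between its four separately-fitted regions.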


2006 ◽  
Vol 5 (3) ◽  
pp. 41-45 ◽  
Author(s):  
Yong-Moo Kwon ◽  
Kyeong-Won Jeon ◽  
Jeongseok Ki ◽  
Qonita M. Shahab ◽  
Sangwoo Jo ◽  
...  

There has been considerable research on 2D gaze tracking techniques for 2D screens in human-computer interaction. However, research on gaze-based interaction with stereo images or 3D content has not been reported. Stereo display techniques are now emerging for reality services, and 3D interaction techniques are needed in 3D content service environments. This paper presents a 3D gaze estimation technique and its application to gaze-based interaction on a parallax barrier stereo display.


2013 ◽  
Vol 655-657 ◽  
pp. 1066-1076 ◽  
Author(s):  
Bo Zhu ◽  
Peng Yun Zhang ◽  
Jian Nan Chi ◽  
Tian Xia Zhang

A new gaze tracking method for single-camera gaze tracking systems is proposed. The method can be divided into face and eye location, eye feature detection with gaze parameter extraction, and ELM-based gaze point estimation. For face and eye location, a detection method combining a skin color model with the AdaBoost method is used for fast face detection. For eye feature and gaze parameter extraction, several image processing methods are used to detect eye features such as the iris center and the inner eye corner; the gaze parameter, the vector from the iris center to the eye corner, is then obtained. Finally, a gaze point estimation method based on an extreme learning machine (ELM) is proposed to establish the mapping relationship between the gaze parameter and the gaze point on the screen. The experimental results illustrate that the proposed method performs effective gaze estimation in a single-camera gaze tracking system.
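As an illustration of the ELM mapping step, here is a minimal extreme learning machine in NumPy: a fixed random hidden layer followed by a pseudo-inverse solve for the output weights. The feature dimension and hidden layer size are assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=40):
    """Extreme learning machine: random hidden layer + least-squares output.

    X: (N, d) gaze parameters (iris-center-to-eye-corner vectors);
    Y: (N, 2) corresponding gaze points on the screen.
    """
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random input weights
    b = rng.standard_normal(n_hidden)                # fixed random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                     # closed-form output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta
```

Because only the output weights are solved (in closed form), training is essentially instantaneous, which suits a quick per-user calibration.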


2019 ◽  
Vol 6 ◽  
pp. 176-191
Author(s):  
David Gil de Gómez Pérez ◽  
Roman Bednarik

The pupil center and pupil contour are two of the most important features in the eye images used for video-based eye tracking. Well-annotated databases are needed in order to allow benchmarking of available and new pupil detection and gaze estimation algorithms. Unfortunately, the creation of such a data set is costly and requires a lot of effort, including the manual work of annotators. In addition, the reliability of manual annotations is hard to establish with a low number of annotators. In order to facilitate progress in gaze tracking algorithm research, we created an online pupil annotation tool that engages many users through gamification and utilizes the power of the crowd to create reliable annotations (Artstein and Poesio, 2005). We describe the tool and the mechanisms employed, and report results on the annotation of a publicly available data set. Finally, we demonstrate an example use of the new high-quality annotation in a comparison of two state-of-the-art pupil center algorithms.
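The paper's exact aggregation mechanism is not described here; as a simplified, hypothetical illustration of how redundancy across many annotators can yield a reliable label, the sketch below takes a coordinate-wise median of the crowd's clicks and reports a spread measure as an agreement indicator:

```python
import numpy as np

def aggregate_clicks(clicks):
    """Robust pupil-center estimate from many annotators' clicks.

    clicks: (N, 2) array of (x, y) annotations for one eye image.
    The coordinate-wise median damps outlier clicks; the median distance
    to the estimate indicates how well the annotators agree.
    """
    clicks = np.asarray(clicks, float)
    center = np.median(clicks, axis=0)
    spread = np.median(np.linalg.norm(clicks - center, axis=1))
    return center, spread
```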


2017 ◽  
Vol 10 (4) ◽  
Author(s):  
Miika Toivanen ◽  
Kristian Lukander ◽  
Kai Puolamäki

This paper presents a method for computing the gaze point using camera data captured with a wearable gaze tracking device. The method utilizes a physical model of the human eye, advanced Bayesian computer vision algorithms, and Kalman filtering, resulting in high accuracy and low noise. Our C++ implementation can process camera streams at 30 frames per second in real time. The performance of the system is validated in an exhaustive experimental setup with 19 participants, using a self-made device. Owing to the eye model and binocular cameras used, the system is accurate at all distances and invariant to device movement. We also test our system against a best-in-class commercial device, which it outperforms in spatial accuracy and precision. The software and hardware instructions, as well as the experimental data, are published as open source.
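As a hedged sketch of the Kalman filtering stage only (the Bayesian eye-model estimation that produces the raw gaze points is not reproduced), here is a constant-velocity Kalman filter smoothing a 2D gaze point stream; the noise settings are illustrative:

```python
import numpy as np

class GazeKalman:
    """Constant-velocity Kalman filter over a 2D gaze point stream."""
    def __init__(self, dt=1/30, q=5.0, r=1.0):
        self.x = np.zeros(4)                        # state: [px, py, vx, vy]
        self.P = np.eye(4) * 1e3                    # large initial uncertainty
        self.F = np.eye(4)                          # constant-velocity dynamics
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                   # we observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q                      # process noise (assumed)
        self.R = np.eye(2) * r                      # measurement noise (assumed)

    def step(self, z):
        # Predict one frame ahead.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the new raw gaze measurement z = (px, py).
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                           # smoothed gaze point
```

Feeding each frame's raw estimate through `step` trades a small amount of latency for substantially lower output noise.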


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 26
Author(s):  
David González-Ortega ◽  
Francisco Javier Díaz-Pernas ◽  
Mario Martínez-Zarzuela ◽  
Míriam Antón-Rodríguez

A driver's gaze information can be crucial in driving research because of its relation to driver attention. In particular, the inclusion of gaze data in driving simulators broadens the scope of research studies, as drivers' gaze patterns can be related to their characteristics and performance. In this paper, we present two gaze region estimation modules integrated into a driving simulator: one uses the Kinect 3D device and the other the Oculus Rift virtual reality device. The modules detect, in every processed frame of the route, which of the seven regions into which the driving scene was divided the driver is gazing at. Four gaze estimation methods, which learn the relation between gaze displacement and head movement, were implemented and compared: two are simpler, point-based methods that try to capture this relation, and two are based on classifiers, namely an MLP and an SVM. Experiments were carried out with 12 users who drove the same scenario twice, each time with a different visualization display: first a big screen and later the Oculus Rift. On the whole, the Oculus Rift outperformed the Kinect as hardware for gaze estimation; the Oculus-based gaze region estimation method with the highest performance achieved an accuracy of 97.94%. The information provided by the Oculus Rift module enriches the driving simulator data and makes a multimodal driving performance analysis possible, in addition to the immersion and realism afforded by the virtual reality experience.
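As a hedged sketch of the classifier-based variants, the following trains an MLP and an SVM with scikit-learn to map per-frame head-movement features to one of the seven scene regions. The six-dimensional feature vector and the random data are placeholders, not the study's features or results:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Hypothetical training data: per-frame head-pose features (e.g., yaw,
# pitch, roll plus head position) labeled with one of 7 scene regions.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 6))   # placeholder feature vectors
y = rng.integers(0, 7, 500)         # placeholder region labels 0..6

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, y)
svm = SVC(kernel="rbf").fit(X, y)

frame_features = rng.standard_normal((1, 6))
print("MLP region:", mlp.predict(frame_features)[0])
print("SVM region:", svm.predict(frame_features)[0])
```

In practice the features would come from the Kinect or Oculus Rift head tracking, and the labels from calibration frames where the gazed region is known.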


Author(s):  
Takashi Nagamatsu ◽  
Yukina Iwamoto ◽  
Junzo Kamahara ◽  
Naoki Tanaka ◽  
Michiya Yamamoto
