Gaze estimation method based on an aspherical model of the cornea

Author(s):  
Takashi Nagamatsu ◽  
Yukina Iwamoto ◽  
Junzo Kamahara ◽  
Naoki Tanaka ◽  
Michiya Yamamoto
Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 26
Author(s):  
David González-Ortega ◽  
Francisco Javier Díaz-Pernas ◽  
Mario Martínez-Zarzuela ◽  
Míriam Antón-Rodríguez

Driver gaze information is crucial in driving research because of its relation to driver attention. In particular, including gaze data in driving simulators broadens the scope of research studies, since drivers’ gaze patterns can then be related to their features and performance. In this paper, we present two gaze region estimation modules integrated into a driving simulator: one uses the 3D Kinect device and the other the virtual reality Oculus Rift device. In every processed frame of the route, the modules detect which of the seven regions into which the driving scene was divided the driver is gazing at. Four gaze estimation methods, all of which learn the relation between gaze displacement and head movement, were implemented and compared: two simpler ones based on points that try to capture this relation, and two based on classifiers such as MLP and SVM. Experiments were carried out with 12 users who drove the same scenario twice, each time with a different visualization display: first a big screen and later the Oculus Rift. On the whole, the Oculus Rift outperformed the Kinect as hardware for gaze estimation. The best-performing Oculus-based gaze region estimation method achieved an accuracy of 97.94%. The information provided by the Oculus Rift module enriches the driving simulator data and makes a multimodal driving performance analysis possible, in addition to the immersion and realism of the virtual reality experience provided by the Oculus.
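The abstract does not detail the "point-based" methods, but the simplest form of head-movement-to-region mapping can be sketched as a nearest-centroid lookup: per-region head poses are recorded at calibration and each frame is assigned to the closest one. The seven region names and the (yaw, pitch) centroid values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical per-region calibration: mean head (yaw, pitch) in degrees
# recorded while the driver looks at each of seven scene regions.
REGION_CENTROIDS = {
    "left_mirror":  (-40.0,  -5.0),
    "left_window":  (-25.0,   0.0),
    "road_ahead":   (  0.0,   0.0),
    "dashboard":    (  0.0, -15.0),
    "rear_mirror":  ( 15.0,  10.0),
    "right_window": ( 25.0,   0.0),
    "right_mirror": ( 40.0,  -5.0),
}

def estimate_region(yaw, pitch):
    """Nearest-centroid gaze region estimate from head pose alone."""
    names = list(REGION_CENTROIDS)
    cents = np.array([REGION_CENTROIDS[n] for n in names])
    dists = np.linalg.norm(cents - np.array([yaw, pitch]), axis=1)
    return names[int(np.argmin(dists))]

print(estimate_region(-38.0, -4.0))  # closest centroid: left_mirror
```

A classifier-based variant (MLP or SVM, as in the paper) would replace the centroid lookup with a model trained on labeled (head pose, region) pairs, which can capture non-convex region boundaries.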


2020 ◽  
Vol 10 (24) ◽  
pp. 9079
Author(s):  
Kaiqing Luo ◽  
Xuan Jia ◽  
Hua Xiao ◽  
Dongmei Liu ◽  
Li Peng ◽  
...  

In recent years, gaze estimation systems, as a new type of human-computer interaction technology, have received extensive attention. The gaze estimation model is one of the main research contents of such a system, and its quality directly affects the accuracy of the entire system. To achieve higher accuracy even with simple devices, this paper proposes an improved mapping equation model based on a homography transformation. In the experiments, the model mainly uses Zhang Zhengyou's calibration method to obtain the intrinsic and extrinsic parameters of the camera and correct its distortion, and uses the Levenberg-Marquardt (LM) algorithm to solve for the unknown parameters of the mapping equation. Once all the parameters of the equation are determined, the gaze point is calculated. Different comparative experiments were designed to verify the accuracy and fitting quality of this mapping equation. The results show that the method achieves high accuracy, with the basic accuracy kept within 0.6°. The overall trend shows that the homography-based mapping method has higher accuracy, a better fitting effect, and stronger stability.
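The core of such a mapping equation is a 3×3 homography with eight free parameters (the last entry fixed to 1), fitted by Levenberg-Marquardt so that calibration targets reproject correctly. The sketch below is a minimal illustration of that idea on synthetic data, with a hand-rolled LM loop and a numerical Jacobian; the specific parameterization and values are assumptions, not the paper's implementation.

```python
import numpy as np

def apply_homography(h, pts):
    """Map (N, 2) image-plane points through a homography; h holds the
    8 free entries of H (h33 fixed to 1)."""
    H = np.append(h, 1.0).reshape(3, 3)
    P = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return P[:, :2] / P[:, 2:3]

def fit_homography_lm(src, dst, iters=50, lam=1e-3):
    """Levenberg-Marquardt fit of the mapping equation: minimizes the
    reprojection error of calibration targets."""
    h = np.array([1, 0, 0, 0, 1, 0, 0, 0], float)  # start from identity
    for _ in range(iters):
        r = (apply_homography(h, src) - dst).ravel()
        # Numerical Jacobian of the residuals w.r.t. the 8 parameters
        J = np.empty((r.size, 8))
        for j in range(8):
            d = np.zeros(8); d[j] = 1e-6
            J[:, j] = ((apply_homography(h + d, src) - dst).ravel() - r) / 1e-6
        step = np.linalg.solve(J.T @ J + lam * np.eye(8), -J.T @ r)
        r_new = (apply_homography(h + step, src) - dst).ravel()
        if np.linalg.norm(r_new) < np.linalg.norm(r):
            h, lam = h + step, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                    # reject step, increase damping
    return h

# Synthetic calibration data from a known homography (illustrative values)
rng = np.random.default_rng(0)
h_true = np.array([2.0, 0.1, 5.0, 0.05, 1.8, -3.0, 1e-4, 2e-4])
src = rng.uniform(0, 100, size=(9, 2))     # nine calibration targets
dst = apply_homography(h_true, src)

h_est = fit_homography_lm(src, dst)
err = np.abs(apply_homography(h_est, src) - dst).max()
```

In practice the source points would be distortion-corrected pupil (or pupil-glint) coordinates, with the camera distortion removed first via Zhang-style calibration.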


Sensors ◽  
2015 ◽  
Vol 15 (3) ◽  
pp. 5935-5981 ◽  
Author(s):  
Jong Lee ◽  
Hyeon Lee ◽  
Su Gwon ◽  
Dongwook Jung ◽  
Weiyuan Pan ◽  
...  

2020 ◽  
Vol 69 (12) ◽  
pp. 9695-9708
Author(s):  
Jiannan Chi ◽  
Jiahui Liu ◽  
Feng Wang ◽  
Yingkai Chi ◽  
Zeng-Guang Hou

2015 ◽  
Vol 54 (8) ◽  
pp. 083103 ◽  
Author(s):  
Chunfei Ma ◽  
Kang-A Choi ◽  
Byeong-Doo Choi ◽  
Sung-Jea Ko

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 23291-23302
Author(s):  
Ben Yang ◽  
Xuetao Zhang ◽  
Zhongchang Li ◽  
Shaoyi Du ◽  
Fei Wang

Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3650 ◽  
Author(s):  
Muhammad Syaiful Amri bin Suhaimi ◽  
Kojiro Matsushita ◽  
Minoru Sasaki ◽  
Waweru Njeri

This paper sought to improve the precision of the alternating current electro-oculography (AC-EOG) gaze estimation method. The method consists of two core techniques: estimating eyeball movement from EOG signals, and converting the eyeball movement into a gaze position. In conventional research, the estimation is computed from two EOG signals corresponding to vertical and horizontal movements. The conversion is based on an affine transformation whose parameters are computed from 24-point gazing data at calibration. However, the transformation is not applied to all 24 points at once but to four spatially separated subsets (the quadrant method), and each result has different characteristics. Thus, we propose a conversion method that handles the 24-point gazing data at the same time: an imaginary center (i.e., a 25th point) on the gaze coordinates is assumed from the 24-point gazing data, and an affine transformation is applied to all 24 points. We then conducted a comparative investigation between the conventional method and the proposed method. From the results, the average eye angle error for the cross-shaped electrode attachment is x = 2.27° ± 0.46° and y = 1.83° ± 0.34°; in contrast, for the plus-shaped electrode attachment it is x = 0.94° ± 0.19° and y = 1.48° ± 0.27°. We conclude that the proposed method offers simpler and more precise EOG gaze estimation than the conventional method.
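Fitting a single affine transformation to all calibration points at once, rather than one per quadrant, can be sketched as one least-squares solve over the whole grid. The 6×4 target layout, signal units, and parameter values below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def fit_affine(eog, gaze):
    """Least-squares affine map from (h, v) EOG amplitudes to gaze
    coordinates: solves gaze ≈ eog @ A.T + b over ALL points at once
    (all 24 calibration targets, rather than per quadrant)."""
    X = np.hstack([eog, np.ones((len(eog), 1))])   # rows: [h, v, 1]
    params, *_ = np.linalg.lstsq(X, gaze, rcond=None)
    A, b = params[:2].T, params[2]
    return A, b

def apply_affine(A, b, eog):
    return eog @ A.T + b

# Synthetic 24-point calibration grid (6 x 4 gaze targets, in degrees)
gx, gy = np.meshgrid(np.linspace(-25, 25, 6), np.linspace(-15, 15, 4))
gaze = np.column_stack([gx.ravel(), gy.ravel()])

# Generate matching EOG amplitudes by inverting a known affine map
A_true = np.array([[0.9, 0.1], [-0.05, 1.1]])
b_true = np.array([2.0, -1.0])
eog = (gaze - b_true) @ np.linalg.inv(A_true).T

A, b = fit_affine(eog, gaze)
err = np.abs(apply_affine(A, b, eog) - gaze).max()
```

On noise-free synthetic data the fit recovers the true map exactly; with real EOG signals the residual of this single global fit is what the quadrant method tries to reduce by fitting each quadrant separately.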


2015 ◽  
Vol 61 (2) ◽  
pp. 254-260 ◽  
Author(s):  
Yong-Goo Shin ◽  
Kang-A Choi ◽  
Sung-Tae Kim ◽  
Sung-Jea Ko
