A Novel Multi-Camera Global Calibration Method for Gaze Tracking System
2020 · Vol 69 (5) · pp. 2093-2104
Author(s): Jiannan Chi, Zuoyun Yang, Guosheng Zhang, Tongbo Liu, Zhiliang Wang
2010 · Vol 36 (8) · pp. 1051-1061
Author(s): Chuang ZHANG, Jian-Nan CHI, Zhao-Hui ZHANG, Zhi-Liang WANG

2021 · Vol 11 (2) · pp. 851
Author(s): Wei-Liang Ou, Tzu-Ling Kuo, Chin-Chieh Chang, Chih-Peng Fan

In this study, a pupil tracking method based on deep learning is developed for visible-light wearable eye trackers. By applying deep-learning object detection based on the You Only Look Once (YOLO) model, the proposed method effectively estimates and predicts the pupil center in visible-light mode. In tests of the developed YOLOv3-tiny-based model, the detection accuracy reaches 80% and the recall rate is close to 83%. In addition, the average visible-light pupil tracking errors of the proposed YOLO-based design are smaller than 2 pixels in the training mode and 5 pixels in the cross-person test, which are much smaller than those of a previous ellipse-fitting design without deep learning under the same visible-light conditions. After combination with the calibration process, the average gaze tracking errors of the proposed YOLOv3-tiny-based pupil tracking models are smaller than 2.9 and 3.5 degrees in the training and testing modes, respectively, and the proposed visible-light wearable gaze tracking system runs at up to 20 frames per second (FPS) on a GPU-based embedded software platform.
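The pixel-level tracking errors above come from comparing a predicted pupil center against a ground-truth annotation. A minimal sketch of that evaluation step, assuming the detector returns a YOLO-style bounding box whose centroid is taken as the pupil center (the bounding-box coordinates and ground-truth point below are hypothetical, not from the paper):

```python
import numpy as np

def pupil_center_from_bbox(bbox):
    """Estimate the pupil center as the centroid of a YOLO-style
    detection box given as (x_min, y_min, x_max, y_max) in pixels."""
    x_min, y_min, x_max, y_max = bbox
    return np.array([(x_min + x_max) / 2.0, (y_min + y_max) / 2.0])

def tracking_error_px(pred_bbox, gt_center):
    """Euclidean pixel distance between the predicted pupil center and
    the ground-truth center -- the metric behind the <2 px / <5 px figures."""
    pred = pupil_center_from_bbox(pred_bbox)
    return float(np.linalg.norm(pred - np.asarray(gt_center, dtype=float)))

# Hypothetical detection: a 24x24 px box around a pupil annotated at (101, 80)
err = tracking_error_px((88, 70, 112, 94), (101.0, 80.0))
```

Averaging this per-frame error over a test sequence yields the reported mean tracking error; the cross-person figure simply evaluates on subjects excluded from training.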


2009 · Vol 30 (12) · pp. 1144-1150
Author(s): Diego Torricelli, Michela Goffredo, Silvia Conforto, Maurizio Schmid

IEEE Access · 2018 · Vol 6 · pp. 48840-48849
Author(s): Mun-Cheon Kang, Cheol-Hwan Yoo, Kwang-Hyun Uhm, Dae-Hong Lee, Sung-Jea Ko

Author(s): Mingchi Feng, Xiang Jia, Jingshu Wang, Song Feng, Taixiong Zheng

Multi-camera systems are widely applied in 3D computer vision, especially when multiple cameras are distributed on both sides of the measured object. Calibration methods for multi-camera systems are critical to the accuracy of vision measurement, and the key is to find an appropriate calibration target. In this paper, a high-precision calibration method for multi-camera systems based on a transparent glass checkerboard and ray tracing is described, which is used to calibrate multiple cameras distributed on both sides of the glass checkerboard. First, the intrinsic parameters of each camera are obtained by Zhang's calibration method. Then, the cameras capture several images of the front and back of the glass checkerboard at different orientations, with all images containing distinct grid corners. Since the cameras on one side are not affected by refraction in the glass checkerboard, their extrinsic parameters can be calculated directly. The cameras on the other side, however, are influenced by this refraction, and directly applying the standard projection model produces calibration error. A multi-camera calibration method using a refractive projection model and ray tracing is developed to eliminate this error. Both synthetic and real data are employed to validate the proposed approach. The experimental results of refractive calibration show that the 3D reconstruction error is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of the reprojection error of the 4-camera system are 0.00007 and 0.4543 pixels. The proposed method is flexible, highly accurate, and simple to carry out.
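The refraction error the paper corrects arises because a ray from a grid corner is laterally displaced when it passes through the glass plate, so the standard pinhole projection sees the corner at a slightly shifted position. A minimal sketch of the underlying geometry, using Snell's law and the classic flat-slab lateral-shift formula (the thickness and refractive-index values below are illustrative assumptions, not the paper's refractive projection model itself):

```python
import math

def refracted_angle(theta_i, n_glass=1.5, n_air=1.0):
    """Snell's law at the air-glass interface:
    n_air * sin(theta_i) = n_glass * sin(theta_r)."""
    return math.asin(n_air * math.sin(theta_i) / n_glass)

def lateral_shift(theta_i, thickness_mm, n_glass=1.5):
    """Lateral displacement of a ray crossing a flat glass plate of the
    given thickness: d = t * sin(theta_i - theta_r) / cos(theta_r).
    This is the apparent corner offset a refractive model must correct;
    it vanishes at normal incidence and grows with the incidence angle."""
    theta_r = refracted_angle(theta_i, n_glass)
    return thickness_mm * math.sin(theta_i - theta_r) / math.cos(theta_r)
```

For a 3 mm plate at 30 degrees incidence and n = 1.5, the shift is on the order of half a millimetre, which is far larger than the sub-0.2 mm reconstruction error the paper reports, illustrating why ray tracing through the glass is needed for the cameras viewing the back of the checkerboard.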

