A Method of Free-Space Point-of-Regard Estimation Based on 3D Eye Model and Stereo Vision

2018 ◽  
Vol 8 (10) ◽  
pp. 1769
Author(s):  
Zijing Wan ◽  
Xiangjun Wang ◽  
Lei Yin ◽  
Kai Zhou

This paper proposes a 3D point-of-regard estimation method based on a 3D eye model, together with a corresponding head-mounted gaze tracking device. Firstly, a head-mounted gaze tracking system is presented. The device uses two pairs of stereo cameras to capture the left and right eye images, respectively, and a pair of scene cameras to capture the scene images. Secondly, a 3D eye model and its calibration process are established; common eye features are used to estimate the eye model parameters. Thirdly, a 3D point-of-regard estimation algorithm is proposed, with three main parts: (1) the spatial coordinates of the eye features are calculated directly with the stereo cameras; (2) the pupil center normal is used as the initial value for the estimation of the optical axis; (3) the pair of scene cameras resolves the actual positions of the objects watched during calibration, so that calibration of the proposed eye model does not require an auxiliary light source. Experimental results show that the proposed method outputs the coordinates of the 3D point-of-regard accurately.
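Computing a free-space point of regard from two reconstructed gaze rays comes down to intersecting 3D rays that, in practice, never meet exactly. A minimal sketch of the common midpoint-of-the-perpendicular heuristic (the function name and the heuristic itself are illustrative assumptions, not the paper's implementation):

```python
def triangulate_por(o_l, d_l, o_r, d_r):
    """Estimate a 3D point of regard as the midpoint of the common
    perpendicular between the left and right gaze rays.

    o_l, o_r: ray origins (e.g. eyeball centers); d_l, d_r: ray
    directions; all (x, y, z) tuples. Directions need not be unit length.
    """
    def dot(u, v):
        return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

    w0 = tuple(a - b for a, b in zip(o_l, o_r))
    a, b, c = dot(d_l, d_l), dot(d_l, d_r), dot(d_r, d_r)
    d, e = dot(d_l, w0), dot(d_r, w0)
    denom = a*c - b*b
    if abs(denom) < 1e-12:            # rays (nearly) parallel: no convergence
        raise ValueError("gaze rays are parallel")
    s = (b*e - c*d) / denom           # parameter along the left ray
    t = (a*e - b*d) / denom           # parameter along the right ray
    p1 = tuple(o + s*di for o, di in zip(o_l, d_l))
    p2 = tuple(o + t*di for o, di in zip(o_r, d_r))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))
```

When the two rays do intersect, both closest points coincide and the midpoint is the exact intersection.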

Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2292 ◽  
Author(s):  
Zijing Wan ◽  
Xiangjun Wang ◽  
Kai Zhou ◽  
Xiaoyun Chen ◽  
Xiaoqing Wang

In this paper, a novel 3D gaze estimation method for a wearable gaze tracking device is proposed, based on the pupillary accommodation reflex of human vision. Firstly, a 3D gaze measurement model is built. By combining the line-of-sight convergence point with the size of the pupil, this model can measure the 3D Point-of-Regard in free space. Secondly, a gaze tracking device is described. Using four cameras and semi-transparent mirrors, the device accurately extracts the spatial coordinates of the pupil and the eye corner from images. Thirdly, a simple calibration process for the measuring system is proposed: (1) each eye is imaged by a pair of binocular stereo cameras, and the semi-transparent mirrors provide a wider field of view; (2) the spatial coordinates of the pupil center and the inner eye corner are extracted from the stereo camera images, and the pupil size is calculated from these features; (3) the pupil size and the line-of-sight convergence point are computed while the user watches calibration targets at different distances, and the parameters of the gaze estimation model are determined. Fourthly, an algorithm for searching for the line-of-sight convergence point is proposed, and the 3D Point-of-Regard is estimated with the obtained line-of-sight measurement model. Three groups of experiments were conducted to prove the effectiveness of the proposed method. The approach enables people to obtain the spatial coordinates of the Point-of-Regard in free space, which has great potential for wearable devices.
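The calibration step fits model parameters from observations at known target distances. As an illustration only (the paper's model couples pupil size with the convergence point; the linear fit, the diopter parameterization, and every number below are assumptions), a closed-form least-squares fit over hypothetical calibration pairs:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b over calibration pairs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical calibration data: pupil diameter (mm) observed while
# fixating targets at known distances, with distance expressed as
# inverse distance (diopters) to keep the relation roughly linear.
pupil = [4.8, 4.2, 3.9, 3.6]
inv_dist = [1 / 0.5, 1 / 1.0, 1 / 2.0, 1 / 4.0]
a, b = fit_line(pupil, inv_dist)   # model: 1/d = a * pupil + b
```

At run time, an observed pupil size would be mapped back through the fitted line and combined with the convergence-point estimate.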


2013 ◽  
Vol 655-657 ◽  
pp. 1066-1076 ◽  
Author(s):  
Bo Zhu ◽  
Peng Yun Zhang ◽  
Jian Nan Chi ◽  
Tian Xia Zhang

A new gaze tracking method for single-camera gaze tracking systems is proposed. The method comprises three stages: face and eye localization, eye feature detection and gaze parameter extraction, and ELM-based gaze point estimation. For face and eye localization, a detection method combining a skin color model with AdaBoost is used for fast face detection. For eye feature and gaze parameter extraction, image processing methods detect eye features such as the iris center and the inner eye corner; the gaze parameter is then the vector from the iris center to the eye corner. Finally, an extreme learning machine (ELM) based method is proposed to establish the mapping between the gaze parameter and the gaze point on the screen. The experimental results illustrate that the method is effective for gaze estimation in a single-camera gaze tracking system.
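An extreme learning machine assigns the hidden-layer weights once and solves only the output weights, in closed form, by least squares. A self-contained sketch (here the hidden weights are fixed constants for reproducibility rather than drawn at random, one scalar output is trained per screen coordinate, and all names and numbers are illustrative):

```python
import math

def solve(A, B):
    """Solve A x = B by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [B[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def elm_fit(X, t, W, b, lam=1e-8):
    """ELM training: hidden weights W, b stay fixed; the output weights
    are the ridge-regularized least-squares solution of H beta = t."""
    H = [[math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + bj)
          for w, bj in zip(W, b)] for x in X]
    L = len(W)
    A = [[sum(h[i] * h[j] for h in H) + (lam if i == j else 0.0)
          for j in range(L)] for i in range(L)]
    B = [sum(h[i] * ti for h, ti in zip(H, t)) for i in range(L)]
    return solve(A, B)

def elm_predict(x, W, b, beta):
    return sum(bi * math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + bj)
               for bi, w, bj in zip(beta, W, b))
```

Each screen coordinate gets its own output weight vector; the hidden layer is shared between both coordinates.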


2011 ◽  
Vol 31 (4) ◽  
pp. 0415002
Author(s):  
张琼 Zhang Qiong ◽  
王志良 Wang Zhiliang ◽  
迟健男 Chi Jiannan ◽  
史雪飞 Shi Xuefei

2017 ◽  
Vol 10 (4) ◽  
Author(s):  
Miika Toivanen ◽  
Kristian Lukander ◽  
Kai Puolamäki

This paper presents a method for computing the gaze point from camera data captured with a wearable gaze tracking device. The method combines a physical model of the human eye, advanced Bayesian computer vision algorithms, and Kalman filtering, resulting in high accuracy and low noise. Our C++ implementation processes camera streams at 30 frames per second in real time. The performance of the system is validated in an exhaustive experimental setup with 19 participants, using a self-made device. Owing to the eye model and binocular cameras, the system is accurate at all distances and invariant to device movement. We also test our system against a best-in-class commercial device, which it outperforms in spatial accuracy and precision. The software, hardware instructions, and experimental data are published as open source.
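The Kalman filtering step can be illustrated with a scalar random-walk filter over one gaze coordinate; this is a minimal sketch with assumed noise parameters, not the paper's full Bayesian pipeline:

```python
def kalman_1d(zs, q=1e-3, r=1e-1):
    """Scalar Kalman filter with a random-walk state model: smooths a
    stream of noisy gaze-coordinate measurements zs.

    q: process-noise variance (how fast the true gaze may drift),
    r: measurement-noise variance (how noisy each observation is).
    """
    x, p = zs[0], 1.0          # state estimate and its variance
    out = []
    for z in zs:
        p += q                 # predict: state may have drifted by q
        k = p / (p + r)        # Kalman gain: trust in the new measurement
        x += k * (z - x)       # update toward the measurement
        p *= 1.0 - k
        out.append(x)
    return out
```

Increasing `r` relative to `q` smooths more aggressively at the cost of lag during saccades.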


2010 ◽  
Vol 36 (8) ◽  
pp. 1051-1061 ◽  
Author(s):  
Chuang ZHANG ◽  
Jian-Nan CHI ◽  
Zhao-Hui ZHANG ◽  
Zhi-Liang WANG

Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 35
Author(s):  
Jae-Min Shin ◽  
Yu-Sin Kim ◽  
Tae-Won Ban ◽  
Suna Choi ◽  
Kyu-Min Kang ◽  
...  

The need for drone traffic control management has emerged as the demand for drones has increased. In particular, systems to detect and track unauthorized drones must be developed. In this paper, we propose a drone position tracking system using multiple Bluetooth Low Energy (BLE) receivers. The proposed system first estimates the target's location, consisting of a distance and an angle, from the received signal strength indication (RSSI) at four BLE receivers, and then gradually tracks the target based on the estimated distance and angle. We propose two tracking algorithms that differ in their estimation method, and also apply a memory process that improves tracking performance by using stored information about previous movements. We evaluate the proposed system's performance in terms of the average number of movements required for tracking and the tracking success rate.
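Turning an RSSI reading into a distance estimate is commonly done by inverting the log-distance path-loss model; a sketch under that assumption (the reference power `p0` and path-loss exponent `n` are environment-dependent calibration values, and the abstract does not specify the exact model used):

```python
def rssi_to_distance(rssi, p0=-59.0, n=2.0, d0=1.0):
    """Invert the log-distance path-loss model:

        rssi = p0 - 10*n*log10(d/d0)  =>  d = d0 * 10**((p0 - rssi)/(10*n))

    p0: RSSI (dBm) at the reference distance d0 (m); n: path-loss
    exponent (2.0 in free space, larger indoors). Both must be
    calibrated for the deployment environment.
    """
    return d0 * 10 ** ((p0 - rssi) / (10 * n))
```

With four receivers, each converted distance constrains the target to a circle; combining the circles (and the per-receiver signal-strength ratios for the angle) yields the location estimate the tracker then follows.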


2020 ◽  
pp. 1-11
Author(s):  
Hui Wang ◽  
Huang Shiwang

The traditional financial supervision and management system can no longer meet current needs and urgently requires improvement. In this paper, low-frequency data are treated as high-frequency data with missing observations, and a mixed-frequency VAR model is adopted. To overcome the problems caused by the large number of VAR parameters, this paper adopts a Bayesian estimation method based on the Minnesota prior to obtain the posterior distribution of each VAR parameter. Moreover, Kalman filtering and Kalman smoothing are used to obtain the posterior distribution of the latent state variables. Then, given the posterior distributions of the VAR parameters and the latent state variables, Gibbs sampling yields the mixed-frequency Bayesian vector autoregressive model and the estimates of the state variables. Finally, this article studies the influence of Internet finance on monetary policy with examples. The research results show that the proposed method is effective.
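The Minnesota prior shrinks VAR coefficients toward a random walk by giving own lags more prior variance than cross lags, and distant lags less than recent ones. A sketch of the standard prior-variance construction (the hyperparameter defaults are conventional Litterman-style values, not taken from the paper):

```python
def minnesota_prior_var(sigmas, lags, lam1=0.2, lam2=0.5, lam3=1.0):
    """Prior variances for VAR coefficients under a Minnesota prior.

    V[l-1][i][j] is the prior variance of the coefficient on lag l of
    variable j in equation i: own lags shrink as (lam1 / l**lam3)**2;
    cross lags are shrunk further by lam2 and rescaled by the ratio of
    residual scales sigma_i / sigma_j.
    """
    k = len(sigmas)
    V = []
    for l in range(1, lags + 1):
        base = (lam1 / l ** lam3) ** 2
        V.append([[base if i == j
                   else base * lam2 ** 2 * (sigmas[i] / sigmas[j]) ** 2
                   for j in range(k)] for i in range(k)])
    return V
```

These variances populate the diagonal of the prior covariance used in the conjugate posterior update for each equation's coefficients.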


2021 ◽  
Vol 11 (2) ◽  
pp. 851
Author(s):  
Wei-Liang Ou ◽  
Tzu-Ling Kuo ◽  
Chin-Chieh Chang ◽  
Chih-Peng Fan

In this study, a pupil tracking methodology based on deep learning is developed for visible-light wearable eye trackers. By applying deep-learning object detection based on the You Only Look Once (YOLO) model, the proposed method effectively estimates and predicts the center of the pupil in the visible-light mode. With the developed YOLOv3-tiny-based model, the detection accuracy reaches 80% and the recall rate is close to 83%. In addition, the average visible-light pupil tracking errors of the proposed YOLO-based deep-learning design are smaller than 2 pixels in training mode and 5 pixels in the cross-person test, much smaller than those of a previous ellipse-fitting design without deep learning under the same visible-light conditions. Combined with the calibration process, the average gaze tracking errors of the proposed YOLOv3-tiny-based pupil tracking models are smaller than 2.9 and 3.5 degrees in the training and testing modes, respectively, and the proposed visible-light wearable gaze tracking system runs at up to 20 frames per second (FPS) on a GPU-based embedded software platform.
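Detection accuracy and recall figures such as those above are conventionally obtained by thresholding the intersection-over-union (IoU) between predicted and ground-truth pupil bounding boxes; a minimal IoU helper (an assumption about the evaluation protocol, which the abstract does not detail):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5; accuracy and recall are then tallied over the test set.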

