Gaze estimation via a differential eyes’ appearances network with a reference grid

Engineering ◽  
2021 ◽  
Author(s):  
Song Gu ◽  
Lihui Wang ◽  
Long He ◽  
Xianding He ◽  
Jian Wang
2018 ◽  
Vol 14 (2) ◽  
pp. 153-173 ◽  
Author(s):  
Jumana Waleed ◽  
Taha Mohammed Hasan ◽  
Qutaiba Kadhim Abed

1983 ◽  
Vol 4 ◽  
pp. 297-297
Author(s):  
G. Brugnot

We consider the paper by Brugnot and Pochat (1981), which describes a one-dimensional model applied to a snow avalanche. The main advance made here is the introduction of the second dimension in the runout zone. Indeed, in the channelled course we still use the one-dimensional model, but, when the avalanche spreads before stopping, we apply an (x, y) grid on the ground and six equations have to be solved: (1) for the avalanche body, one equation for continuity and two equations for momentum conservation, and (2) at the front, one equation for continuity and two equations for momentum conservation. We suppose the front to be a mobile jump, with longitudinal velocity varying more rapidly than transverse velocity. We solve these equations by a finite-difference method. This involves many topological problems, due to the actual position of the front, which is defined by its intersection with the reference grid (xi, yj). In the near future, our two directions of research will be to test the code on actual avalanches and to make it cheaper without impairing its accuracy.
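To make the finite-difference setup concrete, the toy sketch below advances a shallow-water-type version of the runout-zone system (one continuity equation and two momentum equations for the avalanche body) on an (x, y) grid. The flux form, the Lax-Friedrichs scheme, and all numerical constants are illustrative assumptions rather than the authors' scheme, and the sketch omits the front-tracking jump conditions that are the hard part of their method.

```python
import numpy as np

# Minimal sketch of the runout-zone system described in the abstract:
# continuity plus two momentum equations on an (x, y) grid.  The
# shallow-water flux form and Lax-Friedrichs scheme are assumptions
# for illustration, not the scheme of Brugnot and Pochat.

nx, ny = 64, 64
dx = dy = 1.0          # grid spacing (m)
dt = 0.05              # time step chosen for CFL stability
g = 9.81               # gravity (m/s^2)
slope_x = 0.05         # assumed ground slope driving the flow in x

# State: flow depth h and depth-averaged velocities u, v
h = np.full((nx, ny), 0.01)
h[8:16, 28:36] = 2.0   # initial avalanche mass entering the runout zone
u = np.zeros((nx, ny))
v = np.zeros((nx, ny))

def lf_step(q, fx, fy):
    """One Lax-Friedrichs update of dq/dt + dfx/dx + dfy/dy = 0."""
    q_avg = 0.25 * (np.roll(q, 1, 0) + np.roll(q, -1, 0)
                    + np.roll(q, 1, 1) + np.roll(q, -1, 1))
    dfx = (np.roll(fx, -1, 0) - np.roll(fx, 1, 0)) / (2 * dx)
    dfy = (np.roll(fy, -1, 1) - np.roll(fy, 1, 1)) / (2 * dy)
    return q_avg - dt * (dfx + dfy)

for step in range(200):
    hu, hv = h * u, h * v
    # Continuity: dh/dt + d(hu)/dx + d(hv)/dy = 0
    h_new = lf_step(h, hu, hv)
    # Momentum in x and y, with a gravity source term from the slope
    hu_new = lf_step(hu, hu * u + 0.5 * g * h**2, hu * v) + dt * g * h * slope_x
    hv_new = lf_step(hv, hv * u, hv * v + 0.5 * g * h**2)
    h = np.maximum(h_new, 1e-6)   # toy positivity fix for the depth
    u, v = hu_new / h, hv_new / h

print(f"max depth after spreading: {h.max():.3f} m")
```

A real implementation would replace the periodic boundaries implied by np.roll with the mobile-jump front condition the abstract describes, tracking where the front intersects the (xi, yj) grid nodes.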


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 26
Author(s):  
David González-Ortega ◽  
Francisco Javier Díaz-Pernas ◽  
Mario Martínez-Zarzuela ◽  
Míriam Antón-Rodríguez

Driver’s gaze information can be crucial in driving research because of its relation to driver attention. In particular, the inclusion of gaze data in driving simulators broadens the scope of research studies, as drivers’ gaze patterns can be related to their features and performance. In this paper, we present two gaze region estimation modules integrated in a driving simulator: one uses the 3D Kinect device and the other uses the virtual reality Oculus Rift device. The modules detect which of the seven regions, into which the driving scene was divided, the driver is gazing at in every processed frame of the route. Four methods that learn the relation between gaze displacement and head movement were implemented and compared for gaze estimation: two are simpler and based on points that try to capture this relation, and two are based on classifiers such as an MLP and an SVM. Experiments were carried out with 12 users who drove the same scenario twice, each time with a different visualization display: first with a big screen and later with the Oculus Rift. On the whole, the Oculus Rift outperformed the Kinect as hardware for gaze estimation. The Oculus-based gaze region estimation method with the highest performance achieved an accuracy of 97.94%. The information provided by the Oculus Rift module enriches the driving simulator data and makes a multimodal driving performance analysis possible, beyond the immersion and realism of the virtual reality experience provided by the Oculus.
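As a rough illustration of the classifier-based variants mentioned above, the sketch below trains a multi-class SVM to map head-pose features to one of seven gaze regions. The feature layout (head yaw, pitch, roll) and the synthetic training data are assumptions for demonstration only; they are not the paper's features, devices, or data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_regions = 700, 7

# Hypothetical head-pose features per sample: yaw, pitch, roll (degrees).
# Each gaze region is given a nominal head yaw; samples scatter around it.
region_yaw = np.linspace(-60, 60, n_regions)
labels = rng.integers(0, n_regions, n_samples)
X = np.column_stack([
    region_yaw[labels] + rng.normal(0, 8, n_samples),  # head yaw
    rng.normal(0, 5, n_samples),                       # head pitch
    rng.normal(0, 2, n_samples),                       # head roll
])

# Multi-class SVM (one-vs-one under the hood) on standardized features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"mean 5-fold CV accuracy: {scores.mean():.3f}")
```

In the paper's setting, the features would instead come from the Kinect or Oculus Rift head tracking, and the labels from the seven regions of the driving scene; the pipeline shape, however, is the same.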


2017 ◽  
Vol 124 (2) ◽  
pp. 223-236 ◽  
Author(s):  
Fares Alnajar ◽  
Theo Gevers ◽  
Roberto Valenti ◽  
Sennay Ghebreab

Author(s):  
Takashi Nagamatsu ◽  
Yukina Iwamoto ◽  
Junzo Kamahara ◽  
Naoki Tanaka ◽  
Michiya Yamamoto

Author(s):  
Gang Liu ◽  
Yu Yu ◽  
Kenneth Alberto Funes Mora ◽  
Jean-Marc Odobez
