Estimating a Driver's Gaze Point by a Remote Spherical Camera

Author(s):  
Zihao Zhao ◽  
Shigang Li ◽  
Takahiro Kosaki
Author(s):  
J. Y. Rau ◽  
B. W. Su ◽  
K. W. Hsiao ◽  
J. P. Jhan

A spherical camera can observe the environment with an almost 720-degree field of view in a single shot, which is useful for augmented reality, environment documentation, and mobile mapping applications. This paper develops a spherical photogrammetric imaging system for 3D measurement with a backpack-mounted mobile mapping system (MMS). The equipment comprises a Ladybug-5 spherical camera, a tactical-grade positioning and orientation system (POS), i.e. SPAN-CPT, an odometer, etc. The goal is to apply the photogrammetric space intersection technique directly for 3D mapping from a spherical image stereo pair. This requires several systematic calibration procedures: lens distortion calibration, relative orientation calibration, boresight calibration for direct georeferencing, and spherical image calibration. Lens distortion is severe in the six original images of the Ladybug-5 camera. For mosaicking a spherical image from these six original images, we propose using their relative orientation while correcting their lens distortion at the same time. However, the constructed spherical image still contains systematic error, which reduces the 3D measurement accuracy. For direct georeferencing, we establish a ground control field for boresight/lever-arm calibration, and then apply the calibrated parameters to obtain the exterior orientation parameters (EOPs) of all spherical images. Finally, the 3D positioning accuracy after space intersection is evaluated, including with EOPs obtained by the structure-from-motion method.
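To illustrate the space intersection step, the Python sketch below triangulates a 3D point from a spherical image stereo pair: pixel coordinates in an equirectangular image are converted to unit bearing vectors, rotated into the world frame with each image's EOPs, and the two rays are intersected in a least-squares sense. This is a minimal sketch under assumed conventions (equirectangular projection, axis order, illustrative pixel values and EOPs), not the authors' implementation.

import numpy as np

def pixel_to_bearing(u, v, width, height):
    # Convert equirectangular pixel coordinates to a unit bearing vector.
    # Assumes the spherical image spans 360 deg horizontally (longitude)
    # and 180 deg vertically (latitude); the axis convention is illustrative.
    lon = (u / width) * 2.0 * np.pi - np.pi      # [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi     # (+pi/2 .. -pi/2)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def space_intersection(c1, d1, c2, d2):
    # Least-squares intersection of two rays x = c + t*d: returns the
    # point minimizing the summed squared distance to both rays.
    A, b = [], []
    for c, d in ((c1, d1), (c2, d2)):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A.append(P)
        b.append(P @ c)
    X, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    return X

# Hypothetical EOPs of the stereo pair: world positions and rotations.
c1, R1 = np.array([0.0, 0.0, 1.8]), np.eye(3)
c2, R2 = np.array([1.0, 0.0, 1.8]), np.eye(3)
# Illustrative pixel measurements of the same feature in both images.
d1 = R1 @ pixel_to_bearing(2113, 934, 4096, 2048)
d2 = R2 @ pixel_to_bearing(2129, 912, 4096, 2048)
print(space_intersection(c1, d1, c2, d2))   # ~[5.0, 0.5, 2.5]

In a calibrated system the rotations R1, R2 and centers c1, c2 would come from the boresight/lever-arm-corrected POS solution rather than being set by hand.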


Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4128 ◽  
Author(s):  
Irem Uygur ◽  
Renato Miyagusuku ◽  
Sarthak Pathak ◽  
Alessandro Moro ◽  
Atsushi Yamashita ◽  
...  

Self-localization enables a system to navigate and interact with its environment. In this study, we propose a novel sparse semantic self-localization approach for robust and efficient indoor localization. "Sparse semantic" refers to the detection of sparsely distributed objects such as doors and windows. Our sensor model uses this sparse semantic information to self-localize on a human-readable annotated 2D map. Thus, compared to previous works using point clouds or other dense, large data structures, our method uses a small amount of sparse semantic information, which efficiently reduces uncertainty in real-time localization. Unlike complex 3D reconstructions, the annotated map our method requires can be prepared simply by marking the approximate centers of the annotated objects on a 2D map. The approach is robust to partial obstruction of views and to geometric errors on the map. Localization is performed using low-cost, lightweight sensors: an inertial measurement unit and a spherical camera. We conducted experiments that demonstrate the feasibility and robustness of our approach.
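To make the sparse semantic sensor model concrete, the sketch below shows how bearing-only detections of annotated objects could reweight pose hypotheses in a particle-filter-style localizer on a 2D annotated map. The map contents, class names, and noise parameter are illustrative assumptions, and the IMU-driven prediction step is omitted; this is not the paper's exact formulation.

import numpy as np

# Hypothetical annotated map: approximate object centers per class (meters).
ANNOTATED_MAP = {
    "door":   np.array([[2.0, 5.0], [8.0, 5.0]]),
    "window": np.array([[0.0, 2.5]]),
}

def bearing_likelihood(particle, obs_class, obs_bearing, kappa=8.0):
    # Likelihood of detecting an object of obs_class at obs_bearing (rad,
    # robot frame; a spherical camera makes bearings observable over 360 deg)
    # from a pose hypothesis (x, y, theta) on the 2D annotated map.
    x, y, theta = particle
    landmarks = ANNOTATED_MAP[obs_class]
    expected = np.arctan2(landmarks[:, 1] - y, landmarks[:, 0] - x) - theta
    err = np.angle(np.exp(1j * (expected - obs_bearing)))  # wrap to [-pi, pi]
    best = err[np.argmin(np.abs(err))]   # associate with best-matching object
    return np.exp(kappa * (np.cos(best) - 1.0))  # von Mises-style score

def update_weights(particles, weights, detections):
    # Reweight all pose hypotheses with the current semantic detections;
    # the IMU-based motion (prediction) step of the filter is omitted here.
    for obs_class, obs_bearing in detections:
        for i, p in enumerate(particles):
            weights[i] *= bearing_likelihood(p, obs_class, obs_bearing)
    return weights / weights.sum()

particles = [(1.0, 1.0, 0.0), (6.0, 4.0, np.pi / 2)]
weights = np.ones(len(particles))
weights = update_weights(particles, weights, [("door", 0.8)])

Because each detection constrains only a bearing to a sparsely placed object, a handful of such observations can concentrate the weight on consistent poses without any dense map data.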


2015 ◽  
Author(s):  
Christiano Couto Gava ◽  
Bernd Krolla ◽  
Didier Stricker

2020 ◽  
Vol 86 (12) ◽  
pp. 1014-1019
Author(s):  
Dongxu YANG ◽  
Hiroshi HIGUCHI ◽  
Sarthak PATHAK ◽  
Alessandro MORO ◽  
Atsushi YAMASHITA ◽  
...  
