Attention Mechanism-based Monocular Depth Estimation and Visual Odometry

Author(s):
Qieshi Zhang, Dian Lin, Ziliang Ren, Yuhang Kang, Fuxiang Wu, ...

Author(s):
Lorenzo Andraghetti, Panteleimon Myriokefalitakis, Pier Luigi Dovesi, Belen Luque, Matteo Poggi, ...

2021
Author(s):
Xinyu Qi, Zhijun Fang, Shuqun Yang, Heng Zhou

Author(s):
Mingkang Xiong, Zhenghong Zhang, Weilin Zhong, Jinsheng Ji, Jiyuan Liu, ...

Self-supervised depth and visual odometry (VO) estimators trained on monocular videos without ground truth have drawn significant attention recently. Prior works use photometric consistency as supervision, which is fragile in complex real-world environments due to illumination variations. More importantly, it suffers from scale inconsistency in the depth and pose estimation results. In this paper, robust geometric losses are proposed to address this problem. Specifically, we first align the scales of two depth maps reconstructed from adjacent image frames, and then enforce forward-backward relative pose consistency to formulate scale-consistent geometric constraints. Finally, a novel training framework is constructed to implement the proposed losses. Extensive evaluations on the KITTI and Make3D datasets demonstrate that, i) by incorporating the proposed constraints as supervision, the depth estimation model achieves state-of-the-art (SOTA) performance among self-supervised methods, and ii) the proposed training framework is effective for obtaining a VO model with a uniform global scale.
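The two geometric constraints described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the helper names (`align_scale`, `depth_consistency_loss`, `pose_consistency_loss`), the median-based scale alignment, and the Frobenius-norm pose residual are all assumptions chosen to make the idea concrete.

```python
import numpy as np

def align_scale(depth_a, depth_b):
    # Rescale depth_b so its global scale matches depth_a, using the
    # median ratio (a common choice in self-supervised monocular depth).
    scale = np.median(depth_a) / np.median(depth_b)
    return depth_b * scale

def depth_consistency_loss(depth_a, depth_b_aligned):
    # Symmetric relative depth difference after scale alignment:
    # zero when the two maps agree up to a single global scale.
    diff = np.abs(depth_a - depth_b_aligned)
    return float(np.mean(diff / (depth_a + depth_b_aligned)))

def pose_consistency_loss(T_fwd, T_bwd):
    # Forward-backward relative pose consistency: composing the forward
    # (t -> t+1) and backward (t+1 -> t) 4x4 transforms should give the
    # identity; penalize the deviation.
    residual = T_fwd @ T_bwd - np.eye(4)
    return float(np.linalg.norm(residual))
```

Under this sketch, two depth maps that differ only by a global scale incur zero depth loss once aligned, and a backward pose equal to the inverse of the forward pose incurs zero pose loss, which is exactly the scale-consistent behavior the constraints are meant to enforce.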


Author(s):
Chih-Shuan Huang, Wan-Nung Tsung, Wei-Jong Yang, Chin-Hsing Chen
