Monocular depth estimation based on unsupervised learning

Author(s): Wan Liu, Yan Sun, XuCheng Wang, Lin Yang, Zhenrong Zheng
IEEE Access, 2019, Vol 7, pp. 148142-148151

Author(s): Delong Yang, Xunyu Zhong, Lixiong Lin, Xiafu Peng

Author(s): Chih-Shuan Huang, Wan-Nung Tsung, Wei-Jong Yang, Chin-Hsing Chen

2019, Vol 39 (2), pp. 543-570
Author(s): Mingyang Geng, Suning Shang, Bo Ding, Huaimin Wang, Pengfei Zhang

2021, pp. 108116
Author(s): Shuai Li, Jiaying Shi, Wenfeng Song, Aimin Hao, Hong Qin

Sensors, 2020, Vol 21 (1), pp. 54
Author(s): Peng Liu, Zonghua Zhang, Zhaozong Meng, Nan Gao

Depth estimation is a crucial component of many 3D vision applications. Monocular depth estimation is attracting increasing interest because of its flexibility and extremely low hardware requirements, but the inherently ill-posed and ambiguous nature of the problem still leads to unsatisfactory estimates. This paper proposes a new deep convolutional neural network for monocular depth estimation that applies joint attention feature distillation and a wavelet-based loss function to recover the depth information of a scene. Two improvements are made over previous methods. First, feature distillation and joint attention mechanisms are combined to make feature modulation more discriminative: the network extracts hierarchical features with a progressive feature distillation and refinement strategy and aggregates them with a joint attention operation. Second, a wavelet-based loss function is adopted for network training; it makes the loss more effective by capturing more structural detail. Experimental results on challenging indoor and outdoor benchmark datasets verify the superiority of the proposed method over current state-of-the-art methods.
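The abstract does not spell out the attention block or the wavelet loss, so the PyTorch sketch below is only an illustration of the two ideas under stated assumptions: JointAttentionFusion is a generic CBAM-style channel-plus-spatial attention block standing in for the joint attention operation, and wavelet_loss penalizes L1 differences between the Haar subbands of predicted and ground-truth depth at several scales. All names and parameters (reduction, levels, band_weight) are hypothetical, not the authors' code.

```python
# Illustrative sketch only -- hypothetical names, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointAttentionFusion(nn.Module):
    """CBAM-style channel + spatial attention, used here as a stand-in
    for the paper's joint attention feature aggregation."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from globally average-pooled descriptors.
        w = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3)))).view(b, c, 1, 1)
        x = x * w
        # Spatial attention from channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))

def haar_decompose(x):
    """One level of a 2D Haar transform via fixed strided convolutions.
    x: (B, 1, H, W) with even H, W -> four subbands LL, LH, HL, HH."""
    k = torch.stack([
        torch.tensor([[0.5, 0.5], [0.5, 0.5]]),    # LL: approximation
        torch.tensor([[0.5, 0.5], [-0.5, -0.5]]),  # LH: horizontal detail
        torch.tensor([[0.5, -0.5], [0.5, -0.5]]),  # HL: vertical detail
        torch.tensor([[0.5, -0.5], [-0.5, 0.5]]),  # HH: diagonal detail
    ]).unsqueeze(1).to(x.device, x.dtype)
    return F.conv2d(x, k, stride=2).split(1, dim=1)

def wavelet_loss(pred, target, levels=3, band_weight=1.0):
    """L1 loss on multi-scale Haar subbands: the detail bands carry depth
    discontinuities, so structural errors are penalized directly."""
    loss = F.l1_loss(pred, target)  # plain pixel-space term
    for _ in range(levels):
        (p_ll, *p_hi), (t_ll, *t_hi) = haar_decompose(pred), haar_decompose(target)
        loss = loss + F.l1_loss(p_ll, t_ll)
        loss = loss + band_weight * sum(F.l1_loss(p, t) for p, t in zip(p_hi, t_hi))
        pred, target = p_ll, t_ll  # recurse on the approximation band
    return loss

# Example: loss between a predicted and a ground-truth depth map.
# print(wavelet_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)).item())
```

In a training loop this term would typically be blended with the network's primary depth supervision; the relative weighting of the subband terms is left as an open choice here.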

