Parameterization of Ambiguity in Monocular Depth Prediction

Author(s): Patrik Persson, Linn Öström, Carl Olsson, Kalle Åström
Author(s): Zhaokai Wang, Limin Xiao, Rongbin Xu, Shubin Su, Shupan Li, ...

2020, Vol. 34 (07), pp. 12257-12264
Author(s): Xinlong Wang, Wei Yin, Tao Kong, Yuning Jiang, Lei Li, ...

Monocular depth estimation enables 3D perception from a single 2D image and has therefore attracted research attention for years. Almost all methods treat foreground and background regions ("things and stuff") in an image equally. However, not all pixels are equal: the depth of foreground objects plays a crucial role in 3D object recognition and localization. To date, how to boost the depth prediction accuracy of foreground objects has rarely been discussed. In this paper, we first analyze the data distributions and interaction of foreground and background, then propose the foreground-background separated monocular depth estimation (ForeSeE) method, which estimates foreground and background depth using separate optimization objectives and decoders. Our method significantly improves depth estimation performance on foreground objects. Applying ForeSeE to 3D object detection, we achieve a 7.5 AP gain and set new state-of-the-art results among monocular methods. Code will be available at: https://github.com/WXinlong/ForeSeE.
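The separated-objective idea admits a compact sketch. The function below is an illustrative assumption only (the function name, L1 penalty, and weights are hypothetical; ForeSeE's actual objectives and decoder architecture are in the linked repository): it shows how predictions from two decoders can each be supervised solely on their own region.

```python
def foresee_style_loss(pred_fg, pred_bg, gt, fg_mask, w_fg=1.0, w_bg=1.0):
    """Toy foreground/background-separated depth loss (illustrative only).

    pred_fg, pred_bg: per-pixel depths from the foreground and background
    decoders; gt: ground-truth depths; fg_mask: True where a pixel lies on
    a foreground object. Each decoder is penalized (here with an L1 term,
    a hypothetical choice) only on its own region, so foreground accuracy
    is not averaged away by the much larger background area.
    """
    fg = [abs(p - t) for p, t, m in zip(pred_fg, gt, fg_mask) if m]
    bg = [abs(p - t) for p, t, m in zip(pred_bg, gt, fg_mask) if not m]
    loss_fg = sum(fg) / max(len(fg), 1)
    loss_bg = sum(bg) / max(len(bg), 1)
    return w_fg * loss_fg + w_bg * loss_bg
```

At inference, the same per-pixel mask would select which decoder's output to keep for each region.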


Author(s): Xiaotian Chen, Xuejin Chen, Zheng-Jun Zha

Monocular depth estimation is an essential task for scene understanding. The underlying structure of objects and stuff in a complex scene is critical to recovering accurate and visually pleasing depth maps. Global structure conveys scene layout, while local structure reflects shape details. Recently developed approaches based on convolutional neural networks (CNNs) have significantly improved depth estimation performance. However, few of them account for multi-scale structures in complex scenes. In this paper, we propose a Structure-Aware Residual Pyramid Network (SARPN) that exploits multi-scale structures for accurate depth prediction. We propose a Residual Pyramid Decoder (RPD) which expresses global scene structure at upper levels to represent layouts, and local structure at lower levels to capture shape details. At each level, Residual Refinement Modules (RRM) predict residual maps that progressively add finer structures onto the coarser structure predicted at the level above. To fully exploit multi-scale image features, an Adaptive Dense Feature Fusion (ADFF) module is introduced, which adaptively fuses effective features from all scales when inferring the structure at each scale. Experimental results on the challenging NYU-Depth v2 dataset demonstrate that our approach achieves state-of-the-art performance in both qualitative and quantitative evaluation. The code is available at https://github.com/Xt-Chen/SARPN.
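The coarse-to-fine residual scheme can be sketched in a few lines. This is a simplifying assumption, not SARPN's implementation: the helper names are hypothetical, the residual here is given rather than predicted by a learned CNN module, and nearest-neighbour upsampling stands in for the network's upsampling.

```python
def upsample2x(depth):
    """Nearest-neighbour 2x upsampling of a 2D depth map (list of rows)."""
    out = []
    for row in depth:
        wide = [v for v in row for _ in range(2)]  # repeat each column
        out.append(wide)
        out.append(list(wide))                      # repeat each row
    return out

def refine(coarse, residual):
    """One RRM-style step: upsample the coarse prediction from the level
    above and add the finer-scale residual map for this pyramid level."""
    up = upsample2x(coarse)
    return [[u + r for u, r in zip(ur, rr)] for ur, rr in zip(up, residual)]
```

Iterating this step from the coarsest level downward yields the final full-resolution depth map, with each level contributing progressively finer structure.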


2021
Author(s): Zuria Bauer, Zuoyue Li, Sergio Orts-Escolano, Miguel Cazorla, Marc Pollefeys, ...

Author(s): Haosong Yue, Jinqing Zhang, Xingming Wu, Jianhua Wang, Weihai Chen

2019, Vol. 21 (11), pp. 2701-2713
Author(s): Xin Yang, Yang Gao, Hongcheng Luo, Chunyuan Liao, Kwang-Ting Cheng
