MonoER - An Edge-Refined Self-Supervised Monocular Depth Estimation Method

Author(s):  
Tianyu Xiang ◽  
Lingzhe Zhao ◽  
Hao Zhang ◽  
Zhuping Wang
2021 ◽  
Vol 58 (6) ◽  
pp. 0615005
Author(s):  
Guo Keyou ◽  
Yang Min ◽  
Zhang Mo ◽  
Guo Xiaoli ◽  
Li Xue

2019 ◽  
Vol 58 (34) ◽  
pp. G52
Author(s):  
Sungwon Choi ◽  
Sung-wook Min

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4856 ◽  
Author(s):  
Kang Huang ◽  
Xingtian Qu ◽  
Shouqian Chen ◽  
Zhen Chen ◽  
Wang Zhang ◽  
...  

Accurately sensing the surrounding 3D scene is indispensable for drones and robots executing path planning and navigation. In this paper, a novel monocular depth estimation method is proposed that first uses a lightweight Convolutional Neural Network (CNN) for coarse depth prediction and then refines the coarse depth images under surface normal guidance. Specifically, the coarse depth prediction network is designed as a pre-trained encoder–decoder architecture for describing the 3D structure. For surface normal estimation, the network is designed as a two-stream encoder–decoder structure that hierarchically merges red-green-blue-depth (RGB-D) images to capture more accurate geometric boundaries. With fewer network parameters and a simpler learning structure, the method produces depth maps with better detail than existing state-of-the-art approaches. Moreover, 3D point cloud maps reconstructed from the predicted depth images confirm that our framework can be conveniently adopted as a component of a monocular simultaneous localization and mapping (SLAM) system.
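
As a rough illustration of the two-stream RGB-D fusion idea described in this abstract, the following PyTorch sketch merges an RGB stream and a coarse-depth stream hierarchically at each encoder level before decoding a unit-length surface-normal map. All module and parameter names (TwoStreamNormalNet, base, etc.) are hypothetical; this is a minimal sketch of the general technique, not the authors' actual network.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamNormalNet(nn.Module):
    def __init__(self, base=32):
        super().__init__()
        # Parallel encoders for RGB (3 channels) and coarse depth (1 channel).
        self.rgb_enc = nn.ModuleList([self._block(3, base), self._block(base, base * 2)])
        self.depth_enc = nn.ModuleList([self._block(1, base), self._block(base, base * 2)])
        # 1x1 convolutions that merge the two streams at each encoder level.
        self.fuse = nn.ModuleList([nn.Conv2d(base * 2, base, 1),
                                   nn.Conv2d(base * 4, base * 2, 1)])
        # Decoder undoes the two stride-2 downsamplings and emits 3 channels.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 3, 3, padding=1))

    @staticmethod
    def _block(cin, cout):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                             nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

    def forward(self, rgb, coarse_depth):
        r, d = rgb, coarse_depth
        for rgb_block, depth_block, fuse in zip(self.rgb_enc, self.depth_enc, self.fuse):
            r, d = rgb_block(r), depth_block(d)
            r = fuse(torch.cat([r, d], dim=1))  # hierarchical RGB-D merge
        # Unit-normalize so each pixel is a valid surface normal vector.
        return F.normalize(self.decoder(r), dim=1)

# Usage: normals has the same spatial size as the inputs, shape (2, 3, 64, 64).
net = TwoStreamNormalNet()
normals = net(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))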


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5389
Author(s):  
Chuanxue Song ◽  
Chunyang Qi ◽  
Shixin Song ◽  
Feng Xiao

Depth estimation from a single image is a classic problem in computer vision and is important for 3D scene reconstruction, augmented reality, and object detection. At present, much research focuses on unsupervised monocular depth estimation. This paper proposes two solutions to the current depth estimation problem. The first is a monocular depth estimation method based on uncertainty analysis, which addresses the issue that a neural network, despite its strong expressive ability, cannot evaluate the reliability of its output. The second is a photometric loss function based on the Retinex algorithm, which alleviates the distortion of pixels around moving objects. We objectively compare our method with current mainstream monocular depth estimation methods and obtain satisfactory results.
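
A Retinex-based photometric loss of the kind mentioned above can be sketched as follows, assuming a single-scale Retinex decomposition (log image minus log of its Gaussian-blurred illumination estimate) followed by an L1 photometric term on the resulting reflectance. The function names and parameters here are hypothetical, and the authors' exact formulation may differ.

import torch
import torch.nn.functional as F

def retinex_reflectance(img, ksize=9, sigma=3.0):
    # Single-scale Retinex: reflectance ≈ log(I) - log(Gaussian(I)),
    # where the Gaussian-blurred image acts as the illumination estimate.
    coords = torch.arange(ksize, dtype=img.dtype, device=img.device) - ksize // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    # Depthwise Gaussian kernel, one copy per input channel.
    kernel = (g[:, None] * g[None, :]).repeat(img.shape[1], 1, 1, 1)
    illum = F.conv2d(img, kernel, padding=ksize // 2, groups=img.shape[1])
    return torch.log(img.clamp(min=1e-6)) - torch.log(illum.clamp(min=1e-6))

def retinex_photometric_loss(target, warped):
    # L1 difference between reflectance components, so that illumination
    # changes (e.g., shading shifts around moving objects) contribute less
    # than they would in a raw-intensity photometric loss.
    return (retinex_reflectance(target) - retinex_reflectance(warped)).abs().mean()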


Author(s):  
Chih-Shuan Huang ◽  
Wan-Nung Tsung ◽  
Wei-Jong Yang ◽  
Chin-Hsing Chen

2021 ◽  
pp. 108116
Author(s):  
Shuai Li ◽  
Jiaying Shi ◽  
Wenfeng Song ◽  
Aimin Hao ◽  
Hong Qin

Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 54
Author(s):  
Peng Liu ◽  
Zonghua Zhang ◽  
Zhaozong Meng ◽  
Nan Gao

Depth estimation is a crucial component in many 3D vision applications. Monocular depth estimation is gaining increasing interest due to its flexible use and extremely low system requirements, but its inherently ill-posed and ambiguous nature still leads to unsatisfactory estimates. This paper proposes a new deep convolutional neural network for monocular depth estimation. The network applies joint attention feature distillation and a wavelet-based loss function to recover the depth information of a scene. Two improvements are made over previous methods. First, we combine feature distillation with a joint attention mechanism to boost the discriminability of feature modulation. The network extracts hierarchical features using a progressive feature distillation and refinement strategy and aggregates them with a joint attention operation. Second, we adopt a wavelet-based loss function for network training, which improves the effectiveness of the loss by capturing more structural detail. Experimental results on challenging indoor and outdoor benchmark datasets verify the proposed method's superiority over current state-of-the-art methods.
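
A minimal sketch of a wavelet-based loss in this spirit, assuming a single-level Haar decomposition with up-weighted high-frequency sub-bands; the decomposition level, wavelet choice, and weighting below are illustrative, not necessarily those used in the paper. Inputs are assumed to have even spatial dimensions.

import torch

def haar_decompose(x):
    # One-level 2D Haar transform on a (N, C, H, W) tensor: returns the
    # low-frequency approximation and three high-frequency detail bands.
    a = x[..., 0::2, 0::2]
    b = x[..., 0::2, 1::2]
    c = x[..., 1::2, 0::2]
    d = x[..., 1::2, 1::2]
    ll = (a + b + c + d) / 4  # approximation
    lh = (a - b + c - d) / 4  # horizontal details
    hl = (a + b - c - d) / 4  # vertical details
    hh = (a - b - c + d) / 4  # diagonal details
    return ll, lh, hl, hh

def wavelet_loss(pred, gt, detail_weight=2.0):
    # L1 over Haar sub-bands, with the three high-frequency bands
    # up-weighted to emphasise depth discontinuities and fine structure.
    weights = (1.0, detail_weight, detail_weight, detail_weight)
    return sum(w * (p - g).abs().mean()
               for w, p, g in zip(weights, haar_decompose(pred), haar_decompose(gt)))

# Usage: scalar loss between a predicted and a ground-truth depth map.
loss = wavelet_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))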

