Fast Monocular Depth Estimation via Side Prediction Aggregation with Continuous Spatial Refinement

2022 ◽  
pp. 1-1
Author(s):  
Jipeng Wu ◽  
Rongrong Ji ◽  
Qiang Wang ◽  
Shengchuan Zhang ◽  
Xiaoshuai Sun ◽  
...  
Author(s):  
Chih-Shuan Huang ◽  
Wan-Nung Tsung ◽  
Wei-Jong Yang ◽  
Chin-Hsing Chen

2021 ◽  
pp. 108116
Author(s):  
Shuai Li ◽  
Jiaying Shi ◽  
Wenfeng Song ◽  
Aimin Hao ◽  
Hong Qin

Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 54
Author(s):  
Peng Liu ◽  
Zonghua Zhang ◽  
Zhaozong Meng ◽  
Nan Gao

Depth estimation is a crucial component in many 3D vision applications. Monocular depth estimation is gaining increasing interest due to its flexible use and extremely low system requirements, but its inherently ill-posed and ambiguous nature still leads to unsatisfactory estimation results. This paper proposes a new deep convolutional neural network for monocular depth estimation. The network applies joint attention feature distillation and a wavelet-based loss function to recover the depth information of a scene. Two improvements were made over previous methods. First, we combined feature distillation and joint attention mechanisms to boost feature modulation discrimination. The network extracts hierarchical features using a progressive feature distillation and refinement strategy and aggregates features using a joint attention operation. Second, we adopted a wavelet-based loss function for network training, which improves the effectiveness of the loss function by capturing more structural detail. Experimental results on challenging indoor and outdoor benchmark datasets verified the proposed method's superiority over current state-of-the-art methods.
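The abstract gives no implementation details, but a minimal PyTorch sketch of a wavelet-based depth loss, assuming a single-level Haar decomposition on even-sized depth maps (the decomposition level, subband weighting, and function names are illustrative assumptions, not the paper's), could look like the following.

import torch.nn.functional as F

def haar_decompose(x):
    # One-level 2D Haar decomposition of a (B, 1, H, W) depth map
    # (H and W assumed even). Returns the approximation band and
    # three detail bands (horizontal, vertical, diagonal).
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def wavelet_depth_loss(pred, target, detail_weight=1.0):
    # Standard L1 depth term plus an L1 term on the Haar subbands,
    # so errors in structural (edge) detail are penalized explicitly.
    pixel_loss = F.l1_loss(pred, target)
    band_loss = sum(F.l1_loss(p, t) for p, t in
                    zip(haar_decompose(pred), haar_decompose(target)))
    return pixel_loss + detail_weight * band_loss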


2021 ◽  
Vol 7 (4) ◽  
pp. 61
Author(s):  
David Urban ◽  
Alice Caplier

As difficult vision-based tasks like object detection and monocular depth estimation make their way into real-time applications, and as more lightweight solutions for autonomous vehicle navigation systems emerge, obstacle detection and collision prediction remain two very challenging tasks for small embedded devices like drones. We propose a novel lightweight and time-efficient vision-based solution to predict Time-to-Collision from a monocular video camera embedded in a smartglasses device, as a module of a navigation system for visually impaired pedestrians. It consists of two modules: a static data extractor, made of a convolutional neural network, that predicts the obstacle position and distance, and a dynamic data extractor that stacks the obstacle data from multiple frames and predicts the Time-to-Collision with a simple fully connected neural network. This paper focuses on the Time-to-Collision network's ability to adapt, via supervised learning, to new scenes with different types of obstacles.
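As a rough illustration of the second module, a minimal PyTorch sketch of such a dynamic data extractor follows; the stacked feature layout (n_frames, feats_per_frame), layer sizes, and class name are assumptions for illustration, not the paper's architecture.

import torch
import torch.nn as nn

class TTCHead(nn.Module):
    # Regresses Time-to-Collision from obstacle data (e.g. bounding-box
    # position, size, distance) stacked over the last n_frames frames.
    def __init__(self, n_frames=5, feats_per_frame=5, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_frames * feats_per_frame, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted Time-to-Collision
        )

    def forward(self, stacked_obstacle_data):
        # stacked_obstacle_data: (batch, n_frames * feats_per_frame)
        return self.mlp(stacked_obstacle_data)

# Example: 5 frames, each described by (x, y, w, h, distance).
ttc = TTCHead()(torch.randn(8, 25))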


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Xin Yang ◽  
Qingling Chang ◽  
Xinglin Liu ◽  
Siyuan He ◽  
Yan Cui

2021 ◽  
Vol 11 (12) ◽  
pp. 5383
Author(s):  
Huachen Gao ◽  
Xiaoyu Liu ◽  
Meixia Qu ◽  
Shijie Huang

In recent studies, self-supervised learning methods have been explored for monocular depth estimation. They minimize the image reconstruction loss, rather than depth information, as the supervised signal. However, existing methods usually assume that corresponding points in different views have the same color, which leads to unreliable unsupervised signals and ultimately damages the reconstruction loss during training. Meanwhile, in low-texture regions, disparity values cannot be predicted correctly because few features can be extracted. To solve these issues, we propose a network, PDANet, that integrates perceptual consistency and data augmentation consistency, which are more reliable unsupervised signals, into a regular unsupervised depth estimation model. Specifically, we apply a reliable data augmentation mechanism and minimize the discrepancy between the disparity maps generated from the original image and the augmented image, which enhances the model's robustness to color fluctuations. At the same time, we aggregate the features of different layers extracted by a pre-trained VGG16 network to explore the higher-level perceptual differences between the input image and the generated one. Ablation studies demonstrate the effectiveness of each component, and PDANet shows high-quality depth estimation results on the KITTI benchmark, improving the absolute relative error of the state-of-the-art method from 0.114 to 0.084.
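As a rough illustration of the two consistency terms, a hedged PyTorch sketch follows (assuming a recent torchvision); the VGG16 layer selection, weighting, and function names are assumptions rather than PDANet's actual implementation.

import torch
import torch.nn.functional as F
import torchvision

class PerceptualLoss(torch.nn.Module):
    # Compares two images in VGG16 feature space rather than raw color
    # space; the chosen layer indices and equal weighting are assumptions.
    def __init__(self, layer_ids=(3, 8, 15, 22)):
        super().__init__()
        weights = torchvision.models.VGG16_Weights.IMAGENET1K_V1
        self.vgg = torchvision.models.vgg16(weights=weights).features.eval()
        self.layer_ids = set(layer_ids)
        for p in self.vgg.parameters():
            p.requires_grad = False

    def forward(self, recon, target):
        loss, x, y = 0.0, recon, target
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.layer_ids:
                loss = loss + F.l1_loss(x, y)
        return loss

def augmentation_consistency_loss(disp_original, disp_augmented):
    # Penalizes disagreement between the disparities predicted from the
    # original image and from its photometrically augmented copy.
    return F.l1_loss(disp_original, disp_augmented)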


2021 ◽  
Vol 28 ◽  
pp. 334-338
Author(s):  
Hong-Xiang Chen ◽  
Kunhong Li ◽  
Zhiheng Fu ◽  
Mengyi Liu ◽  
Zonghao Chen ◽  
...  
