Unsupervised Depth Estimation from Monocular Video based on Relative Motion

Author(s):  
Hui Cao ◽  
Chao Wang ◽  
Ping Wang ◽  
Qingquan Zou ◽  
Xiao Xiao
2021 ◽  
Vol 8 (3) ◽  
pp. 15-27
Author(s):  
Mohamed N. Sweilam ◽  
Nikolay Tolstokulakov

Depth estimation has made great progress in the last few years due to its applications in robotics and computer vision. Various methods have been implemented and enhanced to estimate depth without flickers and missing holes. Despite this progress, it remains one of the main challenges for researchers, especially for video applications, where the complexity of the neural network affects the run time. Moreover, using monocular video as input for depth estimation is an attractive idea, particularly for hand-held devices such as mobile phones, which are very popular for capturing pictures and videos yet have a limited amount of RAM. In this work, we focus on enhancing the existing consistent depth estimation for monocular videos approach to use less RAM and fewer parameters without a significant reduction in the quality of the depth estimation.
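Below is a minimal sketch, assuming PyTorch, of one common way to cut the parameter count of a depth-estimation network: swapping a standard 3x3 convolution for a depthwise-separable one. The layer sizes are illustrative assumptions, not taken from the paper.

# Sketch only: parameter reduction via a depthwise-separable convolution.
# Channel counts (256) are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

standard = nn.Conv2d(256, 256, 3, padding=1)
separable = SeparableConv(256, 256)

count = lambda m: sum(p.numel() for p in m.parameters())
print("standard 3x3 conv:", count(standard))    # ~590k parameters
print("separable 3x3 conv:", count(separable))  # ~68k parameters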


2020 ◽  
Vol 6 ◽  
pp. e317
Author(s):  
Dmitrii Maslov ◽  
Ilya Makarov

Autonomous driving depends heavily on depth information for safe driving. Recently, major steps have been taken towards improving both supervised and self-supervised methods for depth reconstruction. However, most current approaches focus on single-frame depth estimation, where the quality limit is hard to beat due to the general limitations of supervised learning with deep neural networks. One way to improve the quality of existing methods is to utilize temporal information from frame sequences. In this paper, we study intelligent ways of integrating a recurrent block into a common supervised depth estimation pipeline. We propose a novel method which takes advantage of the convolutional gated recurrent unit (convGRU) and the convolutional long short-term memory (convLSTM). We compare the use of convGRU and convLSTM blocks and determine the best model for the real-time depth estimation task. We carefully study the training strategy and provide new deep neural network architectures for depth estimation from monocular video that use information from past frames based on an attention mechanism. We demonstrate the efficiency of exploiting temporal information by comparing our best recurrent method with existing image-based and video-based solutions for monocular depth reconstruction.
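Below is a minimal sketch, assuming PyTorch, of a convolutional GRU cell of the kind compared in the paper. It follows the standard convGRU formulation; the channel sizes and the surrounding depth pipeline are assumptions, not the authors' exact architecture.

# Sketch only: a standard convolutional GRU cell for carrying depth
# features across consecutive frames. Sizes are illustrative.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hidden_ch, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # update (z) and reset (r) gates computed jointly from [input, hidden]
        self.gates = nn.Conv2d(in_ch + hidden_ch, 2 * hidden_ch, kernel_size, padding=pad)
        # candidate hidden state
        self.cand = nn.Conv2d(in_ch + hidden_ch, hidden_ch, kernel_size, padding=pad)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

# Example: accumulate temporal context over a short frame sequence.
cell = ConvGRUCell(in_ch=64, hidden_ch=64)
h = torch.zeros(1, 64, 48, 160)                     # initial hidden state
for frame_feat in torch.randn(5, 1, 64, 48, 160):   # 5 consecutive frames
    h = cell(frame_feat, h)                          # h now encodes past frames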


Author(s):  
Andreas Wedel ◽  
Uwe Franke ◽  
Jens Klappstein ◽  
Thomas Brox ◽  
Daniel Cremers

2020 ◽  
Vol 5 (4) ◽  
pp. 6813-6820 ◽  
Author(s):  
Vaishakh Patil ◽  
Wouter Van Gansbeke ◽  
Dengxin Dai ◽  
Luc Van Gool

2021 ◽  
Author(s):  
Mohamed N. Sweilam ◽  
Nikolay Tolstokulakov

Depth estimation has made great progress in the last few years due to its applications in robotics and computer vision. Various methods have been developed and implemented to estimate depth without flickers and missing holes. Despite this progress, it remains one of the main challenges for researchers, especially for video applications, which bring additional difficulties such as the complexity of the neural network, which affects the run time. Moreover, using monocular video as input for depth estimation is an attractive idea, particularly for hand-held devices such as mobile phones, which are nowadays very popular for capturing pictures and videos. In this work, we focus on enhancing the existing consistent depth estimation for monocular videos approach to use less memory and fewer parameters without a significant reduction in the quality of the depth estimation.
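Below is a crude illustration, in Python/NumPy, of the frame-to-frame "flicker" the abstract refers to. A real consistency metric would warp consecutive frames using camera pose or optical flow; this version ignores motion and only conveys the idea, with random arrays standing in for a network's predictions.

# Sketch only: a naive temporal-flicker score for a sequence of depth maps.
import numpy as np

def temporal_flicker(depth_maps):
    """Mean absolute relative change between consecutive depth maps."""
    changes = []
    for prev, curr in zip(depth_maps[:-1], depth_maps[1:]):
        changes.append(np.mean(np.abs(curr - prev) / (prev + 1e-6)))
    return float(np.mean(changes))

# Example with random "predictions" in place of real network output.
depths = [np.random.rand(192, 640) + 0.5 for _ in range(10)]
print("flicker score:", temporal_flicker(depths))  # lower means more stable depth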


Author(s):  
Bridget Carragher ◽  
David A. Bluemke ◽  
Michael J. Potel ◽  
Robert Josephs

We have investigated the feasibility of restoring blurred electron micrographs. Two related problems have been considered: the restoration of images blurred as a result of relative motion between the specimen and the image plane, and the restoration of images which are rotationally blurred about an axis. Micrographs taken while the specimen is drifting result in images which are blurred in the direction of motion. An example of rotational blurring arises in micrographs of thin sections of helical particles viewed in cross section. The twist of the particle within the finite thickness of the section causes the image to appear rotationally blurred about the helical axis. As a result, structural details, particularly at large distances from the helical axis, will be obscured.
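Below is a minimal sketch, in Python/NumPy, of restoring an image blurred by linear relative motion using Wiener deconvolution. The kernel length and noise-to-signal ratio are illustrative assumptions; the abstract does not specify the restoration method at this level of detail.

# Sketch only: linear-motion deblurring via Wiener deconvolution.
import numpy as np

def motion_kernel(length, shape):
    """Horizontal linear-motion blur kernel, zero-padded to the image shape."""
    kern = np.zeros(shape)
    kern[0, :length] = 1.0 / length
    return kern

def wiener_deblur(blurred, kernel, nsr=0.01):
    """Wiener filter H* / (|H|^2 + NSR) applied in the Fourier domain."""
    H = np.fft.fft2(kernel)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F_hat))

# Example: blur a synthetic image with a 9-pixel motion kernel, then recover it.
image = np.random.rand(128, 128)
kernel = motion_kernel(9, image.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))
restored = wiener_deblur(blurred, kernel)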

