Recently Published Documents

Total documents: 56 (five years: 5)
H-index: 11 (five years: 1)

Author(s):  
Ming Yin

Estimating scene depth from a monocular image is an essential step toward semantic image understanding. In practice, many existing methods for this highly ill-posed problem still lack robustness and efficiency. This paper proposes a novel end-to-end depth estimation model that uses skip connections from a pre-trained Xception model for dense feature extraction, together with three new modules designed to improve the upsampling process. In addition, ELU activations and convolutions with smaller kernel sizes are added to improve the pixel-wise regression. Experimental results show that our model has fewer network parameters and a lower error rate than the most advanced networks, while requiring only half the training time. Evaluated on the NYU v2 dataset, the proposed model achieves clearer boundary details with state-of-the-art accuracy and robustness.
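The general shape of such a model can be sketched as a small encoder-decoder with skip connections, ELU activations, and 3×3 convolutions. This is a minimal illustrative sketch, not the paper's architecture: the class names (`UpBlock`, `DepthNet`) are hypothetical, and a tiny three-stage encoder stands in for the pre-trained Xception backbone the abstract describes.

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """Hypothetical upsampling module: bilinear upsample, concatenate
    the encoder skip feature, then a small-kernel conv + ELU."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.ELU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        return self.conv(torch.cat([x, skip], dim=1))

class DepthNet(nn.Module):
    """Toy encoder-decoder with skip connections; the encoder here is a
    stand-in for a pre-trained Xception feature extractor."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ELU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ELU())
        self.enc3 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ELU())
        self.up2 = UpBlock(128, 64, 64)
        self.up1 = UpBlock(64, 32, 32)
        # single-channel head for pixel-wise depth regression
        self.head = nn.Conv2d(32, 1, kernel_size=3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d = self.up2(e3, e2)   # skip connection from second encoder stage
        d = self.up1(d, e1)    # skip connection from first encoder stage
        return self.head(d)
```

For a 64×64 RGB input this sketch produces a 32×32 depth map; a real model would add a final upsampling stage to recover full resolution.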


2020, Vol. 170, pp. 105283
Author(s):  
Payam Khosravinia ◽  
Mohammad Reza Nikpour ◽  
Ozgur Kisi ◽  
Zaher Mundher Yaseen

2019, Vol. 145 (12), pp. 06019011
Author(s):  
Shubing Dai ◽  
Yulei Ma ◽  
Sheng Jin

2018, Vol. 83, pp. 430-442
Author(s):  
Zhenyu Zhang ◽  
Chunyan Xu ◽  
Jian Yang ◽  
Ying Tai ◽  
Liang Chen

Author(s):  
C. Pinard ◽  
L. Chevalley ◽  
A. Manzanera ◽  
D. Filliat

We propose a depth-map inference system for monocular videos, based on a novel navigation dataset that mimics aerial footage from a gimbal-stabilized monocular camera in rigid scenes. Unlike most navigation datasets, the absence of rotation yields an easier structure-from-motion problem, which can be leveraged for tasks such as depth inference and obstacle avoidance. We also propose an architecture for end-to-end depth inference with a fully convolutional network. Results show that although the problem is tied to the camera's intrinsic parameters, it is locally solvable and leads to good-quality depth prediction.
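Why the rotation-free setting simplifies things can be seen with a textbook identity: with a rigid scene and pure lateral translation, two consecutive frames behave like a stereo pair, so depth follows from pixel disparity as z = f·B / d. This is a hedged illustration of that geometric fact, not the paper's network; the helper name and parameters are hypothetical.

```python
import numpy as np

def depth_from_translation(disparity_px, focal_px, baseline_m):
    """Rigid scene, pure lateral translation of magnitude `baseline_m`:
    depth (metres) = focal length (px) * baseline (m) / disparity (px).
    Zero or negative disparity maps to infinite depth."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return np.where(
        disparity_px > 0,
        focal_px * baseline_m / np.maximum(disparity_px, 1e-6),
        np.inf,
    )
```

For example, a 10 px disparity with a 500 px focal length and a 0.2 m translation gives a depth of 10 m; this dependence on the focal length is exactly the tie to camera intrinsics the abstract mentions.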

