A Framework for Depth Estimation and Relative Localization of Ground Robots using Computer Vision

Author(s):  
Romulo T. Rodrigues ◽  
Pedro Miraldo ◽  
Dimos V. Dimarogonas ◽  
A. Pedro Aguiar


2021 ◽  
Vol 8 (3) ◽  
pp. 15-27
Author(s):  
Mohamed N. Sweilam ◽  
Nikolay Tolstokulakov

Depth estimation has made great progress in the last few years owing to its applications in robotics and computer vision. Various methods have been implemented and enhanced to estimate depth without flicker or missing holes. Despite this progress, it remains one of the main challenges for researchers, especially for video applications, where the complexity of the neural network affects the run time. Using monocular video as input for depth estimation is an attractive idea, particularly for hand-held devices such as mobile phones, which are very popular for capturing pictures and videos but have a limited amount of RAM. In this work, we enhance the existing consistent depth estimation approach for monocular videos so that it uses less RAM and fewer parameters without a significant reduction in the quality of the estimated depth.
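As a minimal sketch of how parameter count can be cut in a depth network, the snippet below (PyTorch; an illustrative assumption, not the authors' architecture) compares a standard 3x3 convolution block with a depthwise-separable variant, one common way to shrink the memory footprint and parameter count of a monocular video depth model.

```python
import torch.nn as nn

# Hypothetical illustration (not the paper's architecture): a standard 3x3
# convolution block versus a depthwise-separable variant, a common way to
# reduce parameters in a monocular depth decoder.

def standard_block(in_ch, out_ch):
    return nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

def separable_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),  # depthwise
        nn.Conv2d(in_ch, out_ch, kernel_size=1),                          # pointwise
    )

def count_params(module):
    return sum(p.numel() for p in module.parameters())

if __name__ == "__main__":
    print("standard 3x3:", count_params(standard_block(64, 64)))   # 36,928 parameters
    print("separable   :", count_params(separable_block(64, 64)))  # 4,800 parameters
```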


2020 ◽  
Author(s):  
Chih-Shuan Huang ◽  
Ya-Han Huang ◽  
Din-Yuen Chan ◽  
Jar-Ferr Yang

Abstract Stereo matching is one of the most important topics in computer vision and aims at generating precise depth maps for various smart applications. The major challenge of stereo matching is to suppress the inevitable errors that occur in smooth, occluded and discontinuous regions. In this paper, we propose a robust stereo matching system based on segment-based superpixels to design adaptive matching computation and dual-path refinement. After selecting the matching costs, we suggest segment-based adaptive support weights for cost aggregation, instead of color similarity and spatial proximity, to achieve precise depth estimation. Then, the proposed dual-path depth refinement, which refers to the texture features in a cross-based support region, corrects inaccurate disparities to successively refine the depth maps while preserving shape. Especially in the left-most and right-most regions, the segment-based refinement can greatly reduce mismatched disparity holes. The experimental results show that the proposed system achieves more accurate depth maps than conventional stereo matching methods.
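The following is a minimal sketch of what segment-based cost aggregation could look like; the weighting scheme and the helper `aggregate_cost` are illustrative assumptions, not the paper's exact formulation. Pixels that share the window centre's superpixel label receive full support weight, while other pixels receive a reduced weight.

```python
import numpy as np

# Sketch (an assumption, not the paper's formulation): aggregate a per-pixel
# matching cost volume with segment-based support weights.

def aggregate_cost(cost, segments, radius=5, same_seg_w=1.0, diff_seg_w=0.2):
    """cost: (H, W, D) raw matching costs; segments: (H, W) superpixel labels."""
    H, W, D = cost.shape
    out = np.zeros_like(cost)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            win_seg = segments[y0:y1, x0:x1]
            # Full weight inside the centre pixel's segment, reduced weight outside.
            w = np.where(win_seg == segments[y, x], same_seg_w, diff_seg_w)
            win_cost = cost[y0:y1, x0:x1, :]
            out[y, x, :] = (w[..., None] * win_cost).sum(axis=(0, 1)) / w.sum()
    return out

# disparity = np.argmin(aggregate_cost(cost, segments), axis=2)  # winner-takes-all
```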


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Jiadi Cui ◽  
Lei Jin ◽  
Haofei Kuang ◽  
Qingwen Xu ◽  
Sören Schwertfeger

This paper proposes a method for monocular underwater depth estimation, which is an open problem in robotics and computer vision. To this end, we leverage publicly available in-air RGB-D image pairs for underwater depth estimation in the spherical domain with an unsupervised approach. For this, the in-air images are first style-transferred to the underwater style. Given those synthetic underwater images and their ground-truth depth, we then train a network to estimate the depth. In this way, our learning model obtains depth up to scale without the need for corresponding ground-truth underwater depth data, which is typically not available. We test our approach on style-transferred in-air images as well as on our own real underwater dataset, for which we computed sparse ground-truth depth data via stereopsis. This dataset is provided for download. Experiments with these data against a state-of-the-art in-air network, as well as with different artificial inputs, show that both the style transfer and the depth estimation exhibit promising performance.
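A schematic of the training pipeline described above is sketched below; `style_transfer`, `depth_net`, and the data loader are hypothetical placeholders rather than the authors' released code, and the scale-invariant log loss is one common choice when depth is learned only up to scale.

```python
import torch

# Schematic training loop under stated assumptions: in-air RGB-D pairs are
# style-transferred to an underwater appearance, then a network is trained to
# predict depth (up to scale) from the synthetic underwater images.

def scale_invariant_loss(pred_log, gt_log):
    # Eigen-style scale-invariant log loss.
    d = pred_log - gt_log
    return (d ** 2).mean() - 0.5 * d.mean() ** 2

def train_epoch(in_air_loader, style_transfer, depth_net, optimizer):
    for rgb, depth in in_air_loader:              # in-air RGB-D pairs
        underwater_rgb = style_transfer(rgb)      # synthetic underwater image
        pred = depth_net(underwater_rgb)          # predicted depth (up to scale)
        loss = scale_invariant_loss(torch.log(pred + 1e-6),
                                    torch.log(depth + 1e-6))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```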


2019 ◽  
Vol 11 (17) ◽  
pp. 1990 ◽  
Author(s):  
Mostafa Mansour ◽  
Pavel Davidson ◽  
Oleg Stepanov ◽  
Robert Piché

Binocular disparity and motion parallax are the most important cues for depth estimation in human and computer vision. Here, we present an experimental study to evaluate the accuracy of these two cues for depth estimation of stationary objects in a static environment. Depth estimation via binocular disparity is most commonly implemented using stereo vision, which uses images from two or more cameras to triangulate and estimate distances. We use a commercial stereo camera mounted on a wheeled robot to create a depth map of the environment. The sequence of images obtained by one of these two cameras, as well as the camera motion parameters, serves as the input to our motion parallax-based depth estimation algorithm. The measured camera motion parameters include translational and angular velocities. Reference distance to the tracked features is provided by a LiDAR. Overall, our results show that at short distances stereo vision is more accurate, but at large distances the combination of parallax and camera motion provides better depth estimation. Therefore, by combining the two cues, one obtains depth estimation with greater range than is possible using either cue individually.
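For reference, the two cues reduce to simple pinhole-camera relations; the functions below are a back-of-envelope sketch under standard rectified-stereo and rotation-compensated-flow assumptions, not the paper's full estimator.

```python
# Stereo depth from disparity, and motion-parallax depth from camera translation
# after removing the rotation-induced image motion (illustrative approximations).

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Z = f * B / d for a rectified stereo pair."""
    return f_px * baseline_m / disparity_px

def depth_from_parallax(f_px, translation_m, image_motion_px, rotational_flow_px):
    """Z is approximately f * |t| / (observed flow - rotation-induced flow)."""
    translational_flow = image_motion_px - rotational_flow_px
    return f_px * translation_m / translational_flow

# Example: f = 700 px, baseline = 0.12 m, disparity = 14 px  ->  Z = 6 m
# print(depth_from_disparity(700, 0.12, 14))
```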


Author(s):  
Shubhada Mone ◽  
Nihar Salunke ◽  
Omkar Jadhav ◽  
Arjun Barge ◽  
Nikhil Magar

With the easy availability of technology, smartphones are playing an important role in every person's life. Also, with the advancements in computer vision research, self-driving cars, object recognition, depth-map prediction, and object distance estimation have reached commendable levels of intelligence and accuracy. Combining these research and technological advancements, we can be hopeful of creating a computer-vision-based mobile application that helps guide visually impaired people in performing their day-to-day tasks. With our study, the visually impaired can perform simple tasks like outdoor/indoor navigation without encountering obstacles and can avoid accidental collisions with objects in their surroundings. Currently, there are very few applications that provide such assistance to the visually impaired; using physical tools like sticks remains a very common practice for avoiding obstacles in a visually impaired person's path. Our study focuses on object detection and depth estimation techniques, two of the most popular and advanced fields in intelligent computer vision. We also explore the traditional challenges of, and future prospects for, incorporating these techniques on embedded devices.
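As an illustration of how the two components could be combined on a device, the helper below is hypothetical (not the authors' application code): once a mobile detector returns a bounding box and a monocular depth network returns a depth map, a robust per-object distance can be read out as the median depth inside the box.

```python
import numpy as np

# Hypothetical helper: estimate an obstacle's distance from a detection box
# and a per-pixel depth map by taking the median depth inside the box.

def object_distance(depth_map, box):
    """depth_map: (H, W) metric or relative depth; box: (x1, y1, x2, y2) pixel coords."""
    x1, y1, x2, y2 = box
    region = depth_map[y1:y2, x1:x2]
    return float(np.median(region))

# e.g. trigger an audio warning when object_distance(depth, box) < 1.5 (metres, if metric)
```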


2021 ◽  
Author(s):  
Spiros Jason Hippolyte

The ability to track objects in 3D space is paramount to computer vision and robotics. Improving upon prior work of the M.A.R.S. project to enable more accurate object tracking and ranging required investigation into current techniques of stereo depth estimation, object tracking algorithms, and the use of FPGA platforms. The research focused on aviation, ground vehicle, and robotic applications of stereo computer vision and image processing methods. The implementation of the project design focused on how to obtain greater disparity resolution from the stereo system while minimizing memory resources. The analysis of the optimal method, followed by the coding and debugging of the chosen solution, was performed to ensure interoperability with the existing system and to lay the foundation for further expansion of the system. Comparative analysis of Xilinx FPGA platforms and MATLAB simulation of the concept provided data on hardware resources, improved disparity output, and minimal memory use.


2021 ◽  
Author(s):  
Mohamed N. Sweilam ◽  
Nikolay Tolstokulakov

Depth estimation has made great progress in the last few years owing to its applications in robotics and computer vision. Various methods have been developed and implemented to estimate depth without flicker or missing holes. Despite this progress, it remains one of the main challenges for researchers, especially for video applications, which bring additional difficulties such as the complexity of the neural network, which affects the run time. Using monocular video as input for depth estimation is an attractive idea, particularly for hand-held devices such as mobile phones, which are nowadays very popular for capturing pictures and videos. In this work, we focus on enhancing the existing consistent depth estimation approach for monocular videos so that it uses less memory and fewer parameters without a significant reduction in the quality of the depth estimation.

