Localization and Mapping for Autonomous Navigation in Outdoor Terrains: A Stereo Vision Approach

Author(s):  
Motilal Agrawal ◽  
Kurt Konolige ◽  
Robert Bolles
Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 230
Author(s):  
Xiangwei Dang ◽  
Zheng Rong ◽  
Xingdong Liang

Accurate localization and reliable mapping are essential for the autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Based on vision or LiDAR sensors, great efforts have been devoted to achieving real-time SLAM that can support a robot's state estimation. However, most mature SLAM methods work under the assumption that the environment is static, and in dynamic environments they yield degraded performance or even fail. In this paper, we first quantitatively evaluate the performance of state-of-the-art LiDAR-based SLAM methods, taking into account different patterns of moving objects in the environment. Through semi-physical simulation, we observed that the shape, size, and distribution of moving objects can all significantly impact SLAM performance, and we obtained instructive results from a quantitative comparison between LOAM and LeGO-LOAM. Secondly, based on this investigation, a novel approach named EMO is proposed that eliminates moving objects for SLAM by fusing LiDAR and mmW-radar, with the aim of improving the accuracy and robustness of state estimation. The method exploits the complementary characteristics of the two sensors to fuse sensor information at two different resolutions. Moving objects are efficiently detected by the radar via the Doppler effect, accurately segmented and localized by the LiDAR, synchronized in time and space, and then filtered out of the point clouds through data association. Finally, the point clouds representing the static environment are used as the input to SLAM. The proposed approach is evaluated through experiments using both semi-physical simulation and real-world datasets.
The results demonstrate the method's effectiveness at improving SLAM accuracy (at least a 30% reduction in absolute position error) and robustness in dynamic environments.
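The elimination pipeline the abstract describes — detect movers by radar Doppler, segment them in the LiDAR cloud, and drop the associated points before SLAM — can be sketched as below. The function names, nearest-centroid association rule, and thresholds are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def filter_moving_points(lidar_points, lidar_labels, radar_detections,
                         doppler_thresh=0.5, assoc_radius=1.0):
    """Drop LiDAR points belonging to segments that associate with a
    radar detection showing significant Doppler velocity.

    lidar_points     : (N, 3) array of LiDAR points (sensor frame)
    lidar_labels     : (N,) segment id per point (from clustering)
    radar_detections : list of (position (3,), radial_velocity) tuples
    """
    # 1. Keep only radar detections whose Doppler speed marks them as moving.
    moving = [p for p, v in radar_detections if abs(v) > doppler_thresh]

    # 2. Associate each moving detection with nearby LiDAR segments
    #    (nearest-centroid gating with a fixed radius).
    moving_segments = set()
    for seg_id in np.unique(lidar_labels):
        centroid = lidar_points[lidar_labels == seg_id].mean(axis=0)
        for det_pos in moving:
            if np.linalg.norm(centroid - det_pos) < assoc_radius:
                moving_segments.add(seg_id)
                break

    # 3. Return only the static points as the input cloud for SLAM.
    keep = ~np.isin(lidar_labels, list(moving_segments))
    return lidar_points[keep]
```

In practice the association step also requires the time and extrinsic calibration between the two sensors that the paper emphasizes, since radar and LiDAR sample the scene at different rates and resolutions.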


Robotica ◽  
2018 ◽  
Vol 36 (8) ◽  
pp. 1225-1243 ◽  
Author(s):  
Jose-Pablo Sanchez-Rodriguez ◽  
Alejandro Aceves-Lopez

SUMMARY: This paper presents an overview of the most recent vision-based multi-rotor micro unmanned aerial vehicles (MUAVs) intended for autonomous navigation using a stereoscopic camera. Drone operation is difficult because pilots need expertise to fly them. Pilots have a limited field of view, and unfortunate situations, such as loss of line of sight or collision with objects such as wires and branches, can occur. Autonomous navigation is an even harder challenge than remote-control navigation because the drones must make decisions on their own in real time and simultaneously build maps of their surroundings if none is available. Moreover, MUAVs are limited in useful payload capability and energy consumption. Therefore, a drone must be equipped with small, lightweight sensors. In addition, a drone requires a sufficiently powerful onboard computer so that it can understand its surroundings and navigate accordingly to achieve its goal safely. A stereoscopic camera is considered a suitable sensor because of its three-dimensional (3D) capabilities. Hence, a drone can perform vision-based navigation through object recognition and self-localise inside a map if one is available; otherwise, its autonomous navigation creates a simultaneous localisation and mapping problem.
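As background on the "3D capabilities" mentioned above: a stereoscopic camera recovers depth by triangulation. A feature matched in both images with disparity d yields depth Z = fB/d, where f is the focal length in pixels and B the baseline between the two lenses. A minimal sketch (the example numbers are illustrative):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate metric depth from stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# e.g. a 700 px focal length and 10 cm baseline: a 35 px disparity
# corresponds to a point 2.0 m away.
```

The inverse relationship also explains the payload trade-off the survey discusses: a longer baseline gives better depth resolution at range but a physically larger rig.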


Author(s):  
Abouzahir Mohamed ◽  
Elouardi Abdelhafid ◽  
Bouaziz Samir ◽  
Latif Rachid ◽  
Tajer Abdelouahed

Improved particle-filter-based simultaneous localization and mapping (SLAM) has been developed for many robotic applications. The main purpose of this article is to demonstrate that recent heterogeneous architectures can be used to implement FastSLAM2.0 and can greatly help in designing embedded robotic systems for autonomous navigation. The algorithm is studied, optimized, and evaluated with a real dataset using data from different sensors and a hardware-in-the-loop (HIL) method. The authors implemented the algorithm on an embedded system for robotic applications. Results demonstrate that the optimized FastSLAM2.0 algorithm provides localization consistent with a reference. Such systems are suitable for real-time SLAM applications.
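For context, the resampling step at the heart of particle-filter SLAM can be sketched as below. This is a generic low-variance (systematic) resampler, not the article's optimized FastSLAM2.0 implementation, which additionally refines each particle's proposal with the latest measurement and maintains per-particle landmark filters:

```python
import numpy as np

def low_variance_resample(particles, weights, rng=None):
    """Systematic resampling: draw n particles with probability
    proportional to their importance weights, using a single random
    offset so the draw has low variance."""
    rng = rng or np.random.default_rng(0)
    n = len(particles)
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                  # normalise weights
    positions = (rng.random() + np.arange(n)) / n  # evenly spaced probes
    cumulative = np.cumsum(w)
    indices = np.searchsorted(cumulative, positions)
    return [particles[i] for i in indices]
```

Because this step runs over the whole particle set at every update, it is a natural target for the kind of heterogeneous-architecture optimization the article evaluates.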


2020 ◽  
Vol 171 ◽  
pp. 207-216
Author(s):  
Namburi GNVV Satya Sai Srinath ◽  
Athul Zac Joseph ◽  
S Umamaheswaran ◽  
Ch. Lakshmi Priyanka ◽  
Malavika Nair M ◽  
...  

2020 ◽  
Vol 42 (16) ◽  
pp. 3243-3253 ◽  
Author(s):  
Xiangzhu Zhang ◽  
Lijia Zhang ◽  
Hailong Pei ◽  
Frank L. Lewis

Two common methods exist for solving indoor autonomous navigation and obstacle-avoidance problems using monocular vision: the traditional simultaneous localization and mapping (SLAM) method, which requires complex hardware and heavy computation and is prone to errors in low-texture or dynamic environments; and deep-learning algorithms, which use a fully connected layer for classification or regression, resulting in more model parameters and easy over-fitting. Among the latter, the most advanced indoor navigation algorithm divides a single image frame into multiple parts for prediction, doubling inference time. To solve these problems, we propose a multi-task deep network based on feature-map region division for monocular indoor autonomous navigation. We divide the feature map instead of the original image to avoid repeated information processing. To reduce model parameters, we use convolution instead of a fully connected layer to predict the navigable probability of the left, middle, and right parts. We propose that the linear velocity be determined by combining the three predicted probabilities to reduce collision risk. Experimental evaluation shows that the proposed model is nine times smaller than previous state-of-the-art methods; further, its processing speed and navigation capability increase more than fivefold and 1.6-fold, respectively.
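The idea of combining the three navigable-region probabilities into a motion command can be sketched as follows; the specific combination rule here is an assumption for illustration, not the paper's exact formula:

```python
def motion_command(p_left, p_centre, p_right, v_max=1.0, w_max=1.0):
    """Map the (left, centre, right) navigable probabilities to a
    (linear, angular) velocity pair.

    Linear velocity scales with overall navigability, so low confidence
    in any region slows the robot down; angular velocity steers toward
    the more open side (positive = turn left, ROS convention).
    """
    v = v_max * (p_left + p_centre + p_right) / 3.0
    w = w_max * (p_left - p_right)
    return v, w
```

For example, `motion_command(0.2, 0.5, 0.8)` slows to half speed and turns right, toward the more navigable region, which is the collision-risk reduction the abstract describes.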

