A Simultaneous Localization and Mapping (SLAM) Framework for 2.5D Map Building Based on Low-Cost LiDAR and Vision Fusion

2019 ◽  
Vol 9 (10) ◽  
pp. 2105 ◽  
Author(s):  
Guolai Jiang ◽  
Lei Yin ◽  
Shaokun Jin ◽  
Chaoran Tian ◽  
Xinbo Ma ◽  
...  

The method of simultaneous localization and mapping (SLAM) using a light detection and ranging (LiDAR) sensor is commonly adopted for robot navigation. However, consumer robots are price-sensitive and often have to use low-cost sensors. Due to the poor performance of a low-cost LiDAR, error accumulates rapidly during SLAM, which can cause large errors when building a larger map. To cope with this problem, this paper proposes a new graph-optimization-based SLAM framework that combines a low-cost LiDAR sensor with a vision sensor. In the SLAM framework, a new cost function considering both scan and image data is proposed, and the Bag of Words (BoW) model with visual features is applied for loop closure detection. A 2.5D map representing both obstacles and visual features is also proposed, together with a fast relocation method based on this map. Experiments were conducted on a service robot equipped with a 360° low-cost LiDAR and a front-view RGB-D camera in a real indoor scene. The results show that the proposed method performs better than using the LiDAR or camera alone, and that relocation with our 2.5D map is much faster than with a traditional grid map.
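The BoW-based loop closure detection mentioned in the abstract can be illustrated with a minimal sketch: each keyframe's visual descriptors are quantized into visual-word IDs, summarized as a normalized histogram, and a loop is declared when a past keyframe's histogram is sufficiently similar to the current one. This is a simplified illustration, not the authors' implementation; the vocabulary, similarity measure, and threshold are assumptions.

```python
import numpy as np

def bow_histogram(word_ids, vocab_size):
    """Build an L2-normalized bag-of-words histogram from visual-word IDs."""
    hist = np.bincount(word_ids, minlength=vocab_size).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def detect_loop_closure(query_ids, keyframe_ids_list, vocab_size, threshold=0.8):
    """Return the index of the best-matching past keyframe if its cosine
    similarity to the query exceeds the threshold, else None."""
    q = bow_histogram(query_ids, vocab_size)
    best_idx, best_sim = None, threshold
    for i, kf_ids in enumerate(keyframe_ids_list):
        sim = float(np.dot(q, bow_histogram(kf_ids, vocab_size)))
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx
```

In a full system the candidate returned here would still be verified geometrically (e.g., by scan matching) before a loop-closure edge is added to the pose graph.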

Author(s):  
Zewen Xu ◽  
Zheng Rong ◽  
Yihong Wu

In recent years, simultaneous localization and mapping in dynamic environments (dynamic SLAM) has attracted significant attention from both academia and industry. Some pioneering work on this technique has expanded the potential of robotic applications. Compared to standard SLAM under the static world assumption, dynamic SLAM divides features into static and dynamic categories and leverages each type of feature properly. Therefore, dynamic SLAM can provide more robust localization for intelligent robots that operate in complex dynamic environments. Additionally, to meet the demands of some high-level tasks, dynamic SLAM can be integrated with multiple object tracking. This article presents a survey on dynamic SLAM from the perspective of feature choices. A discussion of the advantages and disadvantages of different visual features is provided in this article.
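The static/dynamic feature split that this survey centers on can be sketched in its simplest form: predict where each tracked feature should land if it were static, given the estimated camera motion, and label features whose residual exceeds a threshold as dynamic. The 2-D image-space motion model and threshold here are simplifying assumptions for illustration; real systems use epipolar or reprojection constraints on calibrated geometry.

```python
import numpy as np

def classify_features(prev_pts, curr_pts, camera_motion, threshold=2.0):
    """Label tracked features 'static' or 'dynamic' by comparing observed
    displacement to the displacement predicted from camera motion alone.

    camera_motion: (R, t), a 2x2 rotation and 2-vector translation in
    image space -- a deliberately simplified stand-in for full geometry.
    """
    R, t = camera_motion
    predicted = prev_pts @ R.T + t                      # where static points should land
    residual = np.linalg.norm(curr_pts - predicted, axis=1)
    return np.where(residual < threshold, "static", "dynamic")
```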


2017 ◽  
Vol 36 (12) ◽  
pp. 1363-1386 ◽  
Author(s):  
Patrick McGarey ◽  
Kirk MacTavish ◽  
François Pomerleau ◽  
Timothy D Barfoot

Tethered mobile robots are useful for exploration in steep, rugged, and dangerous terrain. A tether can provide a robot with robust communications, power, and mechanical support, but also constrains motion. In cluttered environments, the tether will wrap around a number of intermediate ‘anchor points’, complicating navigation. We show that by measuring the length of tether deployed and the bearing to the most recent anchor point, we can formulate a tethered simultaneous localization and mapping (TSLAM) problem that allows us to estimate the pose of the robot and the positions of the anchor points, using only low-cost, nonvisual sensors. This information is used by the robot to safely return along an outgoing trajectory while avoiding tether entanglement. We are motivated by TSLAM as a building block to aid conventional camera- and laser-based approaches to simultaneous localization and mapping (SLAM), which tend to fail in dark and/or dusty environments. Unlike conventional range-bearing SLAM, the TSLAM problem must account for the fact that the tether-length measurements are a function of the robot’s pose and all the intermediate anchor-point positions. While this fact has implications for the sparsity that can be exploited in our method, we show that a solution to the TSLAM problem can still be found and formulate two approaches: (i) an online particle filter based on FastSLAM and (ii) an efficient, offline batch solution. We demonstrate that either method outperforms odometry alone, both in simulation and in experiments using our TReX (Tethered Robotic eXplorer) mobile robot operating in flat-indoor and steep-outdoor environments. For the indoor experiment, we compare each method using the same dataset with ground truth, showing that batch TSLAM outperforms particle-filter TSLAM in localization and mapping accuracy, owing to superior anchor-point detection, data association, and outlier rejection.
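The key observation above — that the tether-length measurement depends on the robot pose and every intermediate anchor point — can be made concrete with a toy measurement model: predicted length is the wrapped tether between successive anchors plus the free segment to the robot, and bearing is measured to the most recent anchor in the robot frame. This is an illustrative model under idealized (taut, planar) assumptions, not the authors' formulation.

```python
import numpy as np

def tether_measurement(robot_pose, anchors):
    """Predict (tether_length, bearing_to_last_anchor).

    robot_pose: (x, y, theta); anchors: ordered (x, y) points from the
    tether base to the most recent wrap point. Assumes a taut tether.
    """
    x, y, theta = robot_pose
    pts = np.asarray(anchors, dtype=float)
    # Tether wrapped between successive anchor points...
    wrapped = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    # ...plus the free segment from the last anchor to the robot.
    dx, dy = pts[-1, 0] - x, pts[-1, 1] - y
    length = wrapped + np.hypot(dx, dy)
    # Bearing to the last anchor, expressed in the robot frame.
    bearing = np.arctan2(dy, dx) - theta
    return length, bearing
```

Because `length` sums over all anchors, a measurement Jacobian in a filter or batch solver couples the robot pose to every anchor estimate — the sparsity limitation the abstract refers to.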


Robotica ◽  
2007 ◽  
Vol 25 (2) ◽  
pp. 175-187 ◽  
Author(s):  
Staffan Ekvall ◽  
Danica Kragic ◽  
Patric Jensfelt

The problem studied in this paper is a mobile robot that autonomously navigates in a domestic environment, builds a map as it moves along, and localizes its position in it. In addition, the robot detects predefined objects, estimates their position in the environment, and integrates this with the localization module to automatically put the objects in the generated map. Thus, we demonstrate one of the possible strategies for the integration of spatial and semantic knowledge in a service robot scenario, where a simultaneous localization and mapping (SLAM) system and an object detection/recognition system work in synergy to provide a richer representation of the environment than would be possible with either of the methods alone. Most SLAM systems build maps that are only used for localizing the robot. Such maps are typically based on grids or different types of features such as points and lines. The novelty is the augmentation of this process with an object-recognition system that detects objects in the environment and puts them in the map generated by the SLAM system. The metric map is also split into topological entities corresponding to rooms. In this way, the user can command the robot to retrieve a certain object from a certain room. We present the results of map building and an extensive evaluation of the object detection algorithm performed in an indoor setting.
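Anchoring a detected object in the map, as described above, reduces to a frame transform: the detector gives the object's position relative to the robot, and the SLAM pose estimate carries it into world coordinates. A minimal sketch, with a range/bearing detection format assumed for illustration:

```python
import math

def object_to_map(robot_pose, obj_range, obj_bearing):
    """Place an object detection (range and bearing in the robot frame)
    into world/map coordinates using the SLAM pose estimate.

    robot_pose: (x, y, theta) in the map frame.
    """
    x, y, theta = robot_pose
    return (x + obj_range * math.cos(theta + obj_bearing),
            y + obj_range * math.sin(theta + obj_bearing))
```

The resulting map coordinate can then be attributed to the topological room entity that contains it, enabling commands like "fetch the cup from the kitchen".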


Sensors ◽  
2017 ◽  
Vol 17 (4) ◽  
pp. 802 ◽  
Author(s):  
Elena López ◽  
Sergio García ◽  
Rafael Barea ◽  
Luis Bergasa ◽  
Eduardo Molinos ◽  
...  

Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2193 ◽  
Author(s):  
Xiao Chen ◽  
Weidong Hu ◽  
Lefeng Zhang ◽  
Zhiguang Shi ◽  
Maisi Li

2011 ◽  
Vol 366 ◽  
pp. 90-94
Author(s):  
Ying Min YI ◽  
Yu Hui

How to identify objects is a key issue for robot simultaneous localization and mapping (SLAM) with monocular vision. In this paper, an algorithm for a wheeled robot's simultaneous localization and mapping with landmark identification based on monocular vision is proposed. In the observation step, landmark identification and position estimation are performed by image processing and analysis, which converts the camera's image projection and the geometric relations of spatial objects into the robot's relative distance and angle to each landmark. The overall algorithm follows the recursive order of prediction, observation, data association, update, and mapping to achieve simultaneous localization and map building. Compared with active-vision, three-dimensional-vision, and stereo-vision algorithms, the proposed algorithm is able to identify environmental objects and achieve smooth movement.
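The projection-to-range/angle conversion described in the observation step can be sketched with basic pinhole geometry: for a landmark whose base lies on the ground plane, triangle similarity gives the distance from the vertical pixel offset, and the horizontal pixel offset gives the bearing. This is simplified illustrative geometry under a level-camera assumption, not the paper's exact derivation.

```python
import math

def landmark_relative_pose(u, v, fx, fy, cx, cy, cam_height):
    """Estimate (distance, bearing) to a landmark whose base projects to
    pixel (u, v), for a level pinhole camera at height cam_height with
    focal lengths (fx, fy) and principal point (cx, cy)."""
    if v <= cy:
        raise ValueError("ground point must project below the principal point")
    distance = fy * cam_height / (v - cy)   # ground-plane triangle similarity
    bearing = math.atan2(u - cx, fx)        # horizontal angle off the optical axis
    return distance, bearing
```

The (distance, bearing) pair is exactly the observation that a range-bearing EKF-style update in the prediction/observation/update loop consumes.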


2021 ◽  
Vol 13 (14) ◽  
pp. 2720
Author(s):  
Shoubin Chen ◽  
Baoding Zhou ◽  
Changhui Jiang ◽  
Weixing Xue ◽  
Qingquan Li

LiDAR (light detection and ranging), as an active sensor, is investigated in simultaneous localization and mapping (SLAM) systems. Typically, a LiDAR SLAM system consists of a front-end odometry module and a back-end optimization module. Loop closure detection and pose graph optimization are the key factors determining the performance of a LiDAR SLAM system. However, the LiDAR works at a single wavelength (905 nm), and few textures or visual features can be extracted, which restricts the performance of loop closure detection and graph optimization based on point cloud matching. With the aim of improving LiDAR SLAM performance, this paper proposes a LiDAR and visual SLAM back end, which utilizes LiDAR geometry features and visual features to accomplish loop closure detection. First, a bag of words (BoW) model describing visual similarities is constructed to assist loop closure detection; second, point cloud re-matching is conducted to verify the loop closures and accomplish graph optimization. Experiments with different datasets were carried out to assess the proposed method, and the results demonstrate that including visual features effectively helps loop closure detection and improves LiDAR SLAM performance. In addition, the source code is open source and available on request from the corresponding author.
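The pose graph optimization that verified loop closures feed into can be illustrated in its simplest linear form: odometry edges chain consecutive poses, a loop-closure edge constrains two distant poses, and least squares redistributes the accumulated drift. A toy 1-D version, with poses reduced to scalars for clarity (real back ends optimize over SE(2)/SE(3)):

```python
import numpy as np

def optimize_pose_graph(odometry, loops):
    """Linear 1-D pose-graph optimization with x_0 fixed at 0.

    odometry: measured x_{i+1} - x_i for consecutive poses.
    loops: (i, j, measured x_j - x_i) loop-closure constraints.
    Returns the least-squares pose estimates x_0..x_n.
    """
    n = len(odometry)                     # unknowns are x_1..x_n
    rows, rhs = [], []
    def edge(i, j, meas):
        r = np.zeros(n)
        if i > 0:
            r[i - 1] -= 1.0
        if j > 0:
            r[j - 1] += 1.0
        rows.append(r)
        rhs.append(meas)
    for i, d in enumerate(odometry):      # odometry edges
        edge(i, i + 1, d)
    for i, j, d in loops:                 # loop-closure edges
        edge(i, j, d)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.concatenate([[0.0], sol])
```

With drifting odometry summing to 3.3 but a loop closure asserting a net displacement of 3.0, the solver spreads the 0.3 of drift evenly across the chain instead of dumping it at the end.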

