Comparison and Analysis of LIDAR-based SLAM Frameworks in Dynamic Environments with Moving Objects

Author(s):  
Jimyeong Woo ◽  
Hyein Jeong ◽  
Heoncheol Lee

Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 230 ◽
Author(s):  
Xiangwei Dang ◽  
Zheng Rong ◽  
Xingdong Liang

Accurate localization and reliable mapping are essential for the autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Based on vision or LiDAR sensors, great efforts have been devoted to achieving real-time SLAM that can support a robot's state estimation. However, most mature SLAM methods work under the assumption that the environment is static; in dynamic environments they yield degraded performance or even fail. In this paper, we first quantitatively evaluate the performance of state-of-the-art LiDAR-based SLAM methods, taking into account different patterns of moving objects in the environment. Through semi-physical simulation, we observed that the shape, size, and distribution of moving objects can all significantly impact the performance of SLAM, and we obtained instructive findings from a quantitative comparison between LOAM and LeGO-LOAM. Secondly, based on this investigation, we propose a novel approach named EMO that eliminates moving objects for SLAM by fusing LiDAR and mmW-radar, with the aim of improving the accuracy and robustness of state estimation. The method exploits the complementary characteristics of the two sensors to fuse sensor information at two different resolutions. Moving objects are efficiently detected by the radar based on the Doppler effect, accurately segmented and localized by the LiDAR, and then filtered out of the point clouds through data association and accurate synchronization in time and space. Finally, the point clouds representing the static environment are used as the input of SLAM. The proposed approach is evaluated through experiments using both semi-physical simulation and real-world datasets. The results demonstrate the effectiveness of the method at improving SLAM accuracy (at least a 30% decrease in absolute position error) and robustness in dynamic environments.
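To make the fusion idea concrete, the following minimal sketch shows one way moving objects could be culled from a LiDAR cloud using radar Doppler measurements. The function name, thresholds, and the brute-force association are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def filter_moving_points(lidar_xyz, radar_xyz, radar_doppler,
                         v_thresh=0.5, assoc_radius=1.0):
    """Remove LiDAR points associated with moving radar detections.

    lidar_xyz:     (N, 3) LiDAR points, already synchronized in time and
                   transformed into the same frame as the radar.
    radar_xyz:     (M, 3) radar detections in that frame.
    radar_doppler: (M,)   radial (Doppler) velocities in m/s.

    Detections whose Doppler speed exceeds v_thresh are treated as moving
    objects; LiDAR points within assoc_radius of any such detection are
    dropped, and the remaining static points are handed to SLAM.
    """
    moving = radar_xyz[np.abs(radar_doppler) > v_thresh]
    if moving.size == 0:
        return lidar_xyz
    # Brute-force association for clarity; a KD-tree would be used at scale.
    d = np.linalg.norm(lidar_xyz[:, None, :] - moving[None, :, :], axis=2)
    return lidar_xyz[d.min(axis=1) > assoc_radius]
```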


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3837 ◽  
Author(s):  
Junjie Zeng ◽  
Rusheng Ju ◽  
Long Qin ◽  
Yue Hu ◽  
Quanjun Yin ◽  
...  

In this paper, we propose a novel Deep Reinforcement Learning (DRL) algorithm that can navigate non-holonomic robots with continuous control in unknown dynamic environments with moving obstacles. We call the approach MK-A3C (Memory and Knowledge-based Asynchronous Advantage Actor-Critic) for short. As its first component, MK-A3C builds a GRU-based memory neural network to enhance the robot's capability for temporal reasoning. Robots without such memory tend to behave irrationally in the face of incomplete and noisy state estimates in complex environments, whereas robots endowed with it by MK-A3C can avoid local-minimum traps by estimating the environmental model. Secondly, MK-A3C combines a domain-knowledge-based reward function with a transfer-learning-based training task architecture, which addresses the policy non-convergence problem caused by sparse rewards. With these improvements, MK-A3C can efficiently navigate robots in unknown dynamic environments and satisfy kinematic constraints while handling moving objects. Simulation experiments show that, compared with existing methods, MK-A3C achieves successful robotic navigation in unknown and challenging environments by outputting continuous acceleration commands.
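The GRU-based memory component can be illustrated with a small actor-critic network. The sketch below is a generic PyTorch rendering of the idea under assumed layer sizes, not the MK-A3C architecture itself.

```python
import torch
import torch.nn as nn

class GRUActorCritic(nn.Module):
    """GRU-based memory network for an A3C-style continuous-control agent
    (a generic sketch; layer sizes and heads are assumptions)."""

    def __init__(self, obs_dim, action_dim, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.gru = nn.GRUCell(hidden, hidden)        # temporal memory
        self.mu = nn.Linear(hidden, action_dim)      # mean of Gaussian policy
        self.log_std = nn.Parameter(torch.zeros(action_dim))
        self.value = nn.Linear(hidden, 1)            # critic head

    def forward(self, obs, h):
        # obs: (B, obs_dim); h: (B, hidden) recurrent state carried across steps.
        h = self.gru(self.encoder(obs), h)
        return self.mu(h), self.log_std.exp(), self.value(h), h
```

The recurrent hidden state carried across time steps is what gives the agent its temporal reasoning and helps it escape local-minimum traps.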


Robotica ◽  
2019 ◽  
Vol 38 (2) ◽  
pp. 256-270 ◽  
Author(s):  
Jiyu Cheng ◽  
Yuxiang Sun ◽  
Max Q.-H. Meng

Summary. Visual simultaneous localization and mapping (visual SLAM) has developed considerably in recent decades. To facilitate tasks such as path planning and exploration, traditional visual SLAM systems usually provide mobile robots with a geometric map, which overlooks semantic information. To address this problem, inspired by the recent success of deep neural networks, we combine one with a visual SLAM system to conduct semantic mapping: both geometric and semantic information are projected into 3D space to generate a 3D semantic map. We also use an optical-flow-based method to handle moving objects, so that our method can work robustly in dynamic environments. We performed experiments on the public TUM dataset and on our recorded office dataset. Experimental results demonstrate the feasibility and impressive performance of the proposed method.
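As an illustration of optical-flow-based moving-object handling, the sketch below flags pixels whose flow magnitude departs strongly from the scene median, which is dominated by ego-motion in mostly static scenes. The Farneback flow and the threshold are assumptions for illustration, not the authors' exact formulation.

```python
import cv2
import numpy as np

def dynamic_mask(prev_gray, gray, mag_offset=2.0):
    """Return a boolean (H, W) mask of likely dynamic pixels.

    Dense flow between consecutive grayscale frames is computed; pixels
    whose flow magnitude exceeds the global median by mag_offset are
    flagged as moving and can be excluded from tracking and mapping.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    return mag > np.median(mag) + mag_offset
```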


2021 ◽  
Vol 11 (9) ◽  
pp. 3938 ◽
Author(s):  
Shusheng Bi ◽  
Chang Yuan ◽  
Chang Liu ◽  
Jun Cheng ◽  
Wei Wang ◽  
...  

By moving a commercial 2D LiDAR, 3D maps of the environment can be built from the 2D LiDAR data and the LiDAR's movements. Compared to a commercial 3D LiDAR, a moving 2D LiDAR is more economical. For a moving 2D LiDAR to perform well, however, a series of problems must be solved, chief among them accuracy and real-time performance. Solving these problems requires estimating the movements of the 2D LiDAR and identifying and removing moving objects in the environment; more specifically, it involves calibrating the installation error between the 2D LiDAR and the moving unit, estimating the motion of the moving unit, and identifying moving objects at low scanning frequencies. Since actual applications are mostly dynamic, with a moving 2D LiDAR travelling among multiple moving objects, we believe that accurately constructing 3D maps in dynamic environments will be an important future research topic for moving 2D LiDARs. Moreover, how to deal with moving objects in a dynamic environment with a moving 2D LiDAR has not been solved by previous research.
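The core geometric step, assembling 3D points from a moving 2D LiDAR given the mounting calibration and the moving unit's pose, can be sketched as follows; frame names and the polar scan format are assumptions for illustration.

```python
import numpy as np

def scan_to_world(ranges, angles, T_unit_world, T_lidar_unit):
    """Place one 2D scan into the 3D world frame.

    ranges, angles: one 2D scan in the LiDAR plane (polar form).
    T_lidar_unit:   4x4 mounting transform (the installation calibration).
    T_unit_world:   4x4 pose of the moving unit at scan time.

    Each scan lies in the LiDAR's x-y plane; chaining the two transforms
    places it in the world frame, so successive scans sweep out a 3D map.
    """
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.zeros_like(ranges),
                    np.ones_like(ranges)], axis=0)   # homogeneous, 4xN
    return (T_unit_world @ T_lidar_unit @ pts)[:3].T  # (N, 3) world points
```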


2021 ◽  
Vol 11 (2) ◽  
pp. 645 ◽
Author(s):  
Xujie Kang ◽  
Jing Li ◽  
Xiangtao Fan ◽  
Hongdeng Jian ◽  
Chen Xu

Visual simultaneous localization and mapping (SLAM) is challenging in dynamic environments, as moving objects can impair camera pose tracking and mapping. This paper introduces a method for robust dense object-level SLAM in dynamic environments that takes a live stream of RGB-D frames as input, detects moving objects, and segments the scene into different objects while simultaneously tracking and reconstructing their 3D structures. The approach provides a new method of dynamic object detection that integrates prior knowledge from a pre-built object model database, object-oriented 3D tracking against the camera pose, and the association between instance segmentation results on the current frame and the object database to find dynamic objects in the current frame. By leveraging the static 3D model for frame-to-model alignment, together with dynamic object culling, camera motion estimation suffers less overall drift. Based on the camera pose estimates and instance segmentation results, an object-level semantic map representation of the world is constructed. Experimental results on the TUM RGB-D dataset, comparing the proposed method to related state-of-the-art approaches, demonstrate that our method achieves similar performance in static scenes and improved accuracy and robustness in dynamic scenes.
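A minimal sketch of the dynamic-object culling step before frame-to-model alignment might look as follows. The inputs, per-pixel instance labels and a set of instance ids judged dynamic by the upstream database association, are hypothetical stand-ins for the paper's components.

```python
import numpy as np

def static_points_for_alignment(depth, instance_ids, dynamic_ids, K):
    """Back-project only static pixels for camera motion estimation.

    depth:        (H, W) depth image in metres.
    instance_ids: (H, W) per-pixel instance labels from segmentation.
    dynamic_ids:  set of instance labels judged dynamic upstream
                  (hypothetical; stands in for the database association).
    K:            3x3 camera intrinsics.

    Returns (N, 3) 3D points of static pixels, so that frame-to-model
    alignment is not corrupted by moving objects.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = (depth > 0) & ~np.isin(instance_ids, list(dynamic_ids))
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)
```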


2018 ◽  
Vol 8 (12) ◽  
pp. 2534 ◽  
Author(s):  
Zhongli Wang ◽  
Yan Chen ◽  
Yue Mei ◽  
Kuo Yang ◽  
Baigen Cai

Generally, the key issues of 2D LiDAR-based simultaneous localization and mapping (SLAM) for indoor applications include data association (DA) and closed-loop detection. In particular, a low-texture environment, in which there are no obvious changes between two consecutive scans, combined with moving objects in the environment, poses great challenges for DA and closed-loop detection, and the accuracy and consistency of SLAM may be badly affected. Little literature addresses this issue. In this paper, a mapping strategy is first exploited to improve the performance of 2D SLAM in dynamic environments. Secondly, a fusion method that combines an IMU sensor with a 2D LiDAR, based on the framework of the extended Kalman filter (EKF), is proposed to enhance performance in low-texture environments. In the front end of the proposed SLAM method, an initial motion estimate is obtained from the output of the EKF and taken as the initial pose for the scan matching problem, which is then optimized by the Levenberg–Marquardt (LM) algorithm. For the back-end optimization, a sparse pose adjustment (SPA) method is employed. To improve accuracy, the grid map is updated with bicubic interpolation for derivative computation. With improvements in both the DA process and the back-end optimization stage, the accuracy and consistency of SLAM results in low-texture environments are enhanced. Qualitative and quantitative experiments with open-loop and closed-loop cases have been conducted and analyzed, confirming that the proposed method is effective in low-texture and dynamic indoor environments.
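The front-end initialization described above, an EKF prediction whose output seeds the LM-based scan matcher, can be sketched for a planar pose as follows; the state layout and noise handling are simplified assumptions.

```python
import numpy as np

def ekf_predict(x, P, v, omega, dt, Q):
    """Minimal EKF prediction step for a planar pose x = [px, py, yaw],
    driven by body-frame velocity v and IMU yaw rate omega (a sketch of
    the front-end initialization, not the full filter).

    The predicted pose serves as the initial guess handed to the
    Levenberg-Marquardt scan matcher.
    """
    px, py, yaw = x
    x_pred = np.array([px + v * np.cos(yaw) * dt,
                       py + v * np.sin(yaw) * dt,
                       yaw + omega * dt])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1.0, 0.0, -v * np.sin(yaw) * dt],
                  [0.0, 1.0,  v * np.cos(yaw) * dt],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q
```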


Author(s):  
P. Irmisch ◽  
D. Baumbach ◽  
I. Ernst

Abstract. Camera-based navigation in dynamic environments with a high content of moving objects is challenging. Keypoint-based localization methods need to reliably reject features that do not belong to the static background, and traditional statistical methods for outlier rejection quickly reach their limits here. A common remedy is to combine the camera with an inertial measurement unit for visual-inertial odometry. Recently, deep-learning-based semantic segmentation has also been successfully applied in camera-based localization to identify features on common objects. In this work, we study the application of mask-based feature selection based on semantic segmentation for robust localization in highly dynamic environments. We focus on visual-inertial odometry, but also investigate a state-of-the-art purely vision-based method as a baseline. For a versatile evaluation, we use challenging self-recorded datasets based on different sensor systems, including a combined dataset of a real-world system and its synthetic clone with a large number of humans for in-depth analysis. We further employ large-scale datasets from pedestrian navigation in a mall with escalator scenes and from vehicle navigation during the day and at night. Our results show that visual-inertial odometry generally performs well in dynamic environments on its own, but shows significant failures in challenging scenes, which are prevented by the segmentation aid.
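Mask-based feature selection of this kind reduces, in essence, to rejecting keypoints that fall on pixels labelled as potentially dynamic classes. The sketch below illustrates the idea with hypothetical label ids; the actual set depends on the segmentation model used.

```python
import numpy as np

# Hypothetical label ids for classes treated as potentially dynamic,
# e.g. person, rider, car; illustrative only.
DYNAMIC_LABELS = {11, 12, 13}

def select_static_keypoints(keypoints, seg_labels):
    """Reject keypoints that land on dynamic-class pixels.

    keypoints:  (N, 2) array of (u, v) pixel coordinates, in-bounds.
    seg_labels: (H, W) per-pixel class labels from semantic segmentation.

    Surviving keypoints are the ones passed on to the visual(-inertial)
    odometry pipeline.
    """
    u = keypoints[:, 0].astype(int)
    v = keypoints[:, 1].astype(int)
    keep = ~np.isin(seg_labels[v, u], list(DYNAMIC_LABELS))
    return keypoints[keep]
```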


Author(s):  
C. Li ◽  
Z. Kang ◽  
J. Yang ◽  
F. Li ◽  
Y. Wang

Abstract. Visual Simultaneous Localization and Mapping (SLAM) systems have been widely investigated because traditional positioning technologies, such as the Global Navigation Satellite System (GNSS), cannot accomplish tasks in restricted environments. However, traditional SLAM methods, which are mostly based on point feature tracking, usually fail in harsh environments: previous work has shown that insufficient feature points caused by missing textures, feature mismatches caused by overly fast camera movements, and abrupt illumination changes will eventually cause state estimation to fail. Meanwhile, pedestrians are unavoidable, introducing spurious feature associations and thus violating the strict SLAM assumption that the unknown environment is static. To cope with the challenges posed by these factors in complex indoor environments, this paper proposes a semantic-assisted Visual-Inertial Odometry (VIO) system for low-textured scenes and highly dynamic environments. A trained U-net is used to detect moving objects, and all feature points in dynamic object areas are eliminated so that moving objects cannot participate in the pose solution, improving robustness in dynamic environments. Finally, constraints from an inertial measurement unit (IMU) are added for low-textured environments. To evaluate the performance of the proposed method, experiments were conducted on the EuRoC and TUM public datasets, and the results demonstrate that our approach is robust in complex indoor environments.
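The feature-culling step can be illustrated as follows: given a per-pixel moving-object probability map from a segmentation network such as the trained U-net, the binary mask is dilated by a safety margin and features inside it are dropped. The threshold and margin are illustrative assumptions.

```python
import cv2
import numpy as np

def cull_dynamic_features(points, probs, thresh=0.5, margin_px=5):
    """Drop feature points that fall inside (dilated) dynamic regions.

    points: (N, 2) feature coordinates (u, v), assumed in-bounds.
    probs:  (H, W) per-pixel moving-object probabilities from the
            segmentation network.

    Dilating the binary mask by margin_px is a conservative safety
    margin around detected objects, so border features near moving
    objects also stay out of the pose solution.
    """
    mask = (probs > thresh).astype(np.uint8)
    kernel = np.ones((2 * margin_px + 1, 2 * margin_px + 1), np.uint8)
    mask = cv2.dilate(mask, kernel)
    u = points[:, 0].astype(int)
    v = points[:, 1].astype(int)
    return points[mask[v, u] == 0]
```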

