Map Construction Based on LiDAR Vision Inertial Multi-Sensor Fusion

2021 ◽  
Vol 12 (4) ◽  
pp. 261
Author(s):  
Chuanwei Zhang ◽  
Lei Lei ◽  
Xiaowen Ma ◽  
Rui Zhou ◽  
Zhenghe Shi ◽  
...  

In order to make up for the shortcomings of individual sensors and provide more reliable state estimation, a multi-sensor fusion framework for simultaneous localization and mapping is proposed in this paper. Firstly, the light detection and ranging (LiDAR) point cloud is screened in the front-end processing to eliminate abnormal points and improve positioning and mapping accuracy. Secondly, to address false detection when the LiDAR is surrounded by repetitive structures, the intensity value of the laser point cloud is used as a screening condition to select robust visual features with high distance confidence and soften the effect of such scenes. Then, the initial factor, registration factor, inertial measurement unit (IMU) factor, and loop factor are inserted into a factor graph, and a factor graph optimization algorithm based on a Bayes tree is used for incremental optimization to realize the data fusion. The algorithm was tested in campus and real road environments. The experimental results show that the proposed algorithm can realize state estimation and map construction with high accuracy and strong robustness.
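The front-end screening step can be sketched as a simple gate on intensity and range; the thresholds below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def screen_points(points, intensities, i_min=5.0, i_max=250.0, r_max=80.0):
    """Keep LiDAR returns whose intensity and range fall inside
    plausible bounds. Thresholds are hypothetical examples; a real
    front end would tune them per sensor."""
    ranges = np.linalg.norm(points, axis=1)
    mask = (intensities >= i_min) & (intensities <= i_max) & (ranges <= r_max)
    return points[mask], intensities[mask]

# toy cloud: one normal return, one saturated, one beyond max range
pts = np.array([[1.0, 2.0, 0.1], [3.0, 0.0, 0.2], [100.0, 0.0, 0.0]])
inten = np.array([120.0, 255.0, 50.0])
kept, _ = screen_points(pts, inten)
```

Only the first return survives: the second is intensity-saturated and the third lies beyond the range gate.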

2019 ◽  
Vol 9 (16) ◽  
pp. 3264 ◽  
Author(s):  
Xujie Kang ◽  
Jing Li ◽  
Xiangtao Fan ◽  
Wenhui Wan

In recent years, low-cost and lightweight RGB and depth (RGB-D) sensors, such as the Microsoft Kinect, have made rich image and depth data widely available, making them very popular in the field of simultaneous localization and mapping (SLAM), which is increasingly used in robotics, self-driving vehicles, and augmented reality. RGB-D SLAM constructs 3D environmental models of natural landscapes while simultaneously estimating camera poses. However, under highly variable illumination and motion blur, long-distance tracking can result in large cumulative errors and scale drift. To address this problem in practical applications, in this study we propose a novel multithreaded RGB-D SLAM framework that incorporates a highly accurate prior terrestrial Light Detection and Ranging (LiDAR) point cloud, which mitigates cumulative errors and improves the system's robustness in large-scale and challenging scenarios. First, we employed deep learning to achieve automatic system initialization and motion recovery when tracking is lost. Next, we used the terrestrial LiDAR point cloud as prior data of the landscape, applied a point-to-surface iterative closest point (ICP) algorithm to realize accurate camera pose control from the LiDAR point cloud, and then expanded its control range during local map construction. Furthermore, an innovative double-window, segment-based map optimization method is proposed to ensure consistency, better real-time performance, and high accuracy of map construction. The proposed method was tested for long-distance tracking and loop closure in two large indoor scenes. The experimental results indicate that the standard deviation of the 3D map construction is 10 cm over a mapping distance of 100 m, compared with the LiDAR ground truth.
Further, the relative cumulative error of the camera in the closed-loop experiments is 0.09%, far lower than that of a typical SLAM algorithm (3.4%). The proposed method was therefore demonstrated to be more robust than the ORB-SLAM2 algorithm in complex indoor environments.
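The ICP alignment at the core of the camera pose control can be illustrated with one linearized Gauss-Newton step of the point-to-plane variant; correspondences and surface normals are assumed given, and the small-angle parameterization below is a common textbook form rather than the authors' implementation:

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One Gauss-Newton step of point-to-plane ICP with known
    correspondences. Returns the 6-vector [rx, ry, rz, tx, ty, tz]
    under a small-angle rotation linearization."""
    A = np.hstack([np.cross(src, normals), normals])   # (N, 6) Jacobian
    b = np.einsum('ij,ij->i', dst - src, normals)      # (N,) plane residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# toy example: destination is source shifted +0.1 m along z, normals
# all along z, so the step should recover tz = 0.1 and nothing else
src = np.random.default_rng(0).uniform(-1, 1, (50, 3))
dst = src + np.array([0.0, 0.0, 0.1])
n = np.tile(np.array([0.0, 0.0, 1.0]), (50, 1))
x = point_to_plane_step(src, dst, n)
```

In a full pipeline this step is iterated with re-matched correspondences until the update is negligible.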


Author(s):  
Linlin Xia ◽  
Ruimin Liu ◽  
Daochang Zhang ◽  
Jingjing Zhang

Abstract Polarized skylight is as fundamental a constituent of passive navigation as the geomagnetic field. With regard to its applicability to outdoor robot localization, a polarized light-aided VINS (visual-inertial navigation system) model dedicated to globally optimized pose estimation and heading correction is constructed. The combined system follows typical visual SLAM (simultaneous localization and mapping) frameworks, and we propose a methodology to fuse global heading measurements with visual and inertial information in a graph-optimization-based estimator. Through a new attribute added to each vertex and heading-error-encoded constraint edges, the heading, as an absolute orientation reference, is estimated by the Berry polarization model and continuously updated in the graph structure. The formulated graph optimization process for multi-sensor fusion is also provided. In campus road experiments on a Bulldog-CX robot platform, results are compared against purely stereo-camera-dependent and VINS-Fusion frameworks, revealing that our design is substantially more accurate, with locally and globally consistent position and attitude estimates. As an essentially passive navigation mode with calibratable drift, the polarized light-aided VINS may therefore be considered a candidate tool for a class of visual SLAM based multi-sensor fusion.


Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5419 ◽  
Author(s):  
Xiao Liu ◽  
Lei Zhang ◽  
Shengran Qin ◽  
Daji Tian ◽  
Shihan Ouyang ◽  
...  

Reducing the cumulative error in simultaneous localization and mapping (SLAM) has long been a key issue. In this paper, to improve the localization and mapping accuracy of ground vehicles, we propose a novel optimized lidar odometry and mapping method using ground plane constraints and SegMatch-based loop detection. Only the lidar point cloud is used to estimate the pose between consecutive frames, without any other sensors such as the Global Positioning System (GPS) or an inertial measurement unit (IMU). Firstly, the ground plane constraints are used to reduce matching errors. Then, based on the more accurate lidar odometry obtained from lidar odometry and mapping (LOAM), SegMatch performs segment matching and loop detection to optimize the global pose. A neighborhood search is also used to accomplish the loop detection task in case of failure. Finally, the proposed method was evaluated and compared with existing 3D lidar SLAM methods. Experimental results show that the proposed method achieves low-drift localization and dense 3D point cloud map construction.
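The ground plane constraint presupposes a plane estimate from candidate ground points; a minimal least-squares fit is sketched below. This is illustrative only: LOAM-style pipelines typically add RANSAC and angle/height gating on top of such a fit.

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares fit of z = a*x + b*y + c to candidate ground
    points; returns the unit plane normal and the height offset c."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, c = coeffs
    normal = np.array([-a, -b, 1.0])
    normal /= np.linalg.norm(normal)
    return normal, c

# flat toy ground at z = 0.2 -> normal ~ [0, 0, 1], offset ~ 0.2
rng = np.random.default_rng(1)
xy = rng.uniform(-5, 5, (200, 2))
pts = np.column_stack([xy, np.full(200, 0.2)])
normal, c = fit_ground_plane(pts)
```

Constraining consecutive poses to agree with this plane removes drift in roll, pitch, and height for a ground vehicle.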


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 520
Author(s):  
Guanghui Xue ◽  
Jinbo Wei ◽  
Ruixue Li ◽  
Jian Cheng

Simultaneous localization and mapping (SLAM) is one of the key technologies enabling coal mine underground operation vehicles to map and localize within complex environments and to realize unmanned, autonomous operation. Many SLAM algorithms have been studied, but mapping accuracy and real-time performance still need further improvement. This paper presents LeGO-LOAM-SC, a SLAM algorithm integrating scan context with Lightweight and Ground-Optimized LiDAR Odometry and Mapping (LeGO-LOAM). The algorithm uses the global descriptor extracted by scan context for loop detection, adds pose constraints obtained by Iterative Closest Point (ICP) to Georgia Tech Smoothing and Mapping (GTSAM) for graph optimization, and constructs a point cloud map while outputting the estimated pose of the mobile vehicle. Tests were carried out on sequence 00 of the KITTI dataset and in a two-storey underground parking lot. The results show that the improved algorithm compensates for the drift of the point cloud map and achieves higher mapping accuracy, better real-time performance, lower resource occupancy, closer agreement between the estimated and real trajectories, and smoother loop closure. CPU occupancy is reduced by 6%, the mean square errors of the absolute trajectory error (ATE) and relative pose error (RPE) are reduced by 55.7% and 50.3%, respectively, translation and rotation accuracy are improved by about 5%, and time consumption is reduced by 2-4%. Accurate map construction and low-drift pose estimation can thus be performed.
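The scan context global descriptor used for loop detection is a polar grid over the x-y plane that stores the maximum point height per (ring, sector) bin. A minimal sketch follows; the bin counts are the common defaults from the scan context literature, not necessarily the paper's settings:

```python
import numpy as np

def scan_context(points, n_rings=20, n_sectors=60, r_max=80.0):
    """Scan-context-style descriptor: max height per polar bin.
    Rings split range [0, r_max); sectors split azimuth [0, 2*pi)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x) + np.pi            # shift into [0, 2*pi]
    keep = r < r_max
    ring = (r[keep] / r_max * n_rings).astype(int)
    sector = np.minimum((theta[keep] / (2 * np.pi) * n_sectors).astype(int),
                        n_sectors - 1)
    desc = np.zeros((n_rings, n_sectors))
    np.maximum.at(desc, (ring, sector), z[keep])
    return desc

# a single point 1 m ahead at height 1.5 m lands in the innermost ring
pts = np.array([[1.0, 0.0, 1.5]])
d = scan_context(pts)
```

Loop candidates are then found by comparing descriptors, typically with a column-shift-invariant distance so that the match tolerates a heading change.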


Author(s):  
Jiahui Huang ◽  
Sheng Yang ◽  
Zishuo Zhao ◽  
Yu-Kun Lai ◽  
Shi-Min Hu

Abstract We present a practical backend for stereo visual SLAM which can simultaneously discover individual rigid bodies and compute their motions in dynamic environments. While recent factor graph based state optimization algorithms have shown their ability to robustly solve SLAM problems by treating dynamic objects as outliers, their dynamic motions are rarely considered. In this paper, we exploit the consensus of 3D motions for landmarks extracted from the same rigid body for clustering, and to identify static and dynamic objects in a unified manner. Specifically, our algorithm builds a noise-aware motion affinity matrix from landmarks, and uses agglomerative clustering to distinguish rigid bodies. Using decoupled factor graph optimization to revise their shapes and trajectories, we obtain an iterative scheme to update both cluster assignments and motion estimation reciprocally. Evaluations on both synthetic scenes and KITTI demonstrate the capability of our approach, and further experiments considering online efficiency also show the effectiveness of our method for simultaneously tracking ego-motion and multiple objects.
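The clustering idea can be illustrated with a toy stand-in for the noise-aware affinity matrix: landmarks whose motion vectors agree within a tolerance are grouped by single-linkage agglomeration, implemented here as connected components over the affinity graph. The tolerance and the Euclidean affinity are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def cluster_by_motion(velocities, tol=0.1):
    """Group landmarks whose 3D motion vectors agree within `tol`
    (single-linkage clustering via connected components)."""
    n = len(velocities)
    diff = np.linalg.norm(velocities[:, None, :] - velocities[None, :, :],
                          axis=-1)
    adj = diff < tol                      # motion affinity graph
    labels = -np.ones(n, dtype=int)
    cur = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        stack, labels[i] = [i], cur       # flood-fill one component
        while stack:
            j = stack.pop()
            for k in np.flatnonzero(adj[j] & (labels < 0)):
                labels[k] = cur
                stack.append(k)
        cur += 1
    return labels

# static landmarks (near-zero motion) vs. one moving rigid body
v = np.array([[0, 0, 0], [0.01, 0, 0], [1.0, 0, 0], [1.02, 0, 0]], float)
labels = cluster_by_motion(v)
```

The static cluster (consistent with ego-motion only) and the moving body fall into two distinct labels, which is the unified static/dynamic identification the abstract describes.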


2021 ◽  
Vol 11 (11) ◽  
pp. 4968
Author(s):  
Wentao Zhang ◽  
Guodong Zhai ◽  
Zhongwen Yue ◽  
Tao Pan ◽  
Ran Cheng

The autonomous positioning of tunneling equipment is key to the intellectualization and robotization of a tunneling face. In this paper, a method based on simultaneous localization and mapping (SLAM) is presented to estimate the body pose of a roadheader and build a navigation map of a roadway. For pose estimation, an RGB-D camera is used to collect images, a pose calculation model of the roadheader is established based on random sample consensus (RANSAC) and iterative closest point (ICP), and a pose graph optimization model with closed-loop constraints is constructed. An iterative equation based on Levenberg-Marquardt is derived, which achieves the optimal estimation of the body pose. For mapping, LiDAR is used to experimentally construct grid maps with open-source algorithms such as Gmapping, Cartographer, Karto, and Hector. A point cloud map, an octree map, and a compound map are experimentally constructed based on the open-source library RTAB-Map. By setting parameters such as the expansion radius of obstacles and the updating frequency of the map, a cost map for the navigation of the roadheader is established. Combined with algorithms such as Dijkstra and timed-elastic-band, simulation experiments show that the combination of the octree map and the cost map can support global path planning and local obstacle avoidance.
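Global planning over the cost map reduces to a shortest-path search on a grid. A minimal Dijkstra sketch on a 4-connected occupancy grid is shown below; the paper's stack would use the ROS planner implementations, so this only illustrates the principle:

```python
import heapq
import numpy as np

def dijkstra(grid, start, goal):
    """Dijkstra shortest path on a 4-connected grid (1 = obstacle).
    Returns the list of cells from start to goal."""
    h, w = grid.shape
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), np.inf):
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and grid[nr, nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal                 # walk predecessors back
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

grid = np.zeros((5, 5), dtype=int)
grid[1:4, 2] = 1                          # wall with gaps at top and bottom
path = dijkstra(grid, (2, 0), (2, 4))
```

Inflating obstacles by the roadheader's radius before the search, as the cost map does, keeps the planned path at a safe clearance.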


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 178 ◽  
Author(s):  
Masashi Mishima ◽  
Hideaki Uchiyama ◽  
Diego Thomas ◽  
Rin-ichiro Taniguchi ◽  
Rafael Roberto ◽  
...  

This paper presents a framework of incremental 3D cuboid modeling using the mapping results of an RGB-D camera-based simultaneous localization and mapping (SLAM) system. This framework is useful for accurately creating cuboid CAD models from a point cloud in an online manner. While performing the RGB-D SLAM, planes are incrementally reconstructed from the point cloud in each frame to create a plane map. Then, cuboids are detected in the plane map by analyzing the positional relationships between the planes, such as orthogonality, convexity, and proximity. Finally, the position, pose, and size of a cuboid are determined by computing the intersection of three perpendicular planes. To suppress false detection of cuboids, the cuboid shapes are incrementally updated with sequential measurements to check their uncertainty. In addition, the drift error of the SLAM is compensated by the registration of the cuboids. As an application of our framework, an augmented reality-based interactive cuboid modeling system was developed. In evaluations in cluttered environments, the precision and recall of the cuboid detection were investigated and compared with a batch-based cuboid detection method, clarifying the advantages of the proposed method.
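The geometric core of the cuboid determination, finding the corner where three mutually perpendicular planes meet, is a 3x3 linear solve of the plane equations n_i . x = d_i. A minimal sketch, not the paper's code:

```python
import numpy as np

def cuboid_corner(normals, offsets):
    """Corner of a cuboid as the intersection of three mutually
    perpendicular planes n_i . x = d_i: stack the normals as rows
    and solve the resulting 3x3 linear system."""
    return np.linalg.solve(np.asarray(normals, float),
                           np.asarray(offsets, float))

# axis-aligned face planes x = 1, y = 2, z = 3 meet at (1, 2, 3)
corner = cuboid_corner(np.eye(3), [1.0, 2.0, 3.0])
```

The three normals also give the cuboid's pose axes, and the gaps between opposite face offsets give its size.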


2017 ◽  
Vol 13 (8) ◽  
pp. 155014771772671
Author(s):  
Jiuqing Wan ◽  
Shaocong Bu ◽  
Jinsong Yu ◽  
Liping Zhong

This article proposes a hybrid dynamic belief propagation for simultaneous localization and mapping in a mobile robot network. The positions of landmarks and the poses of moving robots at each time slot are estimated simultaneously in an online and distributed manner, by fusing the odometry data of each robot with measurements of robot-robot or robot-landmark relative distance and angle. The joint belief state of all robots and landmarks is encoded by a factor graph, and the marginal posterior probability distribution of each variable is inferred by belief propagation. We show how to calculate, broadcast, and update messages between neighboring nodes in the factor graph. Specifically, we combine parametric and nonparametric techniques to tackle the problems arising from non-Gaussian distributions and nonlinear models. Simulation and experimental results on a publicly available dataset show the validity of our algorithm.
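The parametric side of such message passing reduces, in the Gaussian case, to multiplying incoming messages about the same variable. A 1D toy stand-in for the paper's 2D poses, fusing an odometry prediction with a relative-range message:

```python
def gaussian_product(mu1, var1, mu2, var2):
    """Product of two Gaussian messages about the same variable:
    precisions add, and the mean is the precision-weighted average.
    This is the core update of parametric belief propagation."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# odometry says x ~ N(2.0, 0.5); a range message says x ~ N(2.4, 0.5)
mu, var = gaussian_product(2.0, 0.5, 2.4, 0.5)
```

The fused belief sits between the two means with reduced variance; the nonparametric (particle-based) messages in the hybrid scheme handle the cases where this Gaussian form breaks down.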

