TOWARDS GLOBALLY CONSISTENT SCAN MATCHING WITH GROUND TRUTH INTEGRATION

Author(s):  
J. Gailis ◽  
A. Nüchter

Scan-matching-based simultaneous localization and mapping with six-dimensional poses is capable of creating a three-dimensional point cloud map of the environment as well as estimating the six-dimensional path that the vehicle has travelled. Its essence is the registration and matching of sequentially acquired 3D laser scans in a common coordinate frame while moving along a path, in order to provide 6D pose estimates at the respective positions and to create a three-dimensional map of the environment. An approach that could drastically improve the reliability of the acquired data is to integrate available ground truth information. This paper implements such functionality as a contribution to 6D SLAM (simultaneous localization and mapping with 6 DoF) in 3DTK – The 3D Toolkit (Nüchter and Lingemann, 2011) and tests the implementation using real-world datasets.
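3DTK itself is written in C++; purely as an illustrative sketch of the core registration step described above, the following Python fragment (assuming only NumPy and SciPy, not the toolkit's own API) aligns one scan to a reference scan with point-to-point ICP. A globally consistent or ground-truth-anchored variant would additionally relax all scan poses jointly while holding scans with known ground-truth poses fixed.

```python
# Hedged, illustrative sketch of point-to-point ICP between two scans.
# This is not 3DTK's implementation, only the underlying idea.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iters=30):
    """Align scan `src` to reference scan `dst`; both are (N, 3) arrays."""
    tree = cKDTree(dst)
    pose_R, pose_t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                  # nearest-neighbour correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t                       # apply incremental update
        pose_R, pose_t = R @ pose_R, R @ pose_t + t
    return pose_R, pose_t                         # rigid pose of src w.r.t. dst
```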

Author(s):  
P. Shokrzadeh

Abstract. A 3D representation of the environment is vital information for most engineering sciences. However, providing such information with classical surveying approaches demands a considerable amount of time for localizing the sensor in the desired coordinate frame in order to map the environment. A Simultaneous Localization And Mapping (SLAM) algorithm is capable of localizing the sensor and performing the mapping while the sensor moves through the environment. In this paper, SLAM is applied to the data of a lightweight 3D laser scanner, which we call a semi-sparse point cloud because of its unique characteristics arising from the different resolutions in the vertical and horizontal directions. In contrast to most SLAM algorithms, no aiding sensor provides prior motion information. The output of the algorithm is a high-density, geometrically detailed map produced in a short time. The accuracy of the algorithm was estimated in medium-scale simulated outdoor environments in Gazebo and the Robot Operating System (ROS). Given the Velodyne Puck accuracy of 3 cm, the map was generated with approximately 6 cm accuracy.
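To make the "semi-sparse" structure concrete, here is a hedged sketch that organizes a Velodyne-Puck-style scan into a range image with a few coarse vertical rings and fine horizontal bins. The ring count, vertical field of view, and bin width are Puck-like assumptions, not parameters taken from the paper.

```python
# Hedged sketch: organising an unordered "semi-sparse" scan into a 16 x 1800
# range image (coarse vertical, fine horizontal resolution).
import numpy as np

N_RINGS, H_BINS = 16, 1800                 # ~0.2 deg horizontal resolution (assumed)
V_FOV = np.deg2rad(30.0)                   # +/-15 deg vertical field of view (assumed)

def to_range_image(points):
    """points: (N, 3) xyz array -> (16, 1800) range image (0 = no return)."""
    x, y, z = points.T
    rng = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                                      # azimuth in [-pi, pi]
    el = np.arcsin(np.clip(z / np.maximum(rng, 1e-9), -1, 1))  # elevation angle
    ring = np.clip(((el + V_FOV / 2) / V_FOV * N_RINGS).astype(int), 0, N_RINGS - 1)
    col = np.clip(((az + np.pi) / (2 * np.pi) * H_BINS).astype(int), 0, H_BINS - 1)
    img = np.zeros((N_RINGS, H_BINS))
    img[ring, col] = rng
    return img
```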


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 520
Author(s):  
Guanghui Xue ◽  
Jinbo Wei ◽  
Ruixue Li ◽  
Jian Cheng

Simultaneous localization and mapping (SLAM) is one of the key technologies that allow underground coal mine operation vehicles to build maps of complex environments, position themselves, and realize unmanned, autonomous operation. Many SLAM algorithms have been studied by scholars at home and abroad, but mapping accuracy and real-time performance still need further improvement. This paper presents LeGO-LOAM-SC, a SLAM algorithm integrating Scan Context with Lightweight and Ground-Optimized LiDAR Odometry and Mapping (LeGO-LOAM). The algorithm uses the global descriptor extracted by Scan Context for loop detection, adds pose constraints obtained via Iterative Closest Point (ICP) to Georgia Tech Smoothing and Mapping (GTSAM) for graph optimization, constructs a point cloud map, and outputs the estimated pose of the mobile vehicle. Tests on sequence 00 of the KITTI dataset and a field test in a two-storey underground parking lot were carried out. The results show that the proposed improved algorithm compensates for the drift of the point cloud map and achieves higher mapping accuracy, better real-time performance, lower resource occupancy, a closer match between the estimated and real trajectories, and smoother loop closures. CPU occupancy is reduced by 6%, the mean square errors of the absolute trajectory error (ATE) and relative pose error (RPE) are reduced by 55.7% and 50.3% respectively, translation and rotation accuracy are improved by about 5%, and time consumption is reduced by 2~4%. Accurate map construction and low-drift pose estimation can thus be performed.
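The loop-closure step described above amounts to adding an ICP-derived relative-pose constraint to a GTSAM pose graph. The following hedged Python sketch shows that pattern on a toy four-pose chain; the keys, noise sigmas, and poses are illustrative assumptions, not the LeGO-LOAM-SC configuration.

```python
# Hedged sketch: odometry factors plus one ICP-derived loop-closure factor
# in a GTSAM pose graph, then batch optimisation.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Noise sigmas ordered (rotation x3 [rad], translation x3 [m]); values are assumed.
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 3 + [0.1] * 3))
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02] * 3 + [0.05] * 3))

# Anchor the first pose and chain three odometry factors (1 m forward each).
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), odom_noise))
step = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
for i in range(3):
    graph.add(gtsam.BetweenFactorPose3(i, i + 1, step, odom_noise))
    # Initial guesses include a small lateral drift to be corrected.
    initial.insert(i, gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(float(i), 0.05 * i, 0.0)))
initial.insert(3, gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(3.0, 0.2, 0.0)))

# Loop closure: suppose the scan-context descriptor flagged pose 3 as revisiting
# pose 0 and ICP refined the relative transform; add it as a between factor.
icp_rel = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(-3.0, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(3, 0, icp_rel, loop_noise))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(3).translation())   # drift-corrected final pose
```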


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5570
Author(s):  
Yiming Ding ◽  
Zhi Xiong ◽  
Wanling Li ◽  
Zhiguo Cao ◽  
Zhengchun Wang

The combination of biomechanics and inertial pedestrian navigation research provides a very promising approach for pedestrian positioning in environments where the Global Positioning System (GPS) signal is unavailable. However, in practical applications such as fire rescue and indoor security, inertial sensor-based pedestrian navigation systems face various challenges, especially step length estimation errors and heading drift during running and sprinting. In this paper, a trinal-node simultaneous localization and occupancy grid mapping method is proposed, based on two thigh-worn inertial measurement units (IMUs) and one waist-worn IMU. Specifically, gait detection and segmentation are realized by zero-crossing detection of the difference between the thigh pitch angles. A piecewise function between the step length and the probability distribution of the waist horizontal acceleration is established to achieve accurate step length estimation both in regular walking and in drastic motion. In addition, a simultaneous localization and mapping method based on occupancy grids is introduced, which uses the historic trajectory to improve the pedestrian's pose estimation. The experiments show that the proposed trinal-node pedestrian inertial odometer can identify and segment each gait cycle in walking, running, and sprinting. The average step length estimation error is no more than 3.58% of the total travel distance at motion speeds from 1.23 m/s to 3.92 m/s. In combination with the proposed occupancy-grid-based simultaneous localization and mapping method, the localization error is less than 5 m in a single-story building of 2643.2 m2.
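As a minimal sketch of the kind of piecewise step-length model described above, keyed on a statistic of the waist horizontal acceleration and followed by a simple dead-reckoning update, consider the following. The regime break point and the coefficients are placeholders, not the values fitted in the paper.

```python
# Hedged sketch of a piecewise step-length model and a per-step 2-D
# dead-reckoning update. Coefficients and thresholds are illustrative only.
import numpy as np

def step_length(acc_xy, k_walk=0.45, k_run=0.65):
    """acc_xy: (N, 2) waist horizontal acceleration over one gait cycle [m/s^2]."""
    spread = np.linalg.norm(acc_xy, axis=1).std()   # dispersion of acceleration magnitude
    if spread < 2.0:                                # regular-walking regime (assumed threshold)
        return k_walk * np.sqrt(spread)
    return k_run * np.sqrt(spread)                  # running / sprint regime

def dead_reckon(pose_xy, heading, acc_xy):
    """Advance the 2-D pedestrian position by one detected step."""
    L = step_length(acc_xy)
    x, y = pose_xy
    return (x + L * np.cos(heading), y + L * np.sin(heading))
```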


2017 ◽  
Vol 36 (12) ◽  
pp. 1363-1386 ◽  
Author(s):  
Patrick McGarey ◽  
Kirk MacTavish ◽  
François Pomerleau ◽  
Timothy D Barfoot

Tethered mobile robots are useful for exploration in steep, rugged, and dangerous terrain. A tether can provide a robot with robust communications, power, and mechanical support, but also constrains motion. In cluttered environments, the tether will wrap around a number of intermediate ‘anchor points’, complicating navigation. We show that by measuring the length of tether deployed and the bearing to the most recent anchor point, we can formulate a tethered simultaneous localization and mapping (TSLAM) problem that allows us to estimate the pose of the robot and the positions of the anchor points, using only low-cost, nonvisual sensors. This information is used by the robot to safely return along an outgoing trajectory while avoiding tether entanglement. We are motivated by TSLAM as a building block to aid conventional, camera, and laser-based approaches to simultaneous localization and mapping (SLAM), which tend to fail in dark and/or dusty environments. Unlike conventional range-bearing SLAM, the TSLAM problem must account for the fact that the tether-length measurements are a function of the robot’s pose and all the intermediate anchor-point positions. While this fact has implications for the sparsity that can be exploited in our method, we show that a solution to the TSLAM problem can still be found and formulate two approaches: (i) an online particle filter based on FastSLAM and (ii) an efficient, offline batch solution. We demonstrate that either method outperforms odometry alone, both in simulation and in experiments using our TReX (Tethered Robotic eXplorer) mobile robot operating in flat-indoor and steep-outdoor environments. For the indoor experiment, we compare each method using the same dataset with ground truth, showing that batch TSLAM outperforms particle-filter TSLAM in localization and mapping accuracy, owing to superior anchor-point detection, data association, and outlier rejection.
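The coupling the authors point out, that a tether-length measurement depends on the robot pose and every intermediate anchor-point position, can be written as a simple measurement model. The sketch below is an illustration of that model only, not the authors' estimator; a planar (2-D) layout is assumed for brevity.

```python
# Hedged sketch of the tether length and bearing measurement models used to
# illustrate why TSLAM measurements couple the robot pose to all anchor points.
import numpy as np

def predicted_tether_length(robot_xy, anchors_xy):
    """anchors_xy: (K, 2) anchor positions ordered from the tether base outward."""
    pts = np.vstack([anchors_xy, robot_xy])       # base, ..., last anchor, robot
    segs = np.diff(pts, axis=0)                   # straight segments between wrap points
    return np.linalg.norm(segs, axis=1).sum()     # total deployed length

def predicted_bearing(robot_pose, last_anchor_xy):
    """Bearing to the most recent anchor in the robot frame; pose = (x, y, theta)."""
    x, y, th = robot_pose
    dx, dy = last_anchor_xy[0] - x, last_anchor_xy[1] - y
    return np.arctan2(dy, dx) - th
```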


2013 ◽  
Vol 864-867 ◽  
pp. 2792-2798
Author(s):  
Yun Cai ◽  
Hao Li ◽  
Ming Fei Wu ◽  
Biao Yang

In view of the lack of high-precision, high-resolution texture information in three-dimensional laser point clouds, this paper proposes a method for fusing three-dimensional laser point clouds with high-resolution digital images. First, the point cloud is projected onto a spherical image of laser intensity. Then, a rigorous projective transformation between the external digital image and the three-dimensional point cloud is readily constructed through the spherical image and feature information. Finally, the color information from the digital images is assigned to the point cloud. The experiment indicates that the proposed method avoids the difficulty of laying out control points in the field and of manipulating the three-dimensional data in indoor work, while still achieving control of geometric accuracy.
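The final step, assigning image color to the point cloud, can be sketched for a generic 3 × 4 projection matrix as follows. This is a hedged illustration under assumed camera conventions and does not reproduce the paper's rigorous spherical-image-based transformation.

```python
# Hedged sketch: colorize a 3-D point cloud from a digital image given a
# 3x4 projection matrix P (world -> pixel), which is assumed to be known.
import numpy as np

def colorize(points, image, P):
    """points: (N, 3) xyz, image: (H, W, 3), P: (3, 4) -> (N, 3) per-point colors."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    uvw = homog @ P.T                                 # projective pixel coordinates
    z = uvw[:, 2]
    front = z > 1e-6                                  # keep points in front of the camera
    u = np.zeros(len(points), dtype=int)
    v = np.zeros(len(points), dtype=int)
    u[front] = (uvw[front, 0] / z[front]).astype(int)
    v[front] = (uvw[front, 1] / z[front]).astype(int)
    h, w = image.shape[:2]
    valid = front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[valid] = image[v[valid], u[valid]]         # sample image at projected pixels
    return colors
```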


2021 ◽  
Vol 33 (8) ◽  
pp. 2591
Author(s):  
Chaoyang Chen ◽  
Qi He ◽  
Qiubo Ye ◽  
Guangsong Yang ◽  
Cheng-Fu Yang
