A Closed Loop Detection Method for Lidar Simultaneous Localization and Mapping with Light Calibration Information

2020 ◽  
Vol 32 (7) ◽  
pp. 2289
Author(s):  
Qiang Wang ◽  
Yong Zeng ◽  
Yin-Kang Zou ◽  
Quan-Zhen Li

Sensors ◽
2021 ◽  
Vol 21 (22) ◽  
pp. 7612
Author(s):  
Quande Yuan ◽  
Zhenming Zhang ◽  
Yuzhen Pi ◽  
Lei Kou ◽  
Fangfang Zhang

As visual simultaneous localization and mapping (vSLAM) is easily disturbed by changes in camera viewpoint and scene appearance when building a globally consistent map, the robustness and real-time performance of key-frame selection cannot meet the requirements. To solve this problem, a real-time closed-loop detection method based on a dynamic Siamese network is proposed in this paper. First, a dynamic Siamese network-based fast conversion learning model is constructed to handle the impact of external changes on key-frame judgments, and an elementwise convergence strategy is adopted to ensure accurate positioning of key frames in the closed-loop judgment process. Second, a joint training strategy is designed so that the model parameters can be learned offline in parallel from tagged video sequences, which effectively improves the speed of closed-loop detection. Finally, the proposed method is applied experimentally to three typical closed-loop detection scenario datasets, and the experimental results demonstrate the effectiveness and robustness of the proposed method under the interference of complex scenes.
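As a rough illustration of the loop-closure test described above, the sketch below compares a query key frame against stored key frames using descriptors from a shared-weight (Siamese) encoder. The backbone, the cosine-similarity test, and the threshold are illustrative assumptions, not the paper's exact dynamic Siamese model.

```python
# Hedged sketch: loop-closure candidate detection with a Siamese embedding network.
# The architecture and threshold are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Shared-weight CNN that maps a key frame to a unit-length descriptor."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)

def is_loop_closure(encoder: SiameseEncoder,
                    query: torch.Tensor,      # (3, H, W) current key frame
                    keyframes: torch.Tensor,  # (N, 3, H, W) stored key frames
                    threshold: float = 0.85) -> bool:
    """Flag a loop closure if any stored key frame is similar enough to the query."""
    with torch.no_grad():
        q = encoder(query.unsqueeze(0))        # (1, D)
        k = encoder(keyframes)                 # (N, D)
        similarity = (k @ q.T).squeeze(1)      # cosine similarity per key frame
    return bool((similarity > threshold).any())
```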


2014 ◽  
Vol 32 (5) ◽  
pp. 1006-1013 ◽  
Author(s):  
Hui Li ◽  
Liyang Cui ◽  
Zhili Lin ◽  
Lijing Li ◽  
Chunxi Zhang

2019 ◽  
Vol 9 (16) ◽  
pp. 3264 ◽  
Author(s):  
Xujie Kang ◽  
Jing Li ◽  
Xiangtao Fan ◽  
Wenhui Wan

In recent years, low-cost and lightweight RGB and depth (RGB-D) sensors, such as Microsoft Kinect, have made rich image and depth data available, making them very popular in the field of simultaneous localization and mapping (SLAM), which has been increasingly used in robotics, self-driving vehicles, and augmented reality. RGB-D SLAM constructs 3D environmental models of natural landscapes while simultaneously estimating camera poses. However, in environments with highly variable illumination and motion blur, long-distance tracking can result in large cumulative errors and scale drift. To address this problem in practical applications, in this study, we propose a novel multithreaded RGB-D SLAM framework that incorporates a highly accurate prior terrestrial Light Detection and Ranging (LiDAR) point cloud, which can mitigate cumulative errors and improve the system's robustness in large-scale and challenging scenarios. First, we employed deep learning to achieve automatic system initialization and motion recovery when tracking is lost. Next, we used the terrestrial LiDAR point cloud to obtain prior data of the landscape, applied the point-to-surface iterative closest point (ICP) algorithm to realize accurate camera pose control from the previously obtained LiDAR point cloud data, and finally expanded its control range in the local map construction. Furthermore, an innovative double-window, segment-based map optimization method is proposed to ensure consistency, better real-time performance, and high accuracy of map construction. The proposed method was tested for long-distance tracking and closed-loop operation in two different large indoor scenarios. The experimental results indicated that the standard deviation of the 3D map construction is 10 cm over a mapping distance of 100 m, compared with the LiDAR ground truth. Further, the relative cumulative error of the camera in the closed-loop experiments is 0.09%, far lower than that of the typical SLAM algorithm (3.4%). Therefore, the proposed method was demonstrated to be more robust than the ORB-SLAM2 algorithm in complex indoor environments.
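The pose-refinement step against the prior LiDAR map can be pictured as a point-to-plane ICP registration of the camera's depth points onto the terrestrial cloud. The sketch below uses Open3D for this; the correspondence distance, normal-estimation radius, and function layout are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: refining an RGB-D key-frame pose against a prior terrestrial
# LiDAR map with point-to-plane ICP (Open3D). Parameter values are assumptions.
import numpy as np
import open3d as o3d

def refine_pose_against_lidar(depth_points: np.ndarray,   # (N, 3) camera-frame points
                              lidar_points: np.ndarray,   # (M, 3) prior LiDAR map
                              init_pose: np.ndarray) -> np.ndarray:
    """Return a refined 4x4 camera pose aligning depth points to the LiDAR map."""
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(depth_points))
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(lidar_points))
    # Point-to-plane ICP needs surface normals on the target (prior LiDAR) cloud.
    target.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=0.2,   # metres; scene-dependent assumption
        init=init_pose,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation
```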


2018 ◽  
Vol 8 (12) ◽  
pp. 2534 ◽  
Author(s):  
Zhongli Wang ◽  
Yan Chen ◽  
Yue Mei ◽  
Kuo Yang ◽  
Baigen Cai

Generally, the key issues of 2D LiDAR-based simultaneous localization and mapping (SLAM) for indoor applications include data association (DA) and closed-loop detection. In particular, a low-texture environment, in which there are no obvious changes between two consecutive scanning outputs, combined with moving objects poses great challenges to DA and closed-loop detection, and the accuracy and consistency of SLAM may be badly affected. There is not much literature that addresses this issue. In this paper, a mapping strategy is first exploited to improve the performance of 2D SLAM in dynamic environments. Second, a fusion method that combines an IMU sensor with a 2D LiDAR, based on the framework of the extended Kalman filter (EKF), is proposed to enhance performance in low-texture environments. In the front end of the proposed SLAM method, an initial motion estimate is obtained from the output of the EKF and taken as the initial pose for the scan-matching problem, which is then optimized by the Levenberg–Marquardt (LM) algorithm. For the back-end optimization, a sparse pose adjustment (SPA) method is employed. To improve the accuracy, the grid map is updated with the bicubic interpolation method for derivative computation. With the improvements in both the DA process and the back-end optimization stage, the accuracy and consistency of SLAM results in low-texture environments are enhanced. Qualitative and quantitative experiments with open-loop and closed-loop cases have been conducted and the results analyzed, confirming that the proposed method is effective in low-texture and dynamic indoor environments.
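A minimal sketch of the IMU/LiDAR fusion idea is given below: an EKF over a planar pose (x, y, yaw) is predicted from IMU-derived motion and corrected by a scan-matching pose estimate, which then serves as the initial guess for the next scan match. The noise models and the assumption that scan matching observes the full pose are illustrative simplifications, not the paper's exact formulation.

```python
# Hedged sketch: a minimal 2D EKF in the spirit of the IMU + 2D LiDAR fusion above.
# State x = (x, y, yaw); Q and R are illustrative noise covariances.
import numpy as np

def ekf_predict(x, P, v, omega, dt, Q):
    """Propagate the pose with IMU-derived forward velocity v and yaw rate omega."""
    theta = x[2]
    x_pred = x + np.array([v * np.cos(theta) * dt,
                           v * np.sin(theta) * dt,
                           omega * dt])
    F = np.array([[1.0, 0.0, -v * np.sin(theta) * dt],
                  [0.0, 1.0,  v * np.cos(theta) * dt],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Correct the pose with a scan-matching measurement z = (x, y, yaw)."""
    H = np.eye(3)                                  # scan matching observes the full pose
    y = z - H @ x
    y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi    # wrap the yaw residual to [-pi, pi)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(3) - K @ H) @ P
```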


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Shuhuan Wen ◽  
Kamal Mohammed Othman ◽  
Ahmad B. Rad ◽  
Yixuan Zhang ◽  
Yongsheng Zhao

We present a SLAM method with a closed-loop controller for navigation of the NAO humanoid robot from Aldebaran. The method is based on the integration of a laser and a vision system. The camera is used to recognize landmarks, whereas the laser provides the information for simultaneous localization and mapping (SLAM). A K-means clustering method is implemented to extract data from different objects. In addition, the robot avoids obstacles using an avoidance function. The closed-loop controller reduces the error between the real position and the estimated position. Finally, simulations and experiments show that the proposed method is efficient and reliable for navigation in indoor environments.
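The landmark-extraction step can be sketched as clustering laser returns and keeping one centroid per detected object; the use of scikit-learn's KMeans and the fixed cluster count below are illustrative assumptions rather than the authors' exact pipeline.

```python
# Hedged sketch: grouping 2D laser returns into object clusters with K-means,
# as one way to realize the landmark-extraction step described above.
import numpy as np
from sklearn.cluster import KMeans

def extract_landmarks(scan_xy: np.ndarray, n_objects: int = 3) -> np.ndarray:
    """Cluster 2D laser points (N, 2) and return one centroid per detected object."""
    labels = KMeans(n_clusters=n_objects, n_init=10).fit_predict(scan_xy)
    return np.array([scan_xy[labels == k].mean(axis=0) for k in range(n_objects)])
```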


2019 ◽  
Vol 2 (1) ◽  
pp. 1900107
Author(s):  
Yang Li ◽  
Chao Ping Chen ◽  
Nizamuddin Maitlo ◽  
Lantian Mi ◽  
Wenbo Zhang ◽  
...  
