A Novel RGB-D SLAM Algorithm Based on Cloud Robotics

Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5288 ◽  
Author(s):  
Yanli Liu ◽  
Heng Zhang ◽  
Chao Huang

In this paper, we present a novel red-green-blue-depth simultaneous localization and mapping (RGB-D SLAM) algorithm based on cloud robotics, which offloads the back-end processing of RGB-D SLAM to the cloud. We analyze the front-end and back-end stages of the original RGB-D SLAM algorithm and improve it in three respects: feature extraction, point cloud registration, and pose optimization. Experiments confirm the superiority of the improved algorithm. In addition, taking advantage of cloud robotics, the computationally intensive back-end is offloaded to the cloud, and experimental validation compares the cloud-based RGB-D SLAM algorithm with its purely local counterpart. The results demonstrate the advantages of our framework: combining cloud robotics with RGB-D SLAM not only improves the efficiency of SLAM but also reduces the robot's price and size.
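
A minimal sketch of the offloading pattern the abstract describes: the robot runs the lightweight front-end (feature extraction) locally and ships keyframe data to a cloud service that performs the heavy back-end optimization. The endpoint URL, payload schema, and `submit_keyframe` helper are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical front-end/back-end split for cloud-offloaded RGB-D SLAM.
# The cloud endpoint and payload format are illustrative assumptions.
import cv2
import numpy as np
import requests

CLOUD_URL = "http://cloud-slam.example.com/api/optimize"  # hypothetical service

orb = cv2.ORB_create(nfeatures=1000)

def submit_keyframe(frame_id, rgb, depth, pose_guess):
    """Run the lightweight front-end locally, then offload to the cloud."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:          # nothing to send for a textureless frame
        return pose_guess
    payload = {
        "frame_id": frame_id,
        "keypoints": [kp.pt for kp in keypoints],
        "descriptors": descriptors.tolist(),
        "depth_at_keypoints": [float(depth[int(kp.pt[1]), int(kp.pt[0])])
                               for kp in keypoints],
        "pose_guess": pose_guess.tolist(),  # 4x4 odometry estimate
    }
    # The cloud side would insert the keyframe into a pose graph, run
    # loop-closure detection and optimization, and return the corrected pose.
    resp = requests.post(CLOUD_URL, json=payload, timeout=5.0)
    return np.array(resp.json()["optimized_pose"])
```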

Author(s):  
P. Shokrzadeh

Abstract. A 3D representation of the environment is vital information for most engineering sciences. However, providing such information with classical surveying approaches demands a considerable amount of time for localizing the sensor in the desired coordinate frame before the environment can be mapped. A Simultaneous Localization And Mapping (SLAM) algorithm can localize the sensor and build the map while the sensor moves through the environment. In this paper, SLAM is applied to the data of a lightweight 3D laser scanner, which we call a semi-sparse point cloud because of its differing resolutions in the vertical and horizontal directions. In contrast to most SLAM algorithms, no aiding sensor provides prior motion information. The output of the algorithm is a high-density, geometrically detailed map produced in a short time. The accuracy of the algorithm was evaluated in medium-scale simulated outdoor environments in Gazebo and the Robot Operating System (ROS). Given the Velodyne Puck's accuracy of 3 cm, the map was generated with approximately 6 cm accuracy.
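
A minimal sketch of the core scan-matching step that a LiDAR-only SLAM pipeline of this kind relies on, using Open3D's point-to-point ICP to estimate the motion between consecutive scans with no motion prior and accumulate a trajectory. This is a generic illustration under assumed parameters, not the paper's specific algorithm.

```python
# Generic scan-to-scan ICP odometry sketch (not the paper's exact method).
import numpy as np
import open3d as o3d

def scan_to_scan_odometry(scans, voxel=0.2, max_dist=1.0):
    """Accumulate sensor poses by registering each scan to the previous one."""
    poses = [np.eye(4)]
    prev = scans[0].voxel_down_sample(voxel)
    for scan in scans[1:]:
        curr = scan.voxel_down_sample(voxel)
        # Estimate the relative transform aligning the current scan to the
        # previous one; the identity is used as the initial guess because
        # no aiding sensor provides a motion prior.
        result = o3d.pipelines.registration.registration_icp(
            curr, prev, max_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        poses.append(poses[-1] @ result.transformation)
        prev = curr
    return poses
```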


Author(s):  
Mahdi Saleh ◽  
Shervin Dehghani ◽  
Benjamin Busam ◽  
Nassir Navab ◽  
Federico Tombari

2018 ◽  
Vol 28 (3) ◽  
pp. 505-519
Author(s):  
Demeng Li ◽  
Jihong Zhua ◽  
Benlian Xu ◽  
Mingli Lu ◽  
Mingyue Li

Abstract Inspired by ant foraging, and modeling the feature map and measurements as random finite sets, a novel formulation in an ant colony framework is proposed to jointly estimate the map and the vehicle trajectory, thereby solving the feature-based simultaneous localization and mapping (SLAM) problem. This so-called ant-PHD-SLAM algorithm decomposes the recursion for the joint map-trajectory posterior density into a jointly propagated posterior density of the vehicle trajectory and a posterior density of the feature map conditioned on the vehicle trajectory. More specifically, an ant-PHD filter is proposed to jointly estimate the number of map features and their locations, using the powerful search ability and collective cooperation of ants to carry out the time-prediction and data-update steps of the PHD-SLAM filter. Meanwhile, a novel fast moving ant estimator (F-MAE) estimates the maneuvering vehicle trajectory. Evaluation and comparison on several numerical examples show a performance improvement over recently reported approaches. Moreover, experimental results on the Robot Operating System (ROS) platform validate the consistency with the results obtained from numerical simulations.
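
For orientation, here is a compact sketch of the Gaussian-mixture PHD map update that a PHD-SLAM filter performs for a given vehicle pose. The linear-Gaussian measurement model and scalar clutter intensity are illustrative assumptions; the paper's ant-colony search and F-MAE estimator are not reproduced here.

```python
# Simplified Gaussian-mixture PHD map update for a known vehicle pose.
# Linear-Gaussian measurement model and constant clutter are assumptions;
# the paper's ant-colony search and F-MAE estimator are not shown.
import numpy as np

def gm_phd_update(weights, means, covs, measurements, H, R, p_d=0.9, kappa=1e-3):
    """One PHD measurement update over a Gaussian-mixture map intensity."""
    # Missed-detection terms: every component survives with weight (1 - p_d).
    new_w = [(1.0 - p_d) * w for w in weights]
    new_m = list(means)
    new_P = list(covs)
    for z in measurements:
        num_w, num_m, num_P = [], [], []
        for w, m, P in zip(weights, means, covs):
            S = H @ P @ H.T + R                      # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            num_w.append(p_d * w * gaussian_pdf(z, H @ m, S))
            num_m.append(m + K @ (z - H @ m))
            num_P.append((np.eye(len(m)) - K @ H) @ P)
        denom = kappa + sum(num_w)                   # clutter + detections
        new_w += [w / denom for w in num_w]
        new_m += num_m
        new_P += num_P
    return new_w, new_m, new_P

def gaussian_pdf(x, mean, cov):
    d = x - mean
    k = len(x)
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / \
        np.sqrt((2 * np.pi) ** k * np.linalg.det(cov))
```

Summing the updated weights gives the estimated number of map features, which is what the ant-PHD filter refines through its search heuristics.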


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5778
Author(s):  
Baifan Chen ◽  
Hong Chen ◽  
Baojun Song ◽  
Grace Gong

Three-dimensional point cloud registration (PCReg) has a wide range of applications in computer vision, 3D reconstruction, and medicine. Although numerous advances have been achieved in point cloud registration in recent years, large-scale rigid transformation remains a problem that most algorithms still cannot handle effectively. To solve this problem, we propose a point cloud registration method based on learning and transform-invariant features (TIF-Reg). Our algorithm comprises four modules: a transform-invariant feature extraction module, a deep feature embedding module, a corresponding point generation module, and a decoupled singular value decomposition (SVD) module. In the transform-invariant feature extraction module, we design a TIF in SE(3) (the 3D rigid transformation space) that combines a triangular feature and a local density feature for each point. It fully exploits the transformation invariance of point clouds, making the algorithm highly robust to rigid transformation. The deep feature embedding module embeds the TIF into a high-dimensional space using a deep neural network, further improving the expressive power of the features. The corresponding point cloud is generated using an attention mechanism in the corresponding point generation module, and the final transformation for registration is computed in the decoupled SVD module. In our experiments, we first train and evaluate the TIF-Reg method on the ModelNet40 dataset. The results show that our method keeps the root mean squared error (RMSE) of rotation within 0.5° and the RMSE of translation close to 0 m, even when the rotation reaches [−180°, 180°] or the translation reaches [−20 m, 20 m]. We also test the generalization of our method on the TUM3D dataset using the model trained on ModelNet40. The resulting errors are close to those on ModelNet40, which verifies the good generalization ability of our method. All experiments show that the proposed method surpasses state-of-the-art PCReg algorithms in both accuracy and complexity.
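
The final stage the abstract mentions, solving for the rigid transform from matched points via SVD, follows the standard Kabsch/Procrustes construction. A minimal version is sketched below; this is the textbook closed-form step such an SVD module performs, not TIF-Reg's full learned pipeline.

```python
# Rigid-transform estimation from matched point sets via SVD (Kabsch).
# The standard closed-form step an "SVD module" performs; not the
# complete TIF-Reg network.
import numpy as np

def rigid_transform_svd(src, dst):
    """Find R, t minimizing sum ||R @ src_i + t - dst_i||^2 (src, dst: Nx3)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Reflection guard: force det(R) = +1 so the result is a rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In TIF-Reg the correspondences come from the attention-based generation module; here any matched point sets would do.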


Author(s):  
N. Botteghi ◽  
B. Sirmacek ◽  
R. Schulte ◽  
M. Poel ◽  
C. Brune

Abstract. In this research, we investigate the use of Reinforcement Learning (RL) as an effective and robust solution for exploring unknown indoor environments and reconstructing their maps. We rely on a Simultaneous Localization and Mapping (SLAM) algorithm for real-time robot localization and mapping. Three different reward functions are compared and tested in environments of growing complexity. The performance of the three resulting RL-based path planners is assessed not only in the training environments but also in an a priori unseen environment, to test the generalization properties of the policies. The results indicate that RL-based planners trained to maximize map coverage are able to consistently explore and construct the maps of different indoor environments.
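
A minimal sketch of a coverage-maximizing reward of the kind the abstract compares: the agent is rewarded for every map cell newly observed by the SLAM back-end, with a small penalty per step to discourage wandering. The grid encoding and penalty coefficient are illustrative assumptions, not the paper's values.

```python
# Illustrative coverage-based reward for an RL exploration policy.
# Grid encoding (-1 = unknown, 0 = free, 1 = occupied) and the
# step-penalty coefficient are assumptions, not the paper's values.
import numpy as np

def coverage_reward(prev_grid, curr_grid, step_penalty=0.01):
    """Reward newly observed map cells, lightly penalize each step."""
    newly_known = np.sum((prev_grid == -1) & (curr_grid != -1))
    return float(newly_known) - step_penalty
```

Maximizing the cumulative sum of this reward drives the policy toward trajectories that reveal as much of the occupancy grid as possible.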


2021 ◽  
Vol 12 (1) ◽  
pp. 49
Author(s):  
Abira Kanwal ◽  
Zunaira Anjum ◽  
Wasif Muhammad

A simultaneous localization and mapping (SLAM) algorithm allows a mobile robot or a driverless car to determine its location in the unknown, dynamic environment in which it is placed, while simultaneously building a consistent map of that environment. Driverless cars are moving from science fiction to emerging reality, but technological breakthroughs are still required for their control, guidance, safety, and health-related issues. One open problem is SLAM for driverless cars in GPS-denied areas, i.e., congested urban areas where large buildings and dense infrastructure weaken GPS reception. Because of the poor reception of GPS signals in these areas, there is an immense need to localize and route a driverless car using onboard sensory modalities, e.g., LIDAR, RADAR, etc., without depending on GPS information for navigation and control. Driverless-car SLAM using LIDAR and RADAR involves costly sensors, which is a limitation of that approach. To overcome these limitations, in this article we propose a visual-information-based SLAM (vSLAM) algorithm for GPS-denied areas using a cheap video camera. As a front-end process, feature-based monocular visual odometry (VO) is performed on grayscale input image frames. Random Sample Consensus (RANSAC) refinement and global pose estimation are performed as a back-end process. The results obtained from the proposed approach demonstrate 95% accuracy with a maximum mean error of 4.98.
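
A minimal sketch of the feature-based monocular VO front-end with RANSAC that the abstract outlines, using OpenCV's essential-matrix estimation. The camera intrinsics and feature settings are placeholders, and this is a generic pipeline rather than the authors' exact implementation; note that monocular VO recovers translation only up to scale.

```python
# Generic monocular visual odometry step with RANSAC (OpenCV).
# Camera intrinsics K are placeholders.
import cv2
import numpy as np

K = np.array([[718.0, 0.0, 607.0],   # placeholder intrinsics
              [0.0, 718.0, 185.0],
              [0.0, 0.0, 1.0]])

def vo_step(prev_gray, curr_gray):
    """Estimate the relative camera pose between two grayscale frames."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects outlier matches while fitting the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # rotation and unit-scale translation
```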


2013 ◽  
Vol 765-767 ◽  
pp. 1932-1935
Author(s):  
Zeng Xiang Yang ◽  
Sai Jin

To decrease the uncertainty of a UAV's simultaneous localization and mapping and, at the same time, to increase the speed at which the UAV searches its unknown environment, an active SLAM trajectory-planning algorithm based on optimal control is proposed. UAV SLAM is thus tackled as a combined optimization problem that considers both the precision of the UAV's localization and the completeness of its map. Based on a simplified planar UAV motion model, the algorithm is simulated and tested against a random-exploration SLAM baseline. Simulation results show that the algorithm is effective.
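
A minimal sketch of the active-SLAM idea behind such planners: score candidate control actions by the localization uncertainty they are predicted to leave (EKF covariance trace) minus an exploration term, then execute the best one. The planar motion model, noise matrix, and weighting are illustrative assumptions, not the paper's formulation.

```python
# Illustrative candidate-action scoring for active SLAM.
# The planar unicycle model, noise Q, and weight alpha are assumptions.
import numpy as np

def motion_jacobian(pose, u):
    """Jacobian of a simple planar unicycle model (illustrative)."""
    x, y, theta = pose
    v, _ = u
    return np.array([[1.0, 0.0, -v * np.sin(theta)],
                     [0.0, 1.0,  v * np.cos(theta)],
                     [0.0, 0.0,  1.0]])

def best_action(P, pose, candidates, Q, unexplored_gain, alpha=1.0):
    """Pick the control that minimizes predicted pose uncertainty
    while rewarding expected coverage of unexplored area."""
    best, best_score = None, np.inf
    for u in candidates:
        F = motion_jacobian(pose, u)          # linearized motion model
        P_pred = F @ P @ F.T + Q              # EKF covariance prediction
        score = np.trace(P_pred) - alpha * unexplored_gain(pose, u)
        if score < best_score:
            best, best_score = u, score
    return best
```

The weight alpha trades localization precision against search speed, which is exactly the combined objective the abstract describes.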


2019 ◽  
Vol 9 (16) ◽  
pp. 3264 ◽  
Author(s):  
Xujie Kang ◽  
Jing Li ◽  
Xiangtao Fan ◽  
Wenhui Wan

In recent years, low-cost and lightweight RGB and depth (RGB-D) sensors, such as the Microsoft Kinect, have made rich image and depth data widely available, making them very popular in simultaneous localization and mapping (SLAM), which is increasingly used in robotics, self-driving vehicles, and augmented reality. RGB-D SLAM constructs 3D environmental models of natural landscapes while simultaneously estimating camera poses. However, under highly variable illumination and motion blur, long-distance tracking can produce large cumulative errors and scale drift. To address this problem in real applications, we propose a novel multithreaded RGB-D SLAM framework that incorporates a highly accurate prior terrestrial Light Detection and Ranging (LiDAR) point cloud, which mitigates cumulative errors and improves the system's robustness in large-scale and challenging scenarios. First, we employ deep learning to achieve automatic system initialization and motion recovery when tracking is lost. Next, we use the terrestrial LiDAR point cloud as prior data for the landscape, apply a point-to-surface iterative closest point (ICP) algorithm to realize accurate camera pose control from the previously obtained LiDAR point cloud, and finally expand its control range during local map construction. Furthermore, an innovative double-window, segment-based map optimization method is proposed to ensure consistency, better real-time performance, and high accuracy of map construction. The proposed method was tested in long-distance tracking and closed-loop experiments in two different large indoor scenarios. The experimental results indicate that the standard deviation of the 3D map construction is 10 cm over a mapping distance of 100 m, compared with the LiDAR ground truth. Further, the relative cumulative error of the camera in the closed-loop experiments is 0.09%, far below that of the typical SLAM algorithm (3.4%). The proposed method is therefore demonstrated to be more robust than the ORB-SLAM2 algorithm in complex indoor environments.
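
A minimal sketch of anchoring a locally reconstructed point cloud to a prior terrestrial LiDAR map with point-to-plane ICP, in the spirit of the pose-correction step described above. Open3D is used for illustration, and the parameters are assumptions, not the paper's settings.

```python
# Illustrative point-to-plane ICP alignment of a local RGB-D point cloud
# against a prior terrestrial LiDAR map (Open3D; parameters are assumptions).
import numpy as np
import open3d as o3d

def align_to_lidar_prior(local_cloud, lidar_map, init_pose, max_dist=0.3):
    """Refine a camera pose estimate by registering the locally
    reconstructed cloud to the high-accuracy LiDAR prior."""
    # Point-to-plane ICP needs surface normals on the target cloud.
    lidar_map.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        local_cloud, lidar_map, max_dist, init_pose,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # corrected camera-to-map pose
```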

