Rao-Blackwellized visual SLAM for small UAVs with vehicle model partition

Author(s):  
Tianmiao Wang ◽  
Chaolei Wang ◽  
Jianhong Liang ◽  
Yicheng Zhang

Purpose – The purpose of this paper is to present a Rao–Blackwellized particle filter (RBPF) approach for the visual simultaneous localization and mapping (SLAM) of small unmanned aerial vehicles (UAVs).

Design/methodology/approach – Measurements from an inertial measurement unit, a barometric altimeter and a monocular camera are fused to estimate the state of the vehicle while building a feature map. In this SLAM framework, an extra factorization method is proposed to partition the vehicle model into internal and external state subspaces. The internal state is estimated by an extended Kalman filter (EKF), a particle filter is employed for the external state estimation, and parallel EKFs are used for map management.

Findings – Simulation results indicate that the proposed approach is more stable and accurate than other existing marginalized particle filter-based SLAM algorithms. Experiments are also carried out to verify the effectiveness of this SLAM method by comparison with a reference global positioning system/inertial navigation system.

Originality/value – The main contribution of this paper is the theoretical derivation and experimental application of the Rao–Blackwellized visual SLAM algorithm with vehicle model partition for small UAVs.
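As a rough illustration of the partitioned filter structure described in the abstract (not the authors' code), the sketch below keeps the sampled external state, an EKF over the internal state and parallel per-feature EKFs inside one particle; the motion and measurement models (f_ext, f_int, h) and their Jacobians are placeholders.

```python
# Minimal sketch, assuming a generic partitioned RBPF: external state sampled,
# internal state tracked by an EKF, map features tracked by parallel small EKFs.
import numpy as np

class Particle:
    def __init__(self, pose, x_int, P_int):
        self.pose = pose          # external state, sampled by the particle filter
        self.x_int = x_int        # internal state mean (EKF)
        self.P_int = P_int        # internal state covariance
        self.features = {}        # feature id -> (mean, cov): parallel per-feature EKFs
        self.weight = 1.0

def predict(p, u, f_ext, f_int, F_int, Q_ext, Q_int):
    """Propagate one particle: sample the external state, EKF-predict the internal one."""
    p.pose = f_ext(p.pose, p.x_int, u) + np.random.multivariate_normal(
        np.zeros(len(p.pose)), Q_ext)
    p.x_int = f_int(p.x_int, u)
    p.P_int = F_int @ p.P_int @ F_int.T + Q_int    # F_int: Jacobian of f_int

def update_feature(p, fid, z, h, H, R):
    """Per-feature EKF update, conditioned on this particle's pose."""
    m, P = p.features[fid]
    v = z - h(p.pose, m)                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    p.features[fid] = (m + K @ v, (np.eye(len(m)) - K @ H) @ P)
    p.weight *= float(np.exp(-0.5 * v @ np.linalg.inv(S) @ v))   # unnormalized likelihood
```

Conditioning the feature EKFs on each particle's sampled pose is what keeps the per-feature covariances small and mutually independent.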

Author(s):  
Chang Chen ◽  
Hua Zhu

Purpose – This study aims to present a visual-inertial simultaneous localization and mapping (SLAM) method for accurate positioning and navigation of mobile robots when the global positioning system (GPS) signal is blocked by buildings, trees and other obstacles.

Design/methodology/approach – In this framework, a feature extraction method distributes features evenly over the image, even in texture-less scenes. The constant-luminosity assumption is refined, and the features are tracked by optical flow to enhance the stability of the system. The camera data and inertial measurement unit data are tightly coupled to estimate the pose by nonlinear optimization.

Findings – The method runs successfully on a mobile robot, where it stably extracts and tracks features in low-texture environments. The end-to-end error is 1.375 m over a total trajectory length of 762 m. The authors achieve better relative pose error, scale estimation and CPU load than ORB-SLAM2 on the EuRoC data sets.

Originality/value – The main contribution of this study is the theoretical derivation and experimental application of a new visual-inertial SLAM method with excellent accuracy and stability in weakly textured scenes.
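A minimal sketch of the kind of feature-tracking front end the abstract describes, using OpenCV's Shi–Tomasi detector and pyramidal Lucas–Kanade optical flow; the thresholds, the refined luminosity handling and the tightly coupled optimizer are not from the paper and are illustrative assumptions.

```python
# Sketch only: detect corners spread over the image and track them frame to frame.
import cv2
import numpy as np

def track(prev_gray, cur_gray, prev_pts):
    """prev_pts: (N, 1, 2) float32 corner array or None; returns the tracked corner set."""
    if prev_pts is not None and len(prev_pts) > 0:
        cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, prev_pts, None)
        cur_pts = cur_pts[status.ravel() == 1]          # keep successfully tracked corners
    else:
        cur_pts = np.empty((0, 1, 2), np.float32)
    if len(cur_pts) < 150:                              # replenish threshold (illustrative)
        extra = cv2.goodFeaturesToTrack(cur_gray, maxCorners=300,
                                        qualityLevel=0.01, minDistance=20)
        if extra is not None:
            cur_pts = np.vstack([cur_pts, extra]) if len(cur_pts) else extra
    return cur_pts
```

The minDistance spacing is one simple way to keep features distributed over the image rather than clustered on the few textured regions.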


Author(s):  
Hui Xiong ◽  
Youping Chen ◽  
Xiaoping Li ◽  
Bing Chen ◽  
Jun Zhang

Purpose – The purpose of this paper is to present a scan-matching simultaneous localization and mapping (SLAM) algorithm based on a particle filter that generates the grid map online. It mainly focuses on reducing memory consumption and alleviating the loop closure problem.

Design/methodology/approach – The proposed method alleviates the loop closure problem by improving the accuracy of the robot's pose. First, two improvements were applied to enhance the accuracy of the hill-climbing scan matching. Second, a particle filter was used to maintain the diversity of the robot's pose and to supply potential seeds to the hill-climbing scan matching, ensuring that the best match point is the global optimum. Memory consumption is reduced by maintaining only a single grid map.

Findings – Simulation and experimental results have proved that this method can build a consistent map of a complex environment while reducing memory consumption and alleviating the loop closure problem.

Originality/value – A new SLAM algorithm is proposed that reduces memory consumption and alleviates the loop closure problem without lowering the accuracy of the generated grid map.
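The sketch below illustrates the seeding idea in simplified form (not the authors' implementation): several particle poses start independent hill climbs of a generic scan-match score, and the best local optimum found is kept. Step sizes, iteration counts and the number of seeds are illustrative.

```python
# Sketch: particle poses seed a hill-climbing scan matcher to escape local optima.
import numpy as np

def hill_climb(score, pose, iters=30):
    """Greedy coordinate ascent on a scan-match score(pose) -> float; pose = [x, y, theta]."""
    step = np.array([0.05, 0.05, 0.01])      # metres, metres, radians (illustrative)
    best, best_s = pose.copy(), score(pose)
    for _ in range(iters):
        improved = False
        for i in range(3):
            for d in (step[i], -step[i]):
                cand = best.copy(); cand[i] += d
                s = score(cand)
                if s > best_s:
                    best, best_s, improved = cand, s, True
        if not improved:
            step *= 0.5                       # refine once no axis improves
    return best, best_s

def match_with_seeds(score, particles, n_seeds=10):
    """particles: (N, 3) pose array; return the best local optimum among the seeded climbs."""
    idx = np.random.choice(len(particles), size=min(n_seeds, len(particles)), replace=False)
    return max((hill_climb(score, particles[i].copy()) for i in idx), key=lambda r: r[1])
```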


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Zhen Zhou ◽  
Dongqing Wang ◽  
Boyang Xu

Purpose – The purpose of this paper is to explore a multi-innovation, forgetting factor-based EKF-SLAM (FMI-EKF-SLAM) algorithm to solve the error growth caused by the extended Kalman filter (EKF) violating the local linearity assumption in simultaneous localization and mapping (SLAM) for mobile robots under strong nonlinearity.

Design/methodology/approach – At each filtering step, the FMI-EKF-SLAM algorithm expands the single innovation of the current step into an extended multi-innovation containing the current and previous steps, and introduces a forgetting factor to reduce the effect of old innovations.

Findings – The simulation results show that the explored FMI-EKF-SLAM method reduces the state estimation errors, obtains the desired filtering effect and achieves higher accuracy in positioning and mapping.

Originality/value – The method proposed in this paper improves the positioning accuracy of SLAM and enhances the EKF so that it achieves higher accuracy and a wider range of application.
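A hedged sketch of a multi-innovation correction with a forgetting factor, with notation assumed rather than taken from the paper: the last p innovations are reused in the correction step, each discounted by the forgetting factor raised to its age.

```python
# Sketch only: buffer recent innovations and apply them together, down-weighting old ones.
import numpy as np
from collections import deque

class MultiInnovationCorrector:
    def __init__(self, p=5, lam=0.95):
        self.buf = deque(maxlen=p)   # (innovation, gain) pairs, newest last
        self.lam = lam               # forgetting factor in (0, 1]

    def correct(self, x, v, K):
        """x: state mean, v: current innovation, K: current Kalman gain."""
        self.buf.append((v, K))
        dx = np.zeros_like(x)
        for age, (vi, Ki) in enumerate(reversed(self.buf)):   # age 0 = newest
            dx += (self.lam ** age) * (Ki @ vi)
        return x + dx
```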


2014 ◽  
Vol 590 ◽  
pp. 677-682 ◽  
Author(s):  
Guo Liang Zhang ◽  
Er Liang Yao ◽  
Wen Jun Tang ◽  
Jun Xu

Because data association is challenging in environments with similar-looking regions, a large number of particles is needed to achieve adequate precision in particle filter SLAM (simultaneous localization and mapping). An improved particle filter SLAM algorithm based on particle swarm optimization (PSO) for such environments is proposed. A multimodal proposal distribution is acquired by combining information from the odometry and the laser scans. PSO concentrates the particles in the region around each maximum of the posterior probability distribution, improving the performance of the conventional particle filter SLAM. Simulation results prove the method's effectiveness and feasibility.
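A simplified sketch of a PSO refinement step of this kind (not the published algorithm): pose particles are nudged toward high-likelihood regions of the proposal, here abstracted as a generic likelihood(pose) callable; the PSO coefficients are conventional defaults, not values from the paper.

```python
# Sketch: one PSO pass that moves pose particles toward posterior maxima before resampling.
import numpy as np

def pso_refine(particles, likelihood, w=0.5, c1=1.5, c2=1.5, iters=5):
    """particles: (N, 3) poses; likelihood(pose) -> float. Returns refined poses."""
    vel = np.zeros_like(particles)
    pbest = particles.copy()
    pbest_s = np.array([likelihood(p) for p in particles])
    gbest = pbest[np.argmax(pbest_s)].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(*particles.shape), np.random.rand(*particles.shape)
        vel = w * vel + c1 * r1 * (pbest - particles) + c2 * r2 * (gbest - particles)
        particles = particles + vel
        scores = np.array([likelihood(p) for p in particles])
        better = scores > pbest_s
        pbest[better], pbest_s[better] = particles[better], scores[better]
        gbest = pbest[np.argmax(pbest_s)].copy()
    return particles
```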


2018 ◽  
Vol 2 (3) ◽  
pp. 151 ◽  
Author(s):  
Fethi Denim ◽  
Abdelkrim Nemra ◽  
Kahina Louadj ◽  
Abdelghani Boucheloukh ◽  
Mustapha Hamerlain ◽  
...  

Simultaneous localization and mapping (SLAM) is an essential capability for unmanned ground vehicles (UGVs) travelling in unknown environments where globally accurate position data, such as GPS, are not available, and it is an important topic in autonomous mobile robot research. This paper presents an adaptive decentralized cooperative vision-based SLAM solution for multiple UGVs, using Adaptive Covariance Intersection (ACI) supported by a stereo vision sensor. In recent years, the SLAM problem has received particular attention; the most commonly used approaches are the EKF-SLAM and FastSLAM algorithms. The former, which requires accurate process and observation models, suffers from the linearization problem; the latter is not suitable for real-time implementation. In this work, a solution to the visual SLAM (VSLAM) problem based on the Smooth Variable Structure Filter (SVSF) is proposed. This filter is robust and stable under modelling uncertainties, making it suitable for the UGV localization and mapping problem. The new strategy retains the near-optimal performance of the SVSF when applied to an uncertain system and has the added benefit of considerably improving the robustness of the estimation process. Each UGV contributes feature data, fused by the ACI, to estimate positions on the global map, yielding a large, reliable map constructed jointly by the group of UGVs. This paper presents a cooperative SVSF-VSLAM algorithm that helps to solve the adaptive cooperative vision SLAM problem for multiple UGVs. The algorithm was implemented on three Pioneer 3-AT mobile robots using stereo vision sensors. Simulation results show its efficiency and give an advantage to the proposed algorithm over the cooperative EKF-VSLAM algorithm, mainly with respect to noise quality.
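As background for the ACI step, the sketch below shows plain covariance intersection, which fuses two estimates of the same quantity without knowing their cross-correlation; the adaptive weighting and the SVSF itself are omitted, and the coarse grid search over the weight is an illustrative choice.

```python
# Sketch: covariance intersection of two consistent estimates (x1, P1) and (x2, P2).
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=50):
    """Return the fused (x, P); the weight is chosen by a trace-minimising grid search."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.01, 0.99, n_grid):
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
        if best is None or np.trace(P) < np.trace(best[1]):
            best = (P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2), P)
    return best
```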


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Tianji Ma ◽  
Nanyang Bai ◽  
Wentao Shi ◽  
Xi Wu ◽  
Lutao Wang ◽  
...  

In the field of autonomous navigation robots, robotic self-positioning is one of the most difficult challenges. Simultaneous localization and mapping (SLAM) technology can incrementally construct a map of the robot's moving path in an unknown environment while estimating the position of the robot in the map, providing an effective solution for fully autonomous navigation. A camera can obtain two-dimensional digital images of the real three-dimensional world. These images contain rich colour and texture information and highly recognizable features, which provide indispensable information for robots to understand and recognize the environment while autonomously exploring it. Therefore, more and more researchers use cameras to solve the SLAM problem, an approach known as visual SLAM. Visual SLAM needs to process a large amount of image data collected by the camera, which places high demands on computing hardware; thus, its application on embedded mobile platforms is greatly limited. This paper presents a parallelization method for embedded hardware equipped with an embedded GPU, using CUDA, a parallel computing platform, to accelerate the visual front-end processing of the visual SLAM algorithm. Extensive experiments were conducted to verify the effectiveness of the method. The results show that the presented method effectively improves the operating efficiency of the visual SLAM algorithm while preserving its original accuracy.
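The paper's CUDA front end is not reproduced here; as a toy illustration of offloading one per-pixel front-end operation to an embedded GPU, the sketch below computes an image gradient magnitude with a Numba CUDA kernel. Function names, block sizes and the choice of operation are assumptions.

```python
# Toy illustration only: a per-pixel gradient-magnitude kernel launched on the GPU.
import math
import numpy as np
from numba import cuda

@cuda.jit
def grad_mag(img, out):
    r, c = cuda.grid(2)                       # each thread handles one pixel
    if 1 <= r < img.shape[0] - 1 and 1 <= c < img.shape[1] - 1:
        gx = img[r, c + 1] - img[r, c - 1]
        gy = img[r + 1, c] - img[r - 1, c]
        out[r, c] = math.sqrt(gx * gx + gy * gy)

def gpu_gradient(img):
    d_img = cuda.to_device(np.ascontiguousarray(img, dtype=np.float32))
    d_out = cuda.device_array(img.shape, dtype=np.float32)
    threads = (16, 16)
    blocks = ((img.shape[0] + 15) // 16, (img.shape[1] + 15) // 16)
    grad_mag[blocks, threads](d_img, d_out)
    return d_out.copy_to_host()
```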


2017 ◽  
Vol 34 (4) ◽  
pp. 1217-1239 ◽  
Author(s):  
Chen-Chien Hsu ◽  
Cheng-Kai Yang ◽  
Yi-Hsing Chien ◽  
Yin-Tien Wang ◽  
Wei-Yen Wang ◽  
...  

Purpose – FastSLAM is a popular method for solving the simultaneous localization and mapping (SLAM) problem. However, as the number of landmarks in real environments increases, each particle must compare every measurement with all of its existing landmarks, and the execution speed becomes too slow for real-time navigation. This paper therefore aims to improve the computational efficiency and estimation accuracy of conventional SLAM algorithms.

Design/methodology/approach – To address this problem, this paper presents a computationally efficient SLAM (CESLAM) algorithm, in which odometer information is used to update the robot's pose in each particle. When a measurement has the maximum likelihood with a known landmark in the particle, the particle state is updated before the landmark estimates are updated.

Findings – Simulation results show that the proposed CESLAM overcomes the heavy computational burden while improving the accuracy of localization and map building. To practically evaluate the performance of the proposed method, a Pioneer 3-DX robot with a Kinect sensor is used to develop an RGB-D-based computationally efficient visual SLAM (CEVSLAM) system based on Speeded-Up Robust Features (SURF). Experimental results confirm that the proposed CEVSLAM system successfully estimates the robot pose and builds the map with satisfactory accuracy.

Originality/value – The proposed CESLAM algorithm avoids the time-consuming, unnecessary comparisons of existing FastSLAM algorithms. Simulations show that the accuracy of robot pose and landmark estimation is greatly improved by CESLAM. Combining CESLAM and SURF, the authors establish a CEVSLAM system that significantly improves estimation accuracy and computational efficiency. Practical experiments with a Kinect visual sensor show that the variance and average error of the proposed CEVSLAM are smaller than those of other visual SLAM algorithms.
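A rough sketch of the update ordering described in the abstract, with placeholder particle attributes (pose, P_pose, features) and placeholder measurement models: the measurement is associated with its maximum-likelihood landmark, the particle pose is refined with that measurement first, and only then is the landmark EKF updated.

```python
# Sketch only: maximum-likelihood association, pose refinement, then landmark update.
import numpy as np

def process_measurement(particle, z, h, H_pose, H_lm, R):
    # 1. data association: pick the landmark with maximum measurement likelihood
    def likelihood(lm):
        m, P = lm
        v = z - h(particle.pose, m)
        S = H_lm @ P @ H_lm.T + R
        return np.exp(-0.5 * v @ np.linalg.inv(S) @ v) / np.sqrt(np.linalg.det(2 * np.pi * S))
    fid = max(particle.features, key=lambda k: likelihood(particle.features[k]))
    m, P = particle.features[fid]

    # 2. refine the particle pose with this measurement before touching the landmark
    v = z - h(particle.pose, m)
    S = H_lm @ P @ H_lm.T + R
    K_pose = particle.P_pose @ H_pose.T @ np.linalg.inv(H_pose @ particle.P_pose @ H_pose.T + S)
    particle.pose = particle.pose + K_pose @ v

    # 3. standard per-landmark EKF update, conditioned on the refined pose
    v = z - h(particle.pose, m)
    K = P @ H_lm.T @ np.linalg.inv(S)
    particle.features[fid] = (m + K @ v, (np.eye(len(m)) - K @ H_lm) @ P)
```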


Author(s):  
Hui Xiong ◽  
Youping Chen ◽  
Xiaoping Li ◽  
Bing Chen

Purpose – Because submaps that cover a subset of the global map contain more environmental information, submap-based graph simultaneous localization and mapping (SLAM) has been studied by many researchers. In most of those studies, helpful environmental information was not taken into consideration when designing the termination criterion of the submap construction process, and the cumulative error within the submaps was ignored after the graph had been optimized. To address these problems, this paper aims to propose a two-level optimized graph-based SLAM algorithm.

Design/methodology/approach – Submaps are updated by extended Kalman filter SLAM, and no geometric-shaped landmark models are needed; raw laser scans are treated as landmarks. A more reasonable criterion, called the uncertainty index, is proposed and combined with the size of the submap to terminate the submap construction process. After a submap is completed and a loop closure is found, a two-level optimization process is performed to minimize the loop closure error and the accumulated error within the submaps.

Findings – Simulation and experimental results indicate that the estimation error of the proposed algorithm is small, and the generated maps are consistent both globally and locally.

Practical implications – The proposed method is robust to sparsely distributed pedestrians and can be adapted to most indoor environments.

Originality/value – In this paper, a two-level optimized graph-based SLAM algorithm is proposed.
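The uncertainty index itself is not specified in the abstract; the sketch below shows one plausible form of the termination check, combining a covariance-based index with the submap size, purely for illustration.

```python
# Sketch only: close the current submap once pose uncertainty or submap size grows too large.
import numpy as np

def should_close_submap(pose_cov, n_scans, max_uncertainty=1e-4, max_scans=100):
    """pose_cov: EKF pose covariance of the submap; thresholds are illustrative."""
    uncertainty_index = np.linalg.det(pose_cov[:2, :2])   # x-y uncertainty "volume"
    return uncertainty_index > max_uncertainty or n_scans >= max_scans
```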


Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3604 ◽  
Author(s):  
Peixin Liu ◽  
Xianfeng Yuan ◽  
Chengjin Zhang ◽  
Yong Song ◽  
Chuanzheng Liu ◽  
...  

To solve the illumination sensitivity problems of mobile ground equipment, an enhanced visual SLAM algorithm based on the sparse direct method is proposed in this paper. First, the vignette and response functions of the input sequences are optimized based on the photometric formation model of the camera. Second, the Shi–Tomasi corners of the input sequence are tracked, and optimization equations are established using the pixel tracking of sparse direct visual odometry (VO). Third, the Levenberg–Marquardt (L–M) method is applied to solve the joint optimization equation, and the photometric calibration parameters in the VO are updated to realize real-time dynamic compensation of the exposure of the input sequences, which reduces the effect of light variations on the accuracy and robustness of SLAM (simultaneous localization and mapping). Finally, a Shi–Tomasi corner filtering strategy is designed to reduce the computational complexity of the proposed algorithm, and loop closure detection is realized based on oriented FAST and rotated BRIEF (ORB) features. The proposed algorithm was tested on the TUM, KITTI and EuRoC data sets and in a real environment, and the experimental results show that its positioning and mapping performance is promising.
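A minimal sketch of the photometric compensation that such a pipeline applies per frame, assuming the inverse response look-up table, vignette map and exposure have already been estimated by the joint optimization; the function and variable names are illustrative.

```python
# Sketch: undo camera response, vignetting and exposure to recover scene irradiance.
import numpy as np

def correct_frame(I_raw, inv_response, vignette, exposure):
    """I_raw: uint8 image; inv_response: 256-entry LUT g^{-1}; vignette: per-pixel V(x)."""
    irradiance = inv_response[I_raw.astype(np.int32)]     # undo the camera response curve
    return irradiance / (vignette * exposure + 1e-12)     # undo vignetting and exposure
```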


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 339
Author(s):  
Kai Wang ◽  
Jun Zhou ◽  
Wenhai Zhang ◽  
Baohua Zhang

To meet the demand for canopy morphological parameter measurements in orchards, a mobile scanning system is designed based on a 3D simultaneous localization and mapping (SLAM) algorithm. The system uses a lightweight LiDAR-inertial measurement unit (LiDAR-IMU) state estimator and a rotation-constrained optimization algorithm to reconstruct a point cloud map of the orchard. Then, Statistical Outlier Removal (SOR) filtering and Euclidean clustering algorithms are used to segment the orchard point cloud from which the ground information has been separated, and the k-nearest neighbour (KNN) search algorithm is used to restore the filtered point cloud. Finally, the height of the fruit trees and the volume of the canopy are obtained by a point cloud statistical method and the 3D alpha-shape algorithm. To verify the algorithm, tracked robots equipped with LiDAR and an IMU are used in a standardized orchard. Experiments show that the system reconstructs the orchard point cloud environment with high accuracy and obtains the point cloud information of all fruit trees in the orchard. The accuracy of point cloud-based segmentation of fruit trees in the orchard is 95.4%. The R2 and root mean square error (RMSE) values of crown height are 0.93682 and 0.04337, respectively, and the corresponding values of canopy volume are 0.8406 and 1.5738, respectively. In summary, this system achieves good evaluation results for orchard crown information and has important application value for the intelligent measurement of fruit trees.
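A minimal sketch of the point cloud post-processing chain using Open3D, with DBSCAN standing in for the Euclidean clustering step and crown height taken as the z-extent of each cluster; the SLAM mapping, KNN restoration and alpha-shape volume steps are omitted, and all parameter values are assumptions.

```python
# Sketch: SOR filtering, per-tree clustering and a simple crown-height estimate.
import numpy as np
import open3d as o3d

def segment_trees(points):
    """points: (N, 3) orchard cloud with the ground already removed."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)  # SOR filter
    labels = np.asarray(pcd.cluster_dbscan(eps=0.5, min_points=30))          # per-tree clusters
    pts = np.asarray(pcd.points)
    heights = {}
    for lbl in set(labels):
        if lbl < 0:
            continue                                   # -1 marks noise points
        z = pts[labels == lbl][:, 2]
        heights[lbl] = float(z.max() - z.min())        # crown height as z-extent
    return labels, heights
```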

