scan matching
Recently Published Documents

Total documents: 318 (last five years: 66)
H-index: 27 (last five years: 4)

2022 · Author(s): Ahmad Alsayed, Mostafa R. Nabawy, Akilu Yunusa-Kaltungo, Mark K. Quinn, Farshad Arvin

2021 · Author(s): Tingfeng Ye, Juzhong Zhang, Yingcai Wan, Ze Cui, Hongbo Yang

In this paper, we extend RGB-D SLAM to address the problem that sparse map-building RGB-D SLAM cannot directly generate maps for indoor navigation, and we propose a SLAM system for the fast generation of indoor planar maps. The system uses RGB-D images to generate pose information while converting the corresponding RGB-D images into 2D planar laser scans for 2D grid navigation-map reconstruction of indoor scenes under limited computational resources, solving the problem that the sparse point-cloud maps generated by RGB-D SLAM cannot be used directly for navigation. Meanwhile, the pose information provided by RGB-D SLAM and by scan matching is fused to obtain a more accurate and robust pose, which improves map-building accuracy. Furthermore, we demonstrate the proposed system on the ICL indoor dataset and evaluate the performance of different RGB-D SLAM algorithms. The method proposed in this paper can be generalized to other RGB-D SLAM algorithms, and map-building accuracy will improve further as RGB-D SLAM algorithms develop.
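The conversion this abstract describes — turning an RGB-D depth image into a 2D planar laser scan for grid-map building — can be sketched as below. This is a minimal illustration assuming a pinhole camera model; the function name and parameters are hypothetical, and a practical system (e.g. in the style of ROS depth-to-laserscan nodes) would typically take the minimum depth over a vertical band of rows rather than a single row.

```python
import numpy as np

def depth_to_laserscan(depth_row, fx, cx):
    """Hypothetical sketch: convert one row of a depth image (metres)
    into 2D laser-scan ranges and bearings via a pinhole camera model.

    depth_row : 1D array of depth values z along the chosen image row
    fx, cx    : focal length and principal-point column (pixels)
    """
    u = np.arange(depth_row.size)        # pixel column indices
    x = (u - cx) * depth_row / fx        # lateral offset in metres
    ranges = np.hypot(x, depth_row)      # Euclidean range to each point
    bearings = np.arctan2(x, depth_row)  # bearing from the optical axis
    return ranges, bearings
```

Each (range, bearing) pair can then be ray-cast into an occupancy grid exactly as a real 2D laser scan would be.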


Vehicles · 2021 · Vol 3 (4) · pp. 778–789 · Author(s): Leonard Bauersfeld, Guillaume Ducard

RTOB-SLAM is a new low-computation framework for real-time onboard simultaneous localization and mapping (SLAM) and obstacle avoidance for autonomous vehicles. A low-resolution 2D laser scanner is used, and a small form-factor computer performs all computations onboard. The SLAM process is based on laser scan matching with the iterative closest point (ICP) technique, which estimates the vehicle’s current position by aligning each new scan with the map. This paper describes a new method that uses only a small subsample of the global map for scan matching, which improves performance and allows the map to adapt to a dynamic environment by partly forgetting the past. A detailed comparison between this method and current state-of-the-art SLAM frameworks is given, together with a methodology for choosing the parameters of RTOB-SLAM. RTOB-SLAM has been implemented in ROS and performs well in various simulations and real experiments.
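The scan-matching step this abstract builds on — point-to-point ICP that aligns a new scan against (a subsample of) the map — can be sketched as follows. This is a generic textbook 2D ICP, not the authors' RTOB-SLAM implementation; the brute-force nearest-neighbour search stands in for the k-d tree a real system would use.

```python
import numpy as np

def icp_align(scan, submap, iters=20):
    """Minimal 2D point-to-point ICP sketch: align `scan` (N x 2) to
    `submap` (M x 2); returns rotation R (2x2) and translation t (2,).
    Brute-force nearest neighbours, for illustration only."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = scan @ R.T + t
        # nearest submap point for every transformed scan point
        d = np.linalg.norm(moved[:, None, :] - submap[None, :, :], axis=2)
        nn = submap[np.argmin(d, axis=1)]
        # closed-form rigid alignment of the matched pairs (Kabsch / SVD)
        mu_s, mu_n = moved.mean(0), nn.mean(0)
        H = (moved - mu_s).T @ (nn - mu_n)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:   # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_n - dR @ mu_s
        R, t = dR @ R, dR @ t + dt  # compose incremental transform
    return R, t
```

Restricting `submap` to points near the current pose estimate is what gives the subsample-based matching its speed advantage over matching against the full global map.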


2021 · Vol 180 · pp. 191–208 · Author(s): Pengxin Chen, Wenzhong Shi, Wenzheng Fan, Haodong Xiang, Sheng Bao

Author(s): J. Li, Y. Zhang, Q. Hu

Abstract. Robust estimation (RE) is a fundamental issue in robot vision and photogrammetry and the theoretical basis of geometric model estimation with outliers. However, M-estimates solved by iteratively reweighted least squares (IRLS) are only suitable for cases with low outlier rates (< 50%), and random sample consensus (RANSAC) can only obtain approximate solutions. In this paper, we propose an accurate and general RE model that unifies various robust costs into a common objective function by introducing a “robustness-control” parameter. It is a superset of the typical least-squares, l1-l2, Cauchy, and Geman-McClure estimates. We introduce a parameter-decreasing strategy into IRLS to optimize our model, called adaptive IRLS. Adaptive IRLS begins with a least-squares estimate for initialization; the “robustness-control” parameter is then decreased across iterations so that the proposed model acts as different robust loss functions with different degrees of robustness. We also apply the proposed model to several important tasks of robot vision and photogrammetry, such as line fitting, feature matching, image orientation, and point cloud registration (scan matching). Extensive simulated and real experiments show that the proposed model is robust to more than 80% outliers and preserves the advantages of M-estimation (speed and optimality). Our source code will be made publicly available at https://ljy-rs.github.io/web.
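The adaptive-IRLS idea can be illustrated on the line-fitting task the abstract mentions. The sketch below is a simplified stand-in, not the authors' model: it uses only the Cauchy loss with an annealed scale parameter `c` (all names and defaults are illustrative), whereas the paper unifies several robust costs under a single “robustness-control” parameter.

```python
import numpy as np

def adaptive_irls_line(x, y, c0=10.0, c_min=0.5, decay=0.7, iters=30):
    """Illustrative adaptive IRLS for robust line fitting y ≈ a*x + b.
    Starts from an ordinary least-squares fit, then shrinks the scale c
    so the Cauchy weight w = 1 / (1 + (r/c)^2) suppresses outliers more
    aggressively with each iteration."""
    A = np.column_stack([x, np.ones_like(x)])
    theta = np.linalg.lstsq(A, y, rcond=None)[0]   # least-squares init
    c = c0
    for _ in range(iters):
        r = y - A @ theta                          # residuals
        w = 1.0 / (1.0 + (r / c) ** 2)             # Cauchy weights
        Aw = A * w[:, None]
        theta = np.linalg.solve(A.T @ Aw, Aw.T @ y)  # weighted normal eqs.
        c = max(c * decay, c_min)                  # anneal the scale
    return theta                                   # (slope a, intercept b)
```

Starting with a large `c` makes the first iterations behave like least squares (a safe initialization), while the annealing gradually hands influence over to the inliers.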

