Enhanced Simultaneous Localization and Mapping (ESLAM) for Mobile Robots

2017, Vol 14 (02), pp. 1750007
Author(s): Chen-Chien Hsu, Wei-Yen Wang, Tung-Yuan Lin, Yin-Tien Wang, Teng-Wei Huang

FastSLAM algorithms, such as FastSLAM 1.0 and FastSLAM 2.0, are popular solutions to the simultaneous localization and mapping (SLAM) problem for mobile robots. In real environments, however, FastSLAM can be too slow for real-time operation at satisfactory accuracy, because every measurement must be compared against all existing landmarks in every particle, particularly as the number of landmarks grows. In this paper, an enhanced SLAM (ESLAM) is proposed, which uses not only odometry information but also sensor measurements to estimate the robot's pose in the prediction step. The landmark with the maximum measurement likelihood is then used to update the robot's pose before the landmark locations are updated. Compared to existing FastSLAM algorithms, the proposed ESLAM algorithm achieves better computational efficiency as well as better localization and mapping accuracy, as demonstrated in the illustrated examples.
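The abstract does not give the update equations, but the core idea can be sketched as follows: among the existing landmarks, pick the one whose predicted range-bearing measurement best explains the observation, then correct the odometry-predicted pose so that the measurement is consistent, before updating the map. The sketch below is a minimal illustration under those assumptions; the function names and the simplified, position-only pose correction are ours, not the authors' exact formulation.

```python
import numpy as np

def expected_measurement(pose, landmark):
    """Range-bearing measurement for a 2D robot pose (x, y, theta)."""
    dx, dy = landmark[0] - pose[0], landmark[1] - pose[1]
    bearing = np.arctan2(dy, dx) - pose[2]
    return np.array([np.hypot(dx, dy),
                     np.arctan2(np.sin(bearing), np.cos(bearing))])

def measurement_likelihood(z, z_hat, Q):
    """Gaussian likelihood of measurement z given prediction z_hat and covariance Q."""
    innov = z - z_hat
    innov[1] = np.arctan2(np.sin(innov[1]), np.cos(innov[1]))  # wrap bearing error
    return np.exp(-0.5 * innov @ np.linalg.solve(Q, innov)) / \
           np.sqrt(np.linalg.det(2.0 * np.pi * Q))

def eslam_style_step(pose_pred, landmarks, z, Q):
    """Pick the maximum-likelihood landmark, then correct the odometry-predicted
    pose so that the observed range and bearing to that landmark are exact,
    keeping the odometry heading (a simplification for illustration only)."""
    likelihoods = [measurement_likelihood(z, expected_measurement(pose_pred, m), Q)
                   for m in landmarks]
    j = int(np.argmax(likelihoods))
    m = landmarks[j]
    global_bearing = pose_pred[2] + z[1]
    corrected = np.array([m[0] - z[0] * np.cos(global_bearing),
                          m[1] - z[0] * np.sin(global_bearing),
                          pose_pred[2]])
    return j, corrected

# usage: two landmarks, a noisy observation of the first one
landmarks = np.array([[5.0, 0.0], [0.0, 5.0]])
Q = np.diag([0.1 ** 2, np.deg2rad(2.0) ** 2])
j, pose = eslam_style_step(np.array([0.2, -0.1, 0.0]), landmarks,
                           np.array([4.9, 0.02]), Q)
```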

Sensors, 2022, Vol 22 (2), pp. 520
Author(s): Guanghui Xue, Jinbo Wei, Ruixue Li, Jian Cheng

Simultaneous localization and mapping (SLAM) is one of the key technologies that allow underground coal mine vehicles to map complex environments, localize themselves, and operate autonomously without a driver. Many SLAM algorithms have been studied, but their mapping accuracy and real-time performance still need improvement. This paper presents LeGO-LOAM-SC, a SLAM algorithm that integrates Scan Context with Lightweight and Ground-Optimized LiDAR Odometry and Mapping (LeGO-LOAM). The algorithm uses the global descriptor extracted by Scan Context for loop detection, adds pose constraints obtained by Iterative Closest Point (ICP) to Georgia Tech Smoothing and Mapping (GTSAM) for graph optimization, and constructs a point cloud map while outputting the estimated pose of the mobile vehicle. The algorithm is evaluated on sequence 00 of the KITTI dataset and in a real two-storey underground parking lot. The results show that the improved algorithm compensates for point cloud map drift and achieves higher mapping accuracy, better real-time performance, lower resource usage, closer agreement between the estimated and true trajectories, and smoother loop closures. CPU occupancy is reduced by 6%, the mean square errors of the absolute trajectory error (ATE) and relative pose error (RPE) are reduced by 55.7% and 50.3% respectively, translation and rotation accuracy are improved by about 5%, and time consumption is reduced by 2-4%. The algorithm can therefore construct accurate maps and estimate poses with low drift.
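As a rough illustration of the loop-closure step described above, the sketch below adds an ICP-derived relative pose between two keyframes as a between-factor in a GTSAM pose graph and re-optimizes. The keyframe indices, noise values, and relative poses are placeholders, and the factor-construction syntax assumes recent GTSAM Python bindings; treat it as an assumed sketch rather than the paper's implementation.

```python
import numpy as np
import gtsam

def add_loop_closure(graph, i, j, icp_relative_pose, sigmas=None):
    """Add an ICP-derived loop-closure constraint between keyframes i and j.
    Sigma values (rotation rad, translation m) are placeholders."""
    if sigmas is None:
        sigmas = np.array([0.1, 0.1, 0.1, 0.05, 0.05, 0.05])
    noise = gtsam.noiseModel.Diagonal.Sigmas(sigmas)
    graph.add(gtsam.BetweenFactorPose3(i, j, icp_relative_pose, noise))

# minimal pose graph: a short odometry chain plus one loop closure
graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(),
                                 gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))))

odom = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))  # 1 m forward per step
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.05))
pose = gtsam.Pose3()
initial.insert(0, pose)
for k in range(1, 5):
    graph.add(gtsam.BetweenFactorPose3(k - 1, k, odom, odom_noise))
    pose = pose.compose(odom)
    initial.insert(k, pose)

# suppose Scan Context matched keyframe 4 back to keyframe 0 and ICP refined the pose
add_loop_closure(graph, 4, 0, gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(-4.0, 0.0, 0.0)))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
```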


Sensors, 2021, Vol 21 (6), pp. 2106
Author(s): Ahmed Afifi, Chisato Takada, Yuichiro Yoshimura, Toshiya Nakaguchi

Minimally invasive surgery is widely used because of its tremendous benefits to the patient. However, surgeons face several challenges in this type of surgery, the most important of which is the narrow field of view. Therefore, we propose an approach that expands the field of view for minimally invasive surgery to enhance the surgeon's experience. It combines multiple views in real time to produce a dynamic expanded view. The proposed approach extends monocular Oriented FAST and Rotated BRIEF Simultaneous Localization And Mapping (ORB-SLAM) to work with a multi-camera setup. ORB-SLAM's three parallel threads, namely tracking, mapping, and loop closing, are run for each camera, and new threads are added to calculate the cameras' relative poses and to construct the expanded view. A new algorithm for estimating the optimal inter-camera correspondence matrix from a set of corresponding 3D map points is presented. This optimal transformation is then used to produce the final view. The proposed approach was evaluated using both human models and in vivo data. The evaluation results of the correspondence-matrix estimation algorithm demonstrate its ability to reduce the error and produce an accurate transformation. The results also show that the proposed approach can produce an expanded view when other approaches fail. In summary, this work proposes a real-time dynamic field-of-view expansion approach that works regardless of image overlap; it outperforms previous approaches and runs at 21 fps.
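The abstract does not specify which estimator is used for the inter-camera correspondence matrix, but a standard way to recover a rigid transformation from corresponding 3D map points is the SVD-based Kabsch/Umeyama alignment sketched below; it is a generic illustration of point-set alignment, not necessarily the paper's exact algorithm.

```python
import numpy as np

def rigid_transform_from_points(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q
    (Kabsch/Umeyama via SVD). P and Q are Nx3 arrays of corresponding
    3D map points observed from the two cameras."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# usage: align map points from camera A's frame (P) to camera B's frame (Q)
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
R_est, t_est = rigid_transform_from_points(P, Q)
```

In practice such an estimator is typically wrapped in an outlier-rejection scheme (e.g. RANSAC over the correspondences) before the final transform is accepted.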


2017, Vol 36 (12), pp. 1363-1386
Author(s): Patrick McGarey, Kirk MacTavish, François Pomerleau, Timothy D Barfoot

Tethered mobile robots are useful for exploration in steep, rugged, and dangerous terrain. A tether can provide a robot with robust communications, power, and mechanical support, but also constrains motion. In cluttered environments, the tether will wrap around a number of intermediate 'anchor points', complicating navigation. We show that by measuring the length of tether deployed and the bearing to the most recent anchor point, we can formulate a tethered simultaneous localization and mapping (TSLAM) problem that allows us to estimate the pose of the robot and the positions of the anchor points using only low-cost, nonvisual sensors. This information is used by the robot to safely return along an outgoing trajectory while avoiding tether entanglement. We are motivated by TSLAM as a building block to aid conventional camera- and laser-based approaches to simultaneous localization and mapping (SLAM), which tend to fail in dark or dusty environments. Unlike conventional range-bearing SLAM, the TSLAM problem must account for the fact that the tether-length measurements are a function of the robot's pose and all the intermediate anchor-point positions. While this fact has implications for the sparsity that can be exploited in our method, we show that a solution to the TSLAM problem can still be found and formulate two approaches: (i) an online particle filter based on FastSLAM and (ii) an efficient, offline batch solution. We demonstrate that either method outperforms odometry alone, both in simulation and in experiments using our TReX (Tethered Robotic eXplorer) mobile robot operating in flat indoor and steep outdoor environments. For the indoor experiment, we compare each method on the same dataset with ground truth, showing that batch TSLAM outperforms particle-filter TSLAM in localization and mapping accuracy, owing to superior anchor-point detection, data association, and outlier rejection.
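The coupling between the tether-length measurement and all intermediate anchor points can be made concrete with a simple measurement model: the deployed length is approximately the sum of straight-line segments between consecutive anchors plus the segment from the newest anchor to the robot, while the bearing is taken to the newest anchor in the robot frame. The sketch below encodes that model; it is our simplification for illustration, not the paper's exact formulation.

```python
import numpy as np

def tether_measurement(robot_xy, robot_theta, anchors):
    """Predicted tether measurements for a TSLAM-style estimator (illustrative
    model): total deployed length is the sum of segments between consecutive
    anchors plus the segment from the newest anchor to the robot, and the
    bearing is to the newest anchor, expressed in the robot frame."""
    pts = np.vstack([anchors, robot_xy])             # base anchor ... newest anchor, robot
    seg = np.diff(pts, axis=0)
    length = np.linalg.norm(seg, axis=1).sum()        # depends on ALL anchors and the pose
    dx, dy = anchors[-1] - robot_xy
    bearing = np.arctan2(dy, dx) - robot_theta
    bearing = np.arctan2(np.sin(bearing), np.cos(bearing))  # wrap to [-pi, pi]
    return np.array([length, bearing])

# example: tether from a fixed base, wrapped around two intermediate anchors
anchors = np.array([[0.0, 0.0], [3.0, 1.0], [4.0, 4.0]])
z_hat = tether_measurement(np.array([6.0, 5.0]), np.deg2rad(30.0), anchors)
```

Because the length term sums over every anchor, each tether-length factor touches the whole anchor history, which is exactly the loss of sparsity the abstract notes relative to conventional range-bearing SLAM.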

