Multi-Robot 2.5D Localization and Mapping Using a Monte Carlo Algorithm on a Multi-Level Surface

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4588
Author(s):  
Vinicio Alejandro Rosas-Cervantes ◽  
Quoc-Dong Hoang ◽  
Soon-Geul Lee ◽  
Jae-Hwan Choi

Most indoor environments have wheelchair adaptations or ramps, allowing mobile robots to traverse sloped areas instead of steps. Indoor environments with integrated sloped areas are divided into different levels. These multi-level areas challenge mobile robot navigation because of sudden changes in the readings of reference sensors such as visual, inertial, or laser scan instruments. Using multiple cooperative robots is advantageous for mapping and localization, since they permit rapid exploration of the environment and provide higher redundancy than a single robot. This study proposes a multi-robot localization scheme using two robots (a leader and a follower) to perform fast and robust environment exploration in multi-level areas. The leader robot is equipped with a 3D LiDAR for 2.5D mapping and a Kinect camera for RGB image acquisition. Using the 3D LiDAR, the leader robot obtains information for particle localization, with particles sampled from the walls and obstacle tangents. We employ a convolutional neural network on the RGB images for multi-level area detection. Once the leader robot detects a multi-level area, it generates a path and notifies the follower robot to move to the detected location. The follower robot uses a 2D LiDAR to explore the boundaries of the flat areas and generates a 2D map using an extension of the iterative closest point (ICP) algorithm. The 2D map serves as a re-localization resource in case the leader robot fails.
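The particle-localization step described above can be sketched as a standard Monte Carlo localization loop (predict, weight, resample). The 1-D corridor, wall position, and noise values below are illustrative assumptions, not values from the paper:

```python
import math
import random

random.seed(42)

# Hypothetical 1-D corridor with a wall at x = 10 m (assumed for illustration).
WALL_X = 10.0
SENSOR_SIGMA = 0.2   # assumed range-sensor noise std-dev
MOTION_SIGMA = 0.05  # assumed motion noise std-dev

def mcl_step(particles, dx, measured_range):
    """One predict-weight-resample cycle of Monte Carlo localization."""
    # Predict: propagate each particle by the commanded motion plus noise.
    particles = [p + dx + random.gauss(0.0, MOTION_SIGMA) for p in particles]
    # Weight: Gaussian likelihood of the measured range to the wall.
    weights = [math.exp(-0.5 * ((measured_range - (WALL_X - p)) / SENSOR_SIGMA) ** 2)
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

# Simulate a robot starting near x = 2 m, stepping 0.5 m toward the wall.
true_x = 2.0
particles = [random.uniform(0.0, 5.0) for _ in range(500)]
for _ in range(8):
    true_x += 0.5
    z = WALL_X - true_x + random.gauss(0.0, SENSOR_SIGMA)
    particles = mcl_step(particles, 0.5, z)

estimate = sum(particles) / len(particles)
```

After a few cycles the particle cloud concentrates around the true pose; the paper's method additionally seeds particles from wall and obstacle tangents extracted from the 3D LiDAR scan.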

2019 ◽  
Vol 39 (2) ◽  
pp. 297-307 ◽  
Author(s):  
Haoyao Chen ◽  
Hailin Huang ◽  
Ye Qin ◽  
Yanjie Li ◽  
Yunhui Liu

Purpose: Multi-robot laser-based simultaneous localization and mapping (SLAM) in large-scale environments is an essential but challenging problem in mobile robotics, especially when no prior knowledge is shared between robots. Moreover, the cumulative error of each individual robot has a serious negative effect on loop detection and map fusion. To address these problems, this paper proposes an efficient approach that combines laser and vision measurements.

Design/methodology/approach: A multi-robot visual laser-SLAM is developed to realize robust and efficient SLAM in large-scale environments; both vision and laser loop detections are integrated to detect robust loops. A method based on oriented FAST and rotated BRIEF (ORB) feature detection and bag of words (BoW) is developed to ensure the robustness and computational efficiency of the multi-robot SLAM system. A robust and efficient graph fusion algorithm is proposed to merge pose graphs from different robots.

Findings: The proposed method detects loops more quickly and accurately than laser-only SLAM, and it fuses the submaps of each individual robot to improve the efficiency, accuracy, and robustness of the system.

Originality/value: Compared with state-of-the-art multi-robot SLAM approaches, this paper proposes a novel and more sophisticated approach. Vision-based and laser-based loops are integrated to realize robust loop detection. ORB features and BoW techniques are further utilized to achieve real-time performance. Finally, random sample consensus (RANSAC) and least-squares methods are used to remove outlier loops among robots.
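As a rough illustration of the BoW side of the loop-detection step, the sketch below compares keyframe bag-of-words histograms by cosine similarity; the tiny 5-word vocabulary, threshold, and frame gap are assumptions for illustration, and ORB extraction and vocabulary quantization are omitted:

```python
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words histograms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def detect_loops(histograms, threshold=0.8, min_gap=2):
    """Report (i, j) keyframe pairs whose BoW histograms are similar
    enough to be loop candidates; temporally adjacent frames are skipped."""
    loops = []
    for j in range(len(histograms)):
        for i in range(j - min_gap):
            if cosine(histograms[i], histograms[j]) >= threshold:
                loops.append((i, j))
    return loops

# Toy 5-word vocabulary; frame 3 revisits the place seen in frame 0.
frames = [
    [5, 1, 0, 0, 2],   # frame 0
    [0, 4, 3, 1, 0],   # frame 1
    [1, 0, 5, 2, 0],   # frame 2
    [4, 1, 0, 0, 3],   # frame 3: similar to frame 0 -> loop candidate
]
loops = detect_loops(frames)
```

In a full system each candidate pair would then be geometrically verified (e.g., by RANSAC, as the abstract notes) before being added to the pose graph.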


Author(s):  
Zijing Zhang ◽  
Fei Zhang ◽  
Chuantang Ji

Abstract In order to improve the Simultaneous Localization and Mapping (SLAM) accuracy of mobile robots in complex indoor environments, the multi-robot cardinality-balanced Multi-Bernoulli filter SLAM method (MR-CBMber-SLAM) is proposed. First, this method introduces a Multi-Bernoulli filter based on random finite set (RFS) theory to solve the complex data-association problem. Second, to address the Multi-Bernoulli filter's tendency to overestimate the number of map features, the cardinality-balanced strategy is combined with the Multi-Bernoulli filter. Moreover, to further improve the accuracy and operating efficiency of SLAM, a multi-robot strategy and a multi-robot Gaussian information fusion (MR-GIF) method are proposed. In the experiments, the MR-CBMber-SLAM method is compared with the multi-vehicle Probability Hypothesis Density SLAM (MV-PHD-SLAM) method. The experimental results show that the MR-CBMber-SLAM method outperforms the MV-PHD-SLAM method, verifying that MR-CBMber-SLAM adapts better to complex indoor environments.
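The abstract does not detail MR-GIF, but fusing independent Gaussian estimates of the same landmark in information (inverse-covariance) form is the standard building block for this kind of multi-robot fusion. A scalar sketch with illustrative numbers (the means and variances below are assumptions, not experimental values):

```python
def fuse_gaussians(estimates):
    """Fuse independent Gaussian estimates (mean, variance) of the same
    quantity in information form: precisions add, and the fused mean is
    the precision-weighted average."""
    info = sum(1.0 / var for _, var in estimates)          # total precision
    info_mean = sum(mean / var for mean, var in estimates) # weighted means
    fused_var = 1.0 / info
    return info_mean * fused_var, fused_var

# Two robots observe the same landmark coordinate with different confidence.
robot_a = (4.8, 0.04)   # mean 4.8 m, variance 0.04 (more confident)
robot_b = (5.2, 0.16)   # mean 5.2 m, variance 0.16 (less confident)
mean, var = fuse_gaussians([robot_a, robot_b])
```

The fused estimate lands closer to the more confident robot's mean, and its variance is smaller than either input's, which is the efficiency gain multi-robot fusion buys.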


2021 ◽  
Vol 11 (13) ◽  
pp. 5963
Author(s):  
Phuc Thanh-Thien Nguyen ◽  
Shao-Wei Yan ◽  
Jia-Fu Liao ◽  
Chung-Hsien Kuo

In industrial environments, Autonomous Guided Vehicles (AGVs) generally run on a planned route. Among trajectory-tracking algorithms for unmanned vehicles, the Pure Pursuit (PP) algorithm is prevalent in many real-world applications because of its simplicity and ease of implementation. However, decelerating the AGV appropriately when turning along a large-curvature path is challenging. Moreover, this paper addresses the kidnapped-robot problem that occurs in sparse LiDAR environments. This paper proposes an improved Pure Pursuit algorithm with which the AGV can predict the trajectory and decelerate for turning, thereby increasing path-tracking accuracy. To solve the kidnapped-robot problem, we use a learning-based classifier to detect repetitive-pattern scenarios (e.g., a long corridor) from 2D LiDAR features and switch the localization system between the Simultaneous Localization and Mapping (SLAM) method and the odometry method. Experimental results show that the improved Pure Pursuit algorithm reduces tracking error while performing more efficiently. Moreover, the learning-based localization selection strategy helps the robot navigation task achieve stable performance, with a completion rate 36.25% higher than using SLAM alone. The results demonstrate that the proposed method is feasible and reliable in real conditions.
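The paper's specific trajectory-prediction improvement is not detailed in the abstract; the sketch below shows the baseline Pure Pursuit steering law together with a simple curvature-based speed cap to illustrate why deceleration on tight turns matters. The wheelbase, speed, and lateral-acceleration limits are illustrative assumptions:

```python
import math

# Illustrative vehicle parameters (assumptions, not the paper's values).
WHEELBASE = 1.0      # m
V_MAX = 1.5          # m/s
A_LAT_MAX = 0.8      # max lateral acceleration, m/s^2

def pure_pursuit(lookahead_pt):
    """Classic Pure Pursuit: steer toward a lookahead point given in the
    robot frame (x forward, y left), slowing down on high curvature."""
    x, y = lookahead_pt
    ld2 = x * x + y * y                   # squared lookahead distance
    kappa = 2.0 * y / ld2                 # arc curvature through the point
    steer = math.atan(kappa * WHEELBASE)  # bicycle-model steering angle
    # Curvature-aware speed cap: keep lateral acceleration v^2 * |kappa|
    # below A_LAT_MAX, so tighter turns force lower speed.
    v = V_MAX if kappa == 0 else min(V_MAX, math.sqrt(A_LAT_MAX / abs(kappa)))
    return steer, v

straight = pure_pursuit((2.0, 0.0))   # point dead ahead: no steer, full speed
tight = pure_pursuit((1.0, 1.0))      # sharp left: steer left, slow down
```

With a fixed speed instead of the cap, the tracking error on the tight-turn case grows, which is exactly the failure mode the improved algorithm targets.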


10.5772/50920 ◽  
2012 ◽  
Vol 9 (1) ◽  
pp. 25 ◽  
Author(s):  
Kolja Kühnlenz ◽  
Martin Buss

Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping for mobile robots with active vision. The novel concept is implemented in a humanoid robot navigation scenario in which the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated, and their impact on navigation performance is evaluated against a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: the flexibility to assign the available sensors optimally in each situation, an enlarged visible field, higher localization accuracy, and thus better task performance, i.e., better path-following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.
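The sensor-assignment flexibility mentioned above can be illustrated by a minimal selection rule: among the cameras whose field of view still contains the landmark, pick the one with the lowest measurement noise. The camera parameters below are hypothetical, not from the article:

```python
# Hypothetical camera models: (name, half_fov_rad, angular_noise_rad).
# Narrow-FOV cameras measure bearings more accurately but see less.
CAMERAS = [
    ("wide", 0.9, 0.010),
    ("mid",  0.5, 0.004),
    ("tele", 0.15, 0.001),
]

def select_camera(bearing):
    """Pick the most accurate camera whose field of view contains a
    landmark at the given bearing (rad, 0 = optical axis)."""
    usable = [c for c in CAMERAS if abs(bearing) <= c[1]]
    return min(usable, key=lambda c: c[2])[0] if usable else None

choices = (select_camera(0.05), select_camera(0.4), select_camera(1.2))
```

A landmark near the optical axis goes to the accurate telephoto camera, an off-axis one falls back to a wider lens, and one outside every field of view would trigger an active gaze shift.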


Author(s):  
Sajad Badalkhani ◽  
Ramazan Havangi ◽  
Mohsen Farshad

There is an extensive literature on multi-robot simultaneous localization and mapping (MRSLAM). In most of this research, the environment is assumed to be static, whereas the dynamic parts of the environment degrade the estimation quality of SLAM algorithms and lead to inherently fragile systems. To enhance the performance and robustness of SLAM in dynamic environments (SLAMIDE), a novel cooperative approach named parallel-map (p-map) SLAM is introduced in this paper. The objective of the proposed method is to deal with the dynamics of the environment by detecting dynamic parts and excluding them from the SLAM estimation. In this approach, each robot builds a limited map in its own vicinity, while the global map is built through a hybrid centralized MRSLAM. The restricted size of the local maps bounds the computational complexity and the resources needed to handle a large-scale dynamic environment. Using a probabilistic index, the proposed method differentiates between stationary and moving landmarks based on their positions relative to other parts of the environment. Stationary landmarks are then used to refine a consistent map. The proposed method is evaluated with different levels of dynamism, and for each level the performance is measured in terms of accuracy, robustness, and the hardware resources required for implementation. The method is also evaluated on a publicly available real-world dataset. Experimental validation along with simulations indicates that the proposed method performs consistent SLAM in a dynamic environment, suggesting its feasibility for MRSLAM applications.
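The abstract's probabilistic index is defined relative to other parts of the environment; the simplified sketch below instead scores how consistent a landmark's repeated observations are with a stationary hypothesis, which captures the same stationary-vs-moving split. The noise level and sample positions are illustrative assumptions:

```python
import math

def stationary_probability(observations, sigma=0.1):
    """Toy probabilistic index: likelihood-style score of the stationary
    hypothesis, given repeated (x, y) observations of one landmark.
    sigma is the assumed observation-noise std-dev."""
    n = len(observations)
    mx = sum(x for x, _ in observations) / n
    my = sum(y for _, y in observations) / n
    # Mean squared distance from the centroid, normalized by noise:
    # near zero for a static landmark, large for a moving one.
    msd = sum((x - mx) ** 2 + (y - my) ** 2 for x, y in observations) / n
    return math.exp(-msd / (2.0 * sigma ** 2))

wall_corner = [(3.00, 1.02), (3.01, 0.99), (2.99, 1.01)]  # static landmark
person = [(3.0, 1.0), (3.5, 1.4), (4.1, 1.9)]             # moving object

p_static = stationary_probability(wall_corner)
p_moving = stationary_probability(person)
```

Thresholding such an index lets the mapper keep the wall corner and drop the moving object, so only stationary landmarks refine the map, as in p-map SLAM.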

