Active Mapping and Robot Exploration: A Survey

Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2445
Author(s):  
Iker Lluvia ◽  
Elena Lazkano ◽  
Ander Ansuategi

Simultaneous localization and mapping addresses the problem of building a map of the environment without any prior information, based on data obtained from one or more sensors. In most situations the robot is driven by a human operator, but some systems are capable of navigating autonomously while mapping, which is called active simultaneous localization and mapping. This strategy focuses on actively computing trajectories to explore the environment while building a map with minimum error. In this paper, a comprehensive review of the research work developed in this field is provided, targeting the most relevant contributions in indoor mobile robotics.
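
As a rough illustration of the idea of actively choosing where to map next, the sketch below implements frontier-based exploration on a 2-D occupancy grid. This is a common baseline strategy in the active-mapping literature, not a method taken from the survey itself, and the grid encoding (-1 unknown, 0 free, 1 occupied) is an assumption made for the example.

```python
# Minimal frontier-based exploration sketch (a common active-mapping baseline;
# not a method from the surveyed paper). Assumes a 2-D occupancy grid where
# -1 = unknown, 0 = free, 1 = occupied.
import numpy as np

def find_frontiers(grid):
    """Return coordinates of free cells that border at least one unknown cell."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:           # only free cells can be frontiers
                continue
            neighbours = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neighbours == -1).any():  # touches unexplored space
                frontiers.append((r, c))
    return frontiers

def next_goal(grid, robot_pos):
    """Pick the closest frontier cell as the next exploration goal."""
    frontiers = find_frontiers(grid)
    if not frontiers:
        return None                       # map is complete
    dists = [np.hypot(r - robot_pos[0], c - robot_pos[1]) for r, c in frontiers]
    return frontiers[int(np.argmin(dists))]

# Example: a tiny, partially explored map with a 3x3 free patch
grid = np.full((5, 5), -1)
grid[1:4, 1:4] = 0
print(next_goal(grid, robot_pos=(2, 2)))  # (1, 2): nearest free cell bordering unknown space
```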

Robotics ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 40 ◽  
Author(s):  
Malcolm Mielle ◽  
Martin Magnusson ◽  
Achim J. Lilienthal

Simultaneous Localization And Mapping (SLAM) usually assumes the robot starts without knowledge of the environment. While prior information, such as emergency maps or layout maps, is often available, integrating it is not trivial, since such maps are often out of date and have uncertain local scale. Integration of prior map information is further complicated by sensor noise, drift in the measurements, and incorrect scan registrations in the sensor map. We present the Auto-Complete Graph (ACG), a graph-based SLAM method that merges elements of the sensor map and the prior map into one consistent representation. After optimizing the ACG, the sensor map's errors are corrected thanks to the prior map, while the sensor map corrects the local scale inaccuracies of the prior map. We provide three datasets with associated prior maps: two recorded in campus environments and one from a fireman training facility. Our method handled up to 40% noise in the odometry, was robust to varying levels of detail between the prior and the sensor map, and could correct local scale errors of the prior. In field tests with the ACG, users indicated points of interest directly on the prior map before exploration, and we recorded no failures in reaching them.
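
As a hedged illustration of how prior-map constraints can be folded into graph optimization, the toy below jointly optimizes a 1-D pose chain against drifting odometry and two weakly weighted prior anchors. It is a generic least-squares sketch, not the authors' ACG formulation; the weights and anchor values are assumptions chosen for the example.

```python
# Toy illustration (not the authors' ACG formulation): a 1-D pose chain where
# odometry drifts and a prior map supplies weak absolute anchors for two poses.
# Both residual types are stacked and solved jointly, so the prior corrects
# sensor drift while its weak weight reflects local-scale uncertainty.
import numpy as np
from scipy.optimize import least_squares

odom = np.array([1.0, 1.1, 0.9, 1.2])     # drifting relative measurements
prior_anchor = {2: 2.0, 4: 4.0}           # prior map: pose index -> position
w_odom, w_prior = 1.0, 0.3                # prior trusted less (scale uncertainty)

def residuals(x):
    poses = np.concatenate(([0.0], x))    # pose 0 fixed at the origin
    res = [w_odom * (poses[i + 1] - poses[i] - odom[i]) for i in range(len(odom))]
    res += [w_prior * (poses[i] - p) for i, p in prior_anchor.items()]
    return np.array(res)

x0 = np.cumsum(odom)                      # initial guess: dead reckoning
sol = least_squares(residuals, x0)
print(np.round(sol.x, 3))                 # poses pulled toward the prior anchors
```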


Robotics ◽  
2018 ◽  
Vol 7 (3) ◽  
pp. 45 ◽  
Author(s):  
Chang Chen ◽  
Hua Zhu ◽  
Menggang Li ◽  
Shaoze You

Visual-inertial simultaneous localization and mapping (VI-SLAM) is a popular research topic in robotics. Because of its advantages in terms of robustness, VI-SLAM enjoys wide application in localization and mapping, including in mobile robotics, self-driving cars, unmanned aerial vehicles, and autonomous underwater vehicles. This study provides a comprehensive survey of VI-SLAM. Following a short introduction, this study is the first to review VI-SLAM techniques from filtering-based and optimization-based perspectives. It summarizes state-of-the-art studies over the last 10 years based on the back-end approach, camera type, and sensor fusion type. Key VI-SLAM technologies are also introduced, such as feature extraction and tracking, core theory, and loop closure. The performance of representative VI-SLAM methods and well-known VI-SLAM datasets are also surveyed. Finally, this study compares filtering-based and optimization-based methods through experiments. A comparative study of VI-SLAM methods helps clarify the differences in their operating principles: optimization-based methods achieve excellent localization accuracy and lower memory utilization, while filtering-based methods have advantages in terms of computing resources. Furthermore, this study proposes future development trends and research directions for VI-SLAM. It provides a detailed survey of VI-SLAM techniques and can serve as a brief guide both to newcomers in the field of SLAM and to experienced researchers looking for possible directions for future work.
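
To make the filtering-versus-optimization distinction concrete, the sketch below contrasts the two back-end styles on a 1-D toy state: a Kalman update that folds in each measurement once and keeps only a mean and variance, versus a batch least-squares solve over a window of measurements. This is a schematic assumption made for illustration, not an experiment from the study.

```python
# Schematic contrast of the two back-end families on a 1-D toy state
# (illustrative only; real VI-SLAM states include pose, velocity, and IMU biases).
import numpy as np

# --- Filtering-based: fold in each measurement once, keep only mean/variance ---
def kf_update(x, P, z, R):
    """Scalar Kalman update: fuse measurement z (variance R) into estimate x (variance P)."""
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1.0                          # prior belief
for z in [0.9, 1.1, 1.0]:
    x, P = kf_update(x, P, z, R=0.5)
print(round(x, 3))                       # 0.857: filtered estimate (still influenced by the prior)

# --- Optimization-based: keep the window of measurements and re-solve jointly ---
zs = np.array([0.9, 1.1, 1.0])
x_opt = zs.mean()                        # closed-form solution of min_x sum_i (x - z_i)^2
print(round(float(x_opt), 3))            # 1.0: batch estimate over the same window
```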


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 2068 ◽  
Author(s):  
César Debeunne ◽  
Damien Vivet

Autonomous navigation requires a mapping and localization solution that is both precise and robust. In this context, Simultaneous Localization and Mapping (SLAM) is a very well-suited solution. SLAM is used in many applications, including mobile robotics, self-driving cars, unmanned aerial vehicles, and autonomous underwater vehicles. In these domains, both visual and visual-inertial (visual-IMU) SLAM are well studied, and improvements are regularly proposed in the literature. However, LiDAR-SLAM techniques seem to have changed relatively little over the last ten or twenty years. Moreover, few research works focus on vision-LiDAR approaches, even though such a fusion would have many advantages. Indeed, hybridized solutions improve SLAM performance, especially with respect to aggressive motion, lack of light, or lack of visual features. This study provides a comprehensive survey of visual-LiDAR SLAM. After a summary of the basic idea of SLAM and its implementation, we give a complete review of the state of the art of SLAM research, focusing on solutions using vision, LiDAR, and a fusion of both modalities.
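
As a hedged sketch of what such a hybridized cost can look like, the example below jointly estimates a 2-D translation from visual landmark correspondences and LiDAR point-to-plane residuals with a single least-squares solve. It is a deliberately simplified illustration under assumed values, not a formulation taken from any of the surveyed systems.

```python
# Hedged sketch of a joint vision + LiDAR cost (simplified to a 2-D translation;
# not taken from the surveyed systems). Visual terms pull toward known landmark
# positions, LiDAR terms pull points onto a known wall line, and one solver
# refines the pose using both modalities at once.
import numpy as np
from scipy.optimize import least_squares

landmarks_map = np.array([[2.0, 1.0], [3.0, 4.0]])           # landmarks in world frame
landmarks_obs = np.array([[1.6, 0.7], [2.6, 3.7]])           # observed in robot frame
wall_normal, wall_d = np.array([0.0, 1.0]), 5.0              # wall line: n . p = d
lidar_pts = np.array([[1.0, 4.6], [2.0, 4.7], [3.0, 4.6]])   # hits on that wall (robot frame)

def residuals(t):
    vis = (landmarks_obs + t - landmarks_map).ravel()         # reprojection-like terms
    lid = lidar_pts @ wall_normal + t @ wall_normal - wall_d  # point-to-plane terms
    return np.concatenate([vis, lid])

sol = least_squares(residuals, x0=np.zeros(2))
print(np.round(sol.x, 2))        # estimated translation using both sensor modalities
```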


2021 ◽  
Vol 229 ◽  
pp. 01023
Author(s):  
Rachid Latif ◽  
Kaoutar Dahmane ◽  
Monir Amraoui ◽  
Amine Saddik ◽  
Abdelouahed Elouardi

Localization and mapping is a fundamental problem in robotics, and the robotics community has proposed many solutions to it. Among the competitive axes of mobile robotics is autonomous navigation based on simultaneous localization and mapping (SLAM) algorithms, which track the robot's localization while building a map of the environment, giving machines the ability to move autonomously. In this work, we propose an implementation of the bio-inspired SLAM algorithm RatSLAM on a heterogeneous CPU-GPU system. The evaluation of the algorithm showed an execution time of 170.611 ms for the C/C++ implementation, processing 5 frames/s, while the implementation on the heterogeneous system, written in CUDA, achieved an execution time of 160.43 ms.
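
The roughly 6% reduction in execution time (170.611 ms to 160.43 ms) suggests that only part of the pipeline benefits from GPU offloading. The sketch below shows the RatSLAM pose-cell excitation step as a wraparound Gaussian convolution, the kind of regular, data-parallel stage typically moved to the GPU; it is an illustrative NumPy sketch under assumed grid size and sigma, not the authors' C/C++ or CUDA code.

```python
# Hedged sketch of the RatSLAM pose-cell excitation step as a wraparound
# convolution over the (x, y, theta) cell grid -- a regular, data-parallel
# operation of the kind typically offloaded to the GPU. Illustrative NumPy,
# not the authors' implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

pose_cells = np.zeros((30, 30, 18))            # activity over (x, y, theta) cells
pose_cells[15, 15, 9] = 1.0                    # a single active cell

def excite(cells, sigma=1.5):
    """Spread activity to neighbouring cells with wraparound boundaries."""
    spread = gaussian_filter(cells, sigma=sigma, mode="wrap")
    return spread / spread.sum()               # global normalization step

pose_cells = excite(pose_cells)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(pose_cells), pose_cells.shape))
print(peak)                                    # activity still peaks at (15, 15, 9)
```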


Author(s):  
Zewen Xu ◽  
Zheng Rong ◽  
Yihong Wu

In recent years, simultaneous localization and mapping in dynamic environments (dynamic SLAM) has attracted significant attention from both academia and industry. Some pioneering work on this technique has expanded the potential of robotic applications. Compared to standard SLAM under the static world assumption, dynamic SLAM divides features into static and dynamic categories and leverages each type of feature properly. Therefore, dynamic SLAM can provide more robust localization for intelligent robots that operate in complex dynamic environments. Additionally, to meet the demands of some high-level tasks, dynamic SLAM can be integrated with multiple object tracking. This article presents a survey on dynamic SLAM from the perspective of feature choices. A discussion of the advantages and disadvantages of different visual features is provided in this article.
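
As a minimal illustration of the static/dynamic split, the sketch below applies one common geometric consistency check: features whose image motion disagrees with the motion predicted from the estimated camera movement are flagged as dynamic. The check, threshold, and point values are assumptions made for the example; the survey itself covers a much broader range of feature choices.

```python
# Minimal sketch of one common way to split features into static and dynamic
# (a geometric consistency check; not a specific method from the survey).
import numpy as np

cam_motion = np.array([5.0, 0.0])              # estimated image-space shift (px)
prev_pts = np.array([[100.0, 50.0], [200.0, 80.0], [150.0, 120.0]])
curr_pts = np.array([[105.2, 50.1], [230.0, 95.0], [154.8, 119.7]])

def classify(prev_pts, curr_pts, cam_motion, thresh_px=3.0):
    """Return a boolean mask: True = static, False = dynamic."""
    predicted = prev_pts + cam_motion          # where static points should reappear
    residual = np.linalg.norm(curr_pts - predicted, axis=1)
    return residual < thresh_px

mask = classify(prev_pts, curr_pts, cam_motion)
print(mask)                                    # [ True False  True]: feature 2 is dynamic
```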


2020 ◽  
Vol 1682 ◽  
pp. 012049
Author(s):  
Jianjie Zheng ◽  
Haitao Zhang ◽  
Kai Tang ◽  
Weidi Kong

Automation ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 48-61
Author(s):  
Bhavyansh Mishra ◽  
Robert Griffin ◽  
Hakki Erhan Sevil

Visual simultaneous localization and mapping (VSLAM) is an essential technique used in areas such as robotics and augmented reality for pose estimation and 3D mapping. Research on VSLAM using both monocular and stereo cameras has grown significantly over the last two decades. There is, therefore, a need for a comprehensive review of the evolving architecture of such algorithms in the literature. Although VSLAM algorithm pipelines share similar mathematical backbones, their implementations are individualized, and the ad hoc nature of the interfacing between different modules of VSLAM pipelines complicates code reusability and maintenance. This paper presents a software model for the core components of VSLAM implementations and the interfaces that govern data flow between them, while also attempting to preserve the elements that offer performance improvements over the evolution of VSLAM architectures. The framework presented in this paper employs principles from model-driven engineering (MDE), which are used extensively in the development of large and complicated software systems. The presented VSLAM framework will assist researchers in improving the performance of individual VSLAM modules without having to spend time on integrating those modules into complete VSLAM pipelines.
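
As a hedged sketch of what explicit module interfaces in such a software model might look like, the example below defines abstract base classes for a few core VSLAM components. The class and method names are hypothetical illustrations in the spirit of the paper's model-driven approach, not its actual software model.

```python
# Hedged sketch of explicit interfaces between VSLAM modules. The class and
# method names here are hypothetical illustrations, not the paper's model.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    timestamp: float
    keypoints: List[tuple]      # (u, v) pixel coordinates
    descriptors: List[bytes]

@dataclass
class Pose:
    rotation: tuple             # e.g. a quaternion (w, x, y, z)
    translation: tuple          # (x, y, z)

class FeatureExtractor(ABC):
    @abstractmethod
    def extract(self, image) -> Frame: ...

class Tracker(ABC):
    @abstractmethod
    def track(self, prev: Frame, curr: Frame) -> Pose: ...

class Mapper(ABC):
    @abstractmethod
    def insert_keyframe(self, frame: Frame, pose: Pose) -> None: ...

class LoopCloser(ABC):
    @abstractmethod
    def detect_and_correct(self, frame: Frame) -> bool: ...

# A pipeline that depends only on these interfaces lets individual modules
# (e.g. an ORB-based vs. a learned feature extractor) be swapped without
# touching the rest of the system.
```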

