A Novel Feature-Based Approach for Indoor Monocular SLAM

Electronics ◽  
2018 ◽  
Vol 7 (11) ◽  
pp. 305 ◽  
Author(s):  
Seyyed Hoseini ◽  
Peyman Kabiri

Camera tracking and the construction of a robust and accurate map in unknown environments are still challenging tasks in computer vision and robotic applications. Visual Simultaneous Localization and Mapping (SLAM) and Augmented Reality (AR) are two important applications whose performance depends entirely on the accuracy of the camera tracking routine. This paper presents a novel feature-based approach to the monocular SLAM problem using a hand-held camera in room-sized workspaces with a maximum scene depth of 4–5 m. At the core of the proposed method is a Particle Filter (PF) responsible for estimating the extrinsic parameters of the camera. In addition, contrary to key-frame based methods, the proposed system tracks the camera frame by frame and constructs a robust and accurate map incrementally. The algorithm first constructs a metric sparse map: a chessboard pattern with a known cell size is placed in front of the camera for a few frames, which enables the algorithm to compute the camera pose accurately and hence to calculate the depths of the initially detected natural feature points. Afterwards, camera pose estimation for each new incoming frame is carried out in a framework that works solely with a set of visible natural landmarks. To recover the depth of newly detected landmarks, a delayed approach based on linear triangulation is used. The proposed method is applied to a real-world VGA-quality video (640 × 480 pixels), where the translation error of the camera pose is less than 2 cm on average and the orientation error is less than 3 degrees, which indicates the effectiveness and accuracy of the developed algorithm.
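The delayed depth recovery described above rests on linear triangulation of a landmark observed from two known camera poses. A minimal sketch of that step follows (standard DLT triangulation; the function name and the use of NumPy are illustrative choices, not taken from the paper):

```python
import numpy as np

def triangulate_linear(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark from two views.

    P1, P2 : 3x4 camera projection matrices (K @ [R | t]).
    x1, x2 : 2-D pixel observations of the same landmark.
    Returns the 3-D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X: x * (P[2] @ X) - P[0] @ X = 0, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With noisy observations the same construction gives the least-squares solution in the algebraic sense; a delayed initialization scheme would apply it only once the baseline between the two views is large enough.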

Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 161 ◽  
Author(s):  
Junqiao Zhao ◽  
Yewei Huang ◽  
Xudong He ◽  
Shaoming Zhang ◽  
Chen Ye ◽  
...  

Autonomous parking in an indoor parking lot without human intervention is one of the most demanded and challenging tasks for autonomous driving systems. The key to this task is precise real-time indoor localization. However, state-of-the-art visual simultaneous localization and mapping (VSLAM) systems based on low-level visual features suffer in monotonous or texture-less scenes and under poor illumination or dynamic conditions. Additionally, low-level feature-based maps are hard for human beings to use directly. In this paper, we propose a semantic landmark-based robust VSLAM for real-time localization of autonomous vehicles in indoor parking lots. The parking slots are extracted as meaningful landmarks and enriched with confidence levels. We then propose a robust optimization framework to solve the aliasing problem of semantic landmarks by dynamically eliminating suboptimal constraints in the pose graph and correcting erroneous parking-slot associations. As a result, a semantic map of the parking lot, usable by both autonomous driving systems and human beings, is established automatically and robustly. We evaluated the real-time localization performance using multiple autonomous vehicles; a track-tracing repeatability of 0.3 m was achieved at an autonomous driving speed of 10 km/h.
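Dynamically eliminating suboptimal pose-graph constraints can be illustrated on a toy 1-D pose graph: solve the least-squares problem, find the constraint with the largest residual, drop it if the residual exceeds a threshold, and re-solve. This is only a sketch of the general idea; the paper's actual formulation, confidence levels, and semantic association logic are not reproduced here:

```python
import numpy as np

def solve_pose_graph(n_poses, constraints):
    """Least-squares solve of a 1-D pose graph anchored at pose 0.

    constraints: list of (i, j, d) meaning 'measured x_j - x_i = d'.
    """
    A = np.zeros((len(constraints) + 1, n_poses))
    b = np.zeros(len(constraints) + 1)
    for k, (i, j, d) in enumerate(constraints):
        A[k, i], A[k, j], b[k] = -1.0, 1.0, d
    A[-1, 0] = 1.0  # gauge constraint: fix x_0 = 0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def robust_solve(n_poses, constraints, thresh=0.5):
    """Repeatedly drop the worst-fitting constraint until every residual
    falls below the threshold (a crude form of outlier rejection)."""
    active = list(constraints)
    while True:
        x = solve_pose_graph(n_poses, active)
        residuals = [abs((x[j] - x[i]) - d) for i, j, d in active]
        worst = int(np.argmax(residuals))
        if residuals[worst] <= thresh or len(active) <= 1:
            return x, active
        del active[worst]
```

An aliased loop closure (e.g. associating the wrong parking slot) shows up as the dominant residual and is removed, after which the remaining constraints agree.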


2017 ◽  
Vol 9 (4) ◽  
pp. 283-296 ◽  
Author(s):  
Sarquis Urzua ◽  
Rodrigo Munguía ◽  
Antoni Grau

Using a camera, a micro aerial vehicle (MAV) can perform visual-based navigation in periods or circumstances when GPS is unavailable or only partially available. In this context, monocular simultaneous localization and mapping (SLAM) methods represent an excellent alternative, since limitations in platform design, mobility, and payload capacity impose considerable restrictions on the computational and sensing resources available to the MAV. However, the use of monocular vision introduces some technical difficulties, such as the impossibility of directly recovering the metric scale of the world. In this work, a novel monocular SLAM system with application to MAVs is proposed. The sensory input is taken from a monocular downward-facing camera, an ultrasonic range finder, and a barometer. The proposed method is based on theoretical findings obtained from an observability analysis. Experimental results with real data confirm those findings and show that the proposed method is capable of providing good results with low-cost hardware.
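One common way a range sensor resolves the unobservable monocular scale is to fit a single scale factor between the up-to-scale visual altitude estimate and the metric altitude readings. A minimal least-squares sketch of that idea follows (illustrative only; the paper's observability-based formulation is more involved than a one-parameter fit):

```python
import numpy as np

def estimate_metric_scale(visual_alt, sensor_alt):
    """Least-squares scale s minimizing ||s * visual_alt - sensor_alt||.

    visual_alt : altitudes from monocular SLAM (arbitrary scale).
    sensor_alt : corresponding metric altitudes, e.g. from an
                 ultrasonic range finder or barometer.
    """
    v = np.asarray(visual_alt, dtype=float)
    m = np.asarray(sensor_alt, dtype=float)
    # Closed-form solution of the scalar least-squares problem.
    return float(v @ m / (v @ v))
```

Multiplying the visual map and trajectory by the recovered factor expresses them in metres; in practice the fit is refreshed as new altitude samples arrive.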


2020 ◽  
Vol 17 (2) ◽  
pp. 172988142090419 ◽  
Author(s):  
Baofu Fang ◽  
Zhiqiang Zhan

Visual simultaneous localization and mapping (SLAM) is a well-known research area in robotics. Traditional point feature-based approaches face many challenges, such as insufficient point features, motion jitter, and low localization accuracy in low-texture scenes, which degrade the performance of the algorithms. In this article, we propose an RGB-D SLAM system, named Point-Line Fusion (PLF)-SLAM, to handle these situations. We utilize both points and line segments throughout our pipeline. Specifically, we present a new line segment extraction method to solve the overlap and branching problems of line segments, and then propose a more rigorous screening mechanism in the line matching stage. Instead of minimizing the reprojection error of points alone, we introduce a reprojection error based on both points and lines to obtain a more accurate tracking pose. In addition, we present a solution for handling jitter frames, which greatly improves the tracking success rate and availability of the system. We thoroughly evaluate our system on the Technische Universität München (TUM) RGB-D benchmark and compare it with ORB-SLAM2, widely regarded as a state-of-the-art solution. The experiments show that our system achieves better accuracy and robustness than ORB-SLAM2.
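A joint point-line cost of the kind described above typically combines the pixel reprojection error of 3-D points with the point-to-line distance of the projected endpoints of 3-D segments. A sketch of such a combined cost, assuming the observed 2-D line is given in homogeneous coefficients normalized so that distances come out in pixels (function names are illustrative, not from the paper):

```python
import numpy as np

def point_reproj_error(K, R, t, X, obs):
    """Pixel reprojection error of a 3-D point X against observation obs."""
    x = K @ (R @ X + t)
    return np.linalg.norm(x[:2] / x[2] - obs)

def line_reproj_error(K, R, t, P_start, P_end, obs_line):
    """Sum of point-to-line distances of the projected segment endpoints.

    obs_line: observed 2-D line as homogeneous coefficients (a, b, c),
    normalized so that sqrt(a**2 + b**2) == 1 (distances in pixels).
    """
    err = 0.0
    for P in (P_start, P_end):
        x = K @ (R @ P + t)
        p = np.append(x[:2] / x[2], 1.0)  # homogeneous image point
        err += abs(obs_line @ p)          # signed distance to the line
    return err

def total_error(K, R, t, points, lines):
    """Joint cost: points are (X, obs) pairs, lines are (Ps, Pe, obs_line)."""
    return (sum(point_reproj_error(K, R, t, X, o) for X, o in points)
            + sum(line_reproj_error(K, R, t, a, b, l) for a, b, l in lines))
```

Minimizing this cost over (R, t) yields a pose constrained by both feature types, which is what keeps tracking stable when point features alone are scarce.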


2018 ◽  
Vol 28 (3) ◽  
pp. 505-519
Author(s):  
Demeng Li ◽  
Jihong Zhua ◽  
Benlian Xu ◽  
Mingli Lu ◽  
Mingyue Li

Inspired by ant foraging, and by modeling the feature map and measurements as random finite sets, a novel formulation in an ant colony framework is proposed to jointly estimate the map and the vehicle trajectory, thereby solving the feature-based simultaneous localization and mapping (SLAM) problem. The resulting ant-PHD-SLAM algorithm decomposes the recursion for the joint map-trajectory posterior density into a propagated posterior density of the vehicle trajectory and a posterior density of the feature map conditioned on the vehicle trajectory. More specifically, an ant-PHD filter is proposed to jointly estimate the number of map features and their locations, using the powerful search ability and collective cooperation of ants to carry out the time-prediction and data-update steps of the PHD-SLAM filter. Meanwhile, a novel fast moving-ant estimator (F-MAE) is utilized to estimate the maneuvering vehicle trajectory. Evaluation and comparison on several numerical examples show a performance improvement over recently reported approaches. Moreover, experimental results on the Robot Operating System (ROS) platform validate the consistency with the results obtained from the numerical simulations.
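At the heart of any PHD-SLAM variant is the PHD (intensity) recursion over the feature map, whose integral equals the expected number of features. The ant-colony machinery itself is beyond a short sketch, but a minimal single-step Gaussian-mixture PHD measurement update for a static 1-D map conveys the underlying filter (all parameter values are illustrative):

```python
import numpy as np

def gauss(x, mu, var):
    """Scalar Gaussian density."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def phd_update(weights, means, varis, measurements,
               p_d=0.9, meas_var=0.04, clutter=1e-3):
    """One GM-PHD measurement update for a static 1-D feature map.

    The map PHD is a Gaussian mixture; the sum of its weights is the
    expected number of features. Each measurement spawns updated
    components; missed detections keep the prior scaled by (1 - p_d).
    """
    new_w = [w * (1 - p_d) for w in weights]  # missed-detection terms
    new_m = list(means)
    new_v = list(varis)
    for z in measurements:
        lik = [p_d * w * gauss(z, m, v + meas_var)
               for w, m, v in zip(weights, means, varis)]
        norm = clutter + sum(lik)
        for w, m, v, l in zip(weights, means, varis, lik):
            k = v / (v + meas_var)  # Kalman gain of the component
            new_w.append(l / norm)
            new_m.append(m + k * (z - m))
            new_v.append((1 - k) * v)
    return new_w, new_m, new_v
```

In a Rao-Blackwellized scheme such as PHD-SLAM, one such map intensity is maintained per trajectory hypothesis, conditioned on that hypothesis's pose history.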


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6988
Author(s):  
Shuien Yu ◽  
Chunyun Fu ◽  
Amirali K. Gostar ◽  
Minghui Hu

When multiple robots are involved in simultaneous localization and mapping (SLAM), a global map should be constructed by merging the local maps built by individual robots, so as to provide a better representation of the environment. Hence, map-merging methods play a crucial role in multi-robot systems and determine the performance of multi-robot SLAM. This paper examines the key problem of map merging for multiple-ground-robot SLAM and reviews the typical map-merging methods for several important types of maps in SLAM applications: occupancy grid maps, feature-based maps, and topological maps. These map-merging approaches are classified according to their working mechanism or the type of features they handle. The concepts and characteristics of these methods are elaborated in this review. The contents summarized in this paper provide insights and guidance for future multiple-ground-robot SLAM solutions.
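For occupancy grid maps, once the relative transform between two local maps is known, overlapping cells are commonly fused by adding log-odds evidence, since independent observations of the same cell combine additively in log-odds form. A minimal sketch for axis-aligned grids related by a non-negative integer translation (a simplification; real map merging must also estimate rotation and handle sub-cell alignment):

```python
import numpy as np

def merge_grids(log_odds_a, log_odds_b, shift):
    """Merge two aligned occupancy grids expressed in log-odds.

    shift: integer (dy, dx) offset of grid B's origin in grid A's frame
    (non-negative in this sketch). Overlapping cells fuse by adding
    log-odds; non-overlapping cells keep their own values (unknown = 0).
    """
    dy, dx = shift
    h = max(log_odds_a.shape[0], dy + log_odds_b.shape[0])
    w = max(log_odds_a.shape[1], dx + log_odds_b.shape[1])
    merged = np.zeros((h, w))  # 0 log-odds = unknown (p = 0.5)
    merged[:log_odds_a.shape[0], :log_odds_a.shape[1]] += log_odds_a
    merged[dy:dy + log_odds_b.shape[0], dx:dx + log_odds_b.shape[1]] += log_odds_b
    return merged
```

Estimating `shift` itself is the hard part in practice, typically done by feature matching or by maximizing an overlap score between the grids.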


2012 ◽  
Vol 2012 ◽  
pp. 1-26 ◽  
Author(s):  
Rodrigo Munguía ◽  
Antoni Grau

This paper describes in detail a method to implement a simultaneous localization and mapping (SLAM) system based on monocular vision, for applications in visual odometry, appearance-based sensing, and emulation of range-bearing measurements. SLAM techniques are required to operate mobile robots in a priori unknown environments using only on-board sensors, simultaneously building a map of the surroundings that the robot needs in order to track its position. In this context, the 6-DOF (degrees of freedom) monocular camera case (monocular SLAM) arguably represents the hardest variant of SLAM: a single camera, moving freely through its environment, is the sole sensory input to the system. The method proposed in this paper is based on a technique called delayed inverse-depth feature initialization, which is intended to initialize new visual features in the system. In this work, a detailed formulation, extended discussions, and experiments with real data are presented in order to validate the proposal and to show its performance.
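The inverse-depth representation behind delayed inverse-depth initialization parameterizes each feature by the camera position at first observation, the direction of the observation ray, and the inverse of the depth along that ray, which keeps the measurement model nearly linear for distant features. A sketch of the standard conversion back to a Euclidean point (following the usual azimuth-elevation convention; not code from the paper):

```python
import numpy as np

def inverse_depth_to_euclidean(anchor, theta, phi, rho):
    """Convert an inverse-depth feature to a Euclidean 3-D point.

    anchor     : camera optical centre when the feature was first seen.
    theta, phi : azimuth and elevation of the observation ray (world frame).
    rho        : inverse depth along that ray (1 / metres).
    """
    # Unit direction vector of the observation ray.
    ray = np.array([np.cos(phi) * np.sin(theta),
                    -np.sin(phi),
                    np.cos(phi) * np.cos(theta)])
    return np.asarray(anchor, dtype=float) + ray / rho
```

The delayed variant keeps a candidate in this form until the camera has moved enough for the depth (1/rho) to become observable, and only then inserts the feature into the state.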

