Parallel Tracking and Mapping for Controlling VTOL Airframe

2011 ◽  
Vol 2011 ◽  
pp. 1-10 ◽  
Author(s):  
Michal Jama ◽  
Dale Schinstock

This work presents a vision-based system for navigation of a vertical takeoff and landing unmanned aerial vehicle (UAV). It is a monocular, simultaneous localization and mapping (SLAM) system, which measures the position and orientation of the camera and builds a map of the environment using a video stream from a single camera. This differs from past SLAM solutions on UAVs, which use sensors that measure depth, such as LIDAR, stereoscopic cameras, or depth cameras. The solution presented in this paper extends and significantly modifies a recent open-source algorithm that solves the SLAM problem using an approach fundamentally different from the traditional one. The proposed modifications provide the position measurements necessary for the navigation solution on a UAV. The main contributions of this work include: (1) extension of the map-building algorithm so it can be used realistically while controlling a UAV and simultaneously building the map; (2) improved performance of the SLAM algorithm at lower camera frame rates; and (3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV that uses monocular vision for navigation is feasible.
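The idea of closing a UAV position loop on SLAM pose estimates can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name, gains, and the velocity-command interface are invented for the example.

```python
import numpy as np

class PositionPID:
    """Per-axis PID turning a SLAM position estimate into a velocity command."""
    def __init__(self, kp=1.2, ki=0.0, kd=0.4, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = np.zeros(3)
        self.prev_err = np.zeros(3)

    def step(self, setpoint, slam_position):
        # Error between desired position and the pose reported by SLAM
        err = np.asarray(setpoint) - np.asarray(slam_position)
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PositionPID()
# Hold 1 m altitude while SLAM currently reports 0.8 m
cmd = pid.step([0.0, 0.0, 1.0], [0.0, 0.0, 0.8])
```

The controller commands a positive climb rate until the SLAM altitude matches the setpoint; in practice the SLAM pose would first be scaled and filtered before use.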

2017 ◽  
Vol 9 (4) ◽  
pp. 283-296 ◽  
Author(s):  
Sarquis Urzua ◽  
Rodrigo Munguía ◽  
Antoni Grau

Using a camera, a micro aerial vehicle (MAV) can perform visual navigation in periods or circumstances when GPS is unavailable, or only partially available. In this context, monocular simultaneous localization and mapping (SLAM) methods represent an excellent alternative, because limitations in platform design, mobility, and payload capacity impose considerable restrictions on the computational and sensing resources available on the MAV. However, the use of monocular vision introduces technical difficulties, such as the impossibility of directly recovering the metric scale of the world. In this work, a novel monocular SLAM system with application to MAVs is proposed. The sensory input is taken from a monocular downward-facing camera, an ultrasonic range finder, and a barometer. The proposed method is based on the theoretical findings obtained from an observability analysis. Experimental results with real data confirm those theoretical findings and show that the proposed method is capable of providing good results with low-cost hardware.
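The role of the range finder in resolving the unknown metric scale can be illustrated with a toy calculation. The least-squares fit below is an assumption for illustration only, not the paper's observability-based estimator: it finds the single factor that maps SLAM's arbitrary-unit altitudes onto metric altimeter readings.

```python
import numpy as np

def estimate_metric_scale(slam_heights, range_heights):
    """Least-squares scale s minimizing ||s * slam - range||^2."""
    slam = np.asarray(slam_heights, dtype=float)
    rng = np.asarray(range_heights, dtype=float)
    return float(slam @ rng / (slam @ slam))

# SLAM altitude in arbitrary units vs. ultrasonic altitude in metres
s = estimate_metric_scale([1.0, 2.0, 3.0], [0.5, 1.0, 1.5])
# multiply all SLAM translations by s to express them in metres
```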


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Ruwan Egodagamage ◽  
Mihran Tuceryan

Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM, an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras lead to monocular visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps, thus requiring a distributed computational framework. Each agent can generate its own local map, which can then be combined into a map covering a larger area. By doing so, the agents can cover a given environment faster than a single agent. Furthermore, they can interact with each other in the same environment, making this framework more practical, especially for collaborative applications such as augmented reality. One of the main challenges of distributed SLAM is identifying overlapping maps, especially when the relative starting positions of the agents are unknown. In this paper, we propose a system of multiple monocular agents, with unknown relative starting positions, that generates a semidense global map of the environment.
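Once corresponding landmarks have been identified in two overlapping local maps, fusing them reduces to estimating a rigid transform between the maps. The standard Kabsch/Procrustes alignment below is an illustrative stand-in, not the paper's semidense merging pipeline; the 2-D example data are invented.

```python
import numpy as np

def align_maps(points_a, points_b):
    """Rigid 2-D transform (R, t) mapping map B's landmarks onto map A's,
    estimated from correspondences in the overlap (Kabsch algorithm)."""
    A = np.asarray(points_a, float)
    B = np.asarray(points_b, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (B - cb).T @ (A - ca)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ca - R @ cb
    return R, t

# Map B is map A rotated by 90 degrees and shifted by (2, 3)
A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
Rg = np.array([[0.0, -1.0], [1.0, 0.0]])
B = A @ Rg.T + np.array([2.0, 3.0])
R, t = align_maps(A, B)
merged = B @ R.T + t                   # map B expressed in map A's frame
```

With noiseless correspondences the alignment is exact; with real SLAM landmarks one would wrap this in a robust (e.g. RANSAC) loop.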


2011 ◽  
Vol 366 ◽  
pp. 90-94
Author(s):  
Ying Min YI ◽  
Yu Hui

How to identify objects is a key issue for robot simultaneous localization and mapping (SLAM) with monocular vision. In this paper, an algorithm for a wheeled robot's simultaneous localization and mapping with monocular-vision-based landmark identification is proposed. In the observation step, landmark identification and position location are performed by image processing and analysis, which converts the image projections seen by the wheeled robot and the geometric relations of spatial objects into the robot's relative distance and angle to each landmark. The overall algorithm follows the recursive sequence of prediction, observation, data association, update, and mapping to perform simultaneous localization and map building. Compared with active-vision, three-dimensional-vision, and stereo-vision algorithms, the proposed algorithm can identify environmental objects while maintaining smooth movement.
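The recursive prediction-observation-update cycle can be illustrated on a linear 1-D toy with one robot and one landmark. All matrices and numbers below are invented for the example; the paper's monocular, image-based formulation is nonlinear and uses an extended filter.

```python
import numpy as np

# Toy linear SLAM on a line: state = [robot_x, landmark_x].
F = np.eye(2)                  # state transition (the landmark is static)
B = np.array([[1.0], [0.0]])   # control input moves only the robot
H = np.array([[-1.0, 1.0]])    # observation: range z = landmark_x - robot_x
Q = np.diag([0.1, 0.0])        # motion noise (none on the landmark)
R = np.array([[0.05]])         # measurement noise

x = np.array([0.0, 5.0])       # initial estimate
P = np.diag([0.0, 4.0])        # landmark position is very uncertain

u, z = 1.0, 3.9                # drive 1 m forward, then measure the range
# Prediction
x = F @ x + B[:, 0] * u
P = F @ P @ F.T + Q
# Observation and update
y = z - H @ x                  # innovation
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ y
P = (np.eye(2) - K @ H) @ P
# The landmark estimate moves toward z + robot_x = 4.9,
# and its variance shrinks sharply after one observation.
```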


2011 ◽  
Vol 23 (2) ◽  
pp. 292-301 ◽  
Author(s):  
Taro Suzuki ◽  
Yoshiharu Amano ◽  
Takumi Hashizume ◽  
Shinji Suzuki ◽  
...  

This paper describes a Simultaneous Localization And Mapping (SLAM) algorithm using a monocular camera for a small Unmanned Aerial Vehicle (UAV). Small UAVs have attracted attention as an effective means of collecting aerial information. However, there are few practical applications because their small payload precludes conventional 3D measurement equipment. We propose an extended Kalman filter SLAM to estimate UAV position and attitude and to construct 3D terrain maps using a small monocular camera. We propose 3D measurement based on triangulation of Scale-Invariant Feature Transform (SIFT) features extracted from captured images. Field-experiment results show that our proposal effectively estimates the position and attitude of the UAV and constructs the 3D terrain map.
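The triangulation of matched SIFT features amounts to standard two-view triangulation. A linear (DLT) sketch follows, with invented camera matrices and an identity-intrinsics assumption; the paper's pipeline additionally handles feature extraction, matching, and filtering.

```python
import numpy as np

def project(P, X):
    """Project a 3D point X through a 3x4 camera matrix P to pixel coords."""
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its pixel
    coordinates x1, x2 in two views with projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Identity-intrinsics cameras; the second is translated 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free matches the true point is recovered exactly; noisy matches call for a subsequent nonlinear refinement.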


2002 ◽  
Vol 21 (10-11) ◽  
pp. 829-848 ◽  
Author(s):  
Héctor H. González-Baños ◽  
Jean-Claude Latombe

In this paper, we investigate safe and efficient map-building strategies for a mobile robot with imperfect control and sensing. In the implementation, a robot equipped with a range sensor builds a polygonal map (layout) of a previously unknown indoor environment. The robot explores the environment and builds the map concurrently by patching together the local models acquired by the sensor into a global map. A well-studied and related problem is the simultaneous localization and mapping (SLAM) problem, where the goal is to integrate the information collected during navigation into the most accurate map possible. However, SLAM does not address the sensor-placement portion of the map-building task. That is, given the map built so far, where should the robot go next? This is the main question addressed in this paper. Concretely, an algorithm is proposed to guide the robot through a series of “good” positions, where “good” refers to the expected amount and quality of the information that will be revealed at each new location. This is similar to the next-best-view (NBV) problem studied in computer vision and graphics. However, in mobile robotics the problem is complicated by several issues, two of which are particularly crucial. One is to achieve safe navigation despite an incomplete knowledge of the environment and sensor limitations (e.g., in range and incidence). The other issue is the need to ensure sufficient overlap between each new local model and the current map, in order to allow registration of successive views under positioning uncertainties inherent to mobile robots. To address both issues in a coherent framework, in this paper we introduce the concept of a safe region, defined as the largest region that is guaranteed to be free of obstacles given the sensor readings made so far. The construction of a safe region takes sensor limitations into account.
In this paper we also describe an NBV algorithm that uses the safe-region concept to select the next robot position at each step. The new position is chosen within the safe region in order to maximize the expected gain of information under the constraint that the local model at this new position must have a minimal overlap with the current global map. In the future, NBV and SLAM algorithms should reinforce each other. While a SLAM algorithm builds a map by making the best use of the available sensory data, an NBV algorithm, such as that proposed here, guides the navigation of the robot through positions selected to provide the best sensory inputs.
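The NBV selection, maximize expected information gain subject to a minimum-overlap constraint, can be caricatured as a constrained maximization. The scoring below is a toy stand-in: the real algorithm derives candidate positions and their scores from the safe region and range-sensor model.

```python
def choose_next_view(candidates, min_overlap=0.2):
    """Pick the candidate maximizing expected new information, subject to
    a minimum overlap with the current map (needed for registration).
    Each candidate is (position, expected_new_area, overlap_fraction)."""
    feasible = [c for c in candidates if c[2] >= min_overlap]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c[1])[0]

candidates = [
    ((5.0, 1.0), 12.0, 0.05),  # most new area, but too little overlap
    ((2.0, 2.0), 8.0, 0.35),   # feasible, best remaining gain
    ((1.0, 0.5), 3.0, 0.90),   # safe but uninformative
]
best = choose_next_view(candidates)
```

The overlap constraint filters out the greedy choice: the first candidate would reveal the most area but could not be registered against the existing map.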


2018 ◽  
Vol 25 (1) ◽  
pp. 137-153
Author(s):  
Piotr Kaniewski ◽  
Paweł Słowak

The paper describes a problem and an algorithm for simultaneous localization and mapping (SLAM) for an unmanned aerial vehicle (UAV). The algorithm developed by the authors estimates the flight trajectory and builds a map of the terrain below the UAV. As a tool for estimating the UAV position and other flight parameters, a particle filter was applied. The proposed algorithm was tested and analyzed in simulation, and the paper presents a simulator developed by the authors and used for SLAM testing purposes. Selected simulation results, including maps and UAV trajectories constructed by the SLAM algorithm, are included in the paper.
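One cycle of a particle filter, propagate, weight, resample, can be sketched on a 1-D toy with a known landmark. The numbers and the range-only measurement model are invented for illustration and are not the authors' terrain-mapping formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, u, z, landmark, motion_std=0.1, meas_std=0.2):
    """One particle-filter cycle: propagate with noisy motion, weight each
    particle by the range-measurement likelihood, then resample."""
    particles = particles + u + rng.normal(0.0, motion_std, len(particles))
    expected = np.abs(landmark - particles)          # predicted range
    w = np.exp(-0.5 * ((z - expected) / meas_std) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Uninformed prior on a 10 m line; landmark at 9 m, measured range 4 m
particles = rng.uniform(0.0, 10.0, 500)
for _ in range(5):
    particles = pf_step(particles, u=0.0, z=4.0, landmark=9.0)
est = particles.mean()
```

The particle cloud collapses onto the position consistent with the measurements (near 5 m); resampling keeps the particle count fixed while discarding improbable hypotheses.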


Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2795
Author(s):  
Lahemer ◽  
Rad

In this paper, the problem of Simultaneous Localization And Mapping (SLAM) is addressed via a novel augmented landmark vision-based ellipsoidal SLAM. The algorithm is implemented on a NAO humanoid robot and is tested in an indoor environment. The main feature of the system is the implementation of SLAM with a monocular vision system. Distinguished landmarks referred to as NAOmarks are employed to localize the robot via its monocular vision system. We introduce the notion of robotic augmented reality (RAR) and present a monocular Extended Kalman Filter (EKF)/ellipsoidal SLAM in order to improve the performance and alleviate the computational effort, to provide landmark identification, and to simplify the data association problem. The proposed SLAM algorithm is implemented in real time to further calibrate the ellipsoidal SLAM parameters and noise bounding, and to improve its overall accuracy. The augmented EKF/ellipsoidal SLAM algorithms are compared with the regular EKF/ellipsoidal SLAM methods, and the merits of each algorithm are also discussed in the paper. The real-time experimental and simulation studies suggest that the adaptive augmented ellipsoidal SLAM is more accurate than the conventional EKF/ellipsoidal SLAMs.


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 4973 ◽  
Author(s):  
Dániel Kiss-Illés ◽  
Cristina Barrado ◽  
Esther Salamí

This work presents Global Positioning System-Simultaneous Localization and Mapping (GPS-SLAM), an augmented version of Oriented FAST (Features from accelerated segment test) and Rotated BRIEF (Binary Robust Independent Elementary Features) feature detector (ORB)-SLAM that uses GPS and inertial data to make the algorithm capable of dealing with low frame rate datasets. In general, SLAM systems are successful on datasets with a high frame rate. This work was motivated by a sparse dataset on which ORB-SLAM often loses track because of the lack of continuity between frames. The main contribution is the determination of the next frame’s pose based on the GPS and inertial data. The results show that this additional information makes the algorithm more robust. As many large, outdoor unmanned aerial vehicle (UAV) flights record the GPS and inertial measurement unit (IMU) data of the image captures, this program gives an option to use the SLAM algorithm successfully even if the dataset has a low frame rate.
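The core idea, seeding the next frame's pose from GPS and inertial data, can be sketched as a simple dead-reckoning prediction. The function and its signature are invented for illustration; the actual system works with full camera poses inside the ORB-SLAM tracking thread.

```python
import numpy as np

def predict_next_pose(position, yaw, gps_velocity, yaw_rate, dt):
    """Dead-reckon the next frame's pose from GPS velocity and IMU yaw
    rate, to seed SLAM tracking when consecutive frames are far apart."""
    next_position = np.asarray(position) + np.asarray(gps_velocity) * dt
    next_yaw = yaw + yaw_rate * dt
    return next_position, next_yaw

# 2 s between frames, flying east at 5 m/s while turning at 0.1 rad/s
pos, yaw = predict_next_pose([0.0, 0.0, 50.0], 0.0,
                             [5.0, 0.0, 0.0], 0.1, dt=2.0)
```

The predicted pose is only a prior: feature matching then refines it, but starting near the truth is what keeps tracking alive across large inter-frame gaps.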


2020 ◽  
Vol 12 (19) ◽  
pp. 3185
Author(s):  
Ehsan Khoramshahi ◽  
Raquel A. Oliveira ◽  
Niko Koivumäki ◽  
Eija Honkavaara

Simultaneous localization and mapping (SLAM) of a monocular projective camera installed on an unmanned aerial vehicle (UAV) is a challenging task in photogrammetry, computer vision, and robotics. This paper presents a novel real-time monocular SLAM solution for UAV applications. It is based on two steps: consecutive construction of the UAV path, and adjacent strip connection. Consecutive construction rapidly estimates the UAV path by sequentially connecting incoming images to a network of connected images. A multilevel pyramid matching is proposed for this step that contains a sub-window matching using high-resolution images. The sub-window matching increases the frequency of tie points by propagating locations of matched sub-windows, which leads to a list of high-frequency tie points while keeping the execution time relatively low. A sparse bundle block adjustment (BBA) is employed to optimize the initial path by considering nuisance parameters. System calibration parameters with respect to the global navigation satellite system (GNSS) and inertial navigation system (INS) are optionally considered in the BBA model for direct georeferencing. Ground control points and checkpoints are optionally included in the model for georeferencing and quality control. Adjacent strip connection is enabled by an overlap analysis to further improve the connectivity of local networks. A novel angular parametrization based on a spherical rotation coordinate system is presented to address the gimbal-lock singularity of BBA. Our results suggest that the proposed scheme is a precise real-time monocular SLAM solution for a UAV.
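The gimbal-lock singularity that motivates the paper's spherical parametrization arises from Euler angles; the paper's own parametrization is not reproduced here, but quaternions illustrate the same singularity-free idea: rotations compose cleanly even at a 90-degree pitch, where Euler-angle Jacobians degenerate.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def axis_angle_quat(axis, angle):
    """Unit quaternion for a rotation of `angle` radians about `axis`."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

# Pitch of exactly 90 degrees -- the Euler gimbal-lock configuration --
# composed with a yaw, with no singular case to handle
q_pitch = axis_angle_quat([0, 1, 0], np.pi / 2)
q_yaw = axis_angle_quat([0, 0, 1], np.pi / 4)
q = quat_mul(q_yaw, q_pitch)
```

The composed quaternion stays on the unit sphere, so the parametrization never loses a degree of freedom, which is the property the paper's spherical angular parametrization provides inside the BBA.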

