Minimalistic approach for monocular SLAM system applied to micro aerial vehicles in GPS-denied environments

2018, Vol. 40 (16), pp. 4345-4357
Author(s): Sarquis Urzua, Rodrigo Munguía, Emmanuel Nuño, Antoni Grau

In this work, a novel monocular simultaneous localization and mapping (SLAM) system with application to micro aerial vehicles is proposed. The main difference with respect to previous approaches is that a barometer is used as the sole sensory aid for incorporating altitude information into the system in order to recover the absolute metric scale. First, an observability analysis of a simplified model of a monocular SLAM system is developed, and several theoretical results are derived from it. One important result is that the metric scale can become observable when measurements of altitude are included in the system; in this case, sufficient conditions for observability are presented. The design of the proposed method is based on these theoretical results. Simulations and experiments with real data are presented to validate the proposed approach. The results confirm that the metric scale can be retrieved by including altitude measurements in the system. It is also shown that the proposed method can be practically implemented, using low-cost sensors, to perform visual-based navigation in GPS-denied environments.
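As a rough illustration of why altitude measurements make the metric scale recoverable, the sketch below estimates a single scale factor that maps the up-to-scale vertical displacement reported by a monocular SLAM front end onto the metric altitude changes reported by a barometer. This is only a batch least-squares intuition of the idea, not the filter-based formulation of the paper; the function and variable names are illustrative.

```python
import numpy as np

def estimate_metric_scale(slam_z, baro_alt):
    """Batch least-squares scale factor mapping up-to-scale SLAM altitude
    onto metric barometric altitude (both taken relative to the first sample).

    slam_z   : sequence of vertical positions from monocular SLAM (arbitrary scale)
    baro_alt : sequence of barometric altitudes in metres, same timestamps
    """
    dz_slam = np.asarray(slam_z, dtype=float)
    dz_baro = np.asarray(baro_alt, dtype=float)
    dz_slam = dz_slam - dz_slam[0]          # up-to-scale vertical displacement
    dz_baro = dz_baro - dz_baro[0]          # metric vertical displacement
    denom = float(np.dot(dz_slam, dz_slam))
    if denom < 1e-9:
        # No vertical excitation: the scale is unobservable, echoing the
        # observability conditions discussed in the paper.
        raise ValueError("insufficient vertical motion: scale not observable")
    return float(np.dot(dz_slam, dz_baro)) / denom
```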

2017, Vol. 9 (4), pp. 283-296
Author(s): Sarquis Urzua, Rodrigo Munguía, Antoni Grau

Using a camera, a micro aerial vehicle (MAV) can perform visual-based navigation in periods or circumstances when GPS is not available, or when it is only partially available. In this context, monocular simultaneous localization and mapping (SLAM) methods represent an excellent alternative, because limitations on platform design, mobility and payload capacity impose considerable restrictions on the computational and sensing resources available to the MAV. However, the use of monocular vision introduces some technical difficulties, such as the impossibility of directly recovering the metric scale of the world. In this work, a novel monocular SLAM system with application to MAVs is proposed. The sensory input is taken from a monocular downward-facing camera, an ultrasonic range finder and a barometer. The proposed method is based on the theoretical findings obtained from an observability analysis. Experimental results with real data confirm those theoretical findings and show that the proposed method is capable of providing good results with low-cost hardware.
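The sensory suite described above (ultrasonic range finder plus barometer) suggests a simple altitude-fusion step. The toy sketch below, which is not the paper's estimator, trusts the short-range ultrasonic reading when it is valid and uses it to re-calibrate the drift-prone barometer bias; all names, ranges and thresholds are illustrative assumptions.

```python
def fuse_altitude(sonar_m, baro_m, baro_bias, max_sonar_range=4.0):
    """Toy altitude fusion for a downward-facing ultrasonic range finder and
    a barometer: trust the sonar when it is in range and use it to update the
    drift-prone barometer bias; otherwise fall back to the corrected barometer.

    Returns (altitude_estimate_m, updated_baro_bias).
    """
    if sonar_m is not None and 0.2 < sonar_m < max_sonar_range:
        baro_bias = baro_m - sonar_m        # re-anchor the barometer to the sonar
        return sonar_m, baro_bias
    return baro_m - baro_bias, baro_bias    # sonar invalid: corrected barometer
```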


Sensors, 2017, Vol. 17 (4), pp. 802
Author(s): Elena López, Sergio García, Rafael Barea, Luis Bergasa, Eduardo Molinos, ...

2012, Vol. 2012, pp. 1-26
Author(s): Rodrigo Munguía, Antoni Grau

This paper describes in a detailed manner a method to implement a simultaneous localization and mapping (SLAM) system based on monocular vision for applications of visual odometry, appearance-based sensing, and emulation of range-bearing measurements. SLAM techniques are required to operate mobile robots in a priori unknown environments using only on-board sensors to simultaneously build a map of their surroundings; this map is needed for the robot to track its own position. In this context, the 6-DOF (degrees of freedom) monocular camera case (monocular SLAM) arguably represents the hardest variant of SLAM. In monocular SLAM, a single camera, which moves freely through its environment, represents the sole sensory input to the system. The method proposed in this paper is based on a technique called delayed inverse-depth feature initialization, which is intended to initialize new visual features in the system. In this work, a detailed formulation, extended discussions, and experiments with real data are presented in order to validate the proposal and to show its performance.
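For reference, the sketch below shows the inverse-depth feature parametrization on which delayed initialization builds: a feature is stored as the camera position at first observation, the two angles of the observation ray, and the inverse depth along that ray. The notation follows the commonly used formulation of this parametrization rather than the paper's exact equations, and the helper name is illustrative.

```python
import numpy as np

def inverse_depth_to_point(feature):
    """Convert an inverse-depth feature to a Euclidean 3D point.

    feature = (xc, yc, zc, theta, phi, rho):
      (xc, yc, zc) : camera optical centre when the feature was first observed
      theta, phi   : azimuth and elevation of the observation ray (world frame)
      rho          : inverse depth along that ray (1 / metres)
    """
    xc, yc, zc, theta, phi, rho = feature
    # Unit direction of the observation ray encoded by the two angles.
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return np.array([xc, yc, zc]) + (1.0 / rho) * m
```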


Sensors, 2021, Vol. 21 (3), pp. 830
Author(s): Guilherme Marcel Dias Santana, Rogers Silva de Cristo, Kalinka Regina Lucas Jaquie Castelo Branco

Unmanned Aerial Vehicles (UAVs) demand technologies that allow them not only to fly autonomously, but also to communicate with base stations, flight controllers, computers, devices, or even other UAVs. Still, UAVs usually operate within unlicensed spectrum bands, competing against the increasing number of mobile devices and other wireless networks. Combining UAVs with Cognitive Radio (CR) may increase their overall communication performance, thus allowing them to execute missions where conventional UAVs face limitations. CR provides smart wireless communication that, instead of relying on a transmission frequency fixed in hardware, selects the frequency in software. CR intelligently uses free transmission channels and/or chooses them according to the application's requirements. Moreover, CR is considered a key enabler for deploying technologies that require high connectivity, such as Smart Cities, 5G, the Internet of Things (IoT), and the Internet of Flying Things (IoFT). This paper presents an overview of the field of CR for UAV communications and its state of the art, surveys testbed alternatives for real-data experiments, provides specifications to build a simple and low-cost testbed, and indicates key opportunities and future challenges in the field.
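As a minimal illustration of the channel-selection idea, the sketch below performs a naive energy-detection sensing step over a set of candidate channels and picks the quietest one under an occupancy threshold. It is a toy example under assumed inputs (raw IQ samples per channel) and is not tied to any specific CR platform discussed in the paper.

```python
import numpy as np

def select_channel(iq_per_channel, occupancy_threshold=1e-3):
    """Naive energy-detection step for a cognitive radio: estimate the average
    power of each candidate channel from its IQ samples and pick the quietest
    channel below an occupancy threshold (threshold value is illustrative).

    iq_per_channel : dict {channel_id: np.ndarray of complex IQ samples}
    Returns the chosen channel_id, or None if every channel looks occupied.
    """
    power = {ch: float(np.mean(np.abs(iq) ** 2))
             for ch, iq in iq_per_channel.items()}
    free = {ch: p for ch, p in power.items() if p < occupancy_threshold}
    return min(free, key=free.get) if free else None
```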


Proceedings, 2018, Vol. 4 (1), pp. 44
Author(s): Ankit Ravankar, Abhijeet Ravankar, Yukinori Kobayashi, Takanori Emaru

Mapping and exploration are important tasks of mobile robots for various applications such as search and rescue, inspection, and surveillance. Unmanned aerial vehicles (UAVs) are better suited for such tasks because they have a larger field of view than ground robots. Autonomous operation of UAVs is desirable for exploration in unknown environments. In such environments, the UAV must build a map of the environment and simultaneously localize itself in it, which is commonly known as the SLAM (simultaneous localization and mapping) problem. This is also required to safely navigate between open spaces and to make informed decisions about exploration targets. UAVs have physical constraints, including limited payload, and are generally equipped with low-spec embedded computational devices and sensors. Therefore, it is often challenging to achieve robust SLAM on UAVs, which in turn affects exploration. In this paper, we present autonomous exploration of UAVs in completely unknown environments using low-cost sensors such as a LIDAR and an RGB-D camera. A sensor fusion method is proposed to build a dense 3D map of the environment. Multiple images from the scene are geometrically aligned as the UAV explores the environment, and a frontier exploration technique is then used to search for the next target in the mapped area so as to explore the maximum possible area. The results show that the proposed algorithm can build precise maps even with low-cost sensors, and can explore the environment efficiently.
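To make the frontier-exploration step concrete, the sketch below detects frontier cells on a 2D occupancy grid: free cells bordering unknown space, which serve as candidate exploration goals. This is the classic frontier definition rather than the exact formulation used in the paper; the cell encoding and names are assumptions.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1   # assumed cell encoding

def find_frontiers(grid):
    """Return (row, col) indices of frontier cells in a 2D occupancy grid:
    free cells with at least one unknown 4-neighbour. These are the candidate
    goals in classic frontier-based exploration.
    """
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers
```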


Sensors, 2020, Vol. 20 (12), pp. 3531
Author(s): Juan-Carlos Trujillo, Rodrigo Munguia, Sarquis Urzua, Edmundo Guerra, Antoni Grau

To achieve autonomy in applications that involve Unmanned Aerial Vehicles (UAVs), the capacity for self-localization and perception of the operational environment is a fundamental requirement. To this end, GPS represents the typical solution for determining the position of a UAV operating in outdoor and open environments. On the other hand, GPS is not a reliable solution for other kinds of environments, such as cluttered and indoor ones. In this scenario, a good alternative is represented by monocular SLAM (Simultaneous Localization and Mapping) methods. A monocular SLAM system allows a UAV to operate in an a priori unknown environment, using an onboard camera to simultaneously build a map of its surroundings while locating itself with respect to this map. Given the problem of an aerial robot that must follow a free-moving cooperative target in a GPS-denied environment, this work presents a monocular-based SLAM approach for cooperative UAV–target systems that addresses the state estimation of (i) the UAV position and velocity, (ii) the target position and velocity, and (iii) the landmark positions (the map). The proposed monocular SLAM system incorporates altitude measurements obtained from an altimeter. In this case, an observability analysis is carried out to show that the observability properties of the system are improved by incorporating altitude measurements. Furthermore, a novel technique to estimate the approximate depth of new visual landmarks is proposed, which takes advantage of the cooperative target. Additionally, a control system is proposed for maintaining a stable flight formation of the UAV with respect to the target. In this case, the stability of the control laws is proved using Lyapunov theory. The experimental results obtained from real data, as well as the results obtained from computer simulations, show that the proposed scheme can provide good performance.
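As a simplified illustration of the formation-keeping objective, the sketch below computes a UAV velocity command from a proportional term on the offset error plus a feed-forward of the target velocity. It is only a toy control law; the paper's controller and its Lyapunov-based stability analysis are considerably more involved, and all gains and names here are illustrative.

```python
import numpy as np

def formation_velocity_command(p_uav, p_target, v_target, offset, kp=0.8):
    """Toy formation-keeping law: drive the UAV toward a desired offset from
    the moving target with a proportional term, plus a feed-forward of the
    target velocity.

    p_uav, p_target, v_target, offset : 3-vectors (np.ndarray), metres / m/s
    """
    error = (p_target + offset) - p_uav     # error w.r.t. the desired slot
    return v_target + kp * error            # feed-forward + proportional term
```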


Drones, 2021, Vol. 5 (4), pp. 121
Author(s): Buğra ŞİMŞEK, Hasan Şakir BİLGE

Localization and mapping technologies are of great importance for all varieties of Unmanned Aerial Vehicles (UAVs) to perform their operations. In the near future, the use of micro/nano-size UAVs is expected to increase. Such vehicles are sometimes expendable platforms, and reuse may not be possible. Compact, body-mounted and low-cost cameras are preferred in these UAVs due to weight, cost and size limitations. Visual simultaneous localization and mapping (vSLAM) methods are used to provide situational awareness for micro/nano-size UAVs. Fast rotational movements that occur during flight with gimbal-free, body-mounted cameras cause motion blur. Above a certain level of motion blur, tracking losses occur, which prevents vSLAM algorithms from operating effectively. In this study, a novel vSLAM framework is proposed that prevents tracking losses in micro/nano-UAVs due to motion blur. In the proposed framework, the blur level of the frames obtained from the platform camera is determined, and frames whose focus-measure score falls below a threshold are restored by dedicated motion-deblurring methods. The major causes of tracking losses have been analyzed through experimental studies, and vSLAM algorithms have been made robust by the proposed framework. It has been observed that the framework can prevent tracking losses at 5, 10 and 20 fps processing speeds. The vSLAM algorithms continue to operate normally at processing speeds at which standard vSLAM algorithms previously failed, which can be considered a strength of this study.
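A minimal sketch of the frame-gating idea follows: score each incoming frame with a focus measure (here the common variance-of-Laplacian score computed with OpenCV) and send blurred frames through a restoration step before they reach the vSLAM front end. The threshold and the placeholder sharpening step stand in for the paper's specific blur metric and motion-deblurring methods.

```python
import cv2
import numpy as np

def gate_frame(frame, focus_threshold=100.0):
    """Score an incoming BGR frame with the variance-of-Laplacian focus
    measure; pass sharp frames through unchanged and run blurred frames
    through a placeholder restoration step before handing them to vSLAM.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    focus_score = cv2.Laplacian(gray, cv2.CV_64F).var()
    if focus_score >= focus_threshold:
        return frame                        # sharp enough: use as-is
    # Blurred: placeholder sharpening in place of the paper's dedicated
    # motion-deblurring methods.
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    return cv2.filter2D(frame, -1, kernel)
```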

