Aerial and Ground Robot Collaboration for Autonomous Mapping in Search and Rescue Missions

Drones ◽  
2020 ◽  
Vol 4 (4) ◽  
pp. 79
Author(s):  
Dimitrios Chatziparaschis ◽  
Michail G. Lagoudakis ◽  
Panagiotis Partsinevelos

Humanitarian crisis scenarios typically require immediate rescue intervention. In many cases, conditions at a scene may be prohibitive for human rescuers to provide instant aid because of hazardous, unexpected, and life-threatening situations. These scenarios are ideal for autonomous mobile robot systems to assist in searching for, and even rescuing, individuals. In this study, we present a synchronous ground-aerial robot collaboration approach in which an Unmanned Aerial Vehicle (UAV) and a humanoid robot solve a Search and Rescue scenario locally, without the aid of a commonly used Global Navigation Satellite System (GNSS). Specifically, the UAV combines Simultaneous Localization and Mapping and OctoMap approaches to extract a 2.5D occupancy grid map of the unknown area relative to the humanoid robot. The humanoid robot receives a goal position in the created map and executes a path-planning algorithm to estimate a footstep navigation trajectory for reaching the goal. As the humanoid robot navigates, it localizes itself in the map using an adaptive Monte Carlo Localization algorithm that combines local odometry data with sensor observations from the UAV. Finally, the humanoid robot performs visual human-body detection on its camera data using a pre-trained Darknet neural network. The proposed robot collaboration scheme has been tested in a proof-of-concept setting in an exterior GNSS-denied environment.
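
The UAV-assisted localization step above can be pictured with a toy particle filter. The sketch below is a minimal 1-D Monte Carlo localization cycle, assuming a single position coordinate and direct position observations from the UAV; the names and parameter values are illustrative, not the authors' implementation:

```python
import math
import random

def mcl_step(particles, control, observation, noise=0.5):
    """One Monte Carlo localization cycle (1-D toy): motion update with
    noisy odometry, weighting by the UAV's observation, then resampling."""
    # Motion update: apply the odometry control with additive Gaussian noise.
    moved = [p + control + random.gauss(0.0, noise) for p in particles]
    # Sensor update: Gaussian likelihood of the UAV's position observation.
    weights = [math.exp(-(observation - p) ** 2 / (2 * noise ** 2))
               for p in moved]
    # Resample particles in proportion to their (unnormalized) weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(1)
particles = [random.uniform(-10.0, 10.0) for _ in range(500)]
true_pos = 0.0
for _ in range(20):
    true_pos += 0.3                      # robot walks 0.3 m per step
    particles = mcl_step(particles, 0.3, true_pos)
estimate = sum(particles) / len(particles)  # should settle near 6.0 m
```

The adaptive variant used in the paper additionally adjusts the particle count on the fly; this fixed-size version only shows the predict-weight-resample loop.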

Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3542 ◽  
Author(s):  
Eleftherios Lygouras ◽  
Nicholas Santavas ◽  
Anastasios Taitzoglou ◽  
Konstantinos Tarchanidis ◽  
Athanasios Mitropoulos ◽  
...  

Unmanned aerial vehicles (UAVs) play a primary role in a plethora of technical and scientific fields owing to their wide range of applications. In particular, the provision of emergency services during a crisis event is a vital application domain where such aerial robots can contribute, delivering valuable assistance to both distressed humans and rescue teams. Bearing in mind that time constraints are a crucial parameter in search and rescue (SAR) missions, the timely and precise detection of humans in peril is of paramount importance. This paper deals with real-time human detection onboard a fully autonomous rescue UAV. Using deep learning techniques, the implemented embedded system was capable of detecting open-water swimmers. This allowed the UAV to provide assistance accurately in a fully unsupervised manner, thus enhancing first-responder operational capabilities. The novelty of the proposed system is the combination of global navigation satellite system (GNSS) techniques and computer vision algorithms for both precise human detection and rescue-apparatus release. Details of the hardware configuration as well as the system's performance evaluation are fully discussed.
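
The GNSS-plus-vision combination for a precise release point can be sketched as follows: given the UAV's GNSS fix and a detection in the image, project the detection onto the water surface. This is a hypothetical nadir-camera, flat-Earth approximation for illustration only, not the system described in the paper:

```python
import math

def geolocate_detection(uav_lat, uav_lon, altitude_m, pixel_x, pixel_y,
                        image_w=640, image_h=480, hfov_deg=60.0):
    """Project the centre of a detected bounding box onto the sea surface,
    assuming a nadir-pointing camera and a flat-Earth approximation."""
    # Focal length in pixels from the horizontal field of view.
    focal_px = (image_w / 2) / math.tan(math.radians(hfov_deg / 2))
    # Small-angle offsets of the pixel from the optical axis.
    dx = (pixel_x - image_w / 2) / focal_px
    dy = (pixel_y - image_h / 2) / focal_px
    # Ground displacement in metres for a nadir camera at this altitude.
    east_m = altitude_m * dx
    north_m = -altitude_m * dy            # image y grows downward
    # Metres to degrees (flat-Earth, valid only for small offsets).
    dlat = north_m / 111_320.0
    dlon = east_m / (111_320.0 * math.cos(math.radians(uav_lat)))
    return uav_lat + dlat, uav_lon + dlon

# A swimmer detected at the image centre lies directly below the UAV.
lat, lon = geolocate_detection(40.0, 23.0, 30.0, 320, 240)
```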


2017 ◽  
Vol 37 (1) ◽  
pp. 3-12 ◽  
Author(s):  
Robert A Hewitt ◽  
Evangelos Boukas ◽  
Martin Azkarate ◽  
Marco Pagnamenta ◽  
Joshua A Marshall ◽  
...  

This paper describes a dataset collected along a 1 km section of beach near Katwijk, The Netherlands, which was populated with a collection of artificial rocks of varying sizes to emulate known rock size densities at current and potential Mars landing sites. First, a fixed-wing unmanned aerial vehicle collected georeferenced images of the entire area. Then, the beach was traversed by a rocker-bogie-style rover equipped with a suite of sensors that are envisioned for use in future planetary rover missions. These sensors, configured so as to emulate the ExoMars rover, include stereo cameras, and time-of-flight and scanning light-detection-and-ranging sensors. This dataset will be of interest to researchers developing localization and mapping algorithms for vehicles traveling over natural and unstructured terrain in environments that do not have access to the global navigation satellite system, and where only previously taken satellite or aerial imagery is available.


2020 ◽  
Vol 12 (10) ◽  
pp. 1564 ◽  
Author(s):  
Kai-Wei Chiang ◽  
Guang-Je Tsai ◽  
Yu-Hua Li ◽  
You Li ◽  
Naser El-Sheimy

Automated driving has made considerable progress recently. Multisensor fusion is a game changer in making self-driving cars possible, and in the near future it will be necessary to meet the high accuracy demands of automated driving systems. This paper proposes a multisensor fusion design that combines an inertial navigation system (INS), a global navigation satellite system (GNSS), and light detection and ranging (LiDAR) to implement 3D simultaneous localization and mapping (INS/GNSS/3D LiDAR-SLAM). The proposed fusion structure enhances the conventional INS/GNSS/odometer scheme by compensating for individual drawbacks such as INS drift and error-contaminated GNSS. First, a highly integrated INS-aided LiDAR-SLAM is presented that improves performance and increases robustness in varied environments by using reliable initial values from the INS. Second, the proposed fault detection and exclusion (FDE) enables the SLAM to reject failed solutions such as local minima or algorithm divergence. Third, a SLAM position-velocity-acceleration (PVA) model is used to handle highly dynamic movement. Finally, an integrity assessment helps the central fusion filter keep failed measurements out of the update process, based on information from the INS-aided SLAM, which increases reliability and accuracy. Consequently, the proposed multisensor design can handle situations such as long-term GNSS outages, deep urban areas, and highways. The results show that the proposed method can achieve an accuracy of under 1 m in challenging scenarios, demonstrating its potential to contribute to autonomous systems.
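
The PVA model mentioned above propagates position, velocity, and acceleration between SLAM updates. The one-axis sketch below shows the constant-acceleration prediction step; the actual filter works on a full matrix state with covariance, so this is only a simplified illustration:

```python
def pva_predict(state, dt):
    """Constant-acceleration (position-velocity-acceleration) prediction
    for one axis: p' = p + v*dt + 0.5*a*dt^2, v' = v + a*dt, a' = a."""
    p, v, a = state
    return (p + v * dt + 0.5 * a * dt * dt, v + a * dt, a)

# A vehicle at 10 m/s accelerating at 2 m/s^2, predicted 1 s ahead.
state = (0.0, 10.0, 2.0)
state = pva_predict(state, 1.0)   # (11.0, 12.0, 2.0)
```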


2021 ◽  
Vol 11 (8) ◽  
pp. 3688
Author(s):  
Ali Barzegar ◽  
Oualid Doukhi ◽  
Deok-Jin Lee

In this study, the hardware and software design and implementation of an autonomous electric vehicle are addressed. We aimed to develop an autonomous electric vehicle for path tracking, and we developed and implemented the required control and navigation algorithms. The vehicle is able to perform path-tracking maneuvers in environments where positioning signals from the Global Navigation Satellite System (GNSS) are not accessible. The proposed control approach uses a modified constrained input-output nonlinear model predictive controller (NMPC) for path-tracking control. The proposed localization algorithm provides accurate position estimation in GNSS-denied environments. We discuss the procedure for designing the vehicle hardware, electronic drivers, communication architecture, localization algorithm, and controller architecture. The system's full state is estimated by fusing visual inertial odometry (VIO) measurements with wheel odometry data using an extended Kalman filter (EKF). Simulation and real-time experiments were performed. The obtained results demonstrate that the designed autonomous vehicle is capable of performing challenging path-tracking maneuvers at speeds of up to 1 m/s without using GNSS positioning data.
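
In the scalar case, the EKF fusion of VIO and wheel odometry reduces to the classic Kalman measurement update. This is a minimal sketch with made-up variances, not the vehicle's actual filter:

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse the current estimate (x, P)
    with a measurement z of variance R."""
    K = P / (P + R)                    # Kalman gain
    return x + K * (z - x), (1 - K) * P

# Wheel odometry predicts 5.0 m travelled (variance 0.4); VIO measures
# 5.6 m (variance 0.1). The fused estimate leans toward the VIO reading,
# which has the lower variance.
x, P = kalman_update(5.0, 0.4, 5.6, 0.1)   # x = 5.48, P = 0.08
```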


Author(s):  
M. S. Müller ◽  
S. Urban ◽  
B. Jutzi

The number of unmanned aerial vehicles (UAVs) is increasing, since low-cost airborne systems are available to a wide range of users. Outdoor navigation of such vehicles is mostly based on global navigation satellite system (GNSS) methods to obtain the vehicle's trajectory. The drawbacks of satellite-based navigation are failures caused by occlusions and multi-path interference. Besides this, local image-based solutions like Simultaneous Localization and Mapping (SLAM) and Visual Odometry (VO) can, for example, be used to support the GNSS solution by closing trajectory gaps, but they are computationally expensive. Moreover, if the trajectory estimation is interrupted or unavailable, a re-localization is mandatory. In this paper we provide a novel method for GNSS-free and fast image-based pose regression in a known area by utilizing a small convolutional neural network (CNN). With on-board processing in mind, we employ a lightweight CNN called SqueezeNet and use transfer learning to adapt the network to pose regression. Our experiments show promising results for GNSS-free and fast localization.
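
Adapting a classification CNN such as SqueezeNet to pose regression typically means replacing the classifier head with a 7-value output (a 3-D position plus a unit quaternion) and training with a weighted loss. The sketch below shows a PoseNet-style loss of that kind; the weighting `beta` and the function names are illustrative assumptions, not necessarily the authors' choices:

```python
import math

def pose_loss(pred_xyz, pred_q, true_xyz, true_q, beta=250.0):
    """PoseNet-style pose-regression loss: Euclidean position error plus a
    weighted orientation error on quaternions (beta balances the scales)."""
    pos_err = math.dist(pred_xyz, true_xyz)
    # Normalise the predicted quaternion before comparing to the target.
    norm = math.sqrt(sum(c * c for c in pred_q))
    q = [c / norm for c in pred_q]
    ori_err = math.dist(q, true_q)
    return pos_err + beta * ori_err

# A perfect prediction has zero loss.
loss = pose_loss((1.0, 2.0, 3.0), (1.0, 0.0, 0.0, 0.0),
                 (1.0, 2.0, 3.0), (1.0, 0.0, 0.0, 0.0))
```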


Robotics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 97
Author(s):  
André Silva Aguiar ◽  
Filipe Neves dos Santos ◽  
José Boaventura Cunha ◽  
Héber Sobreira ◽  
Armando Jorge Sousa

Research and development of autonomous mobile robotic solutions that can perform active agricultural tasks (pruning, harvesting, mowing) has been growing. Robots are now used for a variety of tasks such as planting, harvesting, environmental monitoring, and the supply of water and nutrients. To do so, robots need to be able to perform online localization and, if desired, mapping. The most common approach to localization in agricultural applications is based on standalone Global Navigation Satellite System receivers. However, in many agricultural and forest environments, satellite signals are unavailable or inaccurate, which leads to the need for advanced solutions that are independent of these signals. Approaches like simultaneous localization and mapping and visual odometry are the most promising ways to increase localization reliability and availability. In this context, this work proposes an analysis of the current state of the art of localization and mapping approaches in agriculture and forest environments. Additionally, an overview of the datasets available to develop and test these approaches is provided. Finally, a critical analysis of this research field is performed, characterizing the literature using a variety of metrics. This work leads to the main conclusion that few methods can simultaneously achieve the desired goals of scalability, availability, and accuracy, owing to the challenges imposed by these harsh environments. In the near future, novel contributions to this field are expected, with the development of more advanced techniques based on 3D localization and on semantic and topological mapping.


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3668 ◽  
Author(s):  
Jingren Wen ◽  
Chuang Qian ◽  
Jian Tang ◽  
Hui Liu ◽  
Wenfang Ye ◽  
...  

Simultaneous localization and mapping (SLAM) has been investigated in the field of robotics for two decades, as it is considered an effective method for solving the positioning and mapping problem in a single framework. In the SLAM community, Extended Kalman Filter (EKF) based SLAM and particle-filter SLAM are the most mature technologies. After years of development, graph-based SLAM is becoming the most promising technology, and much progress has recently been made with respect to accuracy and efficiency. Whichever SLAM method is used, loop closure is a vital part of overcoming accumulated errors. However, in 2D Light Detection and Ranging (LiDAR) SLAM, it is relatively difficult to extract distinctive features from LiDAR scans for loop closure detection, as 2D LiDAR scans encode much less information than images; moreover, there are mapping scenarios in which no loop closure exists. Therefore, in this paper, instead of loop closure detection, we propose a method that introduces an extra control network constraint (CNC) into the back-end optimization of graph-based SLAM, by aligning the LiDAR scan center with the control vertices of a presurveyed control network to optimize all the poses of scans and submaps. Field tests were carried out in a typical urban Global Navigation Satellite System (GNSS)-weak outdoor area. The results show that the position Root Mean Square (RMS) error of the selected key points is 0.3614 m, evaluated against a reference map produced by a Terrestrial Laser Scanner (TLS). Mapping accuracy is significantly improved, compared to the mapping RMS of 1.6462 m without the control network constraint. Adding distance constraints of the control network to the back-end optimization is an effective and practical method for solving the drift accumulation of LiDAR front-end scan matching.
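
The control network constraint can be pictured in one dimension: odometry links between consecutive poses plus one absolute constraint pinning a pose to a surveyed control vertex. The gradient-descent sketch below is a toy illustration of the back-end optimization, not the paper's solver, and all values are made up:

```python
def optimize_chain(odometry, control_idx, control_val, weight=10.0,
                   iters=2000, lr=0.01):
    """1-D pose-graph toy: poses linked by relative odometry constraints,
    with one absolute control-network constraint on pose `control_idx`.
    Minimised by plain gradient descent; the first pose is fixed at 0."""
    n = len(odometry) + 1
    x = [0.0] * n
    for i in range(1, n):                 # dead-reckoned initial guess
        x[i] = x[i - 1] + odometry[i - 1]
    for _ in range(iters):
        g = [0.0] * n
        for i, u in enumerate(odometry):  # relative (odometry) residuals
            r = x[i + 1] - x[i] - u
            g[i + 1] += 2 * r
            g[i] -= 2 * r
        # Absolute residual against the surveyed control vertex.
        g[control_idx] += 2 * weight * (x[control_idx] - control_val)
        g[0] = 0.0                        # keep the first pose anchored
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Odometry claims three 1.0 m steps, but a control point says pose 3 is
# actually at 3.6 m: the optimizer spreads the correction along the chain.
poses = optimize_chain([1.0, 1.0, 1.0], control_idx=3, control_val=3.6)
```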


2020 ◽  
Vol 12 (19) ◽  
pp. 3185
Author(s):  
Ehsan Khoramshahi ◽  
Raquel A. Oliveira ◽  
Niko Koivumäki ◽  
Eija Honkavaara

Simultaneous localization and mapping (SLAM) with a monocular projective camera installed on an unmanned aerial vehicle (UAV) is a challenging task in photogrammetry, computer vision, and robotics. This paper presents a novel real-time monocular SLAM solution for UAV applications based on two steps: consecutive construction of the UAV path, and adjacent strip connection. Consecutive construction rapidly estimates the UAV path by sequentially connecting incoming images to a network of connected images. For this step, a multilevel pyramid matching is proposed that includes sub-window matching on high-resolution images. The sub-window matching increases the frequency of tie points by propagating the locations of matched sub-windows, which yields a list of high-frequency tie points while keeping the execution time relatively low. A sparse bundle block adjustment (BBA) is employed to optimize the initial path while accounting for nuisance parameters. System calibration parameters with respect to the global navigation satellite system (GNSS) and inertial navigation system (INS) can optionally be considered in the BBA model for direct georeferencing. Ground control points and checkpoints can optionally be included in the model for georeferencing and quality control. Adjacent strip connection is enabled by an overlap analysis to further improve the connectivity of local networks. A novel angular parametrization based on a spherical rotation coordinate system is presented to address the gimbal-lock singularity of the BBA. Our results suggest that the proposed scheme is a precise real-time monocular SLAM solution for UAVs.
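
The gimbal-lock singularity that motivates the angular parametrization can be demonstrated directly: in a ZYX (yaw-pitch-roll) Euler parametrization at a pitch of 90°, yaw and roll rotate about the same axis, so distinct angle triplets produce the same rotation and the BBA normal equations lose rank there. A minimal numerical sketch:

```python
import math

def euler_zyx_matrix(yaw, pitch, roll):
    """Rotation matrix from ZYX (yaw-pitch-roll) Euler angles."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

# At pitch = 90 deg the matrix depends only on (roll - yaw): two different
# angle triplets with the same difference yield the identical rotation.
half_pi = math.pi / 2
A = euler_zyx_matrix(0.3, half_pi, 0.1)
B = euler_zyx_matrix(0.4, half_pi, 0.2)
```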


Author(s):  
G. J. Tsai ◽  
K. W. Chiang ◽  
N. El-Sheimy

<p><strong>Abstract.</strong> With advances in computing and sensor technologies, onboard systems can deal with large amounts of data and achieve continuous, accurate real-time processing. To further enhance positioning performance, the high-definition map (HD map) is one of the game changers for future autonomous driving. Instead of directly using Inertial Navigation System and Global Navigation Satellite System (INS/GNSS) navigation solutions to conduct Direct Geo-referencing (DG) and acquire 3D mapping information, Simultaneous Localization and Mapping (SLAM) relies heavily on environmental features to derive position and attitude while conducting the mapping at the same time. In this research, a new structure is proposed that integrates INS/GNSS into the LiDAR Odometry and Mapping (LOAM) algorithm to enhance mapping performance. The first contribution is using the INS/GNSS to provide short-term relative position information for the mapping process when the LiDAR odometry process fails. A checking process is built to detect the divergence of the LiDAR odometry process based on the residuals of feature correspondences and the innovation sequence of the INS/GNSS. More importantly, by integrating with INS/GNSS, the whole global map is located in a standard global coordinate system (WGS84), which can be shared and employed easily and seamlessly. The designed land-vehicle platform includes a commercial integrated INS/GNSS product as a reference, a relatively low-cost and lower-grade INS, and a Velodyne LiDAR with 16 laser channels. A field test was conducted from outdoors into an indoor underground parking lot, and the final solution using the proposed method shows a significant improvement, as well as building a more accurate and reliable map for future use.</p>
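
The divergence check described above can be reduced, in the scalar case, to an innovation gate: compare each LiDAR-odometry increment against the INS/GNSS prediction and reject it when the innovation exceeds a few standard deviations. A simplified sketch with illustrative numbers:

```python
def innovation_gate(predicted, measured, innovation_var, gate=3.0):
    """Accept a LiDAR-odometry solution only if the innovation (measurement
    minus INS/GNSS prediction) is within `gate` standard deviations."""
    innovation = measured - predicted
    return abs(innovation) <= gate * innovation_var ** 0.5

# sigma = 0.2 m, gate = 3 sigma = 0.6 m:
ok = innovation_gate(10.0, 10.4, 0.04)        # 0.4 m innovation: accept
diverged = innovation_gate(10.0, 11.0, 0.04)  # 1.0 m innovation: reject
```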


2018 ◽  
Vol 940 (10) ◽  
pp. 2-6
Author(s):  
J.A. Younes ◽  
M.G. Mustafin

The issue of calculating plane rectangular coordinates from data obtained by satellite observations during the creation of geodetic networks is discussed in this article. The peculiarity of this work is the conversion of the coordinates into the Mercator projection, whereas the plane coordinate system used in Russia is based on the Gauss-Kruger projection. When using global navigation satellite system technology, this task is relevant for any point (area) of the Earth due to the fundamentally different approach to determining coordinates. Satellite determinations are much more precise than ground coordination methods (triangulation and others). In addition, the conversion to a zonal coordinate system is associated with errors whose magnitude can at present prove critical. The expediency of using the Mercator projection for topographic and geodetic work at low latitudes is shown numerically on the basis of model calculations. To convert the coordinates from the geocentric system to the Mercator projection, a programming algorithm widely used in Russia was chosen. For its application under low-latitude conditions, a modification of the known formulas, to be used in Saudi Arabia, is implemented.
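
The conversion discussed above can be illustrated with the spherical form of the Mercator projection. The article's algorithm works with ellipsoidal formulas, so the sketch below is only a simplified approximation:

```python
import math

def mercator(lat_deg, lon_deg, lon0_deg=0.0, R=6378137.0):
    """Spherical Mercator: geodetic latitude/longitude (degrees) to plane
    easting/northing (metres) about the central meridian lon0_deg."""
    x = R * math.radians(lon_deg - lon0_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# At the equator the projection is distortion-free: one degree of longitude
# maps to roughly 111.3 km, which is why Mercator suits low-latitude work.
x, y = mercator(0.0, 1.0)
```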

