Training and evaluation of a learning-based autonomous unmanned aircraft for collision avoidance: Virtual training data generation

Author(s):  
T Matsumoto ◽  
L Vismari ◽  
J Camargo
2021 ◽  
Vol 11 (7) ◽  
pp. 3103


Author(s):  
Kyuman Lee ◽  
Daegyun Choi ◽  
Donghoon Kim

Collision avoidance (CA) based on the artificial potential field (APF) suffers from well-known issues such as local minima and dynamically infeasible solutions, so paths that unmanned aerial vehicles (UAVs) plan with the APF alone are safe only in limited environments. This research proposes a CA approach that combines the APF with motion primitives (MPs) to address these known problems. Because MPs solve for a locally optimal trajectory over an allocated time, the trajectory they produce is verified to be dynamically feasible. When a collision checker based on a k-d tree search detects collision risk at sample points extracted from the planned trajectory, the planner generates re-planned path candidates that avoid the obstacles. After unsafe candidates are rejected, the APF is applied to select the best route among the remaining safe-path candidates. To validate the proposed approach, we simulated two representative scenarios: a static-obstacle environment containing local minima and a dynamic environment with multiple UAVs. The simulation results show that the proposed approach yields smoother, more efficient, and dynamically feasible paths compared to the APF alone.
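
A minimal sketch of the collision-check-and-select step described above, assuming an obstacle point cloud and a set of candidate trajectories sampled as 3D points. The k-d tree query and the simple attractive/repulsive scoring use SciPy and NumPy; function names such as collision_free and apf_cost, and the gain constants, are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): k-d tree collision check over
# sampled trajectory points, followed by APF-style scoring of safe candidates.
import numpy as np
from scipy.spatial import cKDTree

SAFE_RADIUS = 1.0        # assumed minimum clearance to any obstacle point [m]
K_ATT, K_REP = 1.0, 0.5  # assumed attractive / repulsive gains

def collision_free(samples, obstacle_tree, radius=SAFE_RADIUS):
    """Reject a candidate if any sampled point lies within `radius` of an obstacle."""
    dists, _ = obstacle_tree.query(samples, k=1)
    return np.all(dists > radius)

def apf_cost(samples, goal, obstacle_tree):
    """Lower is better: attraction toward the goal plus repulsion from nearby obstacles."""
    dists, _ = obstacle_tree.query(samples, k=1)
    attract = K_ATT * np.linalg.norm(samples - goal, axis=1).sum()
    repel = K_REP * np.sum(1.0 / np.maximum(dists, 1e-3))
    return attract + repel

def select_path(candidates, goal, obstacle_points):
    """Pick the best collision-free candidate; each candidate is an (N, 3) array."""
    tree = cKDTree(obstacle_points)
    safe = [c for c in candidates if collision_free(c, tree)]
    if not safe:
        return None  # no safe re-planned path; caller must re-plan or hold
    return min(safe, key=lambda c: apf_cost(c, goal, tree))
```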


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2144
Author(s):  
Stefan Reitmann ◽  
Lorenzo Neumann ◽  
Bernhard Jung

Common Machine-Learning (ML) approaches for scene classification require a large amount of training data. However, for the classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and the manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables a largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., the influence of rain or dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.
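
The following is a conceptual sketch, not the BLAINDER API: it illustrates what virtual depth sensing with semantic labels amounts to by casting rays from a sensor origin against labeled stand-in primitives (spheres) and recording each hit as a point with a class label. The scene, the Gaussian range-noise model, and the function names are assumptions for illustration only.

```python
# Conceptual sketch only: a virtual depth sensor casts rays into a scene of
# labeled primitives and records each hit as (x, y, z, class_label). Gaussian
# range noise loosely stands in for environmental effects such as rain or dust.
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Return the closest positive ray parameter t of a ray/sphere intersection, or None."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

def scan(origin, spheres, n_rays=360, noise_std=0.02, rng=None):
    """Simulate one horizontal sweep; `spheres` is a list of (center, radius, label)."""
    rng = rng or np.random.default_rng(0)
    points = []
    for ang in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        d = np.array([np.cos(ang), np.sin(ang), 0.0])
        hits = [(t, lbl) for c, r, lbl in spheres
                if (t := ray_sphere_hit(origin, d, np.asarray(c), r)) is not None]
        if hits:
            t, label = min(hits)             # nearest surface along the ray
            t += rng.normal(0.0, noise_std)  # simple range-noise model
            points.append((*(origin + t * d), label))
    return np.array(points, dtype=object)

# Example: two labeled objects in front of a sensor at the origin
cloud = scan(np.zeros(3), [((5.0, 0.0, 0.0), 1.0, "tree"),
                           ((0.0, 8.0, 0.0), 2.0, "building")])
```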


Author(s):  
Casey L. Smith ◽  
R. Conrad Rorie ◽  
Kevin J. Monk ◽  
Jillian Keeler ◽  
Garrett G. Sadler

Unmanned aircraft systems (UAS) must comply with specific standards to operate in the National Airspace System (NAS). Among the requirements are the detect and avoid (DAA) capabilities, which include display, alerting, and guidance specifications. Previous studies have gathered pilots' subjective feedback on these display elements in earlier systems; the present study sought pilot evaluations of an initial iteration of the unmanned variant of the Next Generation Airborne Collision Avoidance System (ACAS XU). Sixteen participants piloted simulated aircraft with both standalone and integrated DAA displays. Their opinions were gathered using post-block and post-simulation questionnaires as well as guided debriefs. The data showed that pilots had a better understanding of, and greater comfort with, the system when using an integrated display. Pilots also rated ACAS XU alerting and guidance as generally acceptable and effective. Implications for further development of ACAS XU and DAA displays are discussed.


2020 ◽  
Vol 19 (6) ◽  
pp. 1-26
Author(s):  
Luke Hsiao ◽  
Sen Wu ◽  
Nicholas Chiang ◽  
Christopher Ré ◽  
Philip Levis

Author(s):  
Logan Cannan ◽  
Brian M. Robinson ◽  
Kathryn Patterson ◽  
Darrell Langford ◽  
Robert Diltz ◽  
...  

2018 ◽  
Vol 15 (4) ◽  
pp. 172988141878633 ◽  
Author(s):  
Mario Monteiro Marques ◽  
Victor Lobo ◽  
R Batista ◽  
J Oliveira ◽  
A Pedro Aguiar ◽  
...  

Unmanned air systems are becoming ever more important in modern societies but raise a number of unresolved problems. There are legal issues with operating these vehicles in non-segregated airspace, and a pressing requirement for resolving them is the development and testing of reliable and safe mechanisms for avoiding collisions in flight. In this article, we describe a sense-and-avoid subsystem developed for a maritime patrol unmanned air system. The article starts with a description of the unmanned air system, which was developed specifically for maritime patrol operations, and proceeds with a discussion of possible ways to guarantee that it does not collide with other flying objects. In the system developed, the position of the unmanned air system is obtained from the global positioning system, and that of other flying objects is reported via a data link with a ground control station. This assumes that those flying objects are detected by a radar on the ground or report their own position through a traffic monitoring system (such as the automatic identification system). The algorithm developed is based on game theory. The approach handles both procedures, the threat detection phase and the collision avoidance maneuver, in a unified fashion, where the optimal command for each possible relative attitude of the obstacle is computed off-line, therefore requiring low processing power for real-time operation. This work was done under the research project named SEAGULL, which aims to improve maritime situational awareness using fleets of unmanned air systems, where collision avoidance is a major concern.
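
As a hedged illustration of the off-line/on-line split described above, and not the SEAGULL game-theoretic solver itself, the sketch below precomputes an avoidance-command table over discretized relative bearings so that the real-time step reduces to a single table lookup. The bearing grid, the discrete bank-angle command set, and the toy decision rule are all invented for illustration.

```python
# Hedged illustration: commands are precomputed over a grid of relative
# bearings and stored in a table, so the on-board step is an index lookup.
# The command model (bank-angle choice) is a stand-in, not the actual policy.
import numpy as np

N_BEARINGS = 72                      # 5-degree bins of obstacle bearing relative to own heading
BANK_CMDS_DEG = (-30.0, 0.0, 30.0)   # assumed discrete command set: left, none, right

def offline_policy():
    """Build the command table once on the ground (stand-in for the off-line solve)."""
    table = np.zeros(N_BEARINGS)
    for i in range(N_BEARINGS):
        bearing = (i + 0.5) * 360.0 / N_BEARINGS   # bin centre in degrees
        # Toy rule: turn away from the side the obstacle approaches from.
        table[i] = BANK_CMDS_DEG[2] if bearing > 180.0 else BANK_CMDS_DEG[0]
    return table

def online_command(table, own_heading_deg, bearing_to_obstacle_deg):
    """Real-time step: one subtraction and one array lookup, no on-board optimisation."""
    rel = (bearing_to_obstacle_deg - own_heading_deg) % 360.0
    return table[int(rel // (360.0 / N_BEARINGS)) % N_BEARINGS]

policy = offline_policy()
cmd = online_command(policy, own_heading_deg=90.0, bearing_to_obstacle_deg=120.0)
```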


Author(s):  
A. Schlichting ◽  
C. Brenner

LiDAR sensors are proven sensors for accurate vehicle localization. Instead of detecting and matching features in the LiDAR data, we want to use the entire information provided by the scanners. As dynamic objects, such as cars, pedestrians, or even construction sites, could lead to wrong localization results, we use a change detection algorithm to detect these objects in the reference data. If an object occurs in a certain number of measurements at the same position, we mark it and every point it contains as static. In the next step, we merge the data of the single measurement epochs into one reference dataset, whereby we use only static points. Furthermore, we use a classification algorithm to detect trees.

For the online localization of the vehicle, we use simulated data of a vertically aligned automotive LiDAR sensor. As we want to use only static objects in this case as well, we use a random forest classifier to detect dynamic scan points online. Since the automotive data are derived from the LiDAR Mobile Mapping System, we are able to use the labelled objects from the reference data generation step to create the training data and, further, to detect dynamic objects online. The localization can then be done by a point-to-image correlation method using only static objects. We achieved a localization standard deviation of about 5 cm (position) and 0.06° (heading), and were able to successfully localize the vehicle in about 93% of the cases along a trajectory of 13 km in Hannover, Germany.
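
A minimal sketch of the online dynamic-point filtering step, assuming per-point features such as height, intensity, and local point density, with labels taken from the annotated reference data. It uses scikit-learn's RandomForestClassifier; the feature set and the stand-in arrays are assumptions rather than the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): a random forest that separates
# dynamic from static scan points before localization. Labels are assumed to
# come from the previously annotated reference dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Stand-in training data: rows = points, columns = [height_m, intensity, local_density]
X_train = rng.random((1000, 3))
y_train = rng.integers(0, 2, size=1000)   # 0 = static, 1 = dynamic (from labelled reference data)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Online step: classify points of a new scan and keep only the static ones
# before the point-to-image correlation used for localization.
X_scan = rng.random((200, 3))
is_dynamic = clf.predict(X_scan).astype(bool)
static_points = X_scan[~is_dynamic]
```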

