A Framework for Planning and Execution of Drone Swarm Missions in a Hostile Environment

Sensors, 2021, Vol. 21 (12), pp. 4150
Author(s): Barbara Siemiatkowska, Wojciech Stecz

This article presents a framework for planning a drone swarm mission in a hostile environment. Elements of the planning framework are discussed in detail, including methods of planning routes for drone swarms using mixed integer linear programming (MILP) and methods of detecting potentially dangerous objects using electro-optical/infrared (EO/IR) camera images and synthetic aperture radar (SAR). Objects detected in the field are fed back into the mission planning process to re-plan the swarm's flight paths. The route planning model is discussed using the example of drone formations managed by one UAV that communicates with the ground control station (GCS) through another UAV. The article presents practical examples of using the detection algorithms to re-plan swarm routes. A novelty of the work is that these algorithms are designed so that they can be implemented on the mobile computers carried by UAVs and integrated with the MILP tasks. Methods for real-time detection and classification of objects by UAVs equipped with SAR and EO/IR sensors are presented. Different sensors require different detection methods: for infrared or optoelectronic sensors, a convolutional neural network is used, while for SAR images a rule-based system is applied. The experimental results confirm that the image stream can be analyzed in real time.
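As a rough illustration of how a MILP re-planning step of this kind can be set up, the sketch below assigns reconnaissance targets to swarm members while excluding targets flagged as dangerous by the on-board detectors. It is a minimal sketch only, not the authors' model: the PuLP modeling library, the straight-line cost, and all positions, threat flags and the per-UAV capacity K are assumptions made for illustration.

```python
# Minimal MILP sketch: assign targets to UAVs, skipping detected threat zones.
# All data below (positions, threat set, capacity K) is hypothetical.
import math
import pulp

uav_pos = {"u1": (0.0, 0.0), "u2": (10.0, 0.0)}            # current UAV positions
targets = {"t1": (2.0, 3.0), "t2": (8.0, 1.0), "t3": (5.0, 7.0)}
threatened = {"t3"}                                        # targets flagged by the detectors
K = 2                                                      # max targets per UAV (illustrative)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

prob = pulp.LpProblem("swarm_replanning", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (list(uav_pos), list(targets)), cat="Binary")

# Objective: total straight-line distance flown to assigned targets.
prob += pulp.lpSum(dist(uav_pos[u], targets[t]) * x[u][t]
                   for u in uav_pos for t in targets)

for t in targets:
    if t in threatened:
        # Simple re-planning rule: do not route any UAV to a threatened target.
        prob += pulp.lpSum(x[u][t] for u in uav_pos) == 0
    else:
        prob += pulp.lpSum(x[u][t] for u in uav_pos) == 1  # visit each safe target once

for u in uav_pos:
    prob += pulp.lpSum(x[u][t] for t in targets) <= K      # UAV capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for u in uav_pos:
    for t in targets:
        if x[u][t].value() and x[u][t].value() > 0.5:
            print(u, "->", t)
```

In the setting described in the abstract, the model would additionally have to capture flight-time limits, relay connectivity through the managing UAV and the GCS, and waypoint sequencing, all of which enlarge the MILP considerably.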

Sensors, 2020, Vol. 20 (19), pp. 5712
Author(s): Wojciech Stecz, Krzysztof Gromada

The paper presents the concept of planning the optimal trajectory of a fixed-wing unmanned aerial vehicle (UAV) of the short-range tactical class whose task is to recognize a set of ground objects as part of a reconnaissance mission. Tasks carried out by such systems are mainly associated with aerial reconnaissance using electro-optical/infrared (EO/IR) systems and synthetic aperture radars (SARs) to support military operations. Professional reconnaissance of the indicated objects requires determining a UAV flight trajectory in the close neighborhood of the target in order to collect as much useful information as possible. The paper describes an algorithm for determining the flight trajectories of a UAV tasked with identifying the indicated objectives using the sensors specified in the order, taking into account the presence of objects that threaten the UAV. Determining the UAV flight trajectory for target recognition is one component of the planning process for a tactical-class UAV mission, which is also presented in the article. The problem of determining the optimal trajectory is decomposed into several subproblems: determining the reconnaissance flight method in the vicinity of the currently recognized target, depending on the sensor used and the required parameters of the reconnaissance product (photo, video, or SAR scan); determining an initial feasible flight trajectory that takes potential threats to the UAV into account; and planning the detailed flight trajectory, considering the parameters of the air platform, with a maneuver planning algorithm designed for tactical-class platforms. UAV route planning algorithms with time constraints imposed on the execution of individual tasks were used to determine the flight trajectories, and the problem was formulated as a mixed integer linear programming (MILP) model. For determining the flight path in the neighborhood of the target, an optimal control algorithm was also formulated as a MILP model. The resulting trajectory is then corrected with a construction algorithm that generates realizable UAV flight segments from Dubins curves.
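Since the final trajectory is built from Dubins curves, the sketch below illustrates one ingredient of such a construction step: computing the length of the two "turn-straight-turn" Dubins classes (LSL and RSR) between two poses, with the minimum turn radius derived from the platform's speed and bank-angle limit. It follows the standard Shkel-Lumelsky formulation rather than anything specific to this paper, and the speed, bank angle and poses are purely illustrative.

```python
# Dubins path length for the LSL and RSR classes (Shkel-Lumelsky formulation).
import math

def mod2pi(theta):
    return theta % (2.0 * math.pi)

def dubins_lsl_rsr(start, goal, radius):
    """Shorter of the LSL and RSR path lengths between poses (x, y, heading).
    A full planner would also test the LSR, RSL, RLR and LRL classes."""
    x0, y0, th0 = start
    x1, y1, th1 = goal
    dx, dy = x1 - x0, y1 - y0
    d = math.hypot(dx, dy) / radius                  # normalized distance
    phi = math.atan2(dy, dx)
    alpha, beta = mod2pi(th0 - phi), mod2pi(th1 - phi)

    lengths = []

    # LSL word
    p_sq = 2 + d * d - 2 * math.cos(alpha - beta) + 2 * d * (math.sin(alpha) - math.sin(beta))
    if p_sq >= 0:
        tmp = math.atan2(math.cos(beta) - math.cos(alpha), d + math.sin(alpha) - math.sin(beta))
        lengths.append((mod2pi(-alpha + tmp) + math.sqrt(p_sq) + mod2pi(beta - tmp)) * radius)

    # RSR word
    p_sq = 2 + d * d - 2 * math.cos(alpha - beta) + 2 * d * (math.sin(beta) - math.sin(alpha))
    if p_sq >= 0:
        tmp = math.atan2(math.cos(alpha) - math.cos(beta), d - math.sin(alpha) + math.sin(beta))
        lengths.append((mod2pi(alpha - tmp) + math.sqrt(p_sq) + mod2pi(-beta + tmp)) * radius)

    return min(lengths) if lengths else None

# Minimum turn radius from platform speed and bank-angle limit: r = v^2 / (g * tan(bank)).
v, bank = 30.0, math.radians(30)                     # 30 m/s cruise, 30 deg bank (illustrative)
r_min = v ** 2 / (9.81 * math.tan(bank))
print(dubins_lsl_rsr((0, 0, 0), (400, 250, math.radians(90)), r_min))
```

A complete segment generator would evaluate all six Dubins classes and convert the shortest word into waypoint-level flight segments respecting the platform parameters.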


2021, Vol. 11 (1)
Author(s): Song-Quan Ong, Hamdan Ahmad, Gomesh Nair, Pradeep Isawasan, Abdul Hafiz Ab Majid

Abstract: Classification of Aedes aegypti (Linnaeus) and Aedes albopictus (Skuse) by humans remains challenging. We proposed a highly accessible method to develop a deep learning (DL) model and implement the model for mosquito image classification using hardware that could regulate the development process. In particular, we constructed a dataset of 4120 images of Aedes mosquitoes that were older than 12 days, an age at which the common distinguishing morphological features had disappeared, and we illustrated how to set up supervised deep convolutional neural networks (DCNNs) with hyperparameter adjustment. The model was then applied by deploying it externally, in real time, on three different generations of mosquitoes, and its accuracy was compared with human expert performance. Our results showed that both the learning rate and the number of epochs significantly affected accuracy, and the best-performing hyperparameters achieved an accuracy of more than 98% in classifying the mosquitoes, which was not significantly different from human-level performance. We demonstrated the feasibility of constructing a DCNN model with this method and deploying it externally on mosquitoes in real time.
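To make the hyperparameter-adjustment step concrete, the sketch below trains a small supervised DCNN over a grid of learning rates and epoch counts and keeps the configuration with the best validation accuracy. It is a minimal sketch only: the tf.keras library, the "mosquito_images/..." directory layout, the image size, the layer stack and the grid values are illustrative assumptions, not the authors' actual setup.

```python
# Minimal DCNN training sketch with a grid over learning rate and epochs.
import tensorflow as tf

IMG_SIZE = (224, 224)

# Hypothetical directory layout: one subfolder per class (aegypti / albopictus).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "mosquito_images/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "mosquito_images/val", image_size=IMG_SIZE, batch_size=32)

def build_model(num_classes=2):
    # Small convolutional stack; the paper's exact architecture may differ.
    return tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

best = (None, 0.0)
for lr in (1e-2, 1e-3, 1e-4):            # learning-rate grid (illustrative)
    for epochs in (5, 15):               # epoch grid (illustrative)
        model = build_model()
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                      loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        history = model.fit(train_ds, validation_data=val_ds, epochs=epochs, verbose=0)
        acc = max(history.history["val_accuracy"])
        if acc > best[1]:
            best = ((lr, epochs), acc)

print("best (lr, epochs):", best[0], "val accuracy:", best[1])
```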


1991
Author(s): Wolfgang Poelzleitner, Gert Schwingskakl

1989, Vol. 65 (2), pp. 143-148
Author(s): Noelle Bleuzen-Guernalec

2020, Vol. 6 (2)
Author(s): Dmitry Amelin, Ivan Potapov, Josep Cardona Audí, Andreas Kogut, Rüdiger Rupp, ...

Abstract: This paper reports on the evaluation of recurrent and convolutional neural networks as real-time grasp phase classifiers for future control of neuroprostheses for people with high spinal cord injury. A field-programmable gate array (FPGA) was chosen as the implementation platform due to its form factor and its ability to perform the parallel computations required by the selected neural networks. Three different phases of two grasp patterns, plus the additional open-hand pattern, were predicted from surface electromyography (EMG) signals, i.e. seven classes in total. Across seven healthy subjects, the convolutional neural network (CNN) achieved a mean accuracy of 85.23% (standard deviation 4.77%) at 112 µs per prediction, while the recurrent neural network (RNN) achieved 83.30% (standard deviation 4.36%) at 40 µs per prediction.
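As a rough illustration of what a CNN-based grasp-phase classifier over EMG windows looks like, the sketch below builds a small 1D CNN with a seven-class output. The window length, channel count, layer sizes and the synthetic stand-in data are illustrative assumptions; the FPGA-oriented network architectures evaluated in the paper may differ.

```python
# Minimal 1D-CNN sketch over windows of multi-channel surface EMG (7 classes).
import numpy as np
import tensorflow as tf

WINDOW, CHANNELS, CLASSES = 200, 8, 7     # e.g. 200 samples x 8 EMG channels (illustrative)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data, just to show the training/prediction interface.
X = np.random.randn(256, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, CLASSES, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1]).argmax(axis=-1))   # predicted grasp phase for one window
```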


2021, Vol. ahead-of-print (ahead-of-print)
Author(s): Rajit Nair, Santosh Vishwakarma, Mukesh Soni, Tejas Patel, Shubham Joshi

Purpose The novel coronavirus disease (COVID-19), which first appeared in December 2019 in the city of Wuhan, China, rapidly spread around the world and became a pandemic. It has had a devastating impact on daily lives, public health and the global economy. Positive cases must be identified as soon as possible to avoid further dissemination of the disease and to provide swift care to affected patients. The need for supportive diagnostic instruments has increased, as no specific automated toolkits are available. The latest results from radiology imaging techniques indicate that these images provide valuable details about COVID-19. Using advanced artificial intelligence (AI) technologies together with radiological imagery can help diagnose the condition accurately and help compensate for the lack of specialist doctors in isolated areas. In this research, a new model for automatic detection of COVID-19 from raw chest X-ray images is presented. The proposed model, DarkCovidNet, is designed to provide accurate diagnostics for binary classification (COVID-19 vs no findings) and multi-class classification (COVID-19 vs no findings vs pneumonia). The implemented model achieved an average precision of 98.46% and 91.352% for binary and multi-class classification, respectively, and an average accuracy of 98.97% and 87.868%. The DarkNet model, which serves as the classifier of the You Only Look Once (YOLO) real-time object detection system, was used in this research. A total of 17 convolutional layers were implemented, with different filters in each layer. This platform can be used by radiologists to verify their initial screening and can also be used for screening patients through the cloud. Design/methodology/approach This study builds on the CNN-based Darknet-19 model, which acts as the backbone of a real-time object detection system and whose architecture is designed to detect objects in real time. The DarkCovidNet model developed in this study is based on the Darknet architecture, with fewer layers and filters; the Darknet architecture and its functionality are therefore outlined before the DarkCovidNet model is discussed. Typically, the Darknet-19 architecture consists of 19 convolution layers and 5 max-pooling layers. Findings The work discussed in this paper is used to diagnose various radiology images and to develop a model that can accurately predict or classify the disease. The dataset used in this work consists of COVID-19 and non-COVID-19 images taken from various sources. The deep learning model named DarkCovidNet was applied to this dataset and showed significant performance in both binary and multi-class classification. In binary classification, the model achieved an average accuracy of 98.97% for the detection of COVID-19, whereas in multi-class classification it achieved an average accuracy of 87.868% when classifying COVID-19, no findings and pneumonia. Research limitations/implications One of the significant limitations of this work is that a limited number of chest X-ray images were used, while the number of patients related to COVID-19 is increasing rapidly. In the future, the model will be trained on a larger dataset generated from local hospitals, and its performance on that data will be evaluated.
Originality/value Deep learning technology has made significant changes in the field of AI by generating good results, especially in pattern recognition. A conventional CNN structure includes a convolution layer that extracts features from the input with the filters it applies, a pooling layer that reduces the feature-map size to improve computational efficiency, and a fully connected layer, which is a neural network. A CNN model is created by combining one or more of these layers, and its internal parameters are adjusted to accomplish a specific task, such as classification or object recognition.
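As a rough sketch of the DarkNet-style layering described above (convolution followed by batch normalization and LeakyReLU activation, with max pooling between groups of layers), the example below assembles a small classifier with a three-class head for COVID-19 / no findings / pneumonia. The tf.keras library, filter counts, input size and block grouping are illustrative assumptions; the published DarkCovidNet configuration with 17 convolution layers differs in detail.

```python
# Minimal sketch of DarkNet-style conv blocks feeding a 3-class X-ray classifier.
import tensorflow as tf

def dn_block(x, filters, kernel_size=3):
    # Convolution -> batch normalization -> LeakyReLU, the basic DarkNet block.
    x = tf.keras.layers.Conv2D(filters, kernel_size, padding="same", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.LeakyReLU(0.1)(x)

inputs = tf.keras.Input(shape=(256, 256, 1))         # grayscale chest X-ray (illustrative size)
x = dn_block(inputs, 8)
x = tf.keras.layers.MaxPooling2D()(x)
x = dn_block(x, 16)
x = tf.keras.layers.MaxPooling2D()(x)
for filters in (32, 64, 128):                        # deeper groups of conv blocks
    x = dn_block(x, filters)
    x = dn_block(x, filters // 2, kernel_size=1)     # 1x1 bottleneck, DarkNet style
    x = dn_block(x, filters)
    x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)  # COVID-19 / no findings / pneumonia

model = tf.keras.Model(inputs, outputs)
model.summary()
```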

