Acoustic-Based UAV Detection Using Late Fusion of Deep Neural Networks

Drones ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 54
Author(s):  
Pietro Casabianca ◽  
Yu Zhang

Multirotor UAVs have become ubiquitous in commercial and public use. As they become more affordable and more available, the associated security risks further increase, especially in relation to airspace breaches and the danger of drone-to-aircraft collisions. Thus, robust systems must be put in place to detect and deal with hostile drones. This paper investigates the use of deep learning methods to detect UAVs using acoustic signals. Deep neural network models are trained with mel-spectrograms as inputs. In this case, Convolutional Neural Networks (CNNs) are shown to be the best-performing networks, compared with Recurrent Neural Networks (RNNs) and Convolutional Recurrent Neural Networks (CRNNs). Furthermore, late fusion methods were evaluated using an ensemble of deep neural networks, where the weighted soft voting mechanism achieved the highest average accuracy of 94.7%, outperforming the individual models. In future work, the developed late fusion technique could be combined with radar and visual methods to further improve UAV detection performance.
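As an illustration of the fusion step, here is a minimal sketch of weighted soft voting over per-clip class probabilities from several models; the weights and probabilities are placeholders, not values from the paper.

```python
# Minimal sketch of weighted soft-voting late fusion, assuming each model
# (CNN, RNN, CRNN) outputs per-clip class probabilities. Weights and
# probabilities below are illustrative placeholders, not values from the paper.
import numpy as np

def weighted_soft_vote(probs, weights):
    """Fuse per-model class probabilities with a weighted average.

    probs:   array of shape (n_models, n_classes)
    weights: array of shape (n_models,), e.g. proportional to each
             model's validation accuracy
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalise to sum to 1
    fused = weights @ np.asarray(probs)        # weighted average per class
    return fused, int(np.argmax(fused))        # fused distribution, decision

# Example: three models scoring one audio clip (classes: [no-drone, drone]).
probs = [[0.30, 0.70],   # CNN
         [0.55, 0.45],   # RNN
         [0.40, 0.60]]   # CRNN
fused, label = weighted_soft_vote(probs, weights=[0.5, 0.2, 0.3])
print(fused, label)      # [0.38 0.62] -> class 1 (drone)
```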

Author(s):  
Makhamisa Senekane ◽  
Mhlambululi Mafu ◽  
Molibeli Benedict Taele

Weather variations play a significant role in people's short-term, medium-term, and long-term planning. Therefore, an understanding of weather patterns has become very important in decision making. Short-term weather forecasting (nowcasting) involves the prediction of weather over a short period of time, typically a few hours. Different techniques have been proposed for short-term weather forecasting. Traditional techniques used for nowcasting are highly parametric and hence complex. Recently, there has been a shift towards the use of artificial intelligence techniques for weather nowcasting, including machine learning techniques such as artificial neural networks. In this chapter, we report the use of deep learning techniques for weather nowcasting. Deep learning techniques were tested on meteorological data. Three deep learning techniques, namely multilayer perceptrons, Elman recurrent neural networks, and Jordan recurrent neural networks, were used in this work. Multilayer perceptron models achieved accuracies of 91% and 75% for sunshine and precipitation forecasting, respectively; Elman recurrent neural network models achieved 96% and 97%; and Jordan recurrent neural network models achieved 97% for both. The results obtained underline the utility of using deep learning for weather nowcasting.
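As a rough illustration of one of the three architectures, the sketch below sets up an Elman-style recurrent model for a binary nowcasting target; the feature count, window length, and layer sizes are assumptions, not the chapter's configuration.

```python
# Minimal sketch of an Elman-style recurrent model for nowcasting, assuming
# hourly weather observations as input features and a binary target
# (e.g. sunshine / no sunshine in the next hour). Sizes are illustrative.
import torch
import torch.nn as nn

class ElmanNowcaster(nn.Module):
    def __init__(self, n_features=8, hidden=32, n_classes=2):
        super().__init__()
        # nn.RNN with tanh activation is the classic Elman network:
        # the hidden state is fed back as context for the next time step.
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, n_features)
        _, h_last = self.rnn(x)            # h_last: (1, batch, hidden)
        return self.head(h_last.squeeze(0))

model = ElmanNowcaster()
dummy = torch.randn(4, 24, 8)              # 4 samples, 24 hourly steps
logits = model(dummy)                      # (4, 2) class scores
```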


2020 ◽  
Vol 61 (11) ◽  
pp. 1967-1973
Author(s):  
Takashi Akagi ◽  
Masanori Onishi ◽  
Kanae Masuda ◽  
Ryohei Kuroki ◽  
Kohei Baba ◽  
...  

Recent rapid progress in deep neural network techniques has allowed recognition and classification of various objects, often exceeding the performance of the human eye. In plant biology and crop sciences, some deep neural network frameworks have been applied mainly for effective and rapid phenotyping. In this study, beyond simple optimizations of phenotyping, we propose an application of deep neural networks to make an image-based internal disorder diagnosis that is hard even for experts, and to visualize the reasons behind each diagnosis to provide biological interpretations. Here, we exemplified the classification of calyx-end cracking in persimmon fruit by using five convolutional neural network models with various layer structures and examined analytical options that affect diagnostic quality. With 3,173 visible RGB images from the fruit apex side, the neural networks successfully performed binary classification of each degree of disorder with up to 90% accuracy. Furthermore, feature-visualization methods such as Grad-CAM and LRP highlight the regions of the image that contribute to the diagnosis. They suggest that specific patterns of color unevenness, such as in the fruit peripheral area, can be indexes of calyx-end cracking. These results not only provide novel insights into indexes of fruit internal disorders but also demonstrate the potential applicability of deep neural networks in plant biology.
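For readers unfamiliar with the visualization step, the sketch below computes a basic Grad-CAM heat map for a binary disorder classifier; the ResNet-18 backbone, target layer, and image size are stand-ins, not the exact networks used in the study.

```python
# Minimal Grad-CAM sketch for a binary fruit-disorder classifier, assuming a
# torchvision ResNet-style backbone; model, target layer and image size are
# illustrative, not the study's exact configuration.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)    # disorder / healthy
model.eval()

feats, grads = {}, {}
layer = model.layer4                                    # last conv block

def fwd_hook(_, __, output): feats["a"] = output.detach()
def bwd_hook(_, grad_in, grad_out): grads["g"] = grad_out[0].detach()

layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

img = torch.randn(1, 3, 224, 224)                       # stand-in RGB image
score = model(img)[0, 1]                                # "disorder" logit
score.backward()

weights = grads["g"].mean(dim=(2, 3), keepdim=True)     # channel importance
cam = F.relu((weights * feats["a"]).sum(dim=1))         # (1, H, W) heat map
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224),
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
```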


Biosensors ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 188
Author(s):  
Li-Ren Yeh ◽  
Wei-Chin Chen ◽  
Hua-Yan Chan ◽  
Nan-Han Lu ◽  
Chi-Yuan Wang ◽  
...  

Anesthesia assessment is critically important during surgery. Anesthesiologists use electrocardiogram (ECG) signals to assess the patient's condition and give appropriate medications. However, ECG signals are not easy to interpret; even physicians with more than 10 years of clinical experience may still misjudge them. Therefore, this study uses convolutional neural networks to classify ECG image types to assist in anesthesia assessment. The research uses Internet of Things (IoT) technology to develop an ECG signal measurement prototype and classifies the signals into four types through deep neural networks: QRS widening, sinus rhythm, ST depression, and ST elevation. Three models, ResNet, AlexNet, and SqueezeNet, are trained and evaluated with the data split evenly (50%) between the training and test sets. Finally, the accuracy and kappa statistics of ResNet, AlexNet, and SqueezeNet in ECG waveform classification were (0.97, 0.96), (0.96, 0.95), and (0.75, 0.67), respectively. This research shows that it is feasible to measure ECG signals in real time through IoT and then distinguish the four types with deep neural network models. In the future, more ECG image types will be added to improve the practicality of real-time classification with the deep models.
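The sketch below illustrates the general setup: a four-class ECG-image classifier trained on an even 50/50 train/test split. The dataset path, image size, and hyperparameters are hypothetical, and only a ResNet variant of the three compared models is shown.

```python
# Minimal sketch of a four-class ECG-image classifier with a 50/50
# train/test split, assuming the waveform images are stored as RGB files
# in class folders; paths and hyperparameters are illustrative.
import torch
from torch.utils.data import random_split, DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)),
                          transforms.ToTensor()])
data = datasets.ImageFolder("ecg_images/", transform=tfm)   # hypothetical path

n_train = len(data) // 2                                    # 50% train, 50% test
train_set, test_set = random_split(data, [n_train, len(data) - n_train])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)                       # ImageNet weights could be loaded
model.fc = torch.nn.Linear(model.fc.in_features, 4)         # QRS widening, sinus rhythm,
                                                            # ST depression, ST elevation
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

for images, labels in train_loader:                         # one illustrative epoch
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```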


2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Xin Long ◽  
XiangRong Zeng ◽  
Zongcheng Ben ◽  
Dianle Zhou ◽  
Maojun Zhang

The increasing sophistication of neural network models in recent years has dramatically expanded memory consumption and computational cost, thereby hindering their deployment on ASICs, FPGAs, and other mobile and embedded devices. Therefore, compressing and accelerating neural networks is necessary. In this study, we introduce a novel strategy to train low-bit networks with weights and activations quantized to a few bits, and we address two corresponding fundamental issues. One is to approximate activations through low-bit discretization to decrease network computational cost and dot-product memory. The other is to specify the weight quantization and update mechanism for discrete weights to avoid gradient mismatch. With quantized low-bit weights and activations, costly full-precision operations can be replaced by shift operations. We evaluate the proposed method on common datasets, and the results show that it can dramatically compress the neural network with only a slight loss of accuracy.
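A minimal sketch of the general idea follows, assuming power-of-two weight levels and a straight-through estimator (STE) so that multiplications can be replaced by shifts while full-precision weights are still updated; the bit-width and rounding scheme are illustrative, not the paper's exact mechanism.

```python
# Minimal sketch of low-bit quantization with a straight-through estimator
# (STE), assuming power-of-two weight levels so multiplications can become
# shifts; the bit-width and rounding scheme are illustrative assumptions.
import torch

class QuantizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w, bits=3):
        # Round |w| to the nearest power of two, keep the sign.
        # With power-of-two weights, w * x can be computed as a bit shift.
        sign = torch.sign(w)
        exp = torch.clamp(torch.round(torch.log2(w.abs() + 1e-8)),
                          min=-(2 ** (bits - 1)), max=0)
        return sign * torch.pow(2.0, exp)

    @staticmethod
    def backward(ctx, grad_output):
        # STE: pass the gradient through unchanged, so the discrete forward
        # pass does not cause gradient mismatch with the continuous update.
        return grad_output, None

w = torch.randn(4, 4, requires_grad=True)   # full-precision "master" weights
w_q = QuantizeSTE.apply(w)                  # discrete weights for the forward pass
loss = (w_q ** 2).sum()
loss.backward()                             # gradients flow back to w
print(w.grad.shape)                         # (4, 4)
```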


2021 ◽  
Vol 3 (3) ◽  
pp. 662-671
Author(s):  
Jonas Herskind Sejr ◽  
Peter Schneider-Kamp ◽  
Naeem Ayoub

Due to their impressive performance, deep neural networks have become the prevalent choice for object detection in images. Given the complexity of the neural network models used, users of these algorithms are typically given no hint as to how the objects were found. It remains, for example, unclear whether an object is detected based on what it looks like or based on the context in which it is located. We have developed an algorithm, Surrogate Object Detection Explainer (SODEx), that can explain any object detection algorithm using any classification explainer. We evaluate SODEx qualitatively and quantitatively by detecting objects in the COCO dataset with YOLOv4 and explaining these detections with LIME. This empirical evaluation not only demonstrates the value of explainable object detection but also provides valuable insights into how YOLOv4 detects objects.
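The sketch below illustrates the surrogate idea: a single detection is wrapped as a binary classifier ("is this object still detected in a perturbed image?") that LIME can explain. The run_yolo function and the IoU threshold are hypothetical stand-ins, not the paper's exact pipeline.

```python
# Minimal sketch of a surrogate classifier for one detection, suitable as a
# classifier_fn for LIME's image explainer; `run_yolo` is a hypothetical
# detector call, not an actual YOLOv4 API.
import numpy as np
from lime import lime_image

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-8)

def make_surrogate(run_yolo, target_box, thresh=0.5):
    """Turn 'this box is detected' into a two-class probability function."""
    def classify(images):                        # images: (n, H, W, 3)
        probs = []
        for img in images:
            boxes, scores = run_yolo(img)        # hypothetical detector call
            hit = max((s for b, s in zip(boxes, scores)
                       if iou(b, target_box) > thresh), default=0.0)
            probs.append([1.0 - hit, hit])       # [not detected, detected]
        return np.array(probs)
    return classify

# explainer = lime_image.LimeImageExplainer()
# explanation = explainer.explain_instance(image, make_surrogate(run_yolo, box),
#                                          top_labels=2, num_samples=1000)
```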


Author(s):  
Luis Oala ◽  
Cosmas Heiß ◽  
Jan Macdonald ◽  
Maximilian März ◽  
Gitta Kutyniok ◽  
...  

Purpose: The quantitative detection of failure modes is important for making deep neural networks reliable and usable at scale. We consider three examples of common failure modes in image reconstruction and demonstrate the potential of uncertainty quantification as a fine-grained alarm system. Methods: We propose a deterministic, modular, and lightweight approach called the Interval Neural Network (INN) that produces fast, easy-to-interpret uncertainty scores for deep neural networks. Importantly, INNs can be constructed post hoc for already trained prediction networks. We compare them against state-of-the-art baseline methods (MCDrop, ProbOut). Results: We demonstrate on controlled, synthetic inverse problems the capacity of INNs to capture uncertainty due to noise as well as directional error information. On a real-world inverse problem with human CT scans, we show that INNs produce uncertainty scores which improve the detection of all considered failure modes compared to the baseline methods. Conclusion: Interval Neural Networks offer a promising tool to expose weaknesses of deep image reconstruction models and ultimately make them more reliable. The fact that they can be applied post hoc to equip already trained deep neural network models with uncertainty scores makes them particularly interesting for deployment.
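A minimal sketch of the interval-propagation idea follows, assuming a toy two-layer ReLU network in which each layer carries a lower and an upper bound and the interval width serves as the uncertainty score; this is an illustration, not the paper's construction.

```python
# Minimal sketch of interval propagation through a toy ReLU network: each
# layer carries [lo, hi] bounds instead of a point estimate, and the final
# interval width is read as an uncertainty score. Sizes are illustrative.
import numpy as np

def interval_linear(lo, hi, W, b):
    """Propagate an interval [lo, hi] through y = W x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b      # smallest achievable output
    new_hi = W_pos @ hi + W_neg @ lo + b      # largest achievable output
    return new_lo, new_hi

def interval_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)

x = rng.normal(size=8)
lo, hi = x - 0.1, x + 0.1                      # input with a +-0.1 noise interval
lo, hi = interval_relu(*interval_linear(lo, hi, W1, b1))
lo, hi = interval_linear(lo, hi, W2, b2)

uncertainty = hi - lo                          # per-output uncertainty score
print(uncertainty)
```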


2016 ◽  
Author(s):  
H. Francis Song ◽  
Guangyu R. Yang ◽  
Xiao-Jing Wang

Trained neural network models, which exhibit many features observed in neural recordings from behaving animals and whose activity and connectivity can be fully analyzed, may provide insights into neural mechanisms. In contrast to commonly used methods for supervised learning from graded error signals, however, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when the optimal behavior depends on an animal’s internal judgment of confidence or subjective preferences. Here, we describe reward-based training of recurrent neural networks in which a value network guides learning by using the selected actions and activity of the policy network to predict future reward. We show that such models capture both behavioral and electrophysiological findings from well-known experimental paradigms. Our results provide a unified framework for investigating diverse cognitive and value-based computations, including a role for value representation that is essential for learning, but not executing, a task.
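A minimal sketch of this training scheme, in the spirit of REINFORCE with a learned baseline: a recurrent policy network selects actions, and a value network uses the observations and selected actions to predict reward. The task, network sizes, and reward signal are placeholders, not the paper's experimental paradigms.

```python
# Minimal sketch of reward-based training with a value network as baseline;
# sizes, task and reward are illustrative placeholders.
import torch
import torch.nn as nn

obs_dim, hid, n_actions = 4, 32, 3
policy_rnn = nn.GRU(obs_dim, hid, batch_first=True)
policy_head = nn.Linear(hid, n_actions)
value_rnn = nn.GRU(obs_dim + n_actions, hid, batch_first=True)
value_head = nn.Linear(hid, 1)

params = (list(policy_rnn.parameters()) + list(policy_head.parameters())
          + list(value_rnn.parameters()) + list(value_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

obs = torch.randn(1, 10, obs_dim)                      # one trial, 10 time steps
h, _ = policy_rnn(obs)
logits = policy_head(h)                                # (1, 10, n_actions)
dist = torch.distributions.Categorical(logits=logits)
actions = dist.sample()                                # selected actions

# The value network sees the observations and the selected actions
# and predicts the reward.
act_onehot = torch.nn.functional.one_hot(actions, n_actions).float()
v_in = torch.cat([obs, act_onehot], dim=-1)
v, _ = value_rnn(v_in)
value = value_head(v).squeeze(-1)                      # (1, 10) predicted reward

reward = torch.ones(1, 10)                             # stand-in reward signal
advantage = (reward - value).detach()
policy_loss = -(dist.log_prob(actions) * advantage).mean()
value_loss = (reward - value).pow(2).mean()

opt.zero_grad()
(policy_loss + value_loss).backward()
opt.step()
```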


2017 ◽  
Vol 40 ◽  
Author(s):  
Steven S. Hansen ◽  
Andrew K. Lampinen ◽  
Gaurav Suri ◽  
James L. McClelland

Lake et al. propose that people rely on “start-up software,” “causal models,” and “intuitive theories” built using compositional representations to learn new tasks more efficiently than some deep neural network models. We highlight the many drawbacks of a commitment to compositional representations and describe our continuing effort to explore how the ability to build on prior knowledge and to learn new tasks efficiently could arise through learning in deep neural networks.


Information ◽  
2020 ◽  
Vol 11 (12) ◽  
pp. 589
Author(s):  
Aleksandr Sergeevich Romanov ◽  
Anna Vladimirovna Kurtukova ◽  
Artem Alexandrovich Sobolev ◽  
Alexander Alexandrovich Shelupanov ◽  
Anastasia Mikhailovna Fedotova

This paper addresses the problem of determining the age of a text’s author using deep neural network models. The article analyzes methods for determining the age of a text’s author as well as approaches to determining a user’s age from a photo; the latter can mitigate inaccurate training data by filtering out incorrect user-specified ages. A detailed description of the authors’ technique based on deep neural network models and an interpretation of the results are also presented. The study found that the proposed technique achieved 82% accuracy in determining the age of the author of a Russian-language text, which makes it competitive with approaches for other languages.
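As a rough illustration (not the authors' exact technique), the sketch below sets up a recurrent text classifier that maps tokenised text to age groups; the vocabulary size, number of groups, and architecture are assumptions.

```python
# Minimal sketch of a text-based age-group classifier, assuming texts are
# tokenised to integer ids and authors are binned into age groups; all
# sizes and the architecture are illustrative assumptions.
import torch
import torch.nn as nn

class AgeClassifier(nn.Module):
    def __init__(self, vocab_size=50_000, emb=128, hid=64, n_groups=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hid, n_groups)   # e.g. <18, 18-25, 26-35, ...

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.emb(token_ids)
        _, (h, _) = self.lstm(x)                   # h: (2, batch, hid)
        h = torch.cat([h[0], h[1]], dim=-1)        # concat both directions
        return self.head(h)                        # (batch, n_groups) logits

model = AgeClassifier()
tokens = torch.randint(1, 50_000, (8, 200))        # 8 texts, 200 tokens each
logits = model(tokens)
```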


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
H Francis Song ◽  
Guangyu R Yang ◽  
Xiao-Jing Wang

Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal’s internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task.

