Convolutional Neural Networks for Classification of Drones Using Radars

Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 149
Author(s):  
Divy Raval ◽  
Emily Hunter ◽  
Sinclair Hudson ◽  
Anthony Damini ◽  
Bhashyam Balaji

The ability to classify drones using radar signals is a problem of great interest. In this paper, we apply convolutional neural networks (CNNs) to the Short-Time Fourier Transform (STFT) spectrograms of simulated radar signals reflected from drones. The drones vary in many ways that affect the STFT spectrograms, including blade length and blade rotation rate. Some of these physical parameters are captured in the Martin and Mulgrew model, which was used to produce the datasets. We examine the data under X-band and W-band radar simulation scenarios and show that a CNN approach leads to an F1 score of 0.816±0.011 when trained on data with a signal-to-noise ratio (SNR) of 10 dB. The neural network trained on data from an X-band radar with a 2 kHz pulse repetition frequency was shown to outperform the CNN trained on the aforementioned W-band radar. It remained robust to the drone blade pitch, and its performance varied linearly with the SNR.
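The spectrogram front end described above can be sketched minimally as follows. This is a toy phase-modulated return standing in for a blade-modulated radar echo, not the Martin and Mulgrew model; the 2 kHz sample rate echoes the X-band PRF, while the carrier and rotation rate are arbitrary illustration values.

```python
import numpy as np

def stft_spectrogram(x, frame_len=128, hop=64):
    """Magnitude STFT spectrogram via Hann-windowed, half-overlapping frames."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft along each frame; transpose so rows are frequency bins, columns time
    return np.abs(np.fft.rfft(frames, axis=1)).T

# Toy stand-in for a blade-modulated return: a carrier whose phase is
# modulated at a hypothetical blade rotation rate.
fs, rot_hz = 2000.0, 60.0
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.cos(2 * np.pi * 200 * t + 3 * np.sin(2 * np.pi * rot_hz * t))
spec = stft_spectrogram(sig)  # the 2-D image a CNN would consume
```

The resulting frequency-by-time magnitude array is the kind of image such a CNN classifier would take as input.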


2019 ◽  
Vol 621 ◽  
pp. A103 ◽  
Author(s):  
J. Bialopetravičius ◽  
D. Narbutis ◽  
V. Vansevičius

Context. Convolutional neural networks (CNNs) have been proven to perform fast classification and detection on natural images and have the potential to infer astrophysical parameters on the exponentially increasing amount of sky-survey imaging data. The inference pipeline can be trained either from real human-annotated data or simulated mock observations. Until now, star cluster analysis was based on integral or individual resolved stellar photometry. This limits the amount of information that can be extracted from cluster images. Aims. We aim to develop a CNN-based algorithm capable of simultaneously deriving ages, masses, and sizes of star clusters directly from multi-band images. We also aim to demonstrate CNN capabilities on low-mass semi-resolved star clusters in a low-signal-to-noise-ratio regime. Methods. A CNN was constructed based on the deep residual network (ResNet) architecture and trained on simulated images of star clusters with various ages, masses, and sizes. To provide realistic backgrounds, M 31 star fields taken from The Panchromatic Hubble Andromeda Treasury (PHAT) survey were added to the mock cluster images. Results. The proposed CNN was verified on mock images of artificial clusters and has demonstrated high precision and no significant bias for clusters of ages ≲3 Gyr and masses between 250 and 4000 M⊙. The pipeline is end-to-end, starting from input images all the way to the inferred parameters; no hand-coded steps have to be performed: estimates of parameters are provided by the neural network in one inferential step from raw images.



2021 ◽  
Vol 11 (10) ◽  
pp. 4440
Author(s):  
Youheng Tan ◽  
Xiaojun Jing

Cooperative spectrum sensing (CSS) is an important topic due to its capacity to solve the hidden-terminal problem. However, the sensing performance of CSS is still poor, especially in low signal-to-noise ratio (SNR) situations. In this paper, convolutional neural networks (CNNs) are used to extract features of the observed signal and, as a consequence, improve the sensing performance. More specifically, a novel two-dimensional dataset of the received signal is established, and three classical CNN-based CSS schemes (LeNet, AlexNet, and VGG-16) are trained and analyzed on the proposed dataset. In addition, sensing performance comparisons are made between the proposed CNN-based CSS schemes and the AND, OR, and majority-voting-based CSS schemes. The simulation results show that the sensing accuracy of the proposed schemes is greatly improved and that it increases with network depth.
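The AND, OR, and majority-voting baselines mentioned above are simple hard-decision fusion rules over the binary reports of the secondary users. A minimal sketch (the three-user report vector is an illustrative assumption):

```python
import numpy as np

def fuse(decisions, rule):
    """Fuse binary local decisions (1 = 'primary user present') from
    cooperating secondary users into one global decision."""
    d = np.asarray(decisions)
    if rule == "AND":        # declare present only if every user agrees
        return int(d.all())
    if rule == "OR":         # declare present if any single user says so
        return int(d.any())
    if rule == "MAJORITY":   # declare present if more than half agree
        return int(d.sum() > len(d) / 2)
    raise ValueError(rule)

# Three secondary users report their local sensing decisions.
local = [1, 0, 1]
results = {r: fuse(local, r) for r in ("AND", "OR", "MAJORITY")}
```

A CNN-based scheme replaces these fixed rules with features learned from the raw received-signal matrix, which is where the reported accuracy gain comes from.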



Author(s):  
O.N. Korsun ◽  
V.N. Yurko

We analysed two approaches to estimating the state of a human operator from video imaging of the face. These approaches, both using deep convolutional neural networks, are as follows: 1) automated emotion recognition; 2) analysis of blinking characteristics. The study involved assessing changes in the functional state of a human operator performing a manual landing in a flight simulator. During this process, flight parameters were recorded and the operator's face was filmed. We then used our custom software to perform automated recognition of emotions (or blinking), synchronising the recognised emotions (or blinks) with the recorded flight parameters. As a result, we detected persistent patterns linking the operator's fatigue level to the number of emotions recognised by the neural network. The type of emotion depends on the unique psychological characteristics of the operator. Our experiments allow these links to be easily traced when analysing the emotions "Sadness", "Fear" and "Anger". The study revealed a correlation between blinking properties and piloting accuracy: higher piloting accuracy meant more blinks recorded, which may be explained by a stable psycho-physiological state leading to confident piloting.



2021 ◽  
Author(s):  
Kianoosh Kazemi ◽  
Juho Laitala ◽  
Iman Azimi ◽  
Pasi Liljeberg ◽  
Amir M. Rahmani

Accurate peak determination from noise-corrupted photoplethysmogram (PPG) signals is the basis for further analysis of physiological quantities such as heart rate and heart rate variability. In the past decades, many methods have been proposed to provide reliable peak detection, including rule-based algorithms, adaptive thresholds, and signal processing techniques. However, they are designed for noise-free PPG signals and are insufficient for PPG signals with a low signal-to-noise ratio (SNR). This paper focuses on enhancing PPG noise-resiliency and proposes a robust peak detection algorithm for PPG signals corrupted by noise and motion artifacts. Our algorithm is based on convolutional neural networks (CNNs) with dilated convolutions, which provide a large receptive field and make our CNN model robust at time-series processing. In this study, we use a dataset collected from wearable devices in health monitoring under free-living conditions. In addition, a data generator is developed to produce noisy PPG data for training the network. The method's performance is compared against other state-of-the-art methods and tested at SNRs ranging from 0 to 45 dB. Our method obtains better accuracy at all SNRs than the existing adaptive-threshold and transform-based methods, showing an overall precision, recall, and F1-score of 80% each across the SNR range, whereas these figures for the other methods remain below 78%, 77%, and 77%, respectively. The proposed method proves to be accurate for detecting PPG peaks even in the presence of noise.
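The large receptive field that dilated convolutions provide can be quantified directly: for a stack of stride-1 layers, the receptive field is 1 + Σ (kᵢ − 1)·dᵢ over layers i. A small sketch with an illustrative layer stack (the kernel sizes and dilation schedule below are assumptions, not the paper's architecture):

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field (in input samples) of stacked stride-1 dilated
    1-D convolutions: RF = 1 + sum((k - 1) * d) over the layers."""
    return 1 + sum((k - 1) * d for k, d in zip(kernel_sizes, dilations))

# Hypothetical stack: five 3-tap layers with dilations doubling 1..16.
ks = [3] * 5
ds = [1, 2, 4, 8, 16]
rf = receptive_field(ks, ds)  # 1 + 2 * (1 + 2 + 4 + 8 + 16) = 63 samples
```

Doubling the dilation per layer grows the receptive field exponentially in depth while the parameter count grows only linearly, which is why dilated stacks suit long time-series such as PPG.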





2017 ◽  
Vol 10 (27) ◽  
pp. 1329-1342 ◽  
Author(s):  
Javier O. Pinzon Arenas ◽  
Robinson Jimenez Moreno ◽  
Paula C. Useche Murillo

This paper presents the implementation of a Region-based Convolutional Neural Network for the recognition and localization of hand gestures, in this case two gestures, open and closed hand, in order to recognize such gestures against dynamic backgrounds. The neural network is trained and validated, achieving a 99.4% validation accuracy in gesture recognition and a 25% average accuracy in RoI localization. It is then tested in real time, where its operation is verified in terms of recognition times, behavior on trained and untrained gestures, and performance against complex backgrounds.



Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5831
Author(s):  
Benedikt Adelmann ◽  
Ralf Hellmann

In this contribution, we compare basic neural networks with convolutional neural networks for cut-failure classification during fiber laser cutting. The experiments are performed by cutting thin electrical sheets with a 500 W single-mode fiber laser while taking coaxial camera images for the classification. The quality is grouped into the categories good cut, cut with burr formation, and cut interruption. Indeed, our results reveal that both cut failures can be detected with one system. Independent of the neural network design and size, a minimum classification accuracy of 92.8% is achieved, which could be increased to 95.8% with more complex networks. Thus, convolutional neural networks reveal a slight performance advantage over basic neural networks, albeit at a higher calculation time, which nevertheless remains below 2 ms. In a separate examination, cut interruptions can be detected with much higher accuracy than burr formation. Overall, the results reveal the possibility of detecting burr formation and cut interruptions during laser cutting simultaneously and with high accuracy, as is desirable for industrial applications.



In this paper we identify the cry signals of infants aged 0-6 months and the reasons behind the cries. Detection of baby cry signals is essential for the pre-processing of various applications involving cry analysis for caregivers, such as emotion detection, since cry signals carry information about the baby's well-being and can be understood to an extent by experienced parents and experts. We train and validate a neural network architecture for baby cry detection and also test it with fastAI. The trained network yields a model that can predict the reason behind a cry sound; only cry sounds are recognized, and the user is alerted automatically. We created a web application that detects and responds to different states, including hunger, tiredness, discomfort, and belly pain.



Author(s):  
M. Madadikhaljan ◽  
R. Bahmanyar ◽  
S. M. Azimi ◽  
P. Reinartz ◽  
U. Sörgel

Abstract. Haze consists of particles floating in the air which can degrade image quality and reduce visibility in airborne data. Haze removal has several applications in image enhancement and can improve the performance of automatic image analysis systems such as object detection and segmentation. Unlike the rich haze-removal literature for ground imagery, there is a lack of methods specifically designed for aerial imagery, given the characteristic differences between the aerial and ground imaging domains. In this paper, we propose a method to dehaze aerial images using Convolutional Neural Networks (CNNs). Currently, no data are available for dehazing methods in aerial imagery. To address this issue, we have created a synthetically-hazed aerial image dataset on which to train the neural network. We train the All-in-One Dehazing Network (AOD-Net) as the base approach on hazy aerial images and compare the performance of our proposed approach against the classical model. We have tested our model on natural as well as synthetically-hazed aerial images. Both qualitative and quantitative results of the adapted network show an improvement in dehazing results. We show that the adapted AOD-Net increases PSNR and SSIM on our aerial image test set by 2.2% and 9%, respectively.
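Synthetic hazing of the kind described above is commonly built on the standard atmospheric scattering model, I = J·t + A·(1 − t), where J is the clean image, t the transmission, and A the atmospheric light. A minimal sketch with a uniform transmission map (the paper's actual data generation pipeline may differ; the constants here are illustrative):

```python
import numpy as np

def add_haze(clean, transmission=0.6, airlight=0.9):
    """Synthesize a hazy image from a clean one using the atmospheric
    scattering model: I = J * t + A * (1 - t), with scalar t and A."""
    return clean * transmission + airlight * (1.0 - transmission)

rng = np.random.default_rng(0)
clean = rng.random((4, 4, 3))  # toy aerial patch with values in [0, 1]
hazy = add_haze(clean)         # network input; 'clean' is the training target
```

A dehazing network such as AOD-Net is then trained to map `hazy` back to `clean`, folding the estimation of t and A into a single learned module.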



Author(s):  
Md Gouse Pasha

Road accidents are increasing, with a growing number caused by driver drowsiness. To reduce these incidents, we worked on a system that detects drowsiness early. Spotting a drowsy driver behind the steering wheel and warning him in time could reduce road accidents. Drowsiness is detected using a camera: based on the captured images, a neural network determines whether the driver is awake or tired. A Convolutional Neural Network (CNN) is used, in which each frame is examined separately and the average over the last 20 frames, corresponding to about one second, is evaluated against the training and test data. We analyse image segmentation methods and construct a model based on convolutional neural networks. Using a detailed database of more than 2000 image fragments, we train and analyse the segmentation network to extract the emotional state of the driver from images.
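The 20-frame averaging step described above amounts to smoothing per-frame predictions over roughly one second before raising an alarm. A minimal sketch, assuming hypothetical per-frame "drowsy" probabilities from the CNN and an illustrative 0.5 alert threshold:

```python
import numpy as np

def rolling_alert(scores, window=20, threshold=0.5):
    """Average the last `window` per-frame drowsiness probabilities
    (about one second of video) and flag when the mean exceeds threshold."""
    scores = np.asarray(scores, dtype=float)
    if len(scores) < window:
        return False  # not enough frames yet to decide
    return bool(scores[-window:].mean() > threshold)

awake  = [0.1] * 30                  # consistently alert frames
drowsy = [0.1] * 10 + [0.9] * 20     # eyes-closed frames for the last second
```

Averaging over a window suppresses single-frame misclassifications (e.g. a normal blink) that would otherwise trigger false alarms.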


