Convolutional Neural Networks for Classification of Drones Using Radars

Drones, 2021, Vol 5 (4), pp. 149
Author(s): Divy Raval, Emily Hunter, Sinclair Hudson, Anthony Damini, Bhashyam Balaji

The ability to classify drones using radar signals is a problem of great interest. In this paper, we apply convolutional neural networks (CNNs) to the Short-Time Fourier Transform (STFT) spectrograms of simulated radar signals reflected from drones. The drones vary in many ways that affect the STFT spectrograms, including blade length and blade rotation rate. Some of these physical parameters are captured in the Martin and Mulgrew model, which was used to produce the datasets. We examine the data under X-band and W-band radar simulation scenarios and show that a CNN approach leads to an F1 score of 0.816±0.011 when trained on data with a signal-to-noise ratio (SNR) of 10 dB. The neural network trained on data from an X-band radar with a 2 kHz pulse repetition frequency was shown to outperform the CNN trained on the aforementioned W-band radar. It remained robust to the drone blade pitch, and its performance varied linearly with the SNR.
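
As a rough illustration of this kind of pipeline (not the authors' implementation), the sketch below computes an STFT spectrogram of a simulated radar return with SciPy and classifies it with a small PyTorch CNN; the signal parameters, number of classes, and network layout are all assumptions.

```python
# Minimal sketch: STFT spectrogram of a simulated radar return + small CNN.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

def radar_spectrogram(signal, fs=2000, nperseg=128):
    """Log-magnitude STFT spectrogram of a complex radar return."""
    _, _, Z = stft(signal, fs=fs, nperseg=nperseg, return_onesided=False)
    return np.log1p(np.abs(Z)).astype(np.float32)

class DroneCNN(nn.Module):
    """Small CNN over single-channel spectrograms; num_classes = drone types (assumed)."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One noisy complex sinusoid standing in for a simulated blade return.
n = np.arange(4096)
sig = np.exp(2j * np.pi * 300 * n / 2000) + 0.3 * (
    np.random.randn(4096) + 1j * np.random.randn(4096))
spec = torch.from_numpy(radar_spectrogram(sig))[None, None]  # (1, 1, F, T)
logits = DroneCNN()(spec)                                    # class scores
```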

2019, Vol 621, pp. A103
Author(s): J. Bialopetravičius, D. Narbutis, V. Vansevičius

Context. Convolutional neural networks (CNNs) have been proven to perform fast classification and detection on natural images and have the potential to infer astrophysical parameters from the exponentially increasing amount of sky-survey imaging data. The inference pipeline can be trained either on real human-annotated data or on simulated mock observations. Until now, star cluster analysis has been based on integral or individual resolved stellar photometry. This limits the amount of information that can be extracted from cluster images. Aims. We aim to develop a CNN-based algorithm capable of simultaneously deriving ages, masses, and sizes of star clusters directly from multi-band images. We also aim to demonstrate CNN capabilities on low-mass semi-resolved star clusters in a low-signal-to-noise-ratio regime. Methods. A CNN was constructed based on the deep residual network (ResNet) architecture and trained on simulated images of star clusters with various ages, masses, and sizes. To provide realistic backgrounds, M 31 star fields taken from the Panchromatic Hubble Andromeda Treasury (PHAT) survey were added to the mock cluster images. Results. The proposed CNN was verified on mock images of artificial clusters and demonstrated high precision and no significant bias for clusters of ages ≲3 Gyr and masses between 250 and 4000 M⊙. The pipeline is end-to-end, from input images all the way to the inferred parameters; no hand-coded steps have to be performed: parameter estimates are provided by the neural network in a single inferential step from raw images.
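
A minimal sketch of this kind of setup, assuming a torchvision ResNet-18 stands in for the authors' ResNet-based architecture: the classification head is replaced by a regression head that outputs age, mass, and size estimates from a multi-band image. The number of bands, image size, and output scaling are illustrative assumptions.

```python
# Minimal sketch: ResNet backbone regressing cluster age, mass, and size.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ClusterParamNet(nn.Module):
    def __init__(self, n_bands=3, n_params=3):
        super().__init__()
        self.backbone = resnet18(weights=None)   # random init; not pretrained
        # Accept n_bands photometric bands instead of RGB.
        self.backbone.conv1 = nn.Conv2d(n_bands, 64, kernel_size=7,
                                        stride=2, padding=3, bias=False)
        # Replace the classification head with a regression head.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, n_params)

    def forward(self, x):
        return self.backbone(x)   # (batch, 3): e.g. log age, log mass, size

model = ClusterParamNet()
mock_batch = torch.randn(2, 3, 80, 80)   # two mock 80x80, 3-band cluster images
params = model(mock_batch)               # predicted parameters per image
```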


2021, Vol 11 (10), pp. 4440
Author(s): Youheng Tan, Xiaojun Jing

Cooperative spectrum sensing (CSS) is an important topic due to its capacity to solve the hidden-terminal problem. However, the sensing performance of CSS is still poor, especially in low signal-to-noise ratio (SNR) situations. In this paper, convolutional neural networks (CNNs) are used to extract the features of the observed signal and thereby improve the sensing performance. More specifically, a novel two-dimensional dataset of the received signal is established, and three classical CNN-based CSS schemes (LeNet, AlexNet and VGG-16) are trained and analyzed on the proposed dataset. In addition, sensing performance comparisons are made between the proposed CNN-based CSS schemes and the AND, OR, and majority-voting-based CSS schemes. The simulation results show that the sensing accuracy of the proposed schemes is greatly improved and that greater network depth contributes to this improvement.
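
The sketch below is a hedged illustration, not the paper's exact models: spectrum sensing cast as binary classification (signal present vs. noise only) over a two-dimensional arrangement of received samples, using a LeNet-style CNN; the input shape and layer widths are assumptions.

```python
# Minimal sketch: LeNet-style CNN deciding "signal present" vs. "noise only".
import torch
import torch.nn as nn

class LeNetSensing(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 6, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, 2),            # logits for {signal, noise only}
        )

    def forward(self, x):
        return self.net(x)

# Received samples rearranged into 32x32 observation matrices (one per sensor
# report); the reshaping convention is an assumption.
observations = torch.randn(8, 1, 32, 32)
decision = LeNetSensing()(observations).argmax(dim=1)
```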


Author(s): O.N. Korsun, V.N. Yurko

We analysed two approaches to estimating the state of a human operator from video imaging of the face. Both approaches use deep convolutional neural networks: 1) automated emotion recognition; 2) analysis of blinking characteristics. The study involved assessing changes in the functional state of a human operator performing a manual landing in a flight simulator. During this process, flight parameters were recorded and the operator's face was filmed. We then used our custom software to perform automated recognition of emotions (or blinks), synchronising the recognised emotions (blinks) with the recorded flight parameters. As a result, we detected persistent patterns linking the operator's fatigue level to the number of emotions recognised by the neural network. The type of emotion depends on the unique psychological characteristics of the operator. Our experiments show that these links are easily traced when analysing the emotions of "Sadness", "Fear" and "Anger". The study also revealed a correlation between blinking properties and piloting accuracy: higher piloting accuracy was accompanied by more recorded blinks, which may be explained by a stable psycho-physiological state leading to confident piloting.
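
A minimal sketch of such a pipeline, under the assumption that face detection uses an OpenCV Haar cascade and the emotion classifier is a user-supplied CNN (the authors' custom software is not described in detail): each frame's recognised emotion is counted per flight segment so the counts can later be synchronised with the recorded flight parameters.

```python
# Minimal sketch: per-segment emotion counts from a face video (assumed pipeline).
import cv2
from collections import Counter

def emotions_per_segment(video_path, emotion_model, frames_per_segment=300):
    """emotion_model is an assumed CNN classifier: grayscale face crop -> label."""
    face_det = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    counts, segment, frame_idx = [], Counter(), 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_det.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in faces[:1]:                 # first detected face
            segment[emotion_model(gray[y:y+h, x:x+w])] += 1
        frame_idx += 1
        if frame_idx % frames_per_segment == 0:        # close one flight segment
            counts.append(dict(segment))
            segment = Counter()
    cap.release()
    return counts   # emotion counts per segment, ready to align with flight data
```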


2021
Author(s): Kianoosh Kazemi, Juho Laitala, Iman Azimi, Pasi Liljeberg, Amir M. Rahmani

Accurate peak determination from a noise-corrupted photoplethysmogram (PPG) signal is the basis for further analysis of physiological quantities such as heart rate and heart rate variability. In the past decades, many methods have been proposed to provide reliable peak detection, including rule-based algorithms, adaptive thresholds, and signal processing techniques. However, they are designed for noise-free PPG signals and are insufficient for PPG signals with a low signal-to-noise ratio (SNR). This paper focuses on enhancing PPG noise resiliency and proposes a robust peak detection algorithm for PPG signals corrupted by noise and motion artifacts. Our algorithm is based on convolutional neural networks (CNNs) with dilated convolutions. Using dilated convolutions provides a large receptive field, making our CNN model robust at time-series processing. In this study, we use a dataset collected from wearable devices in health monitoring under free-living conditions. In addition, a data generator is developed for producing noisy PPG data used for training the network. The method's performance is compared against other state-of-the-art methods and tested at SNRs ranging from 0 to 45 dB. Our method obtains better accuracy at all SNRs compared with the existing adaptive-threshold and transform-based methods. The proposed method achieves an overall precision, recall, and F1-score of 80% each across all SNR ranges, whereas these figures for the other methods are below 78%, 77%, and 77%, respectively. The proposed method proves to be accurate for detecting PPG peaks even in the presence of noise.
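
A hedged sketch of the general idea (not the paper's exact architecture): a 1-D CNN with stacked dilated convolutions that outputs a per-sample peak probability for a PPG window, from which candidate peaks are obtained by thresholding; layer widths and dilation rates are assumptions.

```python
# Minimal sketch: dilated 1-D CNN producing per-sample PPG peak probabilities.
import torch
import torch.nn as nn

class DilatedPPGNet(nn.Module):
    def __init__(self, channels=16, dilations=(1, 2, 4, 8, 16)):
        super().__init__()
        layers, in_ch = [], 1
        for d in dilations:                       # growing receptive field
            layers += [nn.Conv1d(in_ch, channels, kernel_size=3,
                                 padding=d, dilation=d),
                       nn.ReLU()]
            in_ch = channels
        layers += [nn.Conv1d(channels, 1, kernel_size=1)]   # per-sample logit
        self.net = nn.Sequential(*layers)

    def forward(self, x):                         # x: (batch, 1, n_samples)
        return torch.sigmoid(self.net(x))         # peak probability per sample

ppg = torch.randn(1, 1, 1000)                     # one noisy 1000-sample window
prob = DilatedPPGNet()(ppg)
peaks = (prob.squeeze() > 0.5).nonzero().squeeze(-1)   # candidate peak indices
```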


2017, Vol 10 (27), pp. 1329-1342
Author(s): Javier O. Pinzon Arenas, Robinson Jimenez Moreno, Paula C. Useche Murillo

This paper presents the implementation of a region-based convolutional neural network for the recognition and localization of hand gestures, in this case two types of gestures, open and closed hand, in order to recognize such gestures against dynamic backgrounds. The neural network is trained and validated, achieving a 99.4% validation accuracy in gesture recognition and a 25% average accuracy in RoI localization. It is then tested in real time, where its operation is verified through the times taken for recognition, its behavior with trained and untrained gestures, and complex backgrounds.
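
As an illustrative stand-in for the region-based CNN described above, the sketch below fine-tunes a torchvision Faster R-CNN for two gesture classes (open hand, closed hand) plus background; the detector choice, image sizes, and annotations are assumptions, not the authors' exact setup.

```python
# Minimal sketch: two-class hand-gesture detector based on Faster R-CNN.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 3   # background + open hand + closed hand
model = fasterrcnn_resnet50_fpn(weights=None)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# One illustrative training step: images is a list of CHW tensors, targets a
# list of dicts with "boxes" (N, 4) and "labels" (N,).
images = [torch.rand(3, 240, 320)]
targets = [{"boxes": torch.tensor([[30., 40., 120., 160.]]),
            "labels": torch.tensor([1])}]      # 1 = open hand (assumed)
model.train()
losses = model(images, targets)                # dict of RPN/RoI losses
total_loss = sum(losses.values())              # backpropagate in a real loop
```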


Sensors, 2021, Vol 21 (17), pp. 5831
Author(s): Benedikt Adelmann, Ralf Hellmann

In this contribution, we compare basic neural networks with convolutional neural networks for cut-failure classification during fiber laser cutting. The experiments are performed by cutting thin electrical sheets with a 500 W single-mode fiber laser while taking coaxial camera images for the classification. The quality is grouped into the categories good cut, cut with burr formation, and cut interruption. Indeed, our results reveal that both cut failures can be detected with one system. Independent of the neural network design and size, a minimum classification accuracy of 92.8% is achieved, which could be increased with more complex networks to 95.8%. Thus, convolutional neural networks reveal a slight performance advantage over basic neural networks, albeit at a higher calculation time, which nevertheless remains below 2 ms. In a separate examination, cut interruptions can be detected with much higher accuracy than burr formation. Overall, the results demonstrate the possibility of detecting burr formation and cut interruptions during laser cutting simultaneously and with high accuracy, as is desirable for industrial applications.
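
The following sketch contrasts the two model families compared above on three-class camera images (good cut, burr formation, cut interruption); layer sizes and image resolution are assumptions rather than the authors' configurations.

```python
# Minimal sketch: fully connected baseline vs. small CNN for 3-class cut images.
import torch
import torch.nn as nn

def basic_net(img_pixels=64 * 64, n_classes=3):
    """Fully connected baseline on flattened grayscale camera images."""
    return nn.Sequential(nn.Flatten(),
                         nn.Linear(img_pixels, 128), nn.ReLU(),
                         nn.Linear(128, n_classes))

def conv_net(n_classes=3):
    """Small convolutional alternative with two conv stages."""
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.LazyLinear(n_classes))

frames = torch.randn(4, 1, 64, 64)        # batch of 64x64 coaxial camera crops
print(basic_net()(frames).shape, conv_net()(frames).shape)   # (4, 3) each
```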


In this paper we identify the cry signals of infants in the 0-6 month age segment and the explanation behind the cries. Detection of baby cry signals is essential as a pre-processing step for various applications involving cry analysis for baby caregivers, such as emotion detection, since cry signals hold information about the baby's well-being and can be understood to an extent by experienced parents and experts. We train and validate a neural network architecture for baby cry detection and also test the fastAI framework with the neural network. The trained neural network provides a model that can predict the reason behind a cry sound; only cry sounds are recognized, and the user is alerted automatically. A web application was created that detects and responds to different emotions, including hunger, tiredness, discomfort, and belly pain.
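
A minimal sketch along these lines, with fastai standing in for the training loop: cry recordings are assumed to be pre-converted to spectrogram images grouped in folders by cause, and a CNN is fine-tuned to predict the reason behind the cry. The folder layout, label names, and file paths are hypothetical.

```python
# Minimal sketch: fastai image classifier over cry spectrograms (assumed layout:
# spectrograms/<hunger|tired|discomfort|bellypain>/*.png).
from fastai.vision.all import ImageDataLoaders, vision_learner, Resize, accuracy
from torchvision.models import resnet18

dls = ImageDataLoaders.from_folder("spectrograms", valid_pct=0.2,
                                   item_tfms=Resize(224))
learn = vision_learner(dls, resnet18, metrics=accuracy)
learn.fine_tune(3)                       # brief fine-tuning with validation

# Predict the cause for one (hypothetical) spectrogram image.
pred_class, _, probs = learn.predict("spectrograms/hunger/example.png")
```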


2020, Vol 12 (7), pp. 1117
Author(s): Wenyang Duan, Ke Yang, Limin Huang, Xuewen Ma

X-band marine radar is an effective tool for sea-wave remote sensing. Conventional physics-based methods for acquiring wave parameters from radar sea-clutter images use the three-dimensional Fourier transform and spectral analysis. They are limited by assumptions, empirical formulas, and the calibration process required to obtain the modulation transfer function (MTF) and signal-to-noise ratio (SNR). Therefore, further improving wave-inversion accuracy with the physics-based method presents a challenge. Inspired by the capability of convolutional neural networks (CNNs) in image characteristic processing, a deep-learning inversion method based on a deep CNN is proposed. No intermediate step or parameter is needed in the CNN-based method, so fewer errors are introduced. Wave-parameter inversion models were constructed based on CNNs to invert the wave's spectral peak period and significant wave height. In the present paper, numerically simulated X-band radar image data were used for a numerical investigation of wave parameters. Results of the conventional spectral-analysis and CNN-based methods were compared, and the CNN-based method had higher accuracy on the same data set. The influence of training strategy on CNN-based inversion models was studied to analyze the dependence of a deep-learning inversion model on training data. Additionally, the effects of target parameters on the inversion accuracy of CNN-based models were also studied.
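
A hedged sketch of a CNN-based inversion model of this kind (not the paper's architecture): a small convolutional network maps a radar clutter image directly to the two target parameters, peak period and significant wave height; the image size and layer widths are assumptions.

```python
# Minimal sketch: CNN regressing wave parameters from an X-band clutter image.
import torch
import torch.nn as nn

class WaveInversionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(64, 2)       # [peak period, significant wave height]

    def forward(self, radar_img):
        return self.head(self.features(radar_img))

radar_img = torch.randn(1, 1, 128, 128)    # one simulated clutter image
period, hs = WaveInversionCNN()(radar_img).squeeze().tolist()
```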


2021, Vol 2086 (1), pp. 012148
Author(s): P A Khorin, A P Dzyuba, P G Serafimovich, S N Khonina

Recognition of the types of aberrations corresponding to individual Zernike functions was carried out from the intensity pattern of the point spread function (PSF) outside the focal plane using convolutional neural networks. The PSF intensity patterns outside the focal plane are more informative than those in the focal plane, even for small magnitudes of aberration. The mean prediction errors of the neural network for each type of aberration were obtained for a set of 8 Zernike functions from a dataset of 2000 images of out-of-focus PSFs. As a result of training, for the considered types of aberrations, the averaged absolute errors do not exceed 0.0053, which corresponds to an almost threefold reduction in error compared with the same result for focal-plane PSFs.
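
A minimal sketch of such a network (not the authors' model): a CNN that regresses the coefficients of eight Zernike terms from a single out-of-focus PSF intensity image, trained against the coefficients used to simulate each PSF; image size and layer widths are assumptions.

```python
# Minimal sketch: CNN regressing 8 Zernike coefficients from a PSF image.
import torch
import torch.nn as nn

class ZernikeNet(nn.Module):
    def __init__(self, n_zernike=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, n_zernike))   # one coefficient per Zernike term

    def forward(self, psf):
        return self.net(psf)

psf_image = torch.randn(1, 1, 64, 64)      # out-of-focus PSF intensity pattern
coeffs = ZernikeNet()(psf_image)           # predicted aberration coefficients
# Training would minimise e.g. the mean absolute error against the known
# coefficients used to simulate each PSF.
```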

