Deriving star cluster parameters with convolutional neural networks

2019, Vol. 621, pp. A103
Author(s): J. Bialopetravičius, D. Narbutis, V. Vansevičius

Context. Convolutional neural networks (CNNs) have been proven to perform fast classification and detection on natural images and have the potential to infer astrophysical parameters from the exponentially increasing amount of sky-survey imaging data. The inference pipeline can be trained either on real human-annotated data or on simulated mock observations. Until now, star cluster analysis has been based on integral or individual resolved stellar photometry, which limits the amount of information that can be extracted from cluster images. Aims. We aim to develop a CNN-based algorithm capable of simultaneously deriving ages, masses, and sizes of star clusters directly from multi-band images. We also aim to demonstrate CNN capabilities on low-mass semi-resolved star clusters in a low-signal-to-noise-ratio regime. Methods. A CNN was constructed based on the deep residual network (ResNet) architecture and trained on simulated images of star clusters with various ages, masses, and sizes. To provide realistic backgrounds, M 31 star fields taken from the Panchromatic Hubble Andromeda Treasury (PHAT) survey were added to the mock cluster images. Results. The proposed CNN was verified on mock images of artificial clusters and demonstrated high precision and no significant bias for clusters of ages ≲3 Gyr and masses between 250 and 4000 M⊙. The pipeline is end-to-end, from input images all the way to the inferred parameters; no hand-coded steps are required, and the neural network provides parameter estimates in a single inferential step from raw images.
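
As an illustration of the approach described above, here is a minimal PyTorch sketch of a ResNet-style regression network that maps a multi-band cluster cutout to three parameters (age, mass, size). The layer widths, block count, and the 3-band 64x64 input are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One residual block with two 3x3 convolutions and a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class ClusterParamNet(nn.Module):
    """Maps a multi-band cluster image to (log age, log mass, size) estimates."""
    def __init__(self, in_bands=3, width=32, n_blocks=4, n_params=3):
        super().__init__()
        self.stem = nn.Conv2d(in_bands, width, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(n_blocks)])
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(width, n_params),      # plain regression head, no activation
        )

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

model = ClusterParamNet()
mock_batch = torch.randn(8, 3, 64, 64)       # 8 mock cluster cutouts, 3 bands
params = model(mock_batch)                   # -> (8, 3) parameter estimates
loss = nn.MSELoss()(params, torch.randn(8, 3))  # regression against known mock labels
```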

2020, Vol. 633, pp. A148
Author(s): J. Bialopetravičius, D. Narbutis

Context. Convolutional neural networks (CNNs) have been established as the go-to method for fast object detection and classification of natural images, which opens the door for astrophysical parameter inference on the exponentially increasing amount of sky-survey data. Until now, star cluster analysis has been based on integral or resolved stellar photometry, which limits the amount of information that can be extracted from individual pixels of cluster images. Aims. We aim to create a CNN capable of inferring star cluster evolutionary, structural, and environmental parameters from multiband images, and to demonstrate its capabilities in discriminating genuine clusters from galactic stellar backgrounds. Methods. A CNN based on the deep residual network (ResNet) architecture was created and trained to infer cluster ages, masses, sizes, and extinctions while accounting for the degeneracies between them. Mock clusters placed on M 83 Hubble Space Telescope images in three photometric passbands (F336W, F438W, and F814W) were used for training. The CNN is also capable of predicting the likelihood that a cluster is present in an image and of quantifying its visibility (S/N). Results. The CNN was tested on mock images of artificial clusters and demonstrated reliable inference for clusters of ages ≲100 Myr, extinctions AV between 0 and 3 mag, masses between 3 × 10³ and 3 × 10⁵ M⊙, and sizes between 0.04 and 0.4 arcsec at the distance of the M 83 galaxy. Parameter-inference tests on real M 83 clusters taken from previous studies gave consistent results.
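
The training-set construction described above, mock clusters placed on real galaxy frames, amounts to an image-injection step that can be sketched in a few lines of NumPy. Everything below is a toy illustration under assumed count units; the Gaussian profile and all numbers are placeholders, not the authors' actual mock-cluster models.

```python
import numpy as np

def inject_mock_cluster(background, cluster_model, rng):
    """Add a simulated cluster (in counts) onto a real sky cutout.

    background    -- 2D cutout from a real frame (carries real noise and crowding)
    cluster_model -- noiseless 2D model image of the mock cluster
    """
    # Poisson-resample the mock cluster so its photon noise is realistic,
    # then add it to the background, which already contains real artifacts.
    noisy = rng.poisson(np.clip(cluster_model, 0, None)).astype(float)
    return background + noisy

rng = np.random.default_rng(42)
bg = rng.normal(100.0, 5.0, size=(80, 80))    # stand-in for a real cutout
yy, xx = np.mgrid[:80, :80]
profile = 500.0 * np.exp(-((xx - 40)**2 + (yy - 40)**2) / (2 * 4.0**2))  # toy cluster
train_image = inject_mock_cluster(bg, profile, rng)
```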


Drones, 2021, Vol. 5 (4), pp. 149
Author(s): Divy Raval, Emily Hunter, Sinclair Hudson, Anthony Damini, Bhashyam Balaji

The ability to classify drones using radar signals is a problem of great interest. In this paper, we apply convolutional neural networks (CNNs) to Short-Time Fourier Transform (STFT) spectrograms of simulated radar signals reflected from drones. The drones vary in many ways that impact the STFT spectrograms, including blade length and blade rotation rate. Some of these physical parameters are captured in the Martin and Mulgrew model, which was used to produce the datasets. We examine the data under X-band and W-band radar simulation scenarios and show that a CNN approach leads to an F1 score of 0.816 ± 0.011 when trained on data with a signal-to-noise ratio (SNR) of 10 dB. The neural network trained on data from an X-band radar with a 2 kHz pulse repetition frequency was shown to outperform the CNN trained on the aforementioned W-band radar. It remained robust to drone blade pitch, and its performance varied linearly with the SNR.
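
A minimal sketch of the spectrogram preprocessing stage follows, assuming SciPy's stft and a toy two-line micro-Doppler signal in place of the Martin and Mulgrew model; the PRF and SNR mirror the numbers quoted above, but the window length and everything else are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

fs = 2000.0                                  # pulse repetition frequency (Hz), X-band case
t = np.arange(0, 1.0, 1 / fs)
# Toy stand-in for blade-flash micro-Doppler: two sinusoidal lines.
sig = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 360 * t)
noise = np.random.randn(t.size) * sig.std() / np.sqrt(10 ** (10 / 10))  # ~10 dB SNR
f, frames, Zxx = stft(sig + noise, fs=fs, nperseg=128, noverlap=96)
spec_db = 20 * np.log10(np.abs(Zxx) + 1e-12)  # dB-magnitude spectrogram
# spec_db has shape (freq bins, time frames); feed it to the CNN as a 1-channel image.
```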


2021, Vol. 11 (10), pp. 4440
Author(s): Youheng Tan, Xiaojun Jing

Cooperative spectrum sensing (CSS) is an important topic due to its capacity to solve the hidden-terminal problem. However, the sensing performance of CSS is still poor, especially in low signal-to-noise ratio (SNR) situations. In this paper, convolutional neural networks (CNNs) are used to extract features of the observed signal and, as a consequence, improve the sensing performance. More specifically, a novel two-dimensional dataset of the received signal is established, and three classical CNN architectures (LeNet, AlexNet, and VGG-16) are trained and analyzed as CSS schemes on the proposed dataset. In addition, sensing-performance comparisons are made between the proposed CNN-based CSS schemes and the AND, OR, and majority-voting-based CSS schemes. The simulation results show that the sensing accuracy of the proposed schemes is greatly improved and that greater network depth contributes to this gain.
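
For context, the hard-decision fusion baselines mentioned above (AND, OR, majority voting) reduce to a few lines. This sketch assumes each secondary user has already made a local binary decision; it shows the conventional fusion rules, not the paper's CNN scheme.

```python
import numpy as np

def fuse(decisions, rule="majority"):
    """Combine binary local decisions from N cooperating users.

    decisions -- array of shape (N,), 1 = 'primary user present'
    """
    d = np.asarray(decisions)
    if rule == "and":
        return int(d.all())              # present only if every user agrees
    if rule == "or":
        return int(d.any())              # present if any single user agrees
    if rule == "majority":
        return int(d.sum() > d.size / 2)
    raise ValueError(f"unknown rule: {rule}")

local = [1, 0, 1, 1, 0]                  # toy decisions from 5 secondary users
print(fuse(local, "and"), fuse(local, "or"), fuse(local, "majority"))  # 0 1 1
```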


Author(s): O.N. Korsun, V.N. Yurko

We analysed two approaches to estimating the state of a human operator from video imaging of the face. Both approaches use deep convolutional neural networks: 1) automated emotion recognition; 2) analysis of blinking characteristics. The study involved assessing changes in the functional state of a human operator performing a manual landing in a flight simulator. During this process, flight parameters were recorded and the operator's face was filmed. We then used our custom software to perform automated recognition of emotions (and blinks), synchronising the recognised emotions (blinks) with the recorded flight parameters. As a result, we detected persistent patterns linking the operator's fatigue level to the number of emotions recognised by the neural network. The type of emotion depends on the unique psychological characteristics of the operator. Our experiments show that these links are easily traced when analysing the emotions "Sadness", "Fear", and "Anger". The study also revealed a correlation between blinking properties and piloting accuracy: higher piloting accuracy was accompanied by more recorded blinks, which may be explained by a stable psycho-physiological state leading to confident piloting.
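
The synchronisation step, aligning recognised blink events with the recorded flight parameters, can be approximated by simple timestamp windowing, as in the sketch below. The window length and toy timestamps are assumptions for illustration only, not the authors' software.

```python
import numpy as np

def blinks_per_window(blink_times, flight_times, window=10.0):
    """Count blinks in a sliding window centred on each flight-log sample.

    blink_times  -- timestamps (s) of blinks detected in the face video
    flight_times -- timestamps (s) of the recorded flight parameters
    """
    blink_times = np.asarray(blink_times)
    return np.array([np.sum(np.abs(blink_times - t) <= window / 2)
                     for t in flight_times])

blinks = [1.2, 3.8, 4.1, 9.7, 15.3]      # toy blink detections (s)
log_t = np.arange(0.0, 20.0, 5.0)        # toy flight-parameter timestamps (s)
print(blinks_per_window(blinks, log_t))  # blink counts aligned to the log
```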


2021
Author(s): Kianoosh Kazemi, Juho Laitala, Iman Azimi, Pasi Liljeberg, Amir M. Rahmani

Accurate peak determination from a noise-corrupted photoplethysmogram (PPG) signal is the basis for further analysis of physiological quantities such as heart rate and heart rate variability. Over the past decades, many methods have been proposed to provide reliable peak detection, including rule-based algorithms, adaptive thresholds, and signal-processing techniques. However, they are designed for noise-free PPG signals and are insufficient for PPG signals with a low signal-to-noise ratio (SNR). This paper focuses on enhancing PPG noise resiliency and proposes a robust peak-detection algorithm for PPG signals corrupted by noise and motion artifacts. Our algorithm is based on convolutional neural networks (CNNs) with dilated convolutions, which provide a large receptive field and make our CNN model well suited to time-series processing. In this study, we use a dataset collected from wearable devices during health monitoring under free-living conditions. In addition, a data generator is developed to produce noisy PPG data for training the network. The method's performance is compared against other state-of-the-art methods and tested at SNRs ranging from 0 to 45 dB. Our method obtains better accuracy at all SNRs than the existing adaptive-threshold and transform-based methods: the proposed method shows an overall precision, recall, and F1-score of 80%, 80%, and 80% across all SNR ranges, whereas these figures for the other methods are below 78%, 77%, and 77%, respectively. The proposed method proves to be accurate for detecting PPG peaks even in the presence of noise.
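
A minimal PyTorch sketch of a dilated 1-D CNN of the kind described, producing a per-sample peak probability; the channel width, depth, and dilation schedule are assumptions, not the authors' exact network.

```python
import torch
import torch.nn as nn

class DilatedPPGNet(nn.Module):
    """Per-sample peak probability for a 1-D PPG trace.

    Stacked dilated convolutions (dilation 1, 2, 4, 8, ...) grow the
    receptive field exponentially while keeping the sequence length.
    """
    def __init__(self, channels=16, n_layers=5):
        super().__init__()
        layers = [nn.Conv1d(1, channels, 3, padding=1)]
        for i in range(1, n_layers):
            d = 2 ** i
            layers += [nn.ReLU(), nn.Conv1d(channels, channels, 3, padding=d, dilation=d)]
        layers += [nn.ReLU(), nn.Conv1d(channels, 1, 1)]   # per-sample logit
        self.net = nn.Sequential(*layers)

    def forward(self, x):                # x: (batch, 1, samples)
        return torch.sigmoid(self.net(x))

ppg = torch.randn(4, 1, 1024)            # 4 noisy toy PPG windows
peak_prob = DilatedPPGNet()(ppg)         # (4, 1, 1024); threshold to get peaks
```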


2017, Vol. 10 (27), pp. 1329-1342
Author(s): Javier O. Pinzon Arenas, Robinson Jimenez Moreno, Paula C. Useche Murillo

This paper presents the implementation of a Region-based Convolutional Neural Network focused on the recognition and localization of hand gestures, in this case two types of gestures (open and closed hand), in order to achieve recognition of such gestures in dynamic backgrounds. The neural network is trained and validated, achieving 99.4% validation accuracy in gesture recognition and 25% average accuracy in RoI localization. It is then tested in real time, where its operation is verified through the times taken for recognition, its behavior on trained and untrained gestures, and its performance against complex backgrounds.
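
A region-based detector for two gesture classes can be prototyped today with torchvision's Faster R-CNN, as sketched below. This is a stand-in architecture, not the network used in the paper; the class count (background plus open/closed hand) and score threshold are assumptions.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Background plus two gesture classes: open hand, closed hand (assumed labels).
num_classes = 3
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()                                  # after training on labeled frames
frame = [torch.rand(3, 480, 640)]             # one camera frame, CHW in [0, 1]
with torch.no_grad():
    out = model(frame)[0]                     # dict of boxes, labels, scores
keep = out["scores"] > 0.7                    # assumed confidence threshold
print(out["boxes"][keep], out["labels"][keep])
```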


2019, Vol. 35 (17), pp. 3208-3210
Author(s): Yangzhen Wang, Feng Su, Shanshan Wang, Chaojuan Yang, Yonglu Tian, ...

Abstract
Motivation: Functional imaging at single-neuron resolution offers a highly efficient tool for studying functional connectomics in the brain. However, mainstream neuron-detection methods focus on either the morphologies or the activities of neurons, which may lead to the extraction of incomplete information and may rely heavily on the experience of the experimenters.
Results: We developed a convolutional neural network and fluctuation-method-based toolbox (ImageCN) to increase the processing power of calcium imaging data. To evaluate the performance of ImageCN, nine different imaging datasets were recorded from awake mouse brains. ImageCN demonstrated superior neuron-detection performance compared with other algorithms. Furthermore, ImageCN does not require sophisticated training for users.
Availability and implementation: ImageCN is implemented in MATLAB. The source code and documentation are available at https://github.com/ZhangChenLab/ImageCN.
Supplementary information: Supplementary data are available at Bioinformatics online.
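
The "fluctuation" side of such a pipeline can be illustrated with a per-pixel temporal statistic, as in the NumPy sketch below. This is a generic stand-in (ImageCN itself is a MATLAB toolbox combining this idea with a CNN); the normalisation and threshold are assumptions.

```python
import numpy as np

def fluctuation_map(stack):
    """Per-pixel temporal fluctuation of a calcium-imaging stack.

    stack -- array (frames, height, width) of raw fluorescence.
    Active neurons fluctuate around their baseline, so the per-pixel
    standard deviation over time highlights them even when the mean
    image (pure morphology) is dim.
    """
    baseline = stack.mean(axis=0)
    return stack.std(axis=0) / (baseline + 1e-6)

frames = np.random.poisson(50, size=(500, 64, 64)).astype(float)
frames[:, 30:34, 30:34] += 60 * np.random.rand(500, 1, 1)   # one "active" cell
candidates = fluctuation_map(frames) > 0.2    # rough ROI mask for further screening
```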


Sensors, 2021, Vol. 21 (17), pp. 5831
Author(s): Benedikt Adelmann, Ralf Hellmann

In this contribution, we compare basic neural networks with convolutional neural networks for cut-failure classification during fiber laser cutting. The experiments are performed by cutting thin electrical sheets with a 500 W single-mode fiber laser while taking coaxial camera images for classification. Quality is grouped into the categories good cut, cut with burr formation, and cut interruption. Our results reveal that both cut failures can be detected with one system. Independent of the neural network design and size, a minimum classification accuracy of 92.8% is achieved, which increases to 95.8% with more complex networks. Convolutional neural networks thus show a slight performance advantage over basic neural networks, at the cost of a higher calculation time, which nevertheless remains below 2 ms. In a separate examination, cut interruptions are detected with much higher accuracy than burr formation. Overall, the results demonstrate that burr formation and cut interruptions can be detected simultaneously during laser cutting with high accuracy, as desired for industrial applications.
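
For a sense of the two model families being compared, the following PyTorch sketch contrasts a basic fully connected classifier with a small CNN over the same three quality classes; the input size and layer dimensions are illustrative assumptions, not the authors' networks.

```python
import torch
import torch.nn as nn

# A basic fully connected classifier on flattened camera crops ...
basic_net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 128), nn.ReLU(),
    nn.Linear(128, 3),                        # good cut / burr / interruption
)

# ... versus a small CNN that keeps the spatial structure of the cut kerf.
conv_net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 3),
)

crop = torch.randn(1, 1, 64, 64)              # one grayscale coaxial image
print(basic_net(crop).shape, conv_net(crop).shape)  # both -> (1, 3) class logits
```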


In this paper we identify infant cry signals and the reasons behind the cries of infants aged 0-6 months. Detection of baby cry signals is essential as a pre-processing step for various applications involving cry analysis for caregivers, such as emotion detection, since cry signals carry information about the baby's well-being and can be understood to an extent by experienced parents and experts. We train and validate a neural network architecture for baby cry detection and also test the fastAI framework with the neural network. The trained network yields a model that can predict the reason behind a cry sound. Only cry sounds are recognized, and the user is alerted automatically. We created a web application that detects different cry causes, including hunger, tiredness, discomfort, and belly pain, and responds accordingly.
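
A common preprocessing step for such audio classifiers is a log-mel spectrogram, sketched below with librosa; the file name, sample rate, and four-cause label set are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np
import librosa

CAUSES = ["hunger", "tired", "discomfort", "belly_pain"]   # assumed label set

def cry_features(wav_path, sr=16000, n_mels=64):
    """Log-mel spectrogram of a cry recording, as a CNN-ready 2-D array."""
    y, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)             # (n_mels, frames)

# feats = cry_features("cry.wav")            # hypothetical input file
# A CNN classifier over `feats` would then output one of the CAUSES labels.
```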

