Identification of new M31 star cluster candidates from PAndAS images using convolutional neural networks

Author(s):  
S. Wang ◽  
B. Chen ◽  
J. Ma ◽  
Q. Long ◽  
H. Yuan ◽  
...  
2019 ◽  
Vol 621 ◽  
pp. A103 ◽  
Author(s):  
J. Bialopetravičius ◽  
D. Narbutis ◽  
V. Vansevičius

Context. Convolutional neural networks (CNNs) have been proven to perform fast classification and detection on natural images and have the potential to infer astrophysical parameters from the exponentially increasing amount of sky-survey imaging data. The inference pipeline can be trained either on real human-annotated data or on simulated mock observations. Until now, star cluster analysis has been based on integral or individual resolved stellar photometry, which limits the amount of information that can be extracted from cluster images. Aims. We aim to develop a CNN-based algorithm capable of simultaneously deriving ages, masses, and sizes of star clusters directly from multi-band images. We also aim to demonstrate CNN capabilities on low-mass, semi-resolved star clusters in a low-signal-to-noise-ratio regime. Methods. A CNN was constructed based on the deep residual network (ResNet) architecture and trained on simulated images of star clusters with various ages, masses, and sizes. To provide realistic backgrounds, M 31 star fields taken from the Panchromatic Hubble Andromeda Treasury (PHAT) survey were added to the mock cluster images. Results. The proposed CNN was verified on mock images of artificial clusters and demonstrated high precision and no significant bias for clusters of ages ≲3 Gyr and masses between 250 and 4000 M⊙. The pipeline is end-to-end, from input images to inferred parameters; no hand-coded steps are required, and parameter estimates are produced by the neural network in a single inferential step from the raw images.
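The abstract does not include an implementation, but the approach it describes (a ResNet-style CNN regressing age, mass, and size from multi-band cutouts) can be sketched briefly. The following is a minimal PyTorch sketch, not the authors' code; the number of passbands, the cutout size, and the output scaling are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a ResNet-style CNN that regresses
# (log age, log mass, size) from multi-band star-cluster cutouts.
# The passband count (3) and cutout size (80x80 px) are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ClusterParamNet(nn.Module):
    def __init__(self, n_bands: int = 3, n_params: int = 3):
        super().__init__()
        self.backbone = resnet18(weights=None)
        # Replace the RGB stem with one that accepts n_bands survey images.
        self.backbone.conv1 = nn.Conv2d(n_bands, 64, kernel_size=7,
                                        stride=2, padding=3, bias=False)
        # Replace the 1000-class head with a regression head for the
        # three cluster parameters.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, n_params)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)

if __name__ == "__main__":
    net = ClusterParamNet()
    mock_batch = torch.randn(8, 3, 80, 80)  # 8 mock cluster cutouts
    print(net(mock_batch).shape)            # -> torch.Size([8, 3])
```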


2020 ◽  
Vol 633 ◽  
pp. A148 ◽  
Author(s):  
J. Bialopetravičius ◽  
D. Narbutis

Context. Convolutional neural networks (CNNs) have been established as the go-to method for fast object detection and classification of natural images, which opens the door to astrophysical parameter inference on the exponentially increasing amount of sky-survey data. Until now, star cluster analysis was based on integral or resolved stellar photometry, which limits the amount of information that can be extracted from the individual pixels of cluster images. Aims. We aim to create a CNN capable of inferring star cluster evolutionary, structural, and environmental parameters from multiband images, and to demonstrate its capabilities in discriminating genuine clusters from galactic stellar backgrounds. Methods. A CNN based on the deep residual network (ResNet) architecture was created and trained to infer cluster ages, masses, sizes, and extinctions, taking into account the degeneracies between them. Mock clusters placed on M 83 Hubble Space Telescope images in three photometric passbands (F336W, F438W, and F814W) were used for training. The CNN is also capable of predicting the likelihood that a cluster is present in an image and of quantifying its visibility (S/N). Results. The CNN was tested on mock images of artificial clusters and demonstrated reliable inference results for clusters of ages ≲100 Myr, extinctions A_V between 0 and 3 mag, masses between 3 × 10³ and 3 × 10⁵ M⊙, and sizes between 0.04 and 0.4 arcsec at the distance of the M 83 galaxy. Parameter inference tests on real M 83 clusters taken from previous studies demonstrated consistent results.
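This abstract describes a network with both regression outputs (age, extinction, mass, size) and a cluster-presence output. A minimal multi-head sketch of that idea is shown below; it is an assumption-laden illustration, not the authors' implementation, and the head dimensions and passband count are chosen for the example only.

```python
# Minimal sketch (not the authors' implementation): a shared ResNet trunk with
# two heads, one regressing (log age, A_V, log mass, size) and one giving the
# probability that a cluster is present in the cutout.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ClusterMultiHeadNet(nn.Module):
    def __init__(self, n_bands: int = 3):
        super().__init__()
        trunk = resnet18(weights=None)
        trunk.conv1 = nn.Conv2d(n_bands, 64, 7, 2, 3, bias=False)
        feat_dim = trunk.fc.in_features
        trunk.fc = nn.Identity()                # keep the 512-d feature vector
        self.trunk = trunk
        self.regress = nn.Linear(feat_dim, 4)   # log age, A_V, log mass, size
        self.presence = nn.Linear(feat_dim, 1)  # cluster-vs-background logit

    def forward(self, x: torch.Tensor):
        f = self.trunk(x)
        return self.regress(f), torch.sigmoid(self.presence(f))
```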


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is important in practice because real-world images are often degraded by compression, noise, blurring, and similar effects. Nevertheless, most research on image classification focuses on clean images without any degradation. Some papers have proposed deep convolutional neural networks that combine an image restoration network with a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used together for classification. The proposed classification network therefore has two inputs: the degraded image and the degradation parameter. A degradation-parameter estimation network is also incorporated for cases in which the degradation parameters of the input images are unknown. Experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained on degraded images only.
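To make the two-input idea concrete, here is a minimal PyTorch sketch of a classifier that consumes both a degraded image and a scalar degradation parameter. It is not the paper's architecture; the feature extractor, the way the parameter is injected (concatenated with pooled features), and the example noise levels are assumptions for illustration.

```python
# Minimal sketch (assumptions, not the paper's exact architecture): a classifier
# that takes a degraded image and a scalar degradation parameter (e.g. a noise
# level). The parameter is concatenated with the pooled CNN features before the
# final classification layer.
import torch
import torch.nn as nn

class DegradationAwareClassifier(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (B, 64, 1, 1)
        )
        self.classifier = nn.Linear(64 + 1, n_classes)

    def forward(self, image: torch.Tensor, degradation_param: torch.Tensor):
        f = self.features(image).flatten(1)                   # (B, 64)
        f = torch.cat([f, degradation_param.view(-1, 1)], 1)  # append scalar
        return self.classifier(f)

if __name__ == "__main__":
    net = DegradationAwareClassifier()
    imgs = torch.randn(4, 3, 32, 32)
    sigma = torch.tensor([0.05, 0.10, 0.20, 0.40])  # assumed noise levels
    print(net(imgs, sigma).shape)                   # -> torch.Size([4, 10])
```

When the degradation parameter is unknown at test time, a separate estimation network predicting it from the degraded image could feed this second input, matching the fallback the abstract describes.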


Author(s):  
Edgar Medina ◽  
Roberto Campos ◽  
Jose Gabriel R. C. Gomes ◽  
Mariane R. Petraglia ◽  
Antonio Petraglia
