Deep learning for strong lensing search: tests of the convolutional neural networks and new candidates from KiDS DR3

2020 ◽  
Vol 497 (1) ◽  
pp. 556-571
Author(s):  
Zizhao He ◽  
Xinzhong Er ◽  
Qian Long ◽  
Dezi Liu ◽  
Xiangkun Liu ◽  
...  

ABSTRACT Convolutional neural networks have been successfully applied in searching for strong lensing systems, leading to discoveries of new candidates from large surveys. On the other hand, systematic investigations of their robustness are still lacking. In this paper, we first construct a neural network and apply it to r-band images of luminous red galaxies (LRGs) of the Kilo Degree Survey (KiDS) Data Release 3 to search for strong lensing systems. We build two sets of training samples, one fully from simulations, and the other using LRG stamps from KiDS observations as the foreground lens images. With the former training sample, we find 48 high-probability candidates after human inspection, and among them, 27 are newly identified. Using the latter training set, about 67 per cent of the aforementioned 48 candidates are also found, and 11 more new strong lensing candidates are identified. We then carry out tests on the robustness of the network performance with respect to variations of the PSF. With testing samples constructed using PSFs in the range of 0.4–2 times the median PSF of the training sample, we find that our network performs rather stably, and the degradation is small. We also investigate how the volume of the training set affects our network performance by varying it from 0.1 to 0.8 million. The output results are rather stable, showing that, within the considered range, our network performance is not very sensitive to the training set size.
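The PSF robustness test described above can be illustrated with a minimal sketch. The abstract does not specify the PSF model, so a circular Gaussian parameterized by FWHM is assumed here, and `median_fwhm` is a made-up value standing in for the training sample's median seeing:

```python
import numpy as np

def gaussian_psf(fwhm, size=15):
    """Circular Gaussian PSF kernel, normalized to unit flux (assumed model)."""
    sigma = fwhm / 2.355  # FWHM-to-sigma conversion for a Gaussian profile
    y, x = np.mgrid[:size, :size] - size // 2
    psf = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

median_fwhm = 3.0  # hypothetical median seeing of the training sample, in pixels
# Testing samples span 0.4-2 times the median PSF, as in the robustness test above
test_psfs = [gaussian_psf(f * median_fwhm) for f in (0.4, 0.7, 1.0, 1.5, 2.0)]
```

Convolving the same lens images with each kernel in `test_psfs` would yield the graded testing sets on which the classifier's stability is measured.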

2020 ◽  
Vol 644 ◽  
pp. A168
Author(s):  
G. Guiglion ◽  
G. Matijevič ◽  
A. B. A. Queiroz ◽  
M. Valentini ◽  
M. Steinmetz ◽  
...  

Context. Data-driven methods play an increasingly important role in the field of astrophysics. In the context of large spectroscopic surveys of stars, data-driven methods are key in deducing physical parameters for millions of spectra in a short time. Convolutional neural networks (CNNs) enable us to connect observables (e.g. spectra, stellar magnitudes) to physical properties (atmospheric parameters, chemical abundances, or labels in general). Aims. We test whether it is possible to transfer the labels derived from a high-resolution stellar survey to intermediate-resolution spectra of another survey by using a CNN. Methods. We trained a CNN, adopting stellar atmospheric parameters and chemical abundances from APOGEE DR16 (resolution R = 22 500) data as training set labels. As input, we used parts of the intermediate-resolution RAVE DR6 spectra (R ∼ 7500) overlapping with the APOGEE DR16 data as well as broad-band ALL_WISE and 2MASS photometry, together with Gaia DR2 photometry and parallaxes. Results. We derived precise atmospheric parameters Teff, log(g), and [M/H], along with the chemical abundances of [Fe/H], [α/M], [Mg/Fe], [Si/Fe], [Al/Fe], and [Ni/Fe] for 420 165 RAVE spectra. The precision typically amounts to 60 K in Teff, 0.06 in log(g) and 0.02−0.04 dex for individual chemical abundances. Incorporating photometry and astrometry as additional constraints substantially improves the results in terms of the accuracy and precision of the derived labels, as long as we operate in those parts of the parameter space that are well-covered by the training sample. Scientific validation confirms the robustness of the CNN results. We provide a catalogue of CNN-trained atmospheric parameters and abundances along with their uncertainties for 420 165 stars in the RAVE survey. Conclusions. CNN-based methods provide a powerful way to combine spectroscopic, photometric, and astrometric data without the need to apply any priors in the form of stellar evolutionary models. 
The developed procedure can extend the scientific output of RAVE spectra beyond DR6 to ongoing and planned surveys such as Gaia RVS, 4MOST, and WEAVE. We call on the community to place particular collective emphasis on efforts to create unbiased training samples for such future spectroscopic surveys.
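The input side of the method above combines spectra with photometry and astrometry in a single feature vector. A minimal sketch, in which the spectral length, magnitudes, and parallax are all made-up illustrative values:

```python
import numpy as np

# Illustrative shapes only: parts of a RAVE spectrum plus broad-band photometry
# and Gaia astrometry, concatenated into one CNN input vector.
rng = np.random.default_rng(0)
spectrum = rng.normal(1.0, 0.05, 800)             # normalized flux pixels (assumed length)
photometry = np.array([9.1, 8.7, 8.5, 8.4, 8.3])  # e.g. 2MASS J/H/Ks + WISE W1/W2 (made up)
gaia = np.array([10.2, 2.5])                      # Gaia G magnitude, parallax in mas (made up)

x = np.concatenate([spectrum, photometry, gaia])  # one training input vector
```

The corresponding training labels would be the APOGEE DR16 atmospheric parameters and abundances for the same star.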


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Andry Chowanda

Abstract Social interactions are important for us, humans, as social creatures. Emotions play an important part in social interactions. They usually express meanings along with the spoken utterances to the interlocutors. Automatic facial expression recognition is one technique to automatically capture, recognise, and understand emotions from the interlocutor. Many techniques have been proposed to increase the accuracy of emotion recognition from facial cues. Architectures such as convolutional neural networks demonstrate promising results for emotion recognition. However, most current convolutional neural network models require enormous computational power to train and to perform emotion recognition. This research aims to build compact networks with depthwise separable layers while also maintaining performance. Three datasets and three similar architectures were used for comparison with the proposed architecture. The results show that the proposed architecture performed the best among the architectures compared: it achieved up to 13% better accuracy while being 6–71% smaller and more compact than the others. The best testing accuracy achieved by the architecture was 99.4%.
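The compactness gain from depthwise separable layers comes from factoring one dense convolution into a per-channel spatial step plus a 1x1 channel-mixing step. A short parameter-count sketch (layer sizes are illustrative, not taken from the paper):

```python
def conv_params(k, c_in, c_out):
    # standard convolution: one k x k x c_in filter per output channel
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    # depthwise step (one k x k filter per input channel) + pointwise 1x1 mixing
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernels mapping 64 -> 128 channels (illustrative sizes)
standard = conv_params(3, 64, 128)       # 73728 parameters
separable = separable_params(3, 64, 128)  # 576 + 8192 = 8768 parameters
```

For this layer the separable form uses roughly 8x fewer parameters, which is the kind of saving that lets a network shrink substantially while keeping accuracy.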


Author(s):  
G. Touya ◽  
F. Brisebard ◽  
F. Quinton ◽  
A. Courtial

Abstract. Visually impaired people cannot use classical maps but can learn to use tactile relief maps. These tactile maps are crucial at school for learning geography and history just as other students do. They are produced manually by professional transcriptors in a very long and costly process. A platform able to generate tactile maps from maps scanned from geography textbooks could be extremely useful to these transcriptors by speeding up their production. As a first step towards such a platform, this paper proposes a method to infer the scale and the content of a map from its image. We used convolutional neural networks trained with a few hundred maps from French geography textbooks, and the results are promising for inferring labels about the content of the map (e.g. "there are roads, cities and administrative boundaries") and for inferring the extent of the map (e.g. a map of France or of Europe).
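Inferring several content labels for one map image is a multi-label classification problem, typically handled with independent sigmoid outputs and per-label thresholding. A minimal sketch, with hypothetical label names and logits (the paper does not publish its output head):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

LABELS = ["roads", "cities", "administrative boundaries", "rivers"]  # illustrative

def infer_labels(logits, threshold=0.5):
    """Turn per-label network logits into a list of predicted map contents."""
    probs = sigmoid(np.asarray(logits, dtype=float))
    return [lab for lab, p in zip(LABELS, probs) if p >= threshold]

# e.g. a map with clear roads, cities, and rivers but no visible boundaries
predicted = infer_labels([2.1, 0.3, -1.5, 4.0])  # ["roads", "cities", "rivers"]
```

The map-extent prediction (France vs. Europe, etc.) would instead be a single softmax over mutually exclusive classes.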


2021 ◽  
Author(s):  
Kosuke Honda ◽  
Hamido Fujita

In recent years, template-based methods such as Siamese network trackers and correlation filter (CF) based trackers have achieved state-of-the-art performance on several benchmarks. Recent Siamese network trackers use deep features extracted from convolutional neural networks to locate the target. However, the tracking performance of these trackers decreases when there are distractors similar to the object and when the target object is deformed. On the other hand, CF-based trackers use handcrafted features (e.g., HOG features) to spatially locate the target. These two approaches have complementary characteristics due to differences in learning methods, features used, and the size of the search regions. We also found that these trackers are complementary in terms of benchmark performance. Therefore, we propose the Complementary Tracking framework using Average peak-to-correlation energy (CTA). CTA is a generic object tracking framework that connects CF trackers and Siamese trackers in parallel and exploits their complementary characteristics. In CTA, when a tracking failure of the Siamese tracker is detected using the average peak-to-correlation energy (APCE), an evaluation index of the response map matrix, the CF trackers correct the output. In experiments on OTB100, CTA significantly improves performance over the original trackers for several combinations of Siamese trackers and CF trackers.
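APCE, as commonly defined in the tracking literature, measures how sharply a response map peaks relative to its overall fluctuation; a low APCE suggests the tracker has lost the target. A sketch of that standard formula (the paper's exact failure threshold is not given here):

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of a response map:
    (F_max - F_min)^2 / mean((F - F_min)^2). High values = one sharp peak."""
    f_max = response.max()
    f_min = response.min()
    return (f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)

peaked = np.zeros((5, 5))
peaked[2, 2] = 1.0          # single sharp peak -> high APCE (confident track)

ambiguous = peaked.copy()
ambiguous[0, 0] = 0.9       # strong second peak (a distractor) -> lower APCE
```

In a CTA-style framework, an APCE drop below some threshold would trigger the CF tracker's output in place of the Siamese tracker's.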


2019 ◽  
Vol 491 (2) ◽  
pp. 2280-2300 ◽  
Author(s):  
Kaushal Sharma ◽  
Ajit Kembhavi ◽  
Aniruddha Kembhavi ◽  
T Sivarani ◽  
Sheelu Abraham ◽  
...  

ABSTRACT Due to the ever-expanding volume of observed spectroscopic data from surveys such as SDSS and LAMOST, it has become important to apply artificial intelligence (AI) techniques for analysing stellar spectra to solve spectral classification and regression problems like the determination of stellar atmospheric parameters Teff, $\rm {\log g}$, and [Fe/H]. We propose an automated approach for the classification of stellar spectra in the optical region using convolutional neural networks (CNNs). Traditional machine learning (ML) methods with ‘shallow’ architecture (usually up to two hidden layers) have been trained for these purposes in the past. However, deep learning methods with a larger number of hidden layers allow the use of finer details in the spectrum, which results in improved accuracy and better generalization. Studying finer spectral signatures also enables us to determine accurate differential stellar parameters and find rare objects. We examine various machine and deep learning algorithms like artificial neural networks, Random Forest, and CNN to classify stellar spectra using the Jacoby Atlas, ELODIE, and MILES spectral libraries as training samples. We test the performance of the trained networks on the Indo-U.S. Library of Coudé Feed Stellar Spectra (CFLIB). We show that using CNNs, we are able to reduce the error to 1.23 spectral subclasses, compared to the two subclasses achieved in past studies with ML approaches. We further apply the trained model to classify stellar spectra retrieved from the SDSS database with SNR > 20.
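The "finer details in the spectrum" that a CNN exploits are picked up by learned 1-D convolution kernels sliding along the wavelength axis. A minimal sketch of that core operation (the toy spectrum and hand-set kernel are illustrative, not learned weights):

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid-mode 1-D convolution (cross-correlation form, as in CNN layers)."""
    k = len(kernel)
    n = (len(signal) - k) // stride + 1
    return np.array([np.dot(signal[i * stride:i * stride + k], kernel)
                     for i in range(n)])

# A difference kernel responds strongly to sharp features such as the
# edges of an absorption line in a (toy) normalized spectrum.
flux = np.array([1.0, 1.0, 0.2, 1.0, 1.0])
edges = conv1d(flux, np.array([-1.0, 1.0]))  # [0.0, -0.8, 0.8, 0.0]
```

A trained network stacks many such kernels, with deeper layers combining line-level responses into class-discriminating features.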


2019 ◽  
Vol 19 (04) ◽  
pp. 1950019 ◽  
Author(s):  
Maissa Hamouda ◽  
Karim Saheb Ettabaa ◽  
Med Salim Bouhlel

Convolutional neural networks (CNNs) can learn deep feature representations for hyperspectral imagery (HSI) interpretation and attain excellent classification accuracy when many training samples are available. Due to this superiority in feature representation, several works have focused on CNNs; among them, a reliable CNN-based classification approach that used filters generated from a clustering framework, such as the k-means algorithm, yielded good results. However, the number of kernels had to be assigned manually. To solve this problem, an HSI classification framework based on CNNs, in which the convolutional filters are adaptively learned from the data by clustering without knowing the number of clusters, was recently proposed. This framework, based on the two algorithms CNN and k-means, showed highly accurate results. In the same context, we propose an architecture based on the deep convolutional neural network principle, in which kernels are adaptively learned using a CkMeans network to generate filters without knowing the number of clusters, for hyperspectral classification. With adaptive kernels, the proposed framework, automatic kernel selection by the CkMeans algorithm (AKSCCk), achieves better classification accuracy than the previous frameworks. The experimental results show the effectiveness and feasibility of the AKSCCk approach.
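The underlying idea of cluster-derived convolution filters can be sketched with plain k-means: cluster a set of image patches and use the centroids as filter banks. This is only the fixed-k baseline the abstract mentions; the proposed CkMeans additionally infers the number of clusters, which is not reproduced here:

```python
import numpy as np

def kmeans_filters(patches, k, iters=20):
    """Cluster flattened image patches with plain k-means; the centroids
    serve as convolution filters. Deterministic init from the first k
    patches keeps this sketch simple (not the paper's CkMeans)."""
    centers = patches[:k].astype(float)
    for _ in range(iters):
        # squared distance of every patch to every center, then nearest-center labels
        d = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = patches[labels == j].mean(axis=0)
    return centers
```

Each returned centroid would be reshaped back into a 2-D (or spectral-spatial) kernel and used in the CNN's first convolutional layer.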


2019 ◽  
Author(s):  
Astrid A. Zeman ◽  
J. Brendan Ritchie ◽  
Stefania Bracci ◽  
Hans Op de Beeck

Abstract Deep Convolutional Neural Networks (CNNs) are gaining traction as the benchmark model of visual object recognition, with performance now surpassing humans. While CNNs can accurately assign one image to potentially thousands of categories, network performance could be the result of layers that are tuned to represent the visual shape of objects, rather than object category, since both are often confounded in natural images. Using two stimulus sets that explicitly dissociate shape from category, we correlate these two types of information with each layer of multiple CNNs. We also compare CNN output with fMRI activation along the human visual ventral stream by correlating artificial with biological representations. We find that CNNs encode category information independently from shape, peaking at the final fully connected layer in all tested CNN architectures. Comparing CNNs with fMRI brain data, we find that early visual cortex (V1) and early layers of CNNs encode shape information. Anterior ventral temporal cortex encodes category information, which correlates best with the final layer of CNNs. The interaction between shape and category that is found along the human visual ventral pathway is echoed in multiple deep networks. Our results suggest CNNs represent category information independently from shape, much like the human visual system.
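Correlating artificial with biological representations is usually done via representational similarity analysis (RSA): build a dissimilarity matrix over stimulus conditions for each system, then correlate the matrices. A minimal sketch of that standard recipe (the paper's exact distance measure is assumed to be correlation distance here):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix over conditions:
    1 - Pearson r between each pair of activation patterns (rows)."""
    return 1.0 - np.corrcoef(patterns)

def rsa(rdm_a, rdm_b):
    """Second-order similarity: correlate the upper triangles of two RDMs
    (e.g. one from a CNN layer, one from an fMRI region)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]
```

Running `rsa` between each CNN layer's RDM and a shape-based or category-based model RDM is what yields the layer-by-layer profiles the abstract describes.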


Jurnal INFORM ◽  
2020 ◽  
Vol 5 (2) ◽  
pp. 99
Author(s):  
Andi Sanjaya ◽  
Endang Setyati ◽  
Herman Budianto

This research was conducted to observe the use of the LeNet convolutional neural network (CNN) architecture, which is suitable for Pandava mask objects. The dataset consisted of 200 images for each class, or 1000 images in total. The LeNet CNN architecture was tested with input layers of 32x32, 64x64, 128x128, 224x224, and 256x256. The trial with the 32x32 input layer succeeded and was faster than the other layers, and the accuracy and validation values showed neither underfitting nor overfitting. However, when the activation of the second dense layer was changed from ReLU to sigmoid, the sigmoid result was better in terms of time, and the possibility of overfitting was lower. The research achieved a mean accuracy of 0.96.
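Why the 32x32 input trains fastest follows from how feature-map sizes shrink through the network: smaller inputs mean fewer activations at every layer. A sketch of the spatial-size arithmetic, assuming classic LeNet-style 5x5 convolutions and 2x2 pooling (the paper's exact kernel sizes are not stated):

```python
def conv_out(size, kernel, stride=1, pad=0):
    # spatial output size of a conv/pool layer: floor((n + 2p - k) / s) + 1
    return (size + 2 * pad - kernel) // stride + 1

# Classic LeNet-style chain on a 32x32 input
s = conv_out(32, 5)            # conv 5x5 -> 28
s = conv_out(s, 2, stride=2)   # pool 2x2 -> 14
s = conv_out(s, 5)             # conv 5x5 -> 10
s = conv_out(s, 2, stride=2)   # pool 2x2 -> 5
```

The same chain on a 256x256 input leaves 61x61 feature maps at the equivalent depth, roughly 150 times more activations per channel, which is where the extra training time goes.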

