Assessing the performance of LTE and NLTE synthetic stellar spectra in a machine learning framework

2020
Vol 498 (3)
pp. 3817-3834
Author(s):
Spencer Bialek
Sébastien Fabbro
Kim A Venn
Nripesh Kumar
Teaghan O’Briain
...

ABSTRACT In the current era of stellar spectroscopic surveys, synthetic spectral libraries are the basis for the derivation of stellar parameters and chemical abundances. In this paper, we compare the stellar parameters determined using five popular synthetic spectral grids (INTRIGOSS, FERRE, AMBRE, PHOENIX, and MPIA/1DNLTE) as training sets for our convolutional neural network (CNN, StarNet). The stellar parameters are determined for six physical properties (effective temperature, surface gravity, metallicity, [α/Fe], radial velocity, and rotational velocity) given the spectral resolution, signal-to-noise ratio, and wavelength range of optical FLAMES-UVES spectra from the Gaia-ESO Survey. Both modelling and epistemic uncertainties are incorporated by training an ensemble of networks. StarNet training was also adapted to mitigate differences between the synthetic grids and observed spectra by augmenting with realistic observational signatures (i.e. resolution matching, wavelength sampling, Gaussian noise, zeroing flux values, rotational and radial velocities, continuum removal, and masking telluric regions). Using the FLAMES-UVES spectra for FGK-type dwarfs and giants as a test set, we quantify the accuracy and precision of the stellar label predictions from StarNet. We find excellent results over a wide range of parameters when StarNet is trained on the MPIA/1DNLTE synthetic grid, and acceptable results over smaller parameter ranges when trained on the 1DLTE grids. These tests also show that our CNN pipeline is highly adaptable to multiple simulation grids.
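To make the augmentation step above concrete, here is a minimal sketch of how a synthetic spectrum might be degraded with a few of the listed observational signatures (Gaussian smoothing for resolution matching, Gaussian noise, and randomly zeroed flux values). The function and parameter choices are illustrative assumptions, not the StarNet implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def augment_synthetic_spectrum(flux, kernel_sigma_pix=2.0, snr=50.0,
                               zero_fraction=0.02, rng=None):
    """Toy augmentation of a continuum-normalised synthetic spectrum."""
    rng = rng or np.random.default_rng()
    # Degrade the resolution with a Gaussian kernel (sigma in pixels).
    degraded = gaussian_filter1d(flux, sigma=kernel_sigma_pix)
    # Add Gaussian noise corresponding to a target signal-to-noise ratio.
    noisy = degraded + rng.normal(0.0, 1.0 / snr, size=degraded.shape)
    # Zero a random fraction of flux values to mimic bad or masked pixels.
    noisy[rng.random(noisy.shape) < zero_fraction] = 0.0
    return noisy

# Example: a flat continuum with one absorption line, sampled on 4000 pixels.
wave = np.linspace(4800.0, 6800.0, 4000)
flux = 1.0 - 0.6 * np.exp(-0.5 * ((wave - 5500.0) / 0.5) ** 2)
augmented = augment_synthetic_spectrum(flux)
```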

2021
Vol 645
pp. A35
Author(s):
C. Boeche
A. Vallenari
S. Lucatello

Context. Ongoing and future massive spectroscopic surveys will collect very large numbers (10⁶–10⁷) of stellar spectra that need to be analyzed. Highly automated software is needed to derive stellar parameters and chemical abundances from these spectra. Aims. We present the new version of SP_Ace (Stellar Parameters And Chemical abundances Estimator), a code that derives stellar parameters and elemental abundances from stellar spectra. The new version covers a larger spectral resolution interval (R = 2000−40 000) and its new library covers bluer wavelengths (4800–6860 Å). Methods. SP_Ace relies on the General-Curve-Of-Growth (GCOG) library, based on 6700 absorption lines whose oscillator strengths were calibrated astrophysically. We developed the calibration method and applied it to all the lines. From the new line list we built the GCOG library, adopting an improved method to correct for the opacity of the neighboring lines. We also implemented a new line profile in SP_Ace that better reproduces the profiles of synthetic spectra. This new version of SP_Ace and the GCOG library have been tested on synthetic and real spectra to establish the accuracy and precision of the derived stellar parameters. Results. SP_Ace derives the stellar parameters Teff, log g, [M/H], and chemical abundances with satisfactory results; the accuracy depends on the features that determine the quality of the spectra, such as spectral resolution, signal-to-noise ratio, and wavelength coverage. Systematic errors were identified and quantified where possible. The source code is publicly available.
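To illustrate the curve-of-growth idea behind the GCOG library (a schematic sketch, not SP_Ace's actual implementation), the snippet below inverts a tabulated relation between abundance and equivalent width for a single line; the numbers are invented for illustration.

```python
import numpy as np

# Schematic curve of growth for one absorption line: equivalent width (mÅ)
# tabulated against abundance. The values below are made up for illustration.
abundance_grid = np.array([-2.0, -1.5, -1.0, -0.5, 0.0, 0.5])   # [X/H]
ew_grid = np.array([5.0, 14.0, 35.0, 70.0, 110.0, 150.0])       # mÅ

def abundance_from_ew(measured_ew):
    """Invert the monotonic curve of growth by linear interpolation."""
    return np.interp(measured_ew, ew_grid, abundance_grid)

print(abundance_from_ew(50.0))   # abundance implied by a 50 mÅ equivalent width
```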


2018
Vol 612
pp. A44
Author(s):
K. G. Strassmeier
I. Ilyin
M. Steffen

Context. Full-disk solar flux spectra can be directly compared to stellar spectra and thereby serve as our most important reference source for, for example, stellar chemical abundances, magnetic activity phenomena, radial-velocity signatures, or global pulsations. Aims. As part of the first Potsdam Echelle Polarimetric and Spectroscopic Instrument (PEPSI) key-science project, we aim to provide well-exposed and average-combined (viz. deep) high-resolution spectra of representative stellar targets. Such deep spectra contain an overwhelming amount of information, typically much more than can be analyzed and discussed within a single publication. Therefore, these spectra will be made available in the form of (electronic) atlases. The first star in this series of papers is our Sun. It also acts as a system-performance cornerstone. Methods. The Sun was monitored with PEPSI at the Large Binocular Telescope (LBT); instead of the LBT itself, we used a small robotic solar disk integration (SDI) telescope. The deep spectra in this paper are the result of combining up to ≈100 consecutive exposures per wavelength setting and are compared with other solar flux atlases. Results. Our software for the optimal data extraction and reduction of PEPSI spectra is described and verified with the solar data. Three deep solar flux spectra with a spectral resolution of up to 270 000, a continuous wavelength coverage from 383 nm to 914 nm, and a photon signal-to-noise ratio (S/N) between 2000:1 and 8000:1, depending on wavelength, are presented. Additionally, a time series of 996 high-cadence spectra in one cross-disperser is used to search for intrinsic solar modulations. The wavelength calibration based on Th-Ar exposures and simultaneous Fabry–Pérot combs enables an absolute wavelength solution within 10 m s−1 (rms) with respect to the HARPS laser-comb solar atlas and a relative rms of 1.2 m s−1 for one day. For science demonstration, we redetermined the disk-averaged solar Li abundance to 1.09 ± 0.04 dex on the basis of 3D NLTE model atmospheres. We detected disk-averaged p-mode RV oscillations with a full amplitude of 47 cm s−1 at 5.5 min. Conclusions. Comparisons with two solar FTS atlases, as well as with the HARPS solar atlas, validate the PEPSI data product. PEPSI/SDI solar-flux spectra are now being taken at a rate of one deep spectrum per day and are planned to continue for a full magnetic cycle of the Sun.
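As a rough illustration of how "deep" spectra gain in S/N by combining many exposures, here is a generic inverse-variance weighting sketch (not the PEPSI reduction software):

```python
import numpy as np

def coadd_exposures(fluxes, variances):
    """Inverse-variance weighted combination of aligned exposures.

    fluxes, variances: arrays of shape (n_exposures, n_pixels) on a common
    wavelength grid. Returns the combined flux and its variance per pixel.
    """
    weights = 1.0 / variances
    combined_flux = np.sum(weights * fluxes, axis=0) / np.sum(weights, axis=0)
    combined_var = 1.0 / np.sum(weights, axis=0)
    return combined_flux, combined_var

# S/N grows roughly as sqrt(N) for N similar exposures: ~100 exposures at
# S/N ~ 300 each would yield a deep spectrum near S/N ~ 3000.
```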


Author(s):  
Frederik Boe Hüttel
Line Katrine Harder Clemmensen

Consistent and accurate estimation of stellar parameters is of great importance for information retrieval in astrophysical research. The parameters span a wide range, from effective temperature to rotational velocity. We propose to estimate the stellar parameters directly from the spectral signals coming from the HARPS-N spectrograph pipeline, before any spectrum-processing steps are applied to extract the 1D spectrum. We propose an attention-based model that estimates both the mean and the uncertainty of the stellar parameters by predicting the parameters of a Gaussian distribution. The estimated distributions provide a basis for generating data-driven Gaussian confidence intervals for the estimated stellar parameters. We show that residual networks and attention-based models can estimate the stellar parameters with high accuracy at low signal-to-noise ratio (SNR) compared to previous methods. Using an observation of the Sun from the HARPS-N spectrograph, we show that the models can estimate stellar parameters from real observational data.
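A minimal sketch of the Gaussian output described above, assuming a PyTorch backbone that produces a feature vector per spectrum; the layer sizes and label set are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    """Toy regression head that predicts a mean and a variance per stellar label."""
    def __init__(self, in_features, n_labels):
        super().__init__()
        self.mean = nn.Linear(in_features, n_labels)
        self.log_var = nn.Linear(in_features, n_labels)

    def forward(self, x):
        mu = self.mean(x)
        var = torch.exp(self.log_var(x))  # enforce positive variance
        return mu, var

features = torch.randn(8, 128)   # placeholder backbone features for 8 spectra
targets = torch.randn(8, 4)      # e.g. scaled Teff, log g, [Fe/H], v sin i
head = GaussianHead(128, 4)
mu, var = head(features)
loss = nn.GaussianNLLLoss()(mu, targets, var)
# A data-driven 95% confidence interval per label: mu ± 1.96 * sqrt(var)
```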


2017
Vol 13 (S334)
pp. 21-24
Author(s):
Haining Li
Wako Aoki
Gang Zhao
Takuma Suda
Satoshi Honda
...  

Abstract. Very metal-poor (VMP) stars preserve chemical signatures of early generations of stars and are crucial for understanding early nucleosynthesis and the first stars. The millions of stellar spectra obtained by LAMOST provide an unprecedented chance to enlarge the currently limited VMP star sample. Since 2014, a joint project searching for VMP stars has been conducted based on the LAMOST survey and Subaru follow-up observations. So far, the project has obtained chemical abundances for about 250 VMP stars and a number of chemically interesting objects, e.g., three ultra metal-poor stars with [Fe/H] ~ −4.0 and a dozen Li-rich VMP stars distributed over a wide range of evolutionary stages. Statistics of this large homogeneous sample of VMP stars will be of great interest and importance for probing the chemical enrichment of the early Galaxy and low-mass star evolution.


2019
Vol 57 (1)
pp. 571-616
Author(s):
Paula Jofré
Ulrike Heiter
Caroline Soubiran

There has been an incredibly large investment in obtaining high-resolution stellar spectra for determining chemical abundances of stars. This information is crucial for answering fundamental questions in astronomy by constraining the formation and evolution scenarios of the Milky Way as well as the stars and planets residing in it. We have just entered a new era, in which chemical abundances of FGK-type stars are being produced at industrial scales, and in which the observations, reduction, and analysis of the data are automatically performed by machines. Here, we review the latest human efforts to assess the accuracy and precision of such industrial abundances by providing insights into the steps and uncertainties associated with the process of determining stellar abundances. We also provide a description of current and forthcoming spectroscopic surveys, focusing on their reported abundances and uncertainties. This allows us to identify which elements and spectral lines are best, and why. Finally, we make a brief selection of the main scientific questions the community is aiming to answer with abundances. ▪ Uncertainties in abundances need to be disentangled into random and systematic components. ▪ Precision can be increased by applying differential or data-driven methods based on accurate data. ▪ High-resolution, high signal-to-noise spectra provide fundamental data that can be used to calibrate lower-resolution, lower signal-to-noise spectra of millions of stars. ▪ Different survey calibration strategies must agree on a common set of reference stars to create data products that are consistent. ▪ Data products provided by individual groups must be published using standard formats to ensure straightforward applicability.
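As a toy example of the differential-method point above (invented numbers, not from the review), line-by-line abundance differences relative to a reference star cancel shared systematics such as oscillator-strength errors:

```python
import numpy as np

# Line-by-line abundances for the same Fe lines in a target star and a
# reference star (e.g. the Sun). Values are invented for illustration.
star_line_abundances = np.array([7.52, 7.48, 7.55, 7.50])       # A(Fe) per line
reference_line_abundances = np.array([7.46, 7.43, 7.51, 7.44])  # same lines

differential = star_line_abundances - reference_line_abundances
abundance_rel = differential.mean()                        # [Fe/H] vs. reference
precision = differential.std(ddof=1) / np.sqrt(differential.size)
```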


2018
Vol 612
pp. A98
Author(s):
Rafael Garcia-Dias
Carlos Allende Prieto
Jorge Sánchez Almeida
Ignacio Ordovás-Pascual

Context. The volume of data generated by astronomical surveys is growing rapidly. Traditional analysis techniques in spectroscopy either demand intensive human interaction or are computationally expensive. In this scenario, machine learning, and unsupervised clustering algorithms in particular, offer interesting alternatives. The Apache Point Observatory Galactic Evolution Experiment (APOGEE) offers a vast data set of near-infrared stellar spectra, which is perfect for testing such alternatives. Aims. Our research applies an unsupervised classification scheme based on K-means to the massive APOGEE data set. We explore whether the data are amenable to classification into discrete classes. Methods. We apply the K-means algorithm to 153 847 high-resolution spectra (R ≈ 22 500). We discuss the main virtues and weaknesses of the algorithm, as well as our choice of parameters. Results. We show that a classification based on normalised spectra captures the variations in stellar atmospheric parameters, chemical abundances, and rotational velocity, among other factors. The algorithm is able to separate the bulge and halo populations, and to distinguish dwarfs, subgiants, red clump (RC), and red giant branch (RGB) stars. However, a discrete classification in flux space does not result in a neat organisation in parameter space. Furthermore, the lack of obvious groups in flux space causes the results to be fairly sensitive to the initialisation, and disrupts the efficiency of commonly used methods for selecting the optimal number of clusters. Our classification is publicly available, including extensive online material associated with the APOGEE Data Release 12 (DR12). Conclusions. Our description of the APOGEE database can greatly help with the identification of specific types of targets for various applications. We find a lack of obvious groups in flux space, and identify limitations of the K-means algorithm in dealing with this kind of data.
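A minimal sketch of the clustering step with scikit-learn, using random placeholder data in place of the 153 847 APOGEE spectra; the normalisation choice and number of clusters are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

# Placeholder for continuum-normalised spectra of shape (n_stars, n_pixels);
# random data is used here purely to demonstrate the clustering call.
rng = np.random.default_rng(0)
spectra = rng.normal(1.0, 0.01, size=(1000, 500))

X = normalize(spectra)                     # unit-norm rows, one simple choice
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_                    # cluster assignment per star
```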


2021
Vol 11 (1)
Author(s):
Ibtissame Khaoua
Guillaume Graciani
Andrey Kim
François Amblard

Abstract. For a wide range of purposes, one faces the challenge of detecting light from extremely faint and spatially extended sources. In such cases, detector noises dominate over the photon noise of the source, and quantum detectors in photon-counting mode are generally the best option. Here, we combine a statistical model with an in-depth analysis of detector noises and calibration experiments, and we show that visible light can be detected with an electron-multiplying charge-coupled device (EM-CCD) at a signal-to-noise ratio (SNR) of 3 for fluxes below 30 photon s−1 cm−2. For green photons, this corresponds to 12 aW cm−2 ≈ 9 × 10−11 lux, i.e. 15 orders of magnitude less than typical daylight. The strong nonlinearity of the SNR with the sampling time leads to a dynamic range of detection of 4 orders of magnitude. To detect possibly varying light fluxes, we operate in conditions of maximal detectivity D rather than maximal SNR. Given the quantum efficiency QE(λ) of the detector, we find D = 0.015 photon−1 s1/2 cm, and a non-negligible sensitivity to blackbody radiation for T > 50 °C. This work should help design highly sensitive luminescence detection methods and develop experiments to explore dynamic phenomena involving ultra-weak luminescence in biology, chemistry, and materials science.
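A back-of-the-envelope sketch of the SNR scaling discussed above, under a simplified noise model (signal shot noise plus dark counts and clock-induced charge); all numbers and the noise model itself are assumptions, not the paper's calibrated detector model.

```python
import numpy as np

def photon_counting_snr(flux, qe, area_cm2, t_s, dark_rate, cic_rate, n_pix):
    """Rough SNR estimate for a photon-counting EM-CCD measurement.

    flux: incident photon flux (photon s^-1 cm^-2); qe: quantum efficiency;
    dark_rate, cic_rate: spurious counts per pixel per second (simplified).
    """
    signal = flux * qe * area_cm2 * t_s
    noise_counts = (dark_rate + cic_rate) * n_pix * t_s
    return signal / np.sqrt(signal + noise_counts)

# Example: a faint extended source integrated for 10 minutes over the full chip.
print(photon_counting_snr(flux=30, qe=0.9, area_cm2=1.0, t_s=600,
                          dark_rate=1e-3, cic_rate=5e-3, n_pix=512 * 512))
```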


2021
Vol 17 (1-2)
pp. 3-14
Author(s):
Stathis C. Stiros
F. Moschas
P. Triantafyllidis

GNSS technology (known especially through GPS satellites) for the measurement of deflections has proved very efficient and useful in bridge structural monitoring, even for short, stiff bridges, especially after the advent of 100 Hz GNSS sensors. Mode computation from dynamic deflections has been proposed as one of the applications of this technology. Apart from formal modal analyses with GNSS input, and from spectral analysis of controlled free attenuating oscillations, it has been argued that simple spectra of deflections can define more than one modal frequency. To test this scenario, we analyzed 21 controlled excitation events from a bridge monitoring survey, focusing on lateral and vertical deflections recorded both by GNSS and by an accelerometer. These events contain a transient and a following oscillation, and they are preceded and followed by intervals of quiescence and ambient vibrations. Spectra for each event, for the lateral and the vertical axis of the bridge, and for each instrument (GNSS, accelerometer) were computed, normalized to their maximum value, and plotted one over the other, in order to produce a single composite spectrum for each of the four sets. In these four sets, the true value of the modal frequency, derived from free attenuating oscillations, was also marked. It was found that for high-SNR (signal-to-noise ratio) deflections, spectral peaks in both the acceleration and the displacement spectra differ by up to 0.3 Hz from the true value. For low SNR, deflection spectra do not match the true frequency, but acceleration spectra provide a low-precision estimate of it. This is because various excitation effects (traffic, wind, etc.) contribute numerous peaks over a wide range of frequencies. Reliable estimates of modal frequencies can hence be derived from deflection spectra only if excitation frequencies (mostly traffic and wind) can be filtered out along with most measurement noise, on the basis of additional data.
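A minimal sketch of the per-event spectrum computation (Welch periodogram of a deflection record, normalised to its peak), using a synthetic 100 Hz signal; the event segmentation and the composite overlay are omitted, and all values are illustrative.

```python
import numpy as np
from scipy.signal import welch

def normalized_spectrum(deflection, fs=100.0):
    """Power spectrum of a deflection record, normalised to its maximum value."""
    freqs, power = welch(deflection, fs=fs, nperseg=1024)
    return freqs, power / power.max()

# Synthetic record: a 1.8 Hz oscillation plus measurement noise (illustrative).
t = np.arange(0, 60, 1 / 100.0)
deflection = 2.0 * np.sin(2 * np.pi * 1.8 * t) + np.random.normal(0, 1.0, t.size)

freqs, norm_power = normalized_spectrum(deflection)
peak_frequency = freqs[np.argmax(norm_power)]  # compare against the "true" modal value
```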


2021
Vol 11 (1)
Author(s):
Richard J. Smith
Fernando Pérez-Cota
Leonel Marques
Matt Clark

Abstract. Brillouin light scattering (BLS) is an emerging method for cell imaging and characterisation. It offers elasticity-related contrast, optical resolution, and label-free operation. Phonon microscopy detects BLS from laser-generated coherent phonon fields and offers an attractive route for imaging since, at GHz frequencies, the phonon wavelength is sub-optical. Using phonon fields to image single cells is challenging, as the signal-to-noise ratio and acquisition time are often poor. However, recent advances in the instrumentation have enabled imaging of fixed and living cells. This work presents the first experimental characterisation of phonon-based axial resolution provided by the response to a sharp edge. The obtained axial resolution is up to 10 times higher than that of the optical system used to take the measurements. Validation of the results is obtained with various polymer objects, and the results are in good agreement with those obtained using atomic force microscopy. Edge localisation, and hence profilometry, of a phantom boundary is measured with an accuracy and precision of approximately 60 nm and 100 nm, respectively. Finally, 3D imaging of fixed cells in culture medium is demonstrated.
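One common way to characterise resolution from the response to a sharp edge is to fit an error-function profile and quote its width; the sketch below does this on synthetic data and is only an illustration of the general approach, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_response(z, z0, sigma, low, high):
    """Error-function model of the response to a sharp edge at position z0."""
    return low + 0.5 * (high - low) * (1 + erf((z - z0) / (sigma * np.sqrt(2))))

# Synthetic edge profile standing in for a measured phonon signal (illustrative).
z = np.linspace(-1.0, 1.0, 200)                               # micrometres
profile = edge_response(z, 0.0, 0.05, 0.1, 1.0) + np.random.normal(0, 0.02, z.size)

popt, _ = curve_fit(edge_response, z, profile, p0=[0.0, 0.1, 0.0, 1.0])
fwhm_resolution = 2.355 * popt[1]   # FWHM as one possible resolution metric
edge_position = popt[0]             # edge localisation estimate
```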


2011
Vol 2011
pp. 1-12
Author(s):
Karim El-Laithy
Martin Bogdan

An integration of Hebbian-based and reinforcement learning (RL) rules is presented for dynamic synapses. The proposed framework permits the Hebbian rule to update the hidden synaptic model parameters regulating the synaptic response, rather than the synaptic weights. This is performed using both the value and the sign of the temporal difference in the reward signal after each trial. Applying this framework, a spiking network with spike-timing-dependent synapses is tested on learning the exclusive-OR computation on a temporally coded basis. Reward values are calculated from the distance between the output spike train of the network and a reference target spike train. Results show that the network is able to capture the required dynamics and that the proposed framework indeed realizes an integrated version of Hebbian and RL learning. The proposed framework is tractable and computationally inexpensive. The framework is applicable to a wide class of synaptic models and is not restricted to the neural representation used here. This generality, along with the reported results, supports adopting the introduced approach to benefit from biologically plausible synaptic models in a wide range of intuitive signal processing applications.
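A toy sketch of the reward-gated update of a hidden synaptic parameter (rather than a weight), using both the sign and the magnitude of the trial-to-trial reward difference; the exact parameterisation below is an assumption, not the authors' model.

```python
import numpy as np

def update_synaptic_parameter(theta, hebbian_trace, reward, prev_reward,
                              learning_rate=0.01):
    """Toy reward-gated Hebbian update of a hidden synaptic model parameter."""
    delta_reward = reward - prev_reward
    step = learning_rate * np.tanh(abs(delta_reward))    # magnitude term (bounded)
    return theta + np.sign(delta_reward) * step * hebbian_trace

# One trial: a positive change in reward reinforces the correlated parameter change.
theta = 0.5                                   # e.g. a synaptic time-constant scale
theta = update_synaptic_parameter(theta, hebbian_trace=0.8,
                                  reward=0.9, prev_reward=0.6)
```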

