The LUMBA UVES stellar parameter pipeline

2019, Vol. 629, pp. A74
Author(s):  
Alvin Gavel ◽  
Pieter Gruyters ◽  
Ulrike Heiter ◽  
Andreas J. Korn ◽  
Karin Lind ◽  
...  

Context. The Gaia-ESO Survey has taken high-quality spectra of a subset of 100 000 stars observed with the Gaia spacecraft. The goal for this subset is to derive chemical abundances for these stars that will complement the astrometric data collected by Gaia. Deriving the chemical abundances requires that the stellar parameters be determined. Aims. We present a pipeline for deriving stellar parameters from spectra observed with the FLAMES-UVES spectrograph in its standard fibre-fed mode centred on 580 nm, as used in the Gaia-ESO Survey. We quantify the performance of the pipeline in terms of systematic offsets and scatter. In doing so, we present a general method for benchmarking stellar parameter determination pipelines. Methods. Assuming a general model of the errors in stellar parameter pipelines, together with a sample of spectra of stars whose stellar parameters are known from fundamental measurements and relations, we use a Markov chain Monte Carlo method to quantitatively test the pipeline. Results. We find that the pipeline provides parameter estimates with systematic errors on effective temperature below 100 K, on surface gravity below 0.1 dex, and on metallicity below 0.05 dex for the main spectral types of star observed in the Gaia-ESO Survey and tested here. The performance on red giants is somewhat lower. Conclusions. The pipeline performs well enough to fulfil its intended purpose within the Gaia-ESO Survey. It is also general enough that it can be put to use on spectra from other surveys or other spectrographs similar to FLAMES-UVES.
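
As a concrete illustration of the benchmarking idea, the sketch below fits a deliberately simplified error model with a random-walk Metropolis sampler: the difference between each pipeline estimate and the corresponding fundamental ("benchmark") value is assumed to follow a Gaussian with an unknown systematic offset and random scatter. This is a minimal stand-in for the paper's full error model; the data, priors, and step sizes are placeholders, not the LUMBA setup.

```python
# Minimal sketch (not the authors' code): estimate a systematic offset and random
# scatter from pipeline-minus-benchmark differences, assuming
# delta_i ~ Normal(bias, sigma), via random-walk Metropolis MCMC.
import numpy as np

rng = np.random.default_rng(42)
# placeholder: pipeline Teff minus benchmark Teff for 25 benchmark stars [K]
delta = rng.normal(30.0, 80.0, size=25)

def log_posterior(bias, sigma, d):
    # flat prior on the offset, positivity constraint on the scatter
    if sigma <= 0:
        return -np.inf
    return -0.5 * np.sum(((d - bias) / sigma) ** 2) - d.size * np.log(sigma)

state = np.array([0.0, 50.0])                        # initial (bias, sigma)
logp = log_posterior(*state, delta)
chain = []
for _ in range(20000):
    proposal = state + rng.normal(0.0, [10.0, 5.0])  # random-walk step
    logp_new = log_posterior(*proposal, delta)
    if np.log(rng.random()) < logp_new - logp:       # Metropolis acceptance
        state, logp = proposal, logp_new
    chain.append(state.copy())

bias_est, sigma_est = np.median(np.array(chain)[5000:], axis=0)  # drop burn-in
print(f"systematic offset ~ {bias_est:.0f} K, random scatter ~ {sigma_est:.0f} K")
```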

Author(s):  
V. Hambaryan ◽  
R. Neuhäuser

We searched for high-velocity and isolated neutron stars that passed within 20 pc of a stellar cluster/association in the past. We took about 830 000 stars with high-quality astrometry and radial velocities from the Gaia DR2 catalogue and empirically selected about 560 high-velocity stars. We used a full gravitational potential of the Galaxy to calculate the motion of each stellar cluster/association and each high-velocity star candidate from their current positions back to the epoch of closest proximity. For these calculations we used numerical integration in rectangular, Galactocentric coordinates. We used the covariance matrices of the astrometric data of each star to estimate the accuracy of the obtained proximity distance and epoch. For this purpose we used a Monte Carlo method, replacing each star with 10 000 simulated realisations and studying the distribution of their individual close passages near a stellar cluster/association. In addition, we investigated neutron star/runaway star pairs that were very likely both ejected from a binary system during a supernova event.
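
A minimal sketch of the Monte Carlo treatment of the astrometric uncertainties, under simplifying assumptions: one star is replaced by 10 000 realisations drawn from a multivariate normal defined by a catalogue-style covariance matrix, and each realisation is traced back in time to record its closest approach to a cluster. A straight-line trace-back stands in here for the paper's numerical integration in the full Galactic potential; all quantities are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# placeholder phase-space mean and covariance for one star:
# (x, y, z) [pc] and (vx, vy, vz) [km/s]
mean = np.array([120.0, -40.0, 30.0, -65.0, 20.0, 5.0])
cov = np.diag([2.0, 2.0, 2.0, 3.0, 3.0, 3.0]) ** 2

cluster_pos = np.array([80.0, -10.0, 15.0])   # pc
cluster_vel = np.array([-10.0, 5.0, 0.0])     # km/s
kms_to_pc_per_myr = 1.0227                    # unit conversion factor

samples = rng.multivariate_normal(mean, cov, size=10_000)
t = np.linspace(0.0, -10.0, 401)              # look back 10 Myr

d_min, t_min = [], []
for s in samples:
    star_xyz = s[:3, None] + s[3:, None] * kms_to_pc_per_myr * t
    clus_xyz = cluster_pos[:, None] + cluster_vel[:, None] * kms_to_pc_per_myr * t
    sep = np.linalg.norm(star_xyz - clus_xyz, axis=0)
    i = np.argmin(sep)                        # epoch of closest approach
    d_min.append(sep[i])
    t_min.append(t[i])

print(f"median closest approach: {np.median(d_min):.1f} pc at {np.median(t_min):.2f} Myr")
print(f"fraction of realisations passing within 20 pc: {np.mean(np.array(d_min) < 20.0):.2f}")
```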


Mathematics, 2021, Vol. 9 (8), pp. 817
Author(s):  
Fernando López ◽  
Mariano Matilla-García ◽  
Jesús Mur ◽  
Manuel Ruiz Marín

A novel general method for constructing nonparametric hypothesis tests based on the field of symbolic analysis is introduced in this paper. Several existing tests based on symbolic entropy that have been used for testing central hypotheses in several branches of science (particularly in economics and statistics) are particular cases of this general approach. This family of symbolic tests relies on few assumptions, which increases the general applicability of any symbolic-based test. Additionally, as a theoretical application of this method, we construct and put forward four new statistics to test for the null hypothesis of spatiotemporal independence. There are very few tests in the specialized literature in this regard. The new tests were evaluated by means of several Monte Carlo experiments. The results highlight the outstanding performance of the proposed tests.
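
To make the symbolisation idea concrete, the sketch below implements one well-known member of this family (an ordinal-pattern, i.e. permutation-entropy, independence test for a univariate series); it is not one of the four new spatiotemporal statistics proposed in the paper, and the chi-square approximation used here is the usual likelihood-ratio one. The spatiotemporal versions symbolise space-time neighbourhoods instead of consecutive observations.

```python
# Symbolise with ordinal patterns of length m, compute symbolic entropy h, and
# compare G = 2*n*(ln(m!) - h) with a chi-square with m! - 1 degrees of freedom
# under the null hypothesis of independence.
import math
from collections import Counter

import numpy as np
from scipy import stats

def symbolic_independence_test(x, m=3):
    x = np.asarray(x, dtype=float)
    patterns = [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]
    n = len(patterns)
    counts = Counter(patterns)
    p = np.array([c / n for c in counts.values()])
    h = -np.sum(p * np.log(p))                        # symbolic (permutation) entropy
    g = 2.0 * n * (math.log(math.factorial(m)) - h)   # likelihood-ratio-type statistic
    dof = math.factorial(m) - 1
    return g, stats.chi2.sf(g, dof)

rng = np.random.default_rng(3)
print(symbolic_independence_test(rng.normal(size=500)))   # i.i.d. noise: large p-value
print(symbolic_independence_test(np.cumsum(rng.normal(size=500))))  # random walk: tiny p-value
```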


2008, Vol. 10 (2), pp. 153-162
Author(s):  
B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, its mode being the best-fit parameter set. Parameter stability is investigated by stepwise adding new data to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that various tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensational behavior for temporal violations in specific model assumptions.
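
A minimal sketch of this calibration procedure, with a trivial one-parameter linear relation standing in for the alongshore-current model: a random-walk Metropolis sampler estimates the parameter posterior, and the calibration is repeated while stepwise adding data to see whether the best-fit value and its uncertainty stabilise. The data and tuning choices below are synthetic placeholders, not the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0.0, 1.0, 400)                  # placeholder forcing
y = 2.5 * x + rng.normal(0.0, 0.2, x.size)      # synthetic observations, true parameter 2.5

def sample_posterior(xs, ys, n_iter=10000, sigma_obs=0.2):
    """Random-walk Metropolis for one model parameter with a flat prior."""
    def loglike(c):
        return -0.5 * np.sum(((ys - c * xs) / sigma_obs) ** 2)
    c = 1.0
    lp = loglike(c)
    chain = []
    for _ in range(n_iter):
        c_new = c + rng.normal(0.0, 0.05)
        lp_new = loglike(c_new)
        if np.log(rng.random()) < lp_new - lp:  # Metropolis acceptance
            c, lp = c_new, lp_new
        chain.append(c)
    return np.array(chain[n_iter // 2:])        # discard burn-in

for n in (50, 100, 200, 400):                   # stepwise adding data
    post = sample_posterior(x[:n], y[:n])
    width = np.percentile(post, 97.5) - np.percentile(post, 2.5)
    print(f"n={n:3d}: posterior median {np.median(post):.3f}, 95% width {width:.3f}")
```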


2014, Vol. 70 (3), pp. 248-256
Author(s):  
Julian Henn ◽  
Kathrin Meindl

The formerly introduced theoretical R values [Henn & Schönleber (2013). Acta Cryst. A69, 549–558] are used to develop a relative indicator of systematic errors in model refinements, R_meta, which is applied to published charge-density data. The numerator of R_meta gives an absolute measure of systematic errors in percentage points. The residuals (I_o − I_c)/σ(I_o) of published data are examined. It is found that most published models correspond to residual distributions that are not consistent with the assumption of a Gaussian distribution. Consistency with a Gaussian distribution, however, is important, as the model parameter estimates and their standard uncertainties from a least-squares procedure are valid only under this assumption. The effect of correlations introduced by the structure model is briefly discussed with the help of artificial data and discarded as a source of serious correlations in the examined example. Intensity and significance cutoffs applied in the refinement procedure are found to be mechanisms preventing residual distributions from becoming Gaussian. Model refinements against artificial data yield zero or close-to-zero values for R_meta when the data are not truncated, and small negative values when a moderate cutoff I_o > 0 is applied. It is well known from the literature that the application of cutoff values leads to model bias [Hirshfeld & Rabinovich (1973). Acta Cryst. A29, 510–513].
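
The residual diagnostics discussed here are straightforward to reproduce. The sketch below forms the normalised residuals (I_o − I_c)/σ(I_o) for synthetic, bias-free data, checks their consistency with a standard Gaussian, and shows how an intensity cutoff truncates the distribution. R_meta itself is defined in the cited paper and is not reproduced here; all intensities and uncertainties are placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
I_calc = rng.exponential(5.0, 5000)          # placeholder model intensities, many weak reflections
sigma = np.sqrt(I_calc + 4.0)                # placeholder standard uncertainties
I_obs = I_calc + rng.normal(0.0, sigma)      # synthetic "observed" intensities, no model bias

z = (I_obs - I_calc) / sigma                 # normalised residuals
print(f"mean = {z.mean():+.3f}, std = {z.std(ddof=1):.3f}")
print(f"fraction |z| > 3: {np.mean(np.abs(z) > 3):.4f}  (Gaussian expectation ~0.0027)")
print(f"D'Agostino normality test p-value: {stats.normaltest(z).pvalue:.3f}")

# A moderate intensity cutoff such as I_obs > 0 truncates the residual
# distribution, which shows up directly in the same diagnostics:
keep = I_obs > 0
print(f"after the cutoff: mean = {z[keep].mean():+.3f}, "
      f"fraction |z| > 3: {np.mean(np.abs(z[keep]) > 3):.4f}")
```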


1965, Vol. 18 (2), pp. 119
Author(s):  
AA Barker

A general method is presented for computation of radial distribution functions for plasmas over a wide range of temperatures and densities. The method uses the Monte Carlo technique applied by Wood and Parker, and extends this to long-range forces using results borrowed from crystal lattice theory. The approach is then used to calculate the radial distribution functions for a proton-electron plasma of density 10^18 electrons/cm^3 at a temperature of 10^4 °K. The results show the usefulness of the method if sufficient computing facilities are available.
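
A minimal sketch of such a Monte Carlo estimate of g(r), in reduced units and under simplifying assumptions: like charges in a periodic box interact through a screened (Debye-like) pair potential standing in for the paper's lattice-theory treatment of the long-range Coulomb tail, and trial moves are accepted with the symmetric probability 1/(1 + exp(ΔE/kT)); any valid acceptance rule (e.g. Metropolis) would serve equally. All parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
N, L, kT, kappa = 64, 6.0, 1.0, 1.0              # particles, box edge, temperature, inverse screening length
pos = rng.uniform(0.0, L, (N, 3))

def energy_of(i, xyz):
    """Screened-Coulomb energy of particle i with all others (minimum image)."""
    d = xyz - xyz[i]
    d -= L * np.round(d / L)
    r = np.linalg.norm(d, axis=1)
    r = r[r > 1e-9]                               # drop the self term
    return np.sum(np.exp(-kappa * r) / r)

edges = np.linspace(0.0, L / 2, 61)
hist = np.zeros(edges.size - 1)
n_snap = 0
for step in range(40000):
    i = rng.integers(N)
    old, e_old = pos[i].copy(), energy_of(i, pos)
    pos[i] = (pos[i] + rng.normal(0.0, 0.3, 3)) % L
    dE = energy_of(i, pos) - e_old
    if rng.random() >= 1.0 / (1.0 + np.exp(np.clip(dE / kT, -50, 50))):
        pos[i] = old                              # reject the trial move
    if step > 10000 and step % 100 == 0:          # sample the pair-distance histogram
        d = pos[:, None, :] - pos[None, :, :]
        d -= L * np.round(d / L)
        r = np.linalg.norm(d, axis=-1)[np.triu_indices(N, 1)]
        hist += np.histogram(r, bins=edges)[0]
        n_snap += 1

# normalise by the ideal-gas expectation in each spherical shell
shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
g = hist / (n_snap * (N * (N - 1) / 2) * shell / L ** 3)
print(np.round(g[:15], 2))                        # g(r) at small separations (like charges repel)
```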


2016, Vol. 151 (6), pp. 144
Author(s):  
Ana E. García Pérez ◽  
Carlos Allende Prieto ◽  
Jon A. Holtzman ◽  
Matthew Shetrone ◽  
Szabolcs Mészáros ◽  
...  

2000, Vol. 198, pp. 234-235
Author(s):  
R. D. D. Costa ◽  
J. A. de Freitas Pacheco ◽  
T. P. Idiart

In this work we report new high-quality spectroscopic data for a sample of PNe in the SMC, aiming to derive physical parameters and chemical abundances and, in particular, to settle the question of the oxygen discrepancy found for type I planetaries with respect to stars and H II regions.


2002, Vol. 6 (5), pp. 883-898
Author(s):  
K. Engeland ◽  
L. Gottschalk

Abstract. This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
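
The two error-model likelihoods contrasted here can be written down compactly. The sketch below, which is illustrative and not the paper's Ecomag set-up, evaluates a simple i.i.d. Gaussian log-likelihood next to the conditional AR(1) log-likelihood of the simulation errors e_t = rho * e_(t-1) + eta_t, eta_t ~ N(0, sigma^2), on synthetic autocorrelated errors. In the Bayesian calibration these terms would enter the posterior together with priors on the hydrological and statistical parameters.

```python
import numpy as np

def loglik_simple(errors, sigma):
    """Simulation errors treated as independent Gaussians (the simple model)."""
    return -0.5 * np.sum((errors / sigma) ** 2) - errors.size * np.log(sigma * np.sqrt(2 * np.pi))

def loglik_ar1(errors, sigma, rho):
    """Conditional likelihood of an AR(1) error process (the full model)."""
    innov = errors[1:] - rho * errors[:-1]        # one-step-ahead innovations
    n = innov.size
    return -0.5 * np.sum((innov / sigma) ** 2) - n * np.log(sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
e = np.zeros(1000)
for t in range(1, e.size):                        # synthetic autocorrelated simulation errors
    e[t] = 0.7 * e[t - 1] + rng.normal(0.0, 1.0)

print("i.i.d. log-likelihood:", round(loglik_simple(e, e.std()), 1))
print("AR(1)  log-likelihood:", round(loglik_ar1(e, 1.0, 0.7), 1))
```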


1989, Vol. 26 (2), pp. 214-221
Author(s):  
Subhash Sharma ◽  
Srinivas Durvasula ◽  
William R. Dillon

The authors report some results on the behavior of alternative covariance structure estimation procedures in the presence of non-normal data. They conducted Monte Carlo simulation experiments with a factorial design involving three levels of skewness, three levels of kurtosis, and three different sample sizes. For normal data, among all the elliptical estimation techniques, elliptical reweighted least squares (ERLS) was equivalent in performance to ML. However, as expected, for non-normal data parameter estimates were unbiased for ML and the elliptical estimation techniques, whereas the bias in standard errors was substantial for GLS and ML. Among elliptical estimation techniques, ERLS was superior in performance. On the basis of the simulation results, the authors recommend that researchers use ERLS for both normal and non-normal data.

