BAYESIAN STATISTICAL METHODS FOR ASTRONOMY PART III: MODEL BUILDING

2020 · pp. 59-76
Author(s): David C. Stenning, David A. van Dyk

2016 · Vol 1140 · pp. 384-391
Author(s): Andreas Heyder, Stefan Steinbeck, Matthaeus Brela, Alexander Meyer, Sandra Abersfelder, ...

Electromagnetic actuators are used in a variety of technical applications, especially in the automotive industry. In-line process control methods are an essential component of the Lean and Six Sigma methodology for ensuring process quality. However, the current state of the art in process and quality control is largely limited to end-of-line measurements of the force output. Analysing the magnetic stray field is a promising method for drawing conclusions about the properties and defects of the flux-conducting magnetic materials. This phenomenon can potentially be used to identify defects in magnetic actuators, thus allowing in-line quality monitoring. To realize this feature, patterns in the magnetic stray field of an actuator have to be identified and linked to specific defects. The resulting challenge is the analysis of large datasets in order to characterize the stray-field anomalies. This paper summarizes the results of a study on linear magnetic actuators that sought to establish a relationship between the parasitic magnetic stray field and the overall force output of an actuator by analysing the data with statistical methods. The findings suggest that certain statistical methods, such as regression, are not well suited to building a prediction model for actuator defects when the stray field is measured outside the actuator in this way. This is mainly because the prerequisites for model building are difficult to fulfil in the context of stray-field analysis. Nevertheless, the findings also suggest that methods of exploratory data analysis can be used to derive quality-relevant information from stray-field measurement data. The paper elaborates on the problems of defining a population and choosing variables for model building, as well as on model error.
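A minimal sketch of the kind of regression and exploratory checks discussed above, using hypothetical stray-field data (the feature names, sample size, and correlation structure are assumptions for illustration, not the study's measurements). It shows why a regression prerequisite such as low collinearity is hard to satisfy when several stray-field features track the same flux path:

```python
# Hypothetical example: relate actuator force output to stray-field features
# and check the model-building prerequisites mentioned above.
import numpy as np

rng = np.random.default_rng(0)

# 200 actuators, 5 stray-field features (e.g. peak amplitudes at different
# sensor positions); the features are strongly correlated, as readings taken
# around a single flux path tend to be.
n, p = 200, 5
latent = rng.normal(size=(n, 1))
X = latent + 0.1 * rng.normal(size=(n, p))            # highly collinear design
force = 50.0 + 2.0 * latent[:, 0] + rng.normal(scale=0.5, size=n)

# Exploratory checks: pairwise correlations and the condition number of the
# design matrix reveal the collinearity that undermines a regression model.
corr = np.corrcoef(X, rowvar=False)
print("max off-diagonal correlation:", np.abs(corr - np.eye(p)).max().round(3))
print("condition number:", round(np.linalg.cond(np.column_stack([np.ones(n), X]))))

# Ordinary least squares fit: predictions may look fine, but the individual
# coefficients are unstable and hard to interpret as defect signatures.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, force, rcond=None)
residuals = force - A @ beta
print("coefficients:", np.round(beta, 2))
print("residual std:", round(residuals.std(ddof=p + 1), 3))
```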


Author(s): Russell Cheng

This book discusses the fitting of parametric statistical models to data samples. Emphasis is placed on (i) how to recognize situations where the problem is non-standard and parameter estimates behave unusually, and (ii) the use of parametric bootstrap resampling methods in analysing such problems. Simple and practical model building is an underlying theme. A frequentist viewpoint based on likelihood is adopted, for which there is a well-established and very practical theory. The standard situation is where certain widely applicable regularity conditions hold. However, there are many apparently innocuous situations where standard theory breaks down, sometimes spectacularly. Most of the departures from regularity are described geometrically in the book, with only enough mathematical detail to clarify the non-standard nature of a problem and to allow practical solutions to be formulated. The book is intended for anyone with a basic knowledge of statistical methods, as typically covered in a university statistical inference course, who wishes to understand or study how standard methodology might fail. Simple, easy-to-understand statistical methods that overcome these difficulties are presented and illustrated by detailed examples drawn from real applications. Parametric bootstrap resampling is used throughout for analysing the properties of fitted models, illustrating its ease of implementation even in non-standard situations. Distributional properties are obtained numerically for estimators or statistics not previously considered in the literature, because their distributional properties are too hard to derive theoretically. Bootstrap results are presented mainly graphically in the book, providing an easy-to-understand demonstration of the sampling behaviour of estimators.
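As a minimal sketch of parametric bootstrap resampling in the spirit described above (the gamma model, sample size, and parameter values are assumptions for illustration, not examples from the book): fit the model by maximum likelihood, resample repeatedly from the fitted model, and refit each resample to approximate the sampling distribution of the estimators.

```python
# Parametric bootstrap for a fitted parametric model (illustrative sketch).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# "Observed" sample: pretend data from some positive-valued process.
data = stats.gamma.rvs(a=2.0, scale=3.0, size=80, random_state=rng)

# Step 1: fit the parametric model by maximum likelihood.
a_hat, loc_hat, scale_hat = stats.gamma.fit(data, floc=0.0)

# Step 2: draw B samples from the *fitted* model and refit each one.
B = 1000
boot = np.empty((B, 2))
for b in range(B):
    sample = stats.gamma.rvs(a=a_hat, scale=scale_hat, size=len(data), random_state=rng)
    a_b, _, scale_b = stats.gamma.fit(sample, floc=0.0)
    boot[b] = a_b, scale_b

# Step 3: the bootstrap replicates approximate the sampling distribution of
# the estimators; e.g. percentile intervals.
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"shape: {a_hat:.2f}  95% interval ({lo[0]:.2f}, {hi[0]:.2f})")
print(f"scale: {scale_hat:.2f}  95% interval ({lo[1]:.2f}, {hi[1]:.2f})")
```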


2010 · Vol 66 (4) · pp. 470-478
Author(s): Kevin Cowtan

Classical density-modification techniques (as opposed to statistical approaches) offer a computationally cheap method for improving phase estimates in order to provide a good electron-density map for model building. The rise of statistical methods has led to a shift in focus away from the classical approaches; as a result, some recent developments have not made their way into classical density-modification software. This paper describes the application of some recent techniques, most importantly the use of prior phase information in the likelihood estimation of phase errors, within a classical density-modification framework. The resulting software gives significantly better results than comparable classical methods, while remaining nearly two orders of magnitude faster than statistical methods.
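A toy one-dimensional sketch of the classical density-modification cycle referred to above, using solvent flattening only (this is an illustrative construction, not the algorithm or software described in the paper, and it omits the likelihood estimation of phase errors and the use of prior phase information):

```python
# Toy 1-D solvent flattening: build a map from amplitudes and current phases,
# flatten the solvent region, and take updated phases from the modified map.
import numpy as np

rng = np.random.default_rng(5)
N = 256
x = np.arange(N)

# "True" density: a few Gaussian peaks in the left half; the right half is solvent.
rho_true = sum(np.exp(-0.5 * ((x - c) / 3.0) ** 2) for c in (30, 60, 95))
F_true = np.fft.fft(rho_true)
amps = np.abs(F_true)                                   # "measured" amplitudes
true_phases = np.angle(F_true)

phases = true_phases + rng.normal(scale=1.0, size=N)    # poor starting phases
start_phases = phases.copy()
solvent = x >= N // 2                                   # known solvent mask

for cycle in range(20):
    rho = np.fft.ifft(amps * np.exp(1j * phases)).real  # map from current phases
    rho_mod = rho.copy()
    rho_mod[solvent] = rho[solvent].mean()              # flatten the solvent region
    phases = np.angle(np.fft.fft(rho_mod))              # updated phase estimates

def mean_phase_error(ph):
    return np.abs(np.angle(np.exp(1j * (ph - true_phases)))).mean()

print("mean phase error (rad) before:", round(mean_phase_error(start_phases), 3),
      "after:", round(mean_phase_error(phases), 3))
```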


1996 · Vol 172 · pp. 447-450
Author(s): M. L. Bougeard, J.-F. Bange, M. Mahfouz, A. Bec-Borsenberger

In order to evaluate a possible rotation between the Hipparcos and the dynamical reference frames, preliminary Hipparcos minor-planet data are analysed. The solution of the problem is very sensitive to correlations induced by the short observation interval. Several statistical methods are applied to assess the factors responsible for the poor conditioning. A procedure for variable selection and model building is given.
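A minimal sketch of how ill-conditioning of this kind can be diagnosed before model building (the design matrix, the short time baseline, and the variance-inflation criterion here are generic assumptions, not the authors' procedure):

```python
# Diagnose near-degeneracy in a linear design before fitting: condition number
# and variance inflation factors (VIFs) flag strongly correlated predictors.
import numpy as np

def variance_inflation_factors(X):
    """VIF of each column: 1 / (1 - R^2) from regressing it on the other columns."""
    n, p = X.shape
    vifs = np.empty(p)
    for j in range(p):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        ss_res = np.sum((X[:, j] - A @ coef) ** 2)
        ss_tot = np.sum((X[:, j] - X[:, j].mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        vifs[j] = 1.0 / max(1.0 - r2, 1e-12)
    return vifs

# Toy design: a short observation interval makes two columns nearly proportional.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 0.1, 50)                  # short time baseline
X = np.column_stack([np.sin(t), t, rng.normal(size=50)])

print("condition number:", round(np.linalg.cond(X)))
print("VIFs:", np.round(variance_inflation_factors(X), 1))
# Columns with very large VIF are candidates for removal or reparametrization
# before the model is fitted.
```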


Author(s): Russell Cheng

This chapter provides an overview of the book, which investigates non-standard parametric, mainly continuous univariate, estimation problems. The basic difference between standard and non-standard problems is explained in this chapter. The book considers the different non-standard problems that can arise. Though some of the problems are advanced, strong emphasis is placed on providing statistical methods for analysing them that are simple to understand and implement. Maximum likelihood (ML) estimation is the main method used to estimate parameters when fitting parametric models. This chapter outlines the method, emphasizing how it can be implemented numerically. Parametric bootstrapping is used throughout the book to analyse the statistical behaviour of estimators. This chapter gives the rationale for the approach, explaining its simplicity and wide applicability. Also explained is the underlying model-building theme of the book.
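A minimal sketch of numerical ML estimation in the sense outlined above, minimizing a negative log-likelihood with a general-purpose optimizer (the Weibull model and the simulated data are assumptions for illustration, not an example from the chapter):

```python
# Numerical maximum-likelihood estimation: minimize the negative log-likelihood.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(3)
data = stats.weibull_min.rvs(c=1.5, scale=2.0, size=100, random_state=rng)

def neg_log_lik(theta, x):
    """Negative log-likelihood of a two-parameter Weibull, theta = (log shape, log scale)."""
    c, scale = np.exp(theta)          # optimize on the log scale to keep parameters positive
    return -np.sum(stats.weibull_min.logpdf(x, c=c, scale=scale))

res = minimize(neg_log_lik, x0=np.zeros(2), args=(data,), method="Nelder-Mead")
c_hat, scale_hat = np.exp(res.x)
print(f"ML estimates: shape {c_hat:.3f}, scale {scale_hat:.3f}")
```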


1978 · Vol 48 · pp. 7-29
Author(s): T. E. Lutz

This review paper deals with the use of statistical methods to evaluate systematic and random errors associated with trigonometric parallaxes. First, the systematic errors which arise when trigonometric parallaxes are used to calibrate luminosity systems are discussed. Next, the determination of the external errors of parallax measurement is reviewed, and observatory corrections are discussed. Schilt's point that, because the causes of these systematic differences between observatories are not known, the computed corrections cannot be applied appropriately is emphasized. However, modern parallax work is sufficiently accurate that observatory corrections must be determined if full use is to be made of the potential precision of the data. To this end, it is suggested that an experimental design specified in advance is required; past experience has shown that accidental overlap of observing programs will not suffice to determine meaningful observatory corrections.
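An illustrative simulation (the population model, error level, and selection cut are assumptions made for illustration, not taken from the paper) of how random parallax errors produce a systematic error when absolute magnitudes are calibrated from a parallax-selected sample:

```python
# Random parallax errors plus a cut on observed parallax bias the inferred
# absolute magnitudes, because more distant stars scatter into the sample
# than scatter out of it.
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical population: uniform space density out to 500 pc, one true M.
n = 200_000
r = 500.0 * rng.uniform(size=n) ** (1.0 / 3.0)    # distances in pc
true_plx = 1000.0 / r                              # true parallaxes in mas
M_true = 5.0                                       # true absolute magnitude

sigma = 3.0                                        # parallax error, mas
obs_plx = true_plx + rng.normal(scale=sigma, size=n)

# Apparent magnitudes from true distances; M then inferred from observed parallax.
m = M_true + 5.0 * np.log10(r) - 5.0
keep = obs_plx > 3.0 * sigma                       # typical "good parallax" cut
M_est = m[keep] + 5.0 * np.log10(obs_plx[keep] / 1000.0) + 5.0

print(f"mean inferred M = {M_est.mean():.2f} vs true M = {M_true}")
# With this selection the mean inferred M comes out fainter than the true value,
# a systematic calibration error that averaging more stars does not remove.
```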

