The ellipsoidal nested sampling and the expression of the model uncertainty in measurements

2016 ◽  
Vol 30 (15) ◽  
pp. 1541002
Author(s):  
Gianpiero Gervino ◽  
Giovanni Mana ◽  
Carlo Palmisano

In this paper, we consider the problems of identifying the most appropriate model for a given physical system and of assessing the model contribution to the measurement uncertainty. The above problems are studied in terms of Bayesian model selection and model averaging. As the evaluation of the “evidence” Z, i.e., the integral of Likelihood × Prior over the space of the measurand and the parameters, becomes impracticable when this space has many dimensions, it is necessary to consider an appropriate numerical strategy. Among the many algorithms for calculating Z, we have investigated ellipsoidal nested sampling, a technique based on three pillars: the study of the iso-likelihood contour lines of the integrand, a probabilistic estimate of the volume of the parameter space contained within the iso-likelihood contours, and random sampling from hyperellipsoids embedded in the space of the integration variables. This paper lays out the essential ideas of this approach.
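As a concrete illustration of the three pillars above, here is a minimal nested-sampling sketch in Python: it tracks the prior volume probabilistically (X_i ≈ e^(-i/N)) and replaces the worst live point with a draw from an ellipsoid fitted to the live set, subject to the hard likelihood constraint. The toy Gaussian likelihood, the uniform prior box and all tuning constants are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def log_likelihood(theta):
    # Toy 2-D Gaussian likelihood centred at the origin (illustrative only).
    return -0.5 * np.sum(theta ** 2) / 0.1 ** 2

def sample_ellipsoid(points, enlarge=1.1):
    # Ellipsoid enclosing the live points: sample covariance scaled so that the
    # most distant point (in Mahalanobis distance) lies inside the boundary.
    mean = points.mean(axis=0)
    cov = np.cov(points.T)
    inv = np.linalg.inv(cov)
    d2 = max((p - mean) @ inv @ (p - mean) for p in points)
    chol = np.linalg.cholesky(cov * d2 * enlarge)
    dim = points.shape[1]
    u = np.random.normal(size=dim)
    u /= np.linalg.norm(u)                      # random direction
    r = np.random.uniform() ** (1.0 / dim)      # radius for uniformity in the ball
    return mean + chol @ (r * u)

def nested_sampling(n_live=200, n_iter=2000, lo=-1.0, hi=1.0, dim=2):
    live = np.random.uniform(lo, hi, size=(n_live, dim))
    live_logL = np.array([log_likelihood(p) for p in live])
    log_Z, log_X_prev = -np.inf, 0.0            # log evidence and log prior volume
    for i in range(1, n_iter + 1):
        worst = int(np.argmin(live_logL))
        log_X = -i / n_live                     # probabilistic volume shrinkage
        log_w = np.log(np.exp(log_X_prev) - np.exp(log_X))
        log_Z = np.logaddexp(log_Z, live_logL[worst] + log_w)
        log_X_prev = log_X
        # Replace the worst point with a draw from the ellipsoid that also
        # satisfies the hard constraint L(theta) > L_min and the prior bounds.
        while True:
            cand = sample_ellipsoid(live)
            if np.all(cand >= lo) and np.all(cand <= hi) \
                    and log_likelihood(cand) > live_logL[worst]:
                break
        live[worst], live_logL[worst] = cand, log_likelihood(cand)
    # Add the remaining live points' contribution over the leftover volume.
    log_Z = np.logaddexp(log_Z, np.log(np.mean(np.exp(live_logL))) + log_X_prev)
    return log_Z                                # log of Z = ∫ Likelihood × Prior

print("log-evidence estimate:", nested_sampling())
```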

2021 ◽  
Author(s):  
Carlos R Oliveira ◽  
Eugene D Shapiro ◽  
Daniel M Weinberger

Vaccine effectiveness (VE) studies are often conducted after the introduction of new vaccines to ensure they provide protection in real-world settings. Although susceptible to confounding, the test-negative case-control study design is the most efficient method to assess VE post-licensure. Control of confounding is often needed during the analyses, which is most efficiently done through multivariable modeling. When a large number of potential confounders are being considered, it can be challenging to know which variables need to be included in the final model. This paper highlights the importance of considering model uncertainty by re-analyzing a Lyme VE study using several confounder selection methods. We propose an intuitive Bayesian Model Averaging (BMA) framework for this task and compare the performance of BMA to that of traditional single-best-model-selection methods. We demonstrate how BMA can be advantageous in situations when there is uncertainty about model selection by systematically considering alternative models and increasing transparency.
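As a sketch of the kind of model averaging described above, the following Python snippet enumerates candidate confounder sets for a logistic test-negative analysis, approximates posterior model probabilities with BIC weights and averages the vaccine coefficient across models. The simulated data, variable names and BIC approximation are illustrative assumptions, not the study's actual covariates or implementation.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated test-negative data (illustrative only): y = case status,
# vacc = vaccination status, x1..x3 = potential confounders.
n = 2000
X_conf = rng.normal(size=(n, 3))
vacc = rng.binomial(1, 0.5, size=n)
lin = -0.5 - 0.8 * vacc + 0.4 * X_conf[:, 0] + 0.2 * X_conf[:, 1]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

results = []
for k in range(4):                              # all subsets of the 3 confounders
    for subset in itertools.combinations(range(3), k):
        X = np.column_stack([np.ones(n), vacc] + [X_conf[:, j] for j in subset])
        fit = sm.Logit(y, X).fit(disp=0)
        bic = -2.0 * fit.llf + X.shape[1] * np.log(n)
        results.append((bic, fit.params[1]))    # params[1] = vaccine log-odds ratio

# BIC-approximated posterior model weights: w_m proportional to exp(-dBIC_m / 2).
bics = np.array([b for b, _ in results])
betas = np.array([b for _, b in results])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

beta_bma = np.sum(w * betas)                    # model-averaged vaccine log-odds ratio
print("BMA estimate of vaccine effectiveness: %.1f%%" % (100 * (1 - np.exp(beta_bma))))
```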


2021 ◽  
Author(s):  
Marvin Höge ◽  
Anneli Guthke ◽  
Wolfgang Nowak

In environmental modelling, multiple models are usually plausible, e.g. for predicting a certain quantity of interest. Using model rating methods, we typically want to elicit a single best model or an optimal average of the candidate models. Often, however, such methods are not applied properly, which can lead to false conclusions.

Using three Bayesian approaches to model selection or averaging as examples (namely 1. Bayesian Model Selection and Averaging (BMS/BMA), 2. Pseudo-BMS/BMA and 3. Bayesian Stacking), we show how very similar-looking methods pursue vastly different goals and lead to deviating results for model selection or averaging.

All three yield a weighted average of predictive distributions. Yet only Bayesian Stacking aims at averaging for improved predictions in the sense of an actual (optimal) model combination. The other approaches pursue the goal of finding a single best model, albeit on different premises, and use model averaging only as a preliminary stage to prevent a rash model choice.

We want to foster their proper use by, first, clarifying their theoretical background and, second, contrasting their behaviours in an applied groundwater modelling task. Third, we show how the insights gained from these Bayesian methods transfer to other (also non-Bayesian) model rating methods, and we draw general conclusions about multi-model usage based on model weighting.
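To make the contrast concrete, the following sketch computes the three kinds of weights from the same inputs: BMS/BMA weights from (log) marginal likelihoods, pseudo-BMA weights from summed out-of-sample log predictive densities, and stacking weights that directly optimise the log score of the combined predictive mixture. All numbers are illustrative assumptions, not results from the groundwater case study.

```python
import numpy as np
from scipy.optimize import minimize

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Illustrative inputs (assumptions, not data from the study):
#   log_ml[m]  : log marginal likelihood of model m
#   lppd[m, i] : out-of-sample log predictive density of model m for point i
log_ml = np.array([-102.3, -101.8, -105.0])
lppd = np.random.default_rng(1).normal(-1.0, 0.3, size=(3, 50))

# 1. BMS/BMA: posterior model probabilities from the marginal likelihoods
#    (equal prior model probabilities assumed) -> quest for a single best model.
w_bma = softmax(log_ml)

# 2. Pseudo-BMS/BMA: weights from summed out-of-sample log predictive densities.
w_pseudo = softmax(lppd.sum(axis=1))

# 3. Bayesian Stacking: weights chosen to maximise the log score of the
#    *combined* predictive mixture -> an actual (optimal) model combination.
def neg_log_score(z):
    w = softmax(z)                              # unconstrained -> weights on the simplex
    return -np.sum(np.log(np.einsum("m,mi->i", w, np.exp(lppd))))

w_stack = softmax(minimize(neg_log_score, np.zeros(3)).x)

print("BMS/BMA:   ", np.round(w_bma, 3))
print("pseudo-BMA:", np.round(w_pseudo, 3))
print("stacking:  ", np.round(w_stack, 3))
```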


2016 ◽  
Author(s):  
Joram Soch ◽  
Achim Pascal Meyer ◽  
John-Dylan Haynes ◽  
Carsten Allefeld

In functional magnetic resonance imaging (fMRI), the model quality of general linear models (GLMs) for first-level analysis is rarely assessed. In recent work (Soch et al., 2016: “How to avoid mismodelling in GLM-based fMRI data analysis: cross-validated Bayesian model selection”, NeuroImage, vol. 141, pp. 469-489; DOI: 10.1016/j.neuroimage.2016.07.047), we introduced cross-validated Bayesian model selection (cvBMS) to infer the best model for a group of subjects and to use it to guide second-level analysis. While this is the optimal approach when the same GLM has to be used for all subjects, a much more efficient procedure exists when model selection only addresses nuisance variables and the regressors of interest are included in all candidate models. In this work, we propose cross-validated Bayesian model averaging (cvBMA) to improve parameter estimates for these regressors of interest by combining information from all models using their posterior probabilities. This is particularly useful because different models can lead to different conclusions regarding experimental effects, and the most complex model is not necessarily the best choice. We find that cvBMS can prevent established effects from going undetected and that cvBMA can be more sensitive to experimental effects than using the best model in each subject or the model that is best across a group of subjects.
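A minimal sketch of the averaging step described above, assuming per-model (cross-validated) log model evidences and per-model estimates of the regressor of interest are already available for each voxel; the simulated arrays are placeholders, not the cvBMA implementation itself.

```python
import numpy as np

# Illustrative inputs (placeholders, not the cvBMA toolbox):
#   log_evidence[m, v] : cross-validated log model evidence of model m at voxel v
#   beta[m, v]         : estimate of the regressor of interest under model m at voxel v
rng = np.random.default_rng(2)
n_models, n_voxels = 4, 1000
log_evidence = rng.normal(0.0, 2.0, size=(n_models, n_voxels))
beta = rng.normal(1.0, 0.5, size=(n_models, n_voxels))

# Voxel-wise posterior model probabilities (equal model priors assumed).
log_evidence = log_evidence - log_evidence.max(axis=0, keepdims=True)
post = np.exp(log_evidence)
post /= post.sum(axis=0, keepdims=True)

# Model-averaged parameter estimates: weight each model's estimate by its
# posterior probability instead of committing to a single "best" model.
beta_bma = np.sum(post * beta, axis=0)
print(beta_bma[:5])
```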


2019 ◽  
Vol 220 (2) ◽  
pp. 1368-1378
Author(s):  
M Bertin ◽  
S Marin ◽  
C Millet ◽  
C Berge-Thierry

In low-seismicity areas such as Europe, seismic records do not cover the whole range of variable configurations required for seismic hazard analysis. Usually, a set of empirical models established in such contexts (the Mediterranean Basin, the northeastern U.S.A., Japan, etc.) is considered through a logic-tree-based selection process. This approach relies mainly on the scientist’s expertise and ignores the uncertainty in model selection. One important potential consequence of neglecting model uncertainty is that we assign more precision to our inference than the data warrant, which leads to overly confident decisions. In this paper, we investigate the Bayesian model averaging (BMA) approach, using nine ground-motion prediction equations (GMPEs) derived from several databases. BMA has become an important tool for dealing with model uncertainty, especially in empirical settings with a large number of potential models and a relatively limited number of observations. Two numerical techniques for implementing BMA, based on the Markov chain Monte Carlo method and on the maximum likelihood estimation approach, are presented and applied to around 1000 records drawn from the RESORCE-2013 database. In the example considered, it is shown that BMA provides both a hierarchy of GMPEs and an improved out-of-sample predictive performance.
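A minimal sketch of BMA over ground-motion models, assuming each GMPE is reduced to a simple predictor of log ground motion and residuals are treated as Gaussian; the weights come from a BIC-style approximation rather than the MCMC and maximum-likelihood implementations used in the paper, and all data and coefficients are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic records (placeholders): magnitude, distance and observed log ground motion.
n = 1000
mag = rng.uniform(4.0, 7.0, n)
dist = rng.uniform(5.0, 100.0, n)
obs = 1.0 + 0.9 * mag - 1.3 * np.log(dist) + rng.normal(0.0, 0.4, n)

# Each candidate "GMPE" is reduced to a deterministic predictor of log ground motion.
gmpes = [
    lambda m, r: 1.0 + 0.9 * m - 1.3 * np.log(r),   # well-calibrated model
    lambda m, r: 0.5 + 1.0 * m - 1.5 * np.log(r),   # biased in magnitude scaling
    lambda m, r: 2.0 + 0.7 * m - 1.0 * np.log(r),   # biased in distance scaling
]

# Gaussian residual log-likelihood per model, with a BIC-style penalty for
# the single fitted parameter (the residual standard deviation).
log_ml = []
for g in gmpes:
    resid = obs - g(mag, dist)
    sigma = resid.std()
    logL = np.sum(-0.5 * (resid / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi)))
    log_ml.append(logL - 0.5 * np.log(n))

# BMA weights: a hierarchy of GMPEs and a weighted combined prediction.
w = np.exp(np.array(log_ml) - np.max(log_ml))
w /= w.sum()
pred_bma = sum(wi * g(mag, dist) for wi, g in zip(w, gmpes))

print("BMA weights:", np.round(w, 3))
print("RMS of BMA residuals:", np.round(np.sqrt(np.mean((obs - pred_bma) ** 2)), 3))
```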

