Can Cattle Basis Forecasts Be Improved? A Bayesian Model Averaging Approach

2019 ◽  
Vol 51 (02) ◽  
pp. 249-266
Author(s):  
Nicholas D. Payne ◽  
Berna Karali ◽  
Jeffrey H. Dorfman

Basis forecasting is important for producers and consumers of agricultural commodities in their risk management decisions. However, the best-performing forecasting model found in previous studies varies substantially. Given this inconsistency, we take a Bayesian approach, which addresses model uncertainty by combining forecasts from different models. Results show that model performance differs by location and forecast horizon, but the forecast from the Bayesian approach often performs favorably. In some cases, however, simple moving averages have lower forecast errors. Besides the nearby basis, we also examine the basis in a specific month and find that regression-based models outperform the others at longer horizons.
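As a rough illustration of the forecast-combination idea described in this abstract (not the authors' exact specification), the sketch below weights competing basis forecasts by approximate posterior model probabilities derived from in-sample fit. The candidate models, BIC values, and forecasts are hypothetical.

```python
import numpy as np

def bma_weights_from_bic(bic_values):
    """Approximate posterior model probabilities from BIC values.

    Uses the standard approximation p(M_k | data) ∝ exp(-BIC_k / 2),
    assuming equal prior probabilities across models.
    """
    bic = np.asarray(bic_values, dtype=float)
    log_w = -0.5 * (bic - bic.min())        # shift for numerical stability
    w = np.exp(log_w)
    return w / w.sum()

# Hypothetical in-sample BIC values for three basis-forecasting models:
# a moving average, a seasonal regression, and a futures-spread regression.
bic = [412.3, 409.8, 415.1]
weights = bma_weights_from_bic(bic)

# Hypothetical point forecasts of the nearby basis ($/cwt) from each model.
forecasts = np.array([-1.20, -0.85, -1.50])

bma_forecast = np.dot(weights, forecasts)   # weighted combination
print(weights, bma_forecast)
```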


Universe ◽  
2020 ◽  
Vol 6 (8) ◽  
pp. 109 ◽  
Author(s):  
David Kipping

The Simulation Argument posed by Bostrom suggests that we may be living inside a sophisticated computer simulation. If posthuman civilizations eventually have both the capability and the desire to generate such Bostrom-like simulations, then the number of simulated realities would greatly exceed the one base reality, ostensibly indicating a high probability that we do not live in said base reality. In this work, it is argued that since the hypothesis that such simulations are technically possible remains unproven, statistical calculations need to consider not just the number of state spaces, but the intrinsic model uncertainty. This is achievable through a Bayesian treatment of the problem, which is presented here. Using Bayesian model averaging, it is shown that the probability that we are sims is in fact less than 50%, tending towards that value in the limit of an infinite number of simulations. This result is broadly indifferent as to whether one conditions upon the fact that humanity has not yet birthed such simulations, or ignores it. As argued elsewhere, it is found that if humanity does start producing such simulations, then this would radically shift the odds and make it very probable that we are in fact simulated.
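A toy calculation, not Kipping's full derivation, shows where the sub-50% bound comes from: place prior mass on a hypothesis under which simulations are never created and on one under which N simulations exist alongside one base reality, then average over both. The prior split and the two-hypothesis framing below are simplifying assumptions for illustration.

```python
from fractions import Fraction

def prob_simulated(n_sims, prior_sim_hypothesis=Fraction(1, 2)):
    """Probability of being simulated, averaging over two hypotheses.

    Hypothesis H0: no simulations are ever created -> P(simulated | H0) = 0.
    Hypothesis H1: n_sims indistinguishable simulations are created, plus
    one base reality -> P(simulated | H1) = n_sims / (n_sims + 1).
    """
    p1 = prior_sim_hypothesis
    return (1 - p1) * 0 + p1 * Fraction(n_sims, n_sims + 1)

for n in (1, 10, 1_000_000):
    # Approaches 0.5 from below as the number of simulations grows.
    print(n, float(prob_simulated(n)))
```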



2021 ◽  
Author(s):  
Carlos R Oliveira ◽  
Eugene D Shapiro ◽  
Daniel M Weinberger

Vaccine effectiveness (VE) studies are often conducted after the introduction of new vaccines to ensure they provide protection in real-world settings. Although susceptible to confounding, the test-negative case-control study design is the most efficient method to assess VE post-licensure. Control of confounding is often needed during the analyses, which is most efficiently done through multivariable modeling. When a large number of potential confounders are being considered, it can be challenging to know which variables need to be included in the final model. This paper highlights the importance of considering model uncertainty by re-analyzing a Lyme disease VE study using several confounder selection methods. We propose an intuitive Bayesian Model Averaging (BMA) framework for this task and compare the performance of BMA to that of traditional single-best-model selection methods. We demonstrate how BMA can be advantageous in situations where there is uncertainty about model selection, by systematically considering alternative models and increasing transparency.
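A minimal sketch of BMA over candidate confounder sets in a test-negative design, using simulated data and hypothetical variable names (age, sex, season); it uses a BIC approximation to the posterior model probabilities rather than the framework proposed in the paper.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated test-negative data: 'vaccinated' is the exposure of interest,
# 'age', 'sex', and 'season' are hypothetical candidate confounders.
n = 500
age = rng.normal(40, 15, n)
sex = rng.integers(0, 2, n)
season = rng.integers(0, 2, n)
vaccinated = rng.integers(0, 2, n)
logit = -0.5 - 0.8 * vaccinated + 0.02 * age + 0.3 * season
case = rng.binomial(1, 1 / (1 + np.exp(-logit)))

confounders = {"age": age, "sex": sex, "season": season}
results = []
for k in range(len(confounders) + 1):
    for subset in itertools.combinations(confounders, k):
        X = np.column_stack([np.ones(n), vaccinated] +
                            [confounders[c] for c in subset])
        fit = sm.Logit(case, X).fit(disp=0)
        results.append((subset, fit.bic, fit.params[1]))  # coefficient on vaccination

# BIC-based approximation to posterior model probabilities (equal model priors).
bics = np.array([r[1] for r in results])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

# Model-averaged log-odds ratio for vaccination, and the implied VE = 1 - OR.
beta_bma = np.sum(w * np.array([r[2] for r in results]))
print("BMA estimate of VE:", 1 - np.exp(beta_bma))
```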



2019 ◽  
Vol 220 (2) ◽  
pp. 1368-1378
Author(s):  
M Bertin ◽  
S Marin ◽  
C Millet ◽  
C Berge-Thierry

In low-seismicity areas such as Europe, seismic records do not cover the whole range of variable configurations required for seismic hazard analysis. Usually, a set of empirical models established in such contexts (the Mediterranean Basin, the northeastern U.S.A., Japan, etc.) is considered through a logic-tree-based selection process. This approach relies mainly on the scientist's expertise and ignores the uncertainty in model selection. One important potential consequence of neglecting model uncertainty is that we assign more precision to our inference than is warranted by the data, which leads to overly confident decisions. In this paper, we investigate the Bayesian model averaging (BMA) approach, using nine ground-motion prediction equations (GMPEs) derived from several databases. The BMA method has become an important tool for dealing with model uncertainty, especially in empirical settings with a large number of potential models and a relatively limited number of observations. Two numerical techniques for implementing BMA, based on the Markov chain Monte Carlo method and the maximum likelihood estimation approach, are presented and applied to around 1000 records from the RESORCE-2013 database. In the example considered, it is shown that BMA provides both a hierarchy of GMPEs and an improved out-of-sample predictive performance.
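To illustrate the likelihood-based route to BMA weights over competing GMPEs (a simplified stand-in for the paper's MCMC and maximum-likelihood implementations), the sketch below scores three hypothetical median-prediction functions against simulated records and averages their predictions. The GMPE forms, sigmas, and data are invented for the example.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical stand-ins for GMPE median predictions (ln PGA); real GMPEs
# depend on magnitude, distance, site class, fault mechanism, etc.
def gmpe_a(m, r): return -1.0 + 0.9 * m - 1.1 * np.log(r + 10.0)
def gmpe_b(m, r): return -0.5 + 0.8 * m - 1.3 * np.log(r + 5.0)
def gmpe_c(m, r): return -2.0 + 1.0 * m - 1.0 * np.log(r + 15.0)

gmpes = [(gmpe_a, 0.7), (gmpe_b, 0.6), (gmpe_c, 0.8)]  # (median fn, sigma)

# Simulated observed records: magnitude, distance (km), observed ln PGA.
rng = np.random.default_rng(1)
m = rng.uniform(4.0, 6.5, 200)
r = rng.uniform(5.0, 100.0, 200)
y = gmpe_b(m, r) + rng.normal(0.0, 0.6, 200)

# Posterior model weights proportional to each GMPE's likelihood on the data
# (equal prior weights), analogous to the maximum-likelihood route to BMA.
loglik = np.array([norm.logpdf(y, f(m, r), s).sum() for f, s in gmpes])
w = np.exp(loglik - loglik.max())
w /= w.sum()

# Model-averaged median prediction for a new scenario (M 6.0 at 30 km).
preds = np.array([f(6.0, 30.0) for f, _ in gmpes])
print("weights:", w, "BMA ln PGA:", np.dot(w, preds))
```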



2020 ◽  
Vol 58 (3) ◽  
pp. 644-719 ◽  
Author(s):  
Mark F. J. Steel

The method of model averaging has become an important tool for dealing with model uncertainty, for example in situations where a large number of different theories exist, as is common in economics. Model averaging is a natural and formal response to model uncertainty in a Bayesian framework, and most of the paper deals with Bayesian model averaging. The important role of the prior assumptions in these Bayesian procedures is highlighted. In addition, frequentist model averaging methods are also discussed. Numerical techniques to implement these methods are explained, and I point the reader to some freely available computational resources. The main focus is on uncertainty regarding the choice of covariates in normal linear regression models, but the paper also covers other, more challenging, settings, with particular emphasis on sampling models commonly used in economics. Applications of model averaging in economics are reviewed and discussed in a wide range of areas, including growth economics, production modeling, finance, and forecasting macroeconomic quantities. (JEL C11, C15, C20, C52, O47)
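For the covariate-uncertainty setting the review focuses on, a minimal sketch is given below: all subsets of four candidate regressors in a normal linear model are enumerated, weighted by a BIC approximation to their posterior probabilities (equal model priors, rather than the g-priors discussed in the review), and summarized by posterior inclusion probabilities. The data are simulated and the variable names hypothetical.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: only x1 and x3 truly enter the regression.
n, names = 200, ["x1", "x2", "x3", "x4"]
X = rng.normal(size=(n, 4))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0.0, 1.0, n)

def bic_of_subset(cols):
    """BIC of the linear model with an intercept plus the given columns."""
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + (Z.shape[1] + 1) * np.log(n)

models = [c for k in range(5) for c in itertools.combinations(range(4), k)]
bics = np.array([bic_of_subset(c) for c in models])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

# Posterior inclusion probability of each covariate: the summed weight of
# all models that contain it.
for j, name in enumerate(names):
    pip = sum(wk for wk, c in zip(w, models) if j in c)
    print(name, round(pip, 3))
```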





Mathematics ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. 2159
Author(s):  
Francisco-José Vázquez-Polo ◽  
Miguel-Ángel Negrín-Hernández ◽  
María Martel-Escobar

In meta-analysis, the existence of between-sample heterogeneity introduces model uncertainty, which must be incorporated into the inference. We argue that an alternative way to measure this heterogeneity is to cluster the samples and then determine the posterior probability of the cluster models. The meta-inference is obtained as a mixture of all the meta-inferences for the cluster models, where the mixing weights are the posterior model probabilities. When there are few studies, the number of cluster configurations is manageable, and the meta-inferences can be drawn with BMA techniques. Although this topic has been relatively neglected in the meta-analysis literature, the inference thus obtained accurately reflects the cluster structure of the samples used. In this paper, illustrative examples are given and analysed using real binary data.
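A minimal sketch of the mixture idea for binary data, assuming three hypothetical studies, a Beta(1, 1) prior within each cluster, and conjugate Beta-Binomial marginal likelihoods; it enumerates all cluster configurations and mixes the cluster-level posterior means by posterior model probability. This is an illustration of the approach, not the authors' implementation.

```python
import numpy as np
from scipy.special import betaln

# Hypothetical binary outcomes: (events, trials) for three studies.
studies = [(12, 50), (9, 40), (30, 60)]

# All cluster configurations (set partitions) of the three studies.
partitions = [
    [[0], [1], [2]],
    [[0, 1], [2]],
    [[0, 2], [1]],
    [[1, 2], [0]],
    [[0, 1, 2]],
]

a0, b0 = 1.0, 1.0   # Beta(1, 1) prior on each cluster's event probability

def cluster_stats(cluster):
    k = sum(studies[i][0] for i in cluster)
    n = sum(studies[i][1] for i in cluster)
    return k, n

def log_marginal(partition):
    # Beta-Binomial marginal likelihood, product over clusters; the binomial
    # coefficients are identical across partitions and cancel on normalization.
    return sum(betaln(a0 + k, b0 + n - k) - betaln(a0, b0)
               for k, n in map(cluster_stats, partition))

logm = np.array([log_marginal(p) for p in partitions])
post = np.exp(logm - logm.max())
post /= post.sum()   # posterior probability of each cluster model

# Meta-inference for study 0's event probability: mixture of the posterior
# means of whichever cluster contains study 0, weighted by model probability.
mix = 0.0
for prob, part in zip(post, partitions):
    cluster = next(c for c in part if 0 in c)
    k, n = cluster_stats(cluster)
    mix += prob * (a0 + k) / (a0 + b0 + n)
print("posterior model probabilities:", np.round(post, 3))
print("model-averaged estimate for study 0:", round(mix, 3))
```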


