Posterior Predictive Checks
Recently Published Documents

TOTAL DOCUMENTS: 30 (five years: 6)
H-INDEX: 11 (five years: 1)

2020
Author(s): Mathieu Maheu-Giroux, Lynnmarie Sardinha, Heidi Stöckl, Sarah Meyer, Arnaud Godin, ...

Abstract

Background: Accurate and reliable statistics on violence against women form the backbone of monitoring efforts to eliminate these human rights violations and public health concerns. Estimating the prevalence of intimate partner violence (IPV) is challenging owing to variations in case definition and recall period, surveyed populations, partner definition, level of age disaggregation, and survey representativeness, among other factors. In this paper, we aim to develop a sound and flexible statistical modeling framework for global, regional, and national IPV statistics.

Methods: We modeled IPV within a Bayesian multilevel framework, accounting for heterogeneous age groups using age-standardization and capturing age patterns and time trends using spline functions. Survey comparability is achieved through adjustment factors estimated by exact matching, with their uncertainty accounted for. Both in-sample and out-of-sample comparisons are used for model validation, including posterior predictive checks. The models' outputs are post-processed to aggregate estimates across geographic levels and age groups.

Results: A total of 307 unique studies conducted between 2000 and 2018, from 154 countries and totaling nearly 1.8 million unique women's responses, informed lifetime IPV. Past-year IPV was informed by a similar number of studies (n = 333), countries represented (n = 159), and individual responses (n = 1.8 million). Roughly half of the IPV observations required some adjustment. Posterior predictive checks suggest good model fit to the data, and out-of-sample comparisons were reassuring, with small median prediction errors and appropriate coverage of prediction intervals.

Conclusions: The proposed modeling framework can pool national and sub-national surveys, account for heterogeneous age groups and age trends, accommodate different surveyed populations, adjust for differences in survey instruments, and efficiently propagate uncertainty to model outputs. By describing this model to a reproducible level of detail, we support the accurate interpretation and responsible use of estimates for the global monitoring of efforts to eliminate violence against women, as part of the Sustainable Development Goals.
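The validation step described above rests on the standard posterior predictive machinery: replicate datasets from the fitted model and compare a test statistic on observed versus replicated data. A minimal sketch in Python, with toy survey counts and hypothetical beta posterior draws standing in for the fitted multilevel model:

```python
# A minimal sketch of a posterior predictive check; the counts and the
# posterior draws are toy stand-ins, not the paper's fitted model.
import numpy as np

rng = np.random.default_rng(42)

y_obs = np.array([120, 95, 143, 88, 110])    # toy observed cases per survey
n_obs = np.array([800, 640, 900, 700, 760])  # toy numbers of women surveyed

# Hypothetical posterior draws of survey-level prevalences (in practice
# these would come from the fitted multilevel model).
theta_draws = rng.beta(y_obs + 1, n_obs - y_obs + 1, size=(4000, len(y_obs)))

# Replicate datasets from the posterior predictive distribution.
y_rep = rng.binomial(n_obs, theta_draws)

# Compare a test statistic (here the maximum prevalence) between observed
# and replicated data: the posterior predictive p-value.
T_obs = (y_obs / n_obs).max()
T_rep = (y_rep / n_obs).max(axis=1)
print(f"posterior predictive p-value: {(T_rep >= T_obs).mean():.3f}")
```

Extreme p-values (near 0 or 1) flag aspects of the data the model fails to reproduce; values in the middle of the range are consistent with adequate fit.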


2020
Author(s): Andrew F. Magee, Sarah K. Hilton, William S. DeWitt

Abstract

Likelihood-based phylogenetic inference posits a probabilistic model of character state change along the branches of a phylogenetic tree. These models typically assume statistical independence of sites in the sequence alignment. This is a restrictive assumption that facilitates computational tractability, but it ignores how epistasis, the effect of genetic background on mutational effects, influences the evolution of functional sequences. We consider the effect of using a misspecified site-independent model on the accuracy of Bayesian phylogenetic inference in the setting of pairwise-site epistasis. Previous work has shown that as alignment length increases, tree reconstruction accuracy also increases. Here, we present a simulation study demonstrating that accuracy increases with alignment size even if the additional sites are epistatically coupled. We introduce an alignment-based test statistic that is a diagnostic for pairwise epistasis and can be used in posterior predictive checks.
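The abstract does not spell out the test statistic, so the sketch below uses mean pairwise mutual information between alignment columns as a hypothetical stand-in for an alignment-based diagnostic of pairwise-site association, wired into a posterior predictive comparison:

```python
# A hedged sketch of a posterior predictive check for pairwise-site
# association in an alignment; mean pairwise mutual information is a
# stand-in statistic, not the one introduced in the paper.
import numpy as np
from itertools import combinations

def column_mi(x, y):
    """Mutual information (nats) between two integer-coded alignment columns."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            p_ab = np.mean((x == a) & (y == b))
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (np.mean(x == a) * np.mean(y == b)))
    return mi

def mean_pairwise_mi(alignment):
    """Average MI over all site pairs of an (n_taxa, n_sites) alignment."""
    pairs = combinations(range(alignment.shape[1]), 2)
    return np.mean([column_mi(alignment[:, i], alignment[:, j]) for i, j in pairs])

# Replicates from a site-independent model should reproduce the observed
# statistic; systematic excess MI in the data would flag epistasis.
rng = np.random.default_rng(0)
obs = rng.integers(0, 4, size=(20, 20))               # toy observed alignment
T_obs = mean_pairwise_mi(obs)
T_rep = [mean_pairwise_mi(rng.integers(0, 4, size=obs.shape)) for _ in range(50)]
print(f"p = {np.mean(np.array(T_rep) >= T_obs):.2f}")
```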


PeerJ · 2020 · Vol 8 · pp. e9383
Author(s): Hiroki Itô

Many vegetation-related data have been collected as ordered plant cover classes, which can be determined visually. However, such data are difficult to analyze numerically because they are on an ordinal scale and carry uncertainty in their classification. Here, I constructed a state-space model to estimate unobserved plant cover proportions (ranging from zero to one) from such cover class data. The model assumed that the data were measured longitudinally, so that autocorrelations in the time series could be exploited to estimate the unobserved cover proportion. The model also assumed that the quadrats where the data were collected were arranged sequentially, so that spatial autocorrelations could likewise be exploited. Assuming a beta distribution for the cover proportion, the model was implemented with the regularized incomplete beta function, which is the cumulative distribution function of the beta distribution. The model was fitted to a simulated dataset and to real datasets with one-dimensional spatial structure and longitudinal surveys, and the parameters were estimated using the Markov chain Monte Carlo method. Model validity was then examined using posterior predictive checks. The Markov chains converged to the stationary distribution, and the posterior predictive checks did not show large discrepancies. For the simulated dataset, the estimated values were close to those used for data generation, and the estimates for the real datasets also appeared reasonable. These results suggest that the proposed state-space model successfully estimated the unobserved cover proportion. The model is applicable to similar types of plant cover class data and could be extended, for example, to incorporate a two-dimensional spatial structure and/or zero-inflation.
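A minimal sketch of the likelihood's core ingredient, assuming hypothetical cover-class cut points: the regularized incomplete beta function (the beta CDF, available as scipy.special.betainc) converts a latent cover proportion into cover-class probabilities by differencing the CDF at the class boundaries:

```python
# A minimal sketch, assuming hypothetical cover-class cut points; it shows
# how the regularized incomplete beta function turns a latent beta-distributed
# cover proportion into ordinal class probabilities.
import numpy as np
from scipy.special import betainc  # regularized incomplete beta I_x(a, b)

# Hypothetical class boundaries on the cover-proportion scale [0, 1].
cuts = np.array([0.0, 0.01, 0.05, 0.25, 0.50, 0.75, 1.0])

def class_probs(mu, phi):
    """P(cover class k) under a Beta(mu*phi, (1-mu)*phi) latent proportion."""
    a, b = mu * phi, (1.0 - mu) * phi
    cdf = betainc(a, b, cuts)           # beta CDF evaluated at each cut point
    return np.diff(cdf)                 # probability mass within each class

probs = class_probs(mu=0.30, phi=8.0)   # e.g., latent mean cover of 30%
for k, p in enumerate(probs, start=1):
    print(f"class {k}: {p:.3f}")
```

Differencing the CDF at the boundaries guarantees the class probabilities are non-negative and sum to one, which is what makes the ordinal observation model coherent.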


2019 · Vol 490 (1) · pp. 927-946
Author(s): J. Michael Burgess, Jochen Greiner, Damien Bégué, Francesco Berlato

ABSTRACT Inspired by the confirmed detection of a short gamma-ray burst (GRB) in association with a gravitational wave signal, we present the first Bayesian Fermi Gamma-ray Burst Monitor (GBM) short GRB spectral catalogue. Both peak-flux and time-resolved spectral results are presented. Data are analysed with the proper Poisson likelihood, allowing us to provide statistically reliable results even for spectra with few counts. All fits are validated with posterior predictive checks. We find that nearly all spectra can be modelled with a cut-off power law. Additionally, we release the full posterior distributions and reduced data from our sample. Following our previous study, we introduce three variability classes based on the observed light-curve structure.
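The proper Poisson likelihood for counted photons has a simple closed form; a minimal sketch, with toy channel counts and a hypothetical vector of expected model counts (in practice, the folded spectral model):

```python
# A minimal sketch of the Poisson likelihood for counts, which stays exact
# in the few-count regime where a Gaussian (chi-square) approximation breaks
# down; the model counts here are a hypothetical stand-in.
import numpy as np
from scipy.special import gammaln

def poisson_loglike(counts, model_counts):
    """log L = sum_i [ k_i log m_i - m_i - log k_i! ] for observed counts k
    and expected model counts m in each spectral channel."""
    k = np.asarray(counts, dtype=float)
    m = np.asarray(model_counts, dtype=float)
    return np.sum(k * np.log(m) - m - gammaln(k + 1.0))

# Toy example: a handful of channels with low counts.
k = np.array([0, 2, 1, 5, 3])
m = np.array([0.8, 1.9, 1.2, 4.6, 2.7])
print(poisson_loglike(k, m))
```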


2018 · Vol 14 (3) · pp. 143-157
Author(s): Leonardo Egidi, Jonah Gabry

Abstract

Although there is no consensus on how to measure and quantify individual performance in any sport, there has been less development in this area for soccer than for other major sports, and only once such a measurement is defined does modeling for predictive purposes make sense. We use the player ratings provided by a popular Italian fantasy soccer game as proxies for players' performance; we discuss the merits and flaws of a variety of hierarchical Bayesian models for predicting these ratings, comparing the models on their predictive accuracy on hold-out data. Our central goals are to explore what can be accomplished with a simple, freely available dataset comprising only a few variables from the 2015–2016 season of the top Italian league, Serie A, and to focus on a small number of interesting modeling and prediction questions that arise. Among these, we highlight the importance of modeling the missing observations, and we propose two models designed for this task. We validate our models through graphical posterior predictive checks, and we provide out-of-sample predictions for the second half of the season, using the first half as a training set. We use Stan to sample from the posterior distributions via Markov chain Monte Carlo.
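Graphical posterior predictive checks of the kind mentioned here typically overlay the observed outcome distribution on distributions of replicated datasets. A minimal matplotlib sketch, with toy ratings and toy replicated draws standing in for the fitted model's output:

```python
# A minimal sketch of a graphical posterior predictive check (a density
# overlay); the ratings and replicated draws are toy stand-ins, not the
# paper's data or fitted model.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
ratings = rng.normal(6.3, 1.1, size=300)       # toy observed player ratings
y_rep = rng.normal(6.3, 1.2, size=(50, 300))   # toy posterior predictive draws

fig, ax = plt.subplots()
for rep in y_rep:                              # faint replicated histograms
    ax.hist(rep, bins=30, density=True, histtype="step", alpha=0.15, color="C0")
ax.hist(ratings, bins=30, density=True, histtype="step", color="C3",
        linewidth=2, label="observed")
ax.set_xlabel("fantasy rating")
ax.legend()
plt.show()
```

If the observed curve sits comfortably within the cloud of replicated curves, the model reproduces the marginal distribution of the outcome; systematic departures point to misfit.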


2017
Author(s): Donald Ray Williams, Stephen Ross Martin

Developing robust statistical methods is an important goal for psychological science. Whereas classical methods (i.e., sampling distributions, p-values, etc.) have been thoroughly characterized, Bayesian robust methods remain relatively uncommon in practice and in the methodological literature. Here we propose a robust Bayesian model (BHSt) that accommodates heterogeneous (H) variances by predicting the scale parameter on the log scale, and tail-heaviness with a Student-t likelihood (St). Through simulations with normative and contaminated (i.e., heavy-tailed) data, we demonstrate that BHSt has consistent frequentist properties in terms of type I error, power, and mean squared error compared with three classical robust methods. With a motivating example, we illustrate Bayesian inferential methods such as approximate leave-one-out cross-validation and posterior predictive checks. We end by suggesting areas of improvement for BHSt and discussing Bayesian robust methods in practice.
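A minimal sketch of the likelihood this model family implies, with illustrative parameter names and a two-group design as assumptions: the Student-t handles heavy tails, and predicting the scale on the log scale lets variances differ across groups while staying positive:

```python
# A minimal sketch of a BHSt-style likelihood: a Student-t observation model
# whose scale is predicted on the log scale. The two-group design and the
# parameter names are illustrative assumptions, not the paper's exact setup.
import numpy as np
from scipy.stats import t as student_t

def bhst_loglike(y, group, beta0, beta1, gamma0, gamma1, nu):
    """y_i ~ StudentT(nu, mu_i, sigma_i), with mu_i = beta0 + beta1 * group_i
    and log sigma_i = gamma0 + gamma1 * group_i."""
    mu = beta0 + beta1 * group
    sigma = np.exp(gamma0 + gamma1 * group)  # log-scale model keeps sigma > 0
    return np.sum(student_t.logpdf(y, df=nu, loc=mu, scale=sigma))

# Toy two-group data with unequal spread and one heavy-tailed contaminant.
rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0, 1, 50), rng.normal(1, 3, 50), [15.0]])
group = np.concatenate([np.zeros(50), np.ones(51)])
print(bhst_loglike(y, group, beta0=0.0, beta1=1.0, gamma0=0.0, gamma1=1.1, nu=5.0))
```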


2017
Author(s): Jeromy Anglim, Sarah K. A. Wynton

The current study used Bayesian hierarchical methods to challenge and extend previous work on subtask learning consistency. A general model of individual-level subtask learning was proposed, focusing on power and exponential functions with constraints to test for inconsistency. To study subtask learning, we developed a novel computer-based booking task that logged participant actions, enabling measurement of strategy use and subtask performance. Model comparison was performed using DIC, posterior predictive checks, plots of model fits, and model recovery simulations. Results showed that while learning tended to be monotonically decreasing, decelerating, and approaching an asymptote for all subtasks, there was substantial inconsistency in learning curves at both the group and individual levels. This inconsistency was most apparent when constraining both the rate and the ratio of learning to asymptote to be equal across subtasks, thereby giving learning curves only one parameter for scaling. The inclusion of six strategy covariates improved prediction of subtask performance, capturing different subtask learning processes and subtask trade-offs. In addition, strategy use partially explained the inconsistency in subtask learning. Overall, the model provided a more nuanced representation of how complex tasks can be decomposed in terms of simpler learning mechanisms.
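Of the comparison tools listed, DIC is the most mechanical; a minimal sketch of its computation from posterior draws, with stand-in log-likelihood values:

```python
# A minimal sketch of the DIC computation used for model comparison; the
# log-likelihood draws and the value at the posterior mean are stand-ins
# for quantities produced by an actual fitted model.
import numpy as np

def dic(loglik_draws, loglik_at_mean):
    """DIC = Dbar + pD, with deviance D = -2 log L, Dbar the posterior mean
    deviance, and pD = Dbar - D(posterior mean of the parameters)."""
    d_bar = np.mean(-2.0 * loglik_draws)   # mean deviance over posterior draws
    d_hat = -2.0 * loglik_at_mean          # deviance at the posterior mean
    p_d = d_bar - d_hat                    # effective number of parameters
    return d_bar + p_d

# Toy example: per-draw log-likelihoods plus the log-likelihood evaluated
# at the posterior mean of the parameters.
rng = np.random.default_rng(3)
loglik_draws = rng.normal(-250.0, 2.0, size=4000)
print(f"DIC = {dic(loglik_draws, loglik_at_mean=-248.5):.1f}")
```

Lower DIC indicates better expected predictive performance after penalizing model complexity, which is how it ranks the competing learning-curve models above.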

