Post-processing of Markov chain Monte Carlo output in Bayesian latent variable models with application to multidimensional scaling

2017 ◽  
Vol 33 (3) ◽  
pp. 1457-1473 ◽  
Author(s):  
Kensuke Okada ◽  
Shin-ichi Mayekawa


2019 ◽  
Vol 6 (11) ◽  
pp. 190619 ◽  
Author(s):  
C. M. Pooley ◽  
S. C. Bishop ◽  
A. Doeschl-Wilson ◽  
G. Marion

Markov chain Monte Carlo (MCMC) is widely used for Bayesian inference in models of complex systems. Performance, however, is often unsatisfactory in models with many latent variables due to so-called poor mixing, necessitating the development of application-specific implementations. This paper introduces ‘posterior-based proposals’ (PBPs), a new type of MCMC update applicable to a huge class of statistical models (whose conditional dependence structures are represented by directed acyclic graphs). PBPs generate large joint updates in parameter and latent variable space, while retaining good acceptance rates (typically 33%). Evaluation against other approaches (from standard Gibbs/random walk updates to state-of-the-art Hamiltonian and particle MCMC methods) was carried out for widely varying model types: an individual-based model for disease diagnostic test data, a financial stochastic volatility model, a mixed model used in statistical genetics and a population model used in ecology. While different methods worked better or worse in different scenarios, PBPs were found to be either near to the fastest or significantly faster than the next best approach (by up to a factor of 10). PBPs, therefore, represent an additional general purpose technique that can be usefully applied in a wide variety of contexts.
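For context, the standard random-walk Metropolis update that PBPs are benchmarked against can be sketched in a few lines. This is a minimal illustration on an assumed toy one-dimensional Gaussian target, not the PBP algorithm itself or any model from the paper:

```python
import math
import random

def rw_metropolis(logpost, x0, step, n_iter, seed=1):
    """Random-walk Metropolis: one of the baseline updates PBPs are compared with.

    Proposes x' = x + Normal(0, step), accepts with probability
    min(1, exp(logpost(x') - logpost(x))), and records the acceptance rate.
    """
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    samples, accepted = [], 0
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        lp_prop = logpost(prop)
        # min(0, ...) guards math.exp against overflow for large log-ratios
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
            accepted += 1
        samples.append(x)
    return samples, accepted / n_iter

# Toy target: standard normal log-density (illustrative assumption only).
samples, acc_rate = rw_metropolis(lambda x: -0.5 * x * x, 0.0, 2.4, 20000)
```

With a well-tuned step size, acceptance rates sit in the same rough range as the ~33% the abstract quotes for PBPs; the difference is that random-walk updates move one small step at a time, whereas PBPs make large joint moves in parameter and latent-variable space.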


2017 ◽  
Vol 21 (1) ◽  
pp. 34-50
Author(s):  
Muhammad Dwirifqi Kharisma Putra ◽  
Jahja Umar ◽  
Bahrul Hayat ◽  
Agung Priyo Utomo

THE IMPACT OF SAMPLE SIZE AND INTRACLASS CORRELATION COEFFICIENTS (ICC) ON THE BIAS OF PARAMETER ESTIMATION IN MULTILEVEL LATENT VARIABLE MODELING: A MONTE CARLO STUDY
Abstract
A Monte Carlo study was conducted to investigate the effect of sample size and intraclass correlation coefficients (ICC) on the bias of parameter estimates in multilevel latent variable modeling. The design factors were the ICC (0.05, 0.10, 0.15, 0.20, 0.25), the number of groups in the between-level model (NG: 30, 50, 100 and 150) and the cluster size (CS: 10, 20 and 50), with each condition estimated using five estimators: ML, MLF, MLR, WLSMV and BAYES. Crossing the factors gave 300 conditions (4 NG × 3 CS × 5 ICC × 5 estimators). For each condition, replications with convergence problems were excluded until at least 1,000 replications were generated and analyzed using Mplus 7.4; an absolute percent bias of less than 10% was considered acceptable. We find that the degree of bias depends on sample size and ICC, and that the WLSMV and BAYES estimators performed better than the ML-based estimators across the sample sizes and ICC conditions studied.
Keywords: multilevel latent variable modeling, intraclass correlation coefficients, Markov chain Monte Carlo method
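The ICC that drives the study's design can be made concrete with a small simulation: generate balanced two-level data with known variance components and recover the ICC with the one-way ANOVA estimator. This is a minimal sketch under assumed normal variance components, not the Mplus-based latent variable estimation used in the study:

```python
import random
import statistics

def simulate_multilevel(n_groups, cluster_size, icc, seed=0):
    """Generate y_ij = u_j + e_ij with the total variance fixed at 1, so the
    between-group share sigma2_b / (sigma2_b + sigma2_w) equals the target ICC."""
    rng = random.Random(seed)
    sigma2_b, sigma2_w = icc, 1.0 - icc
    groups = []
    for _ in range(n_groups):
        u = rng.gauss(0.0, sigma2_b ** 0.5)          # group-level effect
        groups.append([u + rng.gauss(0.0, sigma2_w ** 0.5)
                       for _ in range(cluster_size)])
    return groups

def anova_icc(groups):
    """One-way ANOVA estimator of the ICC for a balanced design:
    (MSB - MSW) / (MSB + (k - 1) * MSW), where k is the cluster size."""
    k = len(groups[0])
    means = [statistics.mean(g) for g in groups]
    grand = statistics.mean(means)
    msb = k * sum((m - grand) ** 2 for m in means) / (len(groups) - 1)
    msw = statistics.mean(statistics.variance(g) for g in groups)
    return (msb - msw) / (msb + (k - 1) * msw)

# Largest design cell in the study: 150 groups of 50, target ICC 0.25.
est = anova_icc(simulate_multilevel(150, 50, 0.25))
```

Running the same check at the study's smallest cell (30 groups of 10) shows visibly noisier estimates, which is the sample-size effect on bias and stability that the simulation study quantifies.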


Biometrika ◽  
2020 ◽  
Vol 107 (2) ◽  
pp. 381-395
Author(s):  
Matti Vihola ◽  
Jordan Franks

Summary Approximate Bayesian computation enables inference for complicated probabilistic models with intractable likelihoods using model simulations. The Markov chain Monte Carlo implementation of approximate Bayesian computation is often sensitive to the tolerance parameter: low tolerance leads to poor mixing and large tolerance entails excess bias. We propose an approach that involves using a relatively large tolerance for the Markov chain Monte Carlo sampler to ensure sufficient mixing and post-processing the output, leading to estimators for a range of finer tolerances. We introduce an approximate confidence interval for the related post-corrected estimators and propose an adaptive approximate Bayesian computation Markov chain Monte Carlo algorithm, which finds a balanced tolerance level automatically based on acceptance rate optimization. Our experiments show that post-processing-based estimators can perform better than direct Markov chain Monte Carlo targeting a fine tolerance, that our confidence intervals are reliable, and that our adaptive algorithm leads to reliable inference with little user specification.
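The core idea, running the ABC-MCMC chain at a deliberately large tolerance and then forming estimators for finer tolerances from the stored output, can be sketched on a toy Gaussian model. All model choices below (the Gaussian likelihood, the mean summary, the proposal scale) are illustrative assumptions, and the sketch shows only the tolerance-filtering step, not the authors' full post-correction with confidence intervals and adaptive tuning:

```python
import random

def abc_mcmc_post(obs_mean, n_obs, tol, n_iter, seed=2):
    """ABC-MCMC at a large tolerance `tol`; the distance attached to the
    current state is stored so finer-tolerance estimators can be formed later.

    Toy model: data are N(theta, 1); the summary is the sample mean, so a
    simulated summary is theta + N(0, 1/n_obs). Flat prior, symmetric proposal,
    hence the acceptance rule reduces to the distance check."""
    rng = random.Random(seed)

    def sim_mean(theta):
        return theta + rng.gauss(0.0, 1.0) / n_obs ** 0.5

    # Find an initial state already inside the tolerance.
    theta, dist = 0.0, abs(sim_mean(0.0) - obs_mean)
    while dist >= tol:
        theta = rng.gauss(0.0, 3.0)
        dist = abs(sim_mean(theta) - obs_mean)

    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, 0.5)
        d = abs(sim_mean(prop) - obs_mean)
        if d < tol:                       # accept; keep the new distance
            theta, dist = prop, d
        chain.append((theta, dist))       # rejected moves repeat (theta, dist)
    return chain

def post_corrected_mean(chain, finer_tol):
    """Post-processing: reuse only the states whose stored distance lies within
    a finer tolerance, giving a lower-bias estimate without re-running MCMC."""
    kept = [t for t, d in chain if d < finer_tol]
    return sum(kept) / len(kept)

chain = abc_mcmc_post(obs_mean=1.0, n_obs=100, tol=0.5, n_iter=20000)
estimate = post_corrected_mean(chain, finer_tol=0.1)
```

The trade-off the abstract describes is visible here: the coarse chain mixes well because `tol` is generous, while the filtered estimator inherits the lower bias of the finer tolerance, provided enough stored distances fall below it.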


1994 ◽  
Author(s):  
Alan E. Gelfand ◽  
Sujit K. Sahu
