marginal likelihoods
Recently Published Documents


TOTAL DOCUMENTS

50
(FIVE YEARS 18)

H-INDEX

10
(FIVE YEARS 4)

PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e12438
Author(s):  
Sebastian Höhna ◽  
Michael J. Landis ◽  
John P. Huelsenbeck

In Bayesian phylogenetic inference, marginal likelihoods can be estimated using several different methods, including the path-sampling and stepping-stone-sampling algorithms. Both algorithms are computationally demanding because they require a series of power posterior Markov chain Monte Carlo (MCMC) simulations. Here we introduce a general parallelization strategy that distributes the power posterior MCMC simulations and the likelihood computations over the available CPUs. Although our primary focus in this study is on molecular substitution models, the strategy can easily be applied to any statistical model. Using two phylogenetic example datasets, we demonstrate that the runtime of marginal likelihood estimation can be reduced significantly even when only two CPUs are available (an average speedup of 1.96x), and that the speedup scales nearly linearly with the number of available CPUs: we record a speedup of 13.3x on cluster nodes with 16 CPUs, a substantial reduction in the runtime of marginal likelihood estimation. Hence, our parallelization strategy allows marginal likelihood estimations that previously took days, weeks or even months to complete in a feasible amount of time. The methods described here are implemented in our open-source software RevBayes, which is available from http://www.RevBayes.com.
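The stepping-stone scheme the abstract parallelizes can be sketched in miniature. The toy below is a hypothetical illustration, not the RevBayes implementation: a conjugate normal model where each power posterior is itself normal and can be sampled exactly, so the per-rung simulations are independent and trivially distributable; a thread pool stands in for the CPU-level distribution described above. All model choices (rung spacing, sample counts) are illustrative assumptions.

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

# Toy model: y_i ~ N(mu, 1) with prior mu ~ N(0, 1).  The power posterior
# p_beta(mu) ∝ L(mu)^beta * pi(mu) is normal, so each rung can be sampled
# exactly here (a real application would run MCMC per rung instead).

def log_lik(mu, y):
    n = len(y)
    return -0.5 * n * math.log(2 * math.pi) - 0.5 * sum((v - mu) ** 2 for v in y)

def rung_log_ratio(beta_k, beta_next, y, m, seed):
    """Estimate log(Z_{beta_next} / Z_{beta_k}) from beta_k power-posterior draws."""
    rng = random.Random(seed)
    n, s = len(y), sum(y)
    prec = beta_k * n + 1.0                  # precision of the normal power posterior
    mean, sd = beta_k * s / prec, prec ** -0.5
    terms = [(beta_next - beta_k) * log_lik(rng.gauss(mean, sd), y) for _ in range(m)]
    mx = max(terms)                          # log-mean-exp for numerical stability
    return mx + math.log(sum(math.exp(t - mx) for t in terms) / m)

def stepping_stone_log_ml(y, rungs=20, m=4000):
    betas = [(k / rungs) ** 3 for k in range(rungs + 1)]  # rungs packed near beta=0
    jobs = [(betas[k], betas[k + 1], y, m, k) for k in range(rungs)]
    # Each rung is independent, so the jobs distribute over available workers.
    with ThreadPoolExecutor() as pool:
        ratios = list(pool.map(lambda args: rung_log_ratio(*args), jobs))
    return sum(ratios)

def exact_log_ml(y):
    # Conjugate ground truth: y ~ N(0, I + 11^T), det(I + 11^T) = 1 + n.
    n, s, yy = len(y), sum(y), sum(v * v for v in y)
    return (-0.5 * n * math.log(2 * math.pi) - 0.5 * math.log(1 + n)
            - 0.5 * (yy - s * s / (1 + n)))

y = [0.5, -0.2, 0.9, 0.1]
est, exact = stepping_stone_log_ml(y), exact_log_ml(y)
```

Because each rung depends only on its own β, the `pool.map` call is the entire parallelization story; with process-based workers and real MCMC per rung, this is the structure that yields the near-linear scaling reported above.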


2021 ◽  
Vol 18 ◽  
pp. 119-125
Author(s):  
Karl Stessy Bisselou ◽  
Gleb Haynatzki

Time-to-event outcomes coupled with longitudinal trajectories are often of interest in biomedicine, and one popular approach to analysing such data is the Joint Model (JM). JMs often have intractable marginal likelihoods, and one way to tackle this issue is the hierarchical likelihood (HL) estimation approach of Lee and Nelder [12]. The HL approximation sometimes yields biased estimates, and we propose a bias-correction approach (C-HL) that has been used for other models (e.g., frailty models). We have applied the C-HL, for the first time, in the context of joint modelling of time-to-event and repeated-measures data. Our C-HL method improves efficiency, at the cost of more expensive computation than the existing HL approach. Additionally, we illustrate our method on a new MIMIC-IV CAP dataset.
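The h-likelihood machinery for full joint models is involved, but its core move — replacing an intractable integral over random effects with a maximization plus a curvature correction — can be shown on the simplest possible case. The sketch below is a hypothetical illustration (not the authors' C-HL method): a one-cluster Poisson model with a normal random intercept, where the Laplace-type approximation built on the h-likelihood is checked against brute-force quadrature.

```python
import math

# Toy: y_j ~ Poisson(exp(eta + u)), random intercept u ~ N(0, sigma^2).
# The marginal likelihood integrates u out; the h-likelihood approach
# instead maximizes h(u) = log f(y|u) + log f(u) over u and applies a
# curvature (Laplace) correction at the mode.
y, eta, sigma = [3, 4, 2, 5], 1.0, 0.5   # hypothetical data and parameters
n, s = len(y), sum(y)
const = -sum(math.lgamma(v + 1) for v in y) - 0.5 * math.log(2 * math.pi * sigma ** 2)

def h(u):
    """The h-likelihood: joint log density log f(y|u) + log f(u)."""
    return s * (eta + u) - n * math.exp(eta + u) - u ** 2 / (2 * sigma ** 2) + const

# Newton's method for the mode: h'(u) = s - n*exp(eta+u) - u/sigma^2.
u = 0.0
for _ in range(50):
    lam = math.exp(eta + u)
    u -= (s - n * lam - u / sigma ** 2) / (-n * lam - 1.0 / sigma ** 2)

# Laplace approximation: h(u_hat) + 0.5 * log(2*pi / (-h''(u_hat))).
lap = h(u) + 0.5 * math.log(2 * math.pi / (n * math.exp(eta + u) + 1.0 / sigma ** 2))

# Brute-force quadrature over u as the ground truth.
du = 0.001
grid = [h(-4 + i * du) for i in range(int(8 / du) + 1)]
mx = max(grid)
quad = mx + math.log(sum(math.exp(v - mx) for v in grid) * du)
```

The gap between `lap` and `quad` is the kind of approximation error that bias-correction schemes such as C-HL aim to reduce in harder, higher-dimensional joint models.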


Author(s):  
Shuangshuang Chen ◽  
Sihao Ding ◽  
Yiannis Karayiannidis ◽  
Mårten Björkman

Learning generative models and inferring latent trajectories for time series has proven challenging because flexible generative models have intractable marginal likelihoods; this can be addressed with surrogate objectives for optimization. We propose Monte Carlo filtering objectives (MCFOs), a family of variational objectives for jointly learning parametric generative models and amortized adaptive importance proposals for time series. MCFOs extend the choice of likelihood estimators beyond the Sequential Monte Carlo used in state-of-the-art objectives, possess important properties that reveal the factors governing the tightness of the objectives, and allow for gradient estimates with lower bias and variance. We demonstrate that the proposed MCFOs and gradient estimators lead to efficient and stable model learning, that the learned generative models explain the data well, and that the importance proposals are more sample-efficient on various kinds of time series data.
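The Sequential Monte Carlo likelihood estimators that MCFOs generalize can be grounded in a small example. The sketch below is generic, not the paper's method: a bootstrap particle filter, which yields an unbiased estimator of the marginal likelihood of a state-space model, checked against the exact value from a Kalman filter on a linear-Gaussian model (all parameter values are hypothetical).

```python
import math
import random

def log_npdf(x, mean, var):
    return -0.5 * math.log(2 * math.pi * var) - 0.5 * (x - mean) ** 2 / var

# Linear-Gaussian state-space model:
#   x_t = a*x_{t-1} + q*eps_t,   y_t = x_t + r*nu_t,   x_0 ~ N(0, 1).
a, q, r = 0.9, 0.5, 0.5
rng = random.Random(0)
x, ys = rng.gauss(0, 1), []
for _ in range(50):                               # simulate observations
    x = a * x + q * rng.gauss(0, 1)
    ys.append(x + r * rng.gauss(0, 1))

# Exact log marginal likelihood via the Kalman filter.
m, P, kalman_ll = 0.0, 1.0, 0.0
for y in ys:
    m, P = a * m, a * a * P + q * q               # predict
    S = P + r * r                                 # innovation variance
    kalman_ll += log_npdf(y, m, S)
    K = P / S                                     # update
    m, P = m + K * (y - m), (1 - K) * P

# Bootstrap particle filter: a Monte Carlo estimator of the same quantity.
N = 2000
parts = [rng.gauss(0, 1) for _ in range(N)]
pf_ll = 0.0
for y in ys:
    parts = [a * p + q * rng.gauss(0, 1) for p in parts]   # propagate
    logw = [log_npdf(y, p, r * r) for p in parts]          # weight
    mx = max(logw)
    w = [math.exp(lw - mx) for lw in logw]
    pf_ll += mx + math.log(sum(w) / N)                     # log mean weight
    parts = rng.choices(parts, weights=w, k=N)             # resample
```

Objectives in the MCFO family are built around estimators like `pf_ll`: their tightness as variational bounds depends on the variance of exactly this kind of likelihood estimate.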


2020 ◽  
Vol 26 (3) ◽  
pp. 205-221
Author(s):  
Johannes Reichl

This article develops a new estimator of the marginal likelihood that requires only a sample of the posterior distribution as input from the analyst. This sample may come from any sampling scheme, such as Gibbs sampling or Metropolis–Hastings sampling. The presented approach can be implemented generically in almost any application of Bayesian modeling and significantly decreases the computational burden associated with marginal likelihood estimation compared to existing techniques. The functionality of the method is demonstrated in the context of probit and logit regressions, on two mixture-of-normals models, and on a high-dimensional random-intercept probit. Simulation results show that the simple approach presented here achieves excellent stability in low-dimensional models, and clearly outperforms existing methods as the number of coefficients in the model increases.
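The abstract does not spell out the estimator itself, but the general idea — recovering the marginal likelihood from posterior draws alone — can be illustrated with the classic Gelfand–Dey identity, E_post[g(θ) / (p(y|θ)p(θ))] = 1/Z, which holds for any density g supported inside the posterior. A hypothetical sketch on a Beta-Bernoulli model, where Z is known in closed form:

```python
import math
import random

# Beta-Bernoulli toy: theta ~ Uniform(0, 1), s successes in n trials.
# Posterior is Beta(s+1, n-s+1); the marginal likelihood Z is known exactly.
n, s, M = 20, 13, 20000
rng = random.Random(0)
draws = [rng.betavariate(s + 1, n - s + 1) for _ in range(M)]  # posterior sample

# Fit the auxiliary density g by moment-matching a Beta to the draws,
# so g tracks the posterior and the estimator's variance stays small.
mean = sum(draws) / M
var = sum((d - mean) ** 2 for d in draws) / (M - 1)
common = mean * (1 - mean) / var - 1
ga, gb = mean * common, (1 - mean) * common

def log_beta_pdf(t, alpha, beta):
    return ((alpha - 1) * math.log(t) + (beta - 1) * math.log(1 - t)
            + math.lgamma(alpha + beta) - math.lgamma(alpha) - math.lgamma(beta))

# Gelfand-Dey: averaging g(theta) / (p(y|theta) * p(theta)) over posterior
# draws estimates 1/Z.  Work in logs for stability; the prior density is 1.
terms = [log_beta_pdf(d, ga, gb) - (s * math.log(d) + (n - s) * math.log(1 - d))
         for d in draws]
mx = max(terms)
log_ml = -(mx + math.log(sum(math.exp(t - mx) for t in terms) / M))

# Exact answer: Z = B(s+1, n-s+1) for a uniform prior.
exact = math.lgamma(s + 1) + math.lgamma(n - s + 1) - math.lgamma(n + 2)
```

Note that only the posterior sample, the likelihood, and the prior enter the estimator — the property the abstract highlights — though this classic variant is known to degrade when g matches the posterior poorly, which is the regime newer estimators target.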


2020 ◽  
Vol 493 (3) ◽  
pp. 3132-3158 ◽  
Author(s):  
Joshua S Speagle

We present dynesty, a public, open-source Python package for estimating Bayesian posteriors and evidences (marginal likelihoods) using the dynamic nested sampling methods developed by Higson et al. By adaptively allocating samples based on posterior structure, dynamic nested sampling has the benefits of Markov chain Monte Carlo (MCMC) algorithms that focus exclusively on posterior estimation while retaining nested sampling's ability to estimate evidences and to sample from complex, multimodal distributions. We provide an overview of nested sampling, its extension to dynamic nested sampling, the algorithmic challenges involved, and the approaches taken to solve them in this and previous work. We then examine dynesty's performance on a variety of toy problems along with several astronomical applications. We find that on particular problems dynesty can provide substantial improvements in sampling efficiency compared to popular MCMC approaches in the astronomical literature. More detailed statistical results related to nested sampling are included in the appendix.
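dynesty itself implements dynamic nested sampling with sophisticated bounding and sampling schemes; the toy below is only a static nested-sampling loop in plain Python, using prior rejection sampling for new live points (viable only in a one-dimensional example like this), estimating the evidence of a unit Gaussian likelihood under a Uniform(-5, 5) prior, where the answer is known analytically.

```python
import math
import random

# Toy evidence problem: likelihood L(t) = N(t; 0, 1), prior Uniform(-5, 5).
# Exact evidence: Z = (1/10) * integral of the N(0,1) density over [-5, 5].
def log_l(t):
    return -0.5 * math.log(2 * math.pi) - 0.5 * t * t

rng = random.Random(1)
n_live = 200
live = [rng.uniform(-5, 5) for _ in range(n_live)]
logls = [log_l(t) for t in live]

Z, x_prev = 0.0, 1.0
for i in range(1, 10 ** 6):
    j = logls.index(min(logls))              # worst live point
    l_min = logls[j]
    x = math.exp(-i / n_live)                # expected prior-volume shrinkage
    Z += math.exp(l_min) * (x_prev - x)      # dead point's evidence contribution
    x_prev = x
    while True:                              # replace via prior rejection sampling
        t = rng.uniform(-5, 5)
        if log_l(t) > l_min:
            break
    live[j], logls[j] = t, log_l(t)
    if x * math.exp(max(logls)) < 1e-3 * Z:  # remaining volume is negligible
        break
Z += x_prev * sum(math.exp(l) for l in logls) / n_live  # final live points

z_exact = math.erf(5 / math.sqrt(2)) / 10
```

The dynamic variant that dynesty implements adds live points adaptively where the posterior mass lies instead of keeping `n_live` fixed, which is what recovers MCMC-like posterior efficiency while preserving the evidence estimate.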


2020 ◽  
Vol 49 (1) ◽  
pp. 244-263
Author(s):  
Yu-Bo Wang ◽  
Ming-Hui Chen ◽  
Wei Shi ◽  
Paul Lewis ◽  
Lynn Kuo
