Metropolis-Hastings Algorithm with Delayed Acceptance and Rejection

2019 · Vol 2 (2) · pp. 7
Author(s): Yulin Hu, Yayong Tang

Metropolis-Hastings algorithms are slowed down by the computation of complex target distributions. To address this problem, one can use the delayed acceptance Metropolis-Hastings algorithm (MHDA) of Christen and Fox (2005). However, the acceptance rate of a proposed value is then always lower than in standard Metropolis-Hastings. This can be remedied with the Metropolis-Hastings algorithm with delayed rejection (MHDR) proposed by Tierney and Mira (1999). In this paper, we combine the ideas of MHDA and MHDR to propose a new MH algorithm, named the Metropolis-Hastings algorithm with delayed acceptance and rejection (MHDAR). The new algorithm reduces the computational cost by splitting the prior or likelihood functions and increases the acceptance probability through delayed rejection at the second stage. We illustrate these accelerating features with a realistic example.
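
The paper's MHDAR scheme itself is not reproduced here, but the delayed-acceptance building block is easy to sketch: a cheap approximation of the target screens each candidate, and the expensive target is evaluated only for candidates that survive the first stage, with a second-stage correction that keeps the exact target invariant. The sketch below assumes a symmetric random-walk proposal and user-supplied `log_target` and `log_cheap` functions; the delayed-rejection second proposal that MHDAR adds on top is omitted.

```python
import numpy as np

def delayed_acceptance_mh(log_target, log_cheap, x0, n_iter=5000, step=0.5, rng=None):
    """Delayed-acceptance Metropolis-Hastings (Christen & Fox, 2005) with a
    symmetric random-walk proposal.  Stage 1 screens candidates with a cheap
    approximation; stage 2 corrects with the expensive target so that the
    chain still has the exact target as its stationary distribution."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    lc_x, lt_x = log_cheap(x), log_target(x)
    chain = [x.copy()]
    for _ in range(n_iter):
        y = x + step * rng.standard_normal(x.shape)
        lc_y = log_cheap(y)
        # Stage 1: cheap screen -- promote with prob min(1, pi_cheap(y)/pi_cheap(x)).
        if np.log(rng.uniform()) < lc_y - lc_x:
            lt_y = log_target(y)              # expensive evaluation, only if promoted
            # Stage 2: accept with prob min(1, [pi(y)/pi(x)] * [pi_cheap(x)/pi_cheap(y)]).
            if np.log(rng.uniform()) < (lt_y - lt_x) - (lc_y - lc_x):
                x, lc_x, lt_x = y, lc_y, lt_y
            # On a stage-2 rejection, MHDAR would attempt a delayed-rejection
            # second proposal; that step is omitted from this sketch.
        chain.append(x.copy())
    return np.array(chain)
```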

Entropy · 2020 · Vol 22 (2) · pp. 258
Author(s): Zhihang Xu, Qifeng Liao

Optimal experimental design (OED) is of great significance in efficient Bayesian inversion. A popular class of OED methods is based on maximizing the expected information gain (EIG), where expensive likelihood functions are typically involved. To reduce the computational cost, in this work a novel double-loop Bayesian Monte Carlo (DLBMC) method is developed to efficiently compute the EIG, and a Bayesian optimization (BO) strategy is proposed to obtain its maximizer using only a small number of samples. For Bayesian Monte Carlo posed on uniform and normal distributions, our analysis provides explicit expressions for the mean estimates and the bounds of their variances. The accuracy and the efficiency of our DLBMC and BO-based optimal design are validated and demonstrated with numerical experiments.
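
For orientation, the plain double-loop (nested) Monte Carlo estimator of the EIG that DLBMC improves upon can be sketched as follows; `sample_prior`, `simulate` and `log_likelihood` are assumed user-supplied model functions, and the Bayesian Monte Carlo weighting and variance bounds of the paper are not reproduced.

```python
import numpy as np

def nested_mc_eig(design, sample_prior, simulate, log_likelihood,
                  n_outer=200, n_inner=200, rng=None):
    """Plain nested Monte Carlo estimate of the expected information gain
    EIG(d) = E[ log p(y|theta,d) - log p(y|d) ].  The inner loop estimates
    the evidence log p(y|d) for each outer sample."""
    rng = np.random.default_rng() if rng is None else rng
    inner_thetas = sample_prior(n_inner, rng)        # reused across outer iterations
    eig = 0.0
    for theta in sample_prior(n_outer, rng):
        y = simulate(theta, design, rng)             # one synthetic observation under theta
        ll = log_likelihood(y, theta, design)
        inner_ll = np.array([log_likelihood(y, t, design) for t in inner_thetas])
        log_evidence = np.logaddexp.reduce(inner_ll) - np.log(n_inner)
        eig += (ll - log_evidence) / n_outer
    return eig
```

Maximizing such an estimate over `design` requires many EIG evaluations, which is the step the paper's Bayesian optimization strategy is meant to make affordable.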


2019
Author(s): Evgeny Tankhilevich, Jonathan Ish-Horowicz, Tara Hameed, Elisabeth Roesch, Istvan Kleijn, ...

Abstract: Approximate Bayesian computation (ABC) is an important framework within which to infer the structure and parameters of a systems biology model. It is especially suitable for biological systems with stochastic and nonlinear dynamics, for which the likelihood functions are intractable. However, the associated computational cost often limits ABC to models that are relatively quick to simulate in practice. We here present a Julia package, GpABC, that implements parameter inference and model selection for deterministic or stochastic models using (i) standard rejection ABC or ABC-SMC, or (ii) ABC with Gaussian process emulation. The latter significantly reduces the computational cost. URL: https://github.com/tanhevg/GpABC.jl
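
GpABC itself is a Julia package; for readers unfamiliar with the method, a plain rejection-ABC loop (the first of the two options above) looks roughly like this in Python, with `sample_prior`, `simulate` and `summarize` standing in for user-supplied model functions:

```python
import numpy as np

def rejection_abc(observed_summary, sample_prior, simulate, summarize,
                  n_draws=10000, epsilon=0.1, rng=None):
    """Plain rejection ABC: keep prior draws whose simulated summary
    statistics fall within epsilon of the observed summaries.  Generic
    sketch, not the GpABC.jl interface."""
    rng = np.random.default_rng() if rng is None else rng
    accepted = []
    for _ in range(n_draws):
        theta = sample_prior(rng)                    # parameter draw from the prior
        s = summarize(simulate(theta, rng))          # forward-simulate and summarize
        if np.linalg.norm(s - observed_summary) < epsilon:
            accepted.append(theta)                   # approximate posterior sample
    return np.array(accepted)
```

The cost is dominated by the `simulate` calls, which is exactly what the package's Gaussian process emulation option is designed to reduce.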


Acta Numerica · 2018 · Vol 27 · pp. 113-206
Author(s): Nawaf Bou-Rabee, J. M. Sanz-Serna

This paper surveys in detail the relations between numerical integration and the Hamiltonian (or hybrid) Monte Carlo method (HMC). Since the computational cost of HMC mainly lies in the numerical integrations, these should be performed as efficiently as possible. However, HMC requires methods that have the geometric properties of being volume-preserving and reversible, and this limits the number of integrators that may be used. On the other hand, these geometric properties have important quantitative implications for the integration error, which in turn have an impact on the acceptance rate of the proposal. While at present the velocity Verlet algorithm is the method of choice for good reasons, we argue that Verlet can be improved upon. We also discuss in detail the behaviour of HMC as the dimensionality of the target distribution increases.
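
As a reference point for the discussion of integrators, a single HMC transition with the velocity Verlet (leapfrog) scheme and a unit mass matrix can be sketched as follows; `log_target` and `grad_log_target` are assumed user-supplied.

```python
import numpy as np

def hmc_step(x0, log_target, grad_log_target, step_size=0.1, n_leapfrog=20, rng=None):
    """One Hamiltonian Monte Carlo transition using the velocity Verlet
    (leapfrog) integrator with a unit mass matrix.  The integrator is
    volume-preserving and reversible, so a Metropolis test on the energy
    error makes the transition exact."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    p = rng.standard_normal(x.shape)                          # fresh Gaussian momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_target(x_new)         # half step in momentum
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new                            # full step in position
        p_new += step_size * grad_log_target(x_new)           # full step in momentum
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_target(x_new)         # final half step
    h_old = -log_target(x) + 0.5 * p @ p                      # Hamiltonian before
    h_new = -log_target(x_new) + 0.5 * p_new @ p_new          # Hamiltonian after
    if np.log(rng.uniform()) < h_old - h_new:                 # Metropolis accept/reject
        return x_new
    return x
```

The acceptance probability depends only on the energy error accumulated along the Verlet trajectory, which is the quantity whose behaviour the survey analyses as the step size and the dimension of the target grow.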


Author(s): Edward P. Herbst, Frank Schorfheide

This chapter argues that in order to conduct Bayesian inference, the approximate likelihood function has to be embedded into a posterior sampler. It begins by combining particle filtering methods with MCMC methods, replacing the actual likelihood functions that appear in the formula for the acceptance probability in Algorithm 5 with particle filter approximations. The chapter refers to the resulting algorithm as the PFMH algorithm. It is a special case of a larger class of algorithms called particle Markov chain Monte Carlo (PMCMC). The theoretical properties of PMCMC methods were established in Andrieu, Doucet, and Holenstein (2010). Applications of PFMH algorithms in other areas of econometrics are discussed in Flury and Shephard (2011).
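
Algorithm 5 of the book is not reproduced here; the sketch below only conveys the generic idea, with a bootstrap particle filter supplying a likelihood estimate that replaces the intractable likelihood in the MH acceptance ratio. `init_particles`, `transition`, `obs_loglik` and `log_prior` are assumed user-supplied model functions.

```python
import numpy as np

def bootstrap_pf_loglik(y, theta, init_particles, transition, obs_loglik,
                        n_particles=500, rng=None):
    """Bootstrap particle filter estimate of log p(y_{1:T} | theta)."""
    rng = np.random.default_rng() if rng is None else rng
    particles = init_particles(theta, n_particles, rng)
    loglik = 0.0
    for y_t in y:
        particles = transition(particles, theta, rng)        # propagate particles
        logw = obs_loglik(y_t, particles, theta)              # incremental log-weights
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                        # log of the average weight
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        particles = particles[idx]                            # multinomial resampling
    return loglik

def pfmh(y, log_prior, theta0, n_iter=2000, step=0.1, rng=None, **pf_model):
    """Particle-filter Metropolis-Hastings: the particle filter estimate of
    the likelihood is plugged into a random-walk MH acceptance probability."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    ll = bootstrap_pf_loglik(y, theta, rng=rng, **pf_model)
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        ll_prop = bootstrap_pf_loglik(y, prop, rng=rng, **pf_model)
        if np.log(rng.uniform()) < (ll_prop + log_prior(prop)) - (ll + log_prior(theta)):
            theta, ll = prop, ll_prop
        draws.append(theta.copy())
    return np.array(draws)
```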


1999 · Vol 36 (4) · pp. 1210-1217
Author(s): G. O. Roberts

This paper considers positive recurrent Markov chains where the probability of remaining in the current state is arbitrarily close to 1. Specifically, conditions are given which ensure the non-existence of central limit theorems for ergodic averages of functionals of the chain. The results are motivated by applications for Metropolis–Hastings algorithms which are constructed in terms of a rejection probability (where a rejection involves remaining at the current state). Two examples for commonly used algorithms are given, for the independence sampler and the Metropolis-adjusted Langevin algorithm. The examples are rather specialized, although, in both cases, the problems which arise are typical of problems commonly occurring for the particular algorithm being used.
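
To illustrate the setting, consider the independence sampler: candidates are drawn from a fixed proposal density q, and a move from x to y is accepted with probability min{1, w(y)/w(x)}, where w = pi/q. When the current state has a very large importance weight, typically because q has lighter tails than pi, the rejection probability is arbitrarily close to 1 and the chain sticks. The sketch below, with a Cauchy target and a normal proposal, is illustrative only and is not one of the paper's two examples.

```python
import numpy as np
from scipy import stats

def independence_sampler(log_target, proposal, n_iter=10000, rng=None):
    """Independence sampler: candidates come from a fixed proposal,
    independent of the current state.  Returns the chain and the
    empirical rejection rate.  Illustrative sketch."""
    rng = np.random.default_rng() if rng is None else rng
    x = proposal.rvs(random_state=rng)
    lw = log_target(x) - proposal.logpdf(x)          # importance log-weight of current state
    chain, rejections = [x], 0
    for _ in range(n_iter):
        y = proposal.rvs(random_state=rng)
        lw_y = log_target(y) - proposal.logpdf(y)
        if np.log(rng.uniform()) < lw_y - lw:        # accept with prob min(1, w(y)/w(x))
            x, lw = y, lw_y
        else:
            rejections += 1
        chain.append(x)
    return np.array(chain), rejections / n_iter

# A heavy-tailed Cauchy target with a light-tailed normal proposal: once the
# chain reaches a tail state with a huge importance weight, almost every
# candidate is rejected for a long stretch.
chain, rejection_rate = independence_sampler(stats.cauchy.logpdf, stats.norm())
```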


2020
Author(s): Alexander Fengler, Lakshmi N. Govindarajan, Tony Chen, Michael J. Frank

Abstract: In cognitive neuroscience, computational modeling can formally adjudicate between theories and affords quantitative fits to behavioral/brain data. Pragmatically, however, the space of plausible generative models considered is dramatically limited by the set of models with known likelihood functions. For many models, the lack of a closed-form likelihood typically impedes Bayesian inference methods. As a result, standard models are evaluated for convenience, even when other models might be superior. Likelihood-free methods exist but are limited by their computational cost or their restriction to particular inference scenarios. Here, we propose neural networks that learn approximate likelihoods for arbitrary generative models, allowing fast posterior sampling with only a one-off cost for model simulations that is amortized for future inference. We show that these methods can accurately recover posterior parameter distributions for a variety of neurocognitive process models. We provide code allowing users to deploy these methods for arbitrary hierarchical model instantiations without further training.
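
The authors' network architectures and training pipeline are not reproduced here; the sketch below only illustrates the amortization idea with a small scikit-learn MLP that learns a map from (parameters, data point) to an approximate pointwise log-likelihood, which can then be summed over a dataset and reused inside any standard sampler. The function `simulate_kde_loglik`, which would produce empirical log-likelihood training targets from model simulations, is a hypothetical stand-in.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_likelihood_net(param_grid, data_grid, simulate_kde_loglik, rng):
    """Learn (theta, x) -> approximate log-likelihood from simulation-based
    training targets.  One-off training cost, amortized over later inference."""
    X, y = [], []
    for theta in param_grid:
        for x_obs in data_grid:
            X.append(np.concatenate([theta, [x_obs]]))
            y.append(simulate_kde_loglik(theta, x_obs, rng))   # hypothetical target generator
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    net.fit(np.array(X), np.array(y))
    return net

def approx_loglik(net, theta, data):
    """Sum the network's pointwise predictions to get an approximate
    log-likelihood of a whole dataset, usable inside MCMC."""
    X = np.column_stack([np.tile(theta, (len(data), 1)), np.asarray(data)])
    return float(net.predict(X).sum())
```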


2020 · Vol 36 (10) · pp. 3286-3287
Author(s): Evgeny Tankhilevich, Jonathan Ish-Horowicz, Tara Hameed, Elisabeth Roesch, Istvan Kleijn, ...

Abstract. Motivation: Approximate Bayesian computation (ABC) is an important framework within which to infer the structure and parameters of a systems biology model. It is especially suitable for biological systems with stochastic and nonlinear dynamics, for which the likelihood functions are intractable. However, the associated computational cost often limits ABC to models that are relatively quick to simulate in practice. Results: We here present a Julia package, GpABC, that implements parameter inference and model selection for deterministic or stochastic models using (i) standard rejection ABC or sequential Monte Carlo ABC or (ii) ABC with Gaussian process emulation. The latter significantly reduces the computational cost. Availability and implementation: https://github.com/tanhevg/GpABC.jl.
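
The package's Gaussian process emulation option is implemented in Julia; the Python sketch below only conveys the idea under stated assumptions: an emulator is fitted to the map from parameters to the summary-statistic distance on a small design of expensive simulations, and the cheap emulator then screens a much larger set of prior draws. `sample_prior` (returning a 1D parameter vector), `simulate` and `summarize` are assumed user-supplied, and GpABC's actual retraining and uncertainty handling are not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gp_emulated_abc(observed_summary, sample_prior, simulate, summarize,
                    n_design=200, n_draws=20000, epsilon=0.1, rng=None):
    """ABC with a Gaussian process emulator of the distance between simulated
    and observed summary statistics.  Generic sketch, not GpABC.jl."""
    rng = np.random.default_rng() if rng is None else rng
    design = np.array([sample_prior(rng) for _ in range(n_design)])
    dist = np.array([np.linalg.norm(summarize(simulate(t, rng)) - observed_summary)
                     for t in design])                      # expensive training simulations
    gp = GaussianProcessRegressor(normalize_y=True).fit(design, dist)
    candidates = np.array([sample_prior(rng) for _ in range(n_draws)])
    predicted = gp.predict(candidates)                      # cheap emulated distances
    return candidates[predicted < epsilon]                  # approximate posterior draws
```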


2020
Author(s): Abhisha Mano

Abstract: The segmentation of the anterior lamina cribrosa (LC) surface from an OCT image is an essential task for the analysis of glaucomatous damage. A Bayesian method is used to segment the LC surface, where prior knowledge about the shape and position of the LC layer is obtained from a non-local Markov random field and K-means segmentation. The Metropolis-Hastings (MH) algorithm provides the autocorrelation graph and the distribution of samples drawn from a probability distribution, and the acceptance probability is calculated using this technique. Finally, the LC layer is analysed as to whether it is normal or abnormal. This technique provides an accuracy of 96.7%.
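
The segmentation pipeline itself (MRF prior, K-means initialization) is not reproduced here; the sketch below only shows, in generic form, the two MH quantities the abstract refers to: the acceptance probability of a random-walk chain and the sample autocorrelation used to assess mixing.

```python
import numpy as np

def random_walk_mh(log_target, x0, n_iter=10000, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings; returns the chain and the empirical
    acceptance rate.  Generic illustration, not the segmentation pipeline."""
    rng = np.random.default_rng() if rng is None else rng
    x, accepted, chain = float(x0), 0, []
    for _ in range(n_iter):
        y = x + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_target(y) - log_target(x):   # acceptance probability
            x, accepted = y, accepted + 1
        chain.append(x)
    return np.array(chain), accepted / n_iter

def autocorrelation(chain, max_lag=50):
    """Sample autocorrelation of the chain up to max_lag."""
    c = chain - chain.mean()
    var = c @ c / len(c)
    return np.array([(c[:len(c) - k] @ c[k:]) / (len(c) * var)
                     for k in range(max_lag + 1)])
```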


Author(s): Diogo Machado, Rui Carvalho, Pedro Brandão

Diabetes is a chronic disease requiring strict management. MyDiabetes is a mobile application for type I diabetes management that, like other mHealth applications, faces the challenge of user adherence and motivation. Here the authors describe the application's redesign and the implementation of different gamification techniques to tackle these challenges. The transition to the current version of the application was made in two stages. The first addressed the redesign of the application and began implementing gamification techniques. The second stage improved some of the features and added others. After the second stage, a new survey was conducted to evaluate the implemented features and improvements. While objectives and incentives to increase the number of records were endorsed by 56.5% of users, health-directed badges and objectives increased the acceptance rate to 91.3%. The long-term effectiveness of the gamification approach will be evaluated in future work.


