independence sampler
Recently Published Documents

TOTAL DOCUMENTS: 10 (five years: 4)
H-INDEX: 3 (five years: 1)

Author(s):  
Chris Sherlock ◽  
Anthony Lee

Abstract: A delayed-acceptance version of a Metropolis–Hastings algorithm can be useful for Bayesian inference when it is computationally expensive to calculate the true posterior, but a computationally cheap approximation is available; the delayed-acceptance kernel targets the same posterior as its associated "parent" Metropolis–Hastings kernel. Although the asymptotic variance of the ergodic average of any functional of the delayed-acceptance chain cannot be less than that obtained using its parent, the average computational time per iteration can be much smaller, and so for a given computational budget the delayed-acceptance kernel can be more efficient. When the asymptotic variances of the ergodic averages of all $L^2$ functionals of the chain are finite, the kernel is said to be variance bounding. It has recently been noted that a delayed-acceptance kernel need not be variance bounding even when its parent is. We provide sufficient conditions for inheritance: for non-local algorithms, such as the independence sampler, the discrepancy between the log-density of the approximation and that of the truth should be bounded; for local algorithms, two alternative sets of conditions are provided. As a by-product of our initial, general result we also supply sufficient conditions on any pair of proposals such that, for any shared target distribution, if a Metropolis–Hastings kernel using one of the proposals is variance bounding then so is the Metropolis–Hastings kernel using the other proposal.
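The two-stage mechanism described in the abstract can be sketched in a few lines of Python. This is a minimal generic sketch, not the paper's implementation; the function and argument names are hypothetical.

```python
import math
import random

def delayed_acceptance_mh(log_post, log_approx, propose, log_q, x0, n_iters, seed=0):
    """Delayed-acceptance Metropolis-Hastings sketch (hypothetical names).

    Stage 1 screens each proposal with the cheap approximation log_approx;
    only survivors pay for an evaluation of the expensive true posterior
    log_post. The product of the two stage-wise acceptance probabilities
    leaves the true posterior invariant.

    log_q(to, frm) is log q(to | frm), the proposal log-density.
    """
    rng = random.Random(seed)
    x = x0
    chain = [x]
    for _ in range(n_iters):
        y = propose(rng, x)
        # Stage 1: cheap screen with the approximate posterior.
        log_a1 = (log_approx(y) + log_q(x, y)) - (log_approx(x) + log_q(y, x))
        if math.log(rng.random()) < min(0.0, log_a1):
            # Stage 2: correct using the (expensive) true posterior.
            log_a2 = (log_post(y) - log_post(x)) - (log_approx(y) - log_approx(x))
            if math.log(rng.random()) < min(0.0, log_a2):
                x = y
        chain.append(x)
    return chain
```

For instance, with a standard normal target, a slightly mis-scaled normal approximation, and a symmetric random-walk proposal (whose log_q terms cancel), the ergodic averages recover the true posterior moments despite stage 1 only ever seeing the approximation.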


2021 ◽  
Vol 31 (1) ◽  
Author(s):  
Kitty Yuen Yi Wan ◽  
Jim E. Griffin

Abstract: Bayesian variable selection is an important method for discovering variables which are most useful for explaining the variation in a response. The widespread use of this method has been restricted by the challenging computational problem of sampling from the corresponding posterior distribution. Recently, the use of adaptive Monte Carlo methods has been shown to lead to performance improvement over traditionally used algorithms in linear regression models. This paper looks at applying one of these algorithms (the adaptively scaled independence sampler) to logistic regression and accelerated failure time models. We investigate the use of this algorithm with data augmentation, Laplace approximation and the correlated pseudo-marginal method. The performance of the algorithms is compared on several genomic data sets.
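The adaptively scaled independence sampler itself operates on the variable-selection posterior, but the underlying adaptive-independence idea can be illustrated in one dimension: refit a Gaussian proposal to the chain's running moments while keeping its scale bounded below. This is a generic illustration under those assumptions, not the paper's algorithm.

```python
import math
import random

def adaptive_independence_sampler(log_target, n_iters, seed=0):
    """Generic adaptive Gaussian independence sampler (illustration only).

    The proposal N(m, s^2) starts deliberately over-dispersed and is
    refitted to running moments of the chain; s is kept bounded below
    so the proposal tails never become lighter than intended.
    """
    rng = random.Random(seed)
    x, m, s = 0.0, 0.0, 5.0
    s1 = s2 = 0.0  # running sum and sum of squares of the chain
    chain = []
    for t in range(1, n_iters + 1):
        y = rng.gauss(m, s)
        log_q = lambda z: -((z - m) ** 2) / (2 * s * s)  # shared normaliser cancels
        # Independence-sampler ratio: pi(y) q(x) / (pi(x) q(y)).
        log_a = (log_target(y) + log_q(x)) - (log_target(x) + log_q(y))
        if math.log(rng.random()) < min(0.0, log_a):
            x = y
        chain.append(x)
        s1 += x
        s2 += x * x
        if t > 100:  # adapt only after a short initial phase
            mean = s1 / t
            var = max(s2 / t - mean * mean, 1e-12)
            m, s = mean, max(0.5, 1.1 * math.sqrt(var))
    return chain
```

Bounding the scale below and slightly inflating it (the 1.1 factor) is a common safeguard for adaptive independence proposals: a proposal with lighter tails than the target destroys geometric ergodicity.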


2019 ◽  
Vol 14 (3) ◽  
pp. 777-803 ◽  
Author(s):  
Ioannis Ntzoufras ◽  
Claudia Tarantola ◽  
Monia Lupparelli

Author(s):  
Therese M. Donovan ◽  
Ruth M. Mickey

The “White House Problem” of Chapter 10 is revisited in this chapter. Markov Chain Monte Carlo (MCMC) is used to build the posterior distribution of the unknown parameter p, the probability that a famous person could gain access to the White House without invitation. The chapter highlights the Metropolis–Hastings algorithm in MCMC analysis, describing the process step by step. The posterior distribution generated in Chapter 10 using the beta-binomial conjugate is compared with the MCMC posterior distribution to show how successful the MCMC method can be. By the end of this chapter, the reader will have a firm understanding of the following concepts: Monte Carlo, Markov chain, Metropolis–Hastings algorithm, Metropolis–Hastings random walk, and Metropolis–Hastings independence sampler.
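As a concrete sketch of the chapter's final concept, here is a Metropolis–Hastings independence sampler for a binomial success probability p under a Uniform(0,1) prior, with a Uniform(0,1) proposal. The data counts below are hypothetical stand-ins, not the chapter's actual White House data.

```python
import math
import random

def indep_sampler_binomial(successes, trials, n_iters, seed=0):
    """MH independence sampler for a binomial probability p.

    Prior: Uniform(0,1); proposal: Uniform(0,1), drawn independently of
    the current state. Prior and proposal densities cancel, so the
    acceptance ratio reduces to the likelihood ratio. The conjugate
    (beta-binomial) answer is Beta(successes + 1, trials - successes + 1),
    which the MCMC output can be checked against.
    """
    rng = random.Random(seed)
    loglik = lambda p: successes * math.log(p) + (trials - successes) * math.log(1 - p)
    p = 0.5
    chain = []
    for _ in range(n_iters):
        q = rng.random()          # independence proposal, ignores current state
        if not 0.0 < q < 1.0:     # guard against log(0)
            chain.append(p)
            continue
        if math.log(rng.random()) < loglik(q) - loglik(p):
            p = q
        chain.append(p)
    return chain
```

With, say, 2 successes in 10 trials, the conjugate posterior is Beta(3, 9) with mean 0.25, and the MCMC average agrees closely, mirroring the chapter's comparison of the two approaches.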


Bernoulli ◽  
2018 ◽  
Vol 24 (3) ◽  
pp. 1636-1652 ◽  
Author(s):  
Clement Lee ◽  
Peter Neal

2008 ◽  
Vol 08 (03) ◽  
pp. 319-350 ◽  
Author(s):  
ALEXANDROS BESKOS ◽  
GARETH ROBERTS ◽  
ANDREW STUART ◽  
JOCHEN VOSS

We present and study a Langevin MCMC approach for sampling nonlinear diffusion bridges. The method is based on recent theory concerning stochastic partial differential equations (SPDEs) reversible with respect to the target bridge, derived by applying the Langevin idea on the bridge pathspace. In the process, a Random-Walk Metropolis algorithm and an Independence Sampler are also obtained. The novel algorithmic idea of the paper is that proposed moves for the MCMC algorithm are determined by discretising the SPDEs in the time direction using an implicit scheme, parametrised by θ ∈ [0,1]. We show that the resulting infinite-dimensional MCMC sampler is well-defined only if θ = 1/2, when the MCMC proposals have the correct quadratic variation. Previous Langevin-based MCMC methods used explicit schemes, corresponding to θ = 0. The significance of the choice θ = 1/2 is inherited by the finite-dimensional approximation of the algorithm used in practice. We present numerical results illustrating the phenomenon and the theory that explains it. Diffusion bridges (with additive noise) are representative of the family of laws defined as a change of measure from Gaussian distributions on arbitrary separable Hilbert spaces; the analysis in this paper can be readily extended to target laws from this family and an example from signal processing illustrates this fact.
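The θ = 1/2 property described above is closely related to what makes the preconditioned Crank–Nicolson (pCN) proposal dimension-robust. The following one-dimensional sketch is an illustration in that spirit, not the paper's pathspace algorithm.

```python
import math
import random

def pcn_mcmc(phi, beta, n_iters, seed=0):
    """Preconditioned Crank-Nicolson (theta = 1/2) sketch in one dimension.

    Target: density proportional to exp(-phi(x)) with respect to the
    N(0, 1) reference (Gaussian) measure. The proposal
        y = sqrt(1 - beta^2) * x + beta * xi,   xi ~ N(0, 1),
    preserves the reference measure exactly, so the acceptance ratio
    involves only phi -- the analogue of the correct-quadratic-variation
    property that the theta = 1/2 implicit scheme enjoys in infinite
    dimensions.
    """
    rng = random.Random(seed)
    x = 0.0
    chain = []
    for _ in range(n_iters):
        y = math.sqrt(1.0 - beta * beta) * x + beta * rng.gauss(0, 1)
        if math.log(rng.random()) < min(0.0, phi(x) - phi(y)):
            x = y
        chain.append(x)
    return chain
```

Taking phi(x) = x²/2 makes the target proportional to exp(-x²), i.e. N(0, 1/2), which gives a quick sanity check on the sampler's output moments.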


1999 ◽  
Vol 36 (4) ◽  
pp. 1210-1217 ◽  
Author(s):  
G. O. Roberts

This paper considers positive recurrent Markov chains where the probability of remaining in the current state is arbitrarily close to 1. Specifically, conditions are given which ensure the non-existence of central limit theorems for ergodic averages of functionals of the chain. The results are motivated by applications for Metropolis–Hastings algorithms which are constructed in terms of a rejection probability (where a rejection involves remaining at the current state). Two examples for commonly used algorithms are given, for the independence sampler and the Metropolis-adjusted Langevin algorithm. The examples are rather specialized, although, in both cases, the problems which arise are typical of problems commonly occurring for the particular algorithm being used.
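The independence-sampler failure mode is easy to see numerically: when the target has heavier tails than the proposal, the importance weight w = π/q is unbounded, and the probability of leaving a far-out state collapses towards zero. The standard Cauchy target and standard normal proposal below are this illustration's assumptions, not the paper's example.

```python
import math

def log_w(x):
    """log importance weight log(pi(x)/q(x)): standard Cauchy target pi,
    standard normal independence proposal q."""
    log_cauchy = -math.log(math.pi * (1.0 + x * x))
    log_normal = -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)
    return log_cauchy - log_normal

def acceptance_from(x, y=0.0):
    """MH acceptance probability min(1, w(y)/w(x)) for a move from x to a
    typical proposal y near the proposal's mode."""
    return min(1.0, math.exp(log_w(y) - log_w(x)))

# The rejection probability approaches 1 as the chain wanders into the
# target's tail -- exactly the regime in which the paper's non-existence
# results for central limit theorems bite.
for x in (2.0, 4.0, 6.0):
    print(x, acceptance_from(x))
```

From x = 6 the chance of accepting a typical proposal is already below one in a million, so the chain remains stuck for enormous stretches, inflating the variance of ergodic averages.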



