Adaptive proposal distribution for random walk Metropolis algorithm

1999 ◽  
Vol 14 (3) ◽  
pp. 375-395 ◽  
Author(s):  
Heikki Haario ◽  
Eero Saksman ◽  
Johanna Tamminen
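The title above refers to the adaptive-proposal idea of Haario, Saksman and Tamminen: the Gaussian proposal covariance of a random walk Metropolis sampler is periodically re-estimated from the chain's recent history. A minimal sketch under stated assumptions — `log_target`, the history length `h`, and the common 2.4/√d scaling are illustrative choices, not details taken from the paper:

```python
import numpy as np

def adaptive_proposal_rwm(log_target, x0, n_iter=5000, h=100, sd=None, rng=None):
    """Illustrative adaptive-proposal random walk Metropolis sketch:
    the Gaussian proposal covariance is re-estimated from the last
    `h` states of the chain (all names and defaults are assumptions)."""
    rng = np.random.default_rng() if rng is None else rng
    x0 = np.asarray(x0, dtype=float)
    d = x0.size
    sd = (2.4 / np.sqrt(d)) if sd is None else sd  # common scaling heuristic
    chain = [x0.copy()]
    cov = np.eye(d)  # initial proposal covariance before adaptation kicks in
    for t in range(n_iter):
        if t >= h:
            # adapt: empirical covariance of the recent history,
            # regularized slightly to keep it positive definite
            recent = np.array(chain[-h:])
            cov = np.atleast_2d(np.cov(recent.T)) + 1e-8 * np.eye(d)
        x = chain[-1]
        prop = rng.multivariate_normal(x, (sd ** 2) * cov)
        # symmetric proposal: acceptance ratio uses only the target density
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            chain.append(prop)
        else:
            chain.append(x.copy())
    return np.array(chain)
```

Note that naive adaptation of this kind changes the transition kernel at every step, so ergodicity is not automatic; analyzing when such schemes remain valid is precisely the concern of this literature.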
2018 ◽  
Author(s) are listed below; no title was recoverable for this entry.
Vol 28 (5) ◽  
pp. 2966-3001 ◽  
Author(s):  
Alexandros Beskos ◽  
Gareth Roberts ◽  
Alexandre Thiery ◽  
Natesh Pillai

2003 ◽  
Vol 40 (1) ◽  
pp. 123-146 ◽  
Author(s):  
G. Fort ◽  
E. Moulines ◽  
G. O. Roberts ◽  
J. S. Rosenthal

In this paper, we consider the random-scan symmetric random walk Metropolis algorithm (RSM) on ℝ^d. This algorithm performs a Metropolis step on just one coordinate at a time (as opposed to the full-dimensional symmetric random walk Metropolis algorithm, which proposes a transition on all coordinates at once). We present various sufficient conditions implying V-uniform ergodicity of the RSM when the target density decreases either subexponentially or exponentially in the tails.
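The random-scan variant described in the abstract can be sketched as follows; `log_target`, `step`, and all defaults are illustrative assumptions rather than the paper's notation:

```python
import numpy as np

def random_scan_rwm(log_target, x0, n_iter=5000, step=1.0, rng=None):
    """Illustrative random-scan symmetric RWM sketch: each iteration
    picks one coordinate uniformly at random and performs a symmetric
    Metropolis step on that coordinate alone."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    chain = np.empty((n_iter + 1, d))
    chain[0] = x
    for t in range(1, n_iter + 1):
        i = rng.integers(d)                # random scan: choose one coordinate
        prop = x.copy()
        prop[i] += step * rng.normal()     # symmetric increment on coordinate i
        # symmetric proposal, so the Hastings correction cancels
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        chain[t] = x
    return chain
```

Contrast this with the full-dimensional sampler, which would perturb all `d` coordinates in one proposal; the ergodicity conditions in the paper concern the one-coordinate-at-a-time scheme above.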


2019 ◽  
Vol 2019 ◽  
pp. 1-24 ◽  
Author(s):  
Mylène Bédard

We obtain weak convergence and optimal scaling results for the random walk Metropolis algorithm with a Gaussian proposal distribution. The sampler is applied to hierarchical target distributions, which form the building block of many Bayesian analyses. The globally asymptotically optimal proposal variance derived may be computed as a function of the specific target distribution considered. We also introduce the concept of locally optimal tunings, i.e., tunings that depend on the current position of the Markov chain. The theorems are proved by studying the generators of the first and second components of the algorithm and verifying their convergence to the generators of a modified RWM algorithm and a diffusion process, respectively. The rate at which the algorithm explores its state space is optimized by studying the speed measure of the limiting diffusion process. We illustrate the theory with two examples. Applications of these results to simulated and real data are also presented.
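The optimal-scaling setting studied here concerns full-dimensional Gaussian proposals whose variance shrinks like ℓ²/d as the dimension grows. A minimal sketch that also reports the empirical acceptance rate; ℓ = 2.38 is the classical asymptotically optimal value for i.i.d. product targets and is used here only as an illustrative default, not as the tuning derived in this paper:

```python
import numpy as np

def rwm_gaussian(log_target, x0, n_iter, ell=2.38, rng=None):
    """Illustrative full-dimensional RWM sketch with Gaussian proposals
    whose standard deviation follows the ell / sqrt(d) scaling from the
    optimal-scaling literature (defaults are assumptions)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    sigma = ell / np.sqrt(d)               # proposal std dev scales as 1/sqrt(d)
    accepts = 0
    chain = np.empty((n_iter + 1, d))
    chain[0] = x
    for t in range(1, n_iter + 1):
        prop = x + sigma * rng.normal(size=d)
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
            accepts += 1
        chain[t] = x
    return chain, accepts / n_iter
```

For i.i.d. product targets this scaling drives the average acceptance rate toward the well-known 0.234 value; a locally optimal tuning, as introduced in the abstract, would instead let the proposal variance depend on the chain's current position.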

