Estimating drift and minorization coefficients for Gibbs sampling algorithms

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
David A. Spade

Gibbs samplers are common Markov chain Monte Carlo (MCMC) algorithms that are used to sample from intractable probability distributions when sampling directly from the full conditional distributions is possible. These types of MCMC algorithms come up frequently in many applications, and because of their popularity it is important to have a sense of how long it takes for the Gibbs sampler to become close to its stationary distribution. To this end, it is common to rely on the values of drift and minorization coefficients to bound the mixing time of the Gibbs sampler. This manuscript provides a computational method for estimating these coefficients. Herein, we detail several advantages of the proposed methods, as well as the limitations of this approach. These limitations are primarily related to the “curse of dimensionality”, which for these methods is caused by the necessary increase in the number of initial states from which chains need to be run and by the need for an exponentially increasing number of grid points for estimation of minorization coefficients.
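
As a concrete illustration of the quantities involved, the following minimal sketch estimates a drift coefficient λ in E[V(X_{t+1}) | X_t = x] ≤ λ V(x) + b for a toy two-component Gibbs sampler targeting a bivariate normal. The target, the drift function V(x, y) = 1 + x² + y², the small-set level, and the brute-force grid of initial states are assumptions made here for illustration; this is not the paper's estimator.

```python
# Monte Carlo sketch (not the paper's exact procedure) of estimating a drift
# coefficient for a two-component Gibbs sampler on a bivariate normal target.
import numpy as np

rng = np.random.default_rng(0)
rho = 0.9

def V(x, y):
    """Assumed drift function."""
    return 1.0 + x**2 + y**2

def gibbs_step(x, y):
    """One full deterministic-scan Gibbs update for the bivariate normal."""
    x = rho * y + np.sqrt(1 - rho**2) * rng.standard_normal()
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal()
    return x, y

def expected_V(x0, y0, n_rep=5000):
    """Monte Carlo estimate of E[V(X_1, Y_1) | (x0, y0)]."""
    return np.mean([V(*gibbs_step(x0, y0)) for _ in range(n_rep)])

# Grid of initial states; C = {V <= small_set_level} plays the role of a small
# set, outside of which the geometric drift rate is estimated.
grid = [(x, y) for x in np.linspace(-4, 4, 9) for y in np.linspace(-4, 4, 9)]
small_set_level = 5.0

lam_hat, b_hat = 0.0, 0.0
for x0, y0 in grid:
    m = expected_V(x0, y0)
    if V(x0, y0) > small_set_level:
        lam_hat = max(lam_hat, m / V(x0, y0))   # drift rate outside C
    else:
        b_hat = max(b_hat, m)                   # bounded term on C

print(f"estimated drift coefficient lambda ~ {lam_hat:.3f}, b ~ {b_hat:.3f}")
```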

2019 ◽  
Vol 23 ◽  
pp. 271-309
Author(s):  
Joseph Muré

Models are often defined through conditional rather than joint distributions, but it can be difficult to check whether the conditional distributions are compatible, i.e. whether there exists a joint probability distribution which generates them. When they are compatible, a Gibbs sampler can be used to sample from this joint distribution. When they are not, the Gibbs sampling algorithm may still be applied, resulting in a “pseudo-Gibbs sampler”. We show its stationary probability distribution to be the optimal compromise between the conditional distributions, in the sense that it minimizes a mean squared misfit between them and its own conditional distributions. This allows us to perform Objective Bayesian analysis of correlation parameters in Kriging models by using univariate conditional Jeffreys-rule posterior distributions instead of the widely used multivariate Jeffreys-rule posterior. This strategy makes the full-Bayesian procedure tractable. Numerical examples show it has near-optimal frequentist performance in terms of prediction interval coverage.
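
The pseudo-Gibbs mechanism itself is easy to sketch: alternate draws from the two specified conditionals, compatible or not, and look at the long-run distribution of the scan. The conditionals below are deliberately incompatible Gaussians chosen for illustration; they are not the Kriging conditionals of the paper.

```python
# Minimal pseudo-Gibbs sketch: alternate draws from two Gaussian conditionals
# that are not compatible with any joint Gaussian, and inspect the stationary
# distribution of the resulting chain. The conditionals are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def sample_x_given_y(y):      # specified conditional: X | Y = y ~ N(0.9 y, 1)
    return 0.9 * y + rng.standard_normal()

def sample_y_given_x(x):      # specified conditional: Y | X = x ~ N(0.2 x, 1)
    return 0.2 * x + rng.standard_normal()

n_iter, burn_in = 200_000, 1_000
x, y = 0.0, 0.0
xs, ys = [], []
for t in range(n_iter):
    x = sample_x_given_y(y)
    y = sample_y_given_x(x)
    if t >= burn_in:
        xs.append(x)
        ys.append(y)

xs, ys = np.array(xs), np.array(ys)
# The pseudo-Gibbs chain still converges here (|0.9 * 0.2| < 1); its stationary
# law is the compromise between the two incompatible conditionals.
print("stationary means:", xs.mean(), ys.mean())
print("stationary sds  :", xs.std(), ys.std())
print("correlation     :", np.corrcoef(xs, ys)[0, 1])
```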


1998 ◽  
Vol 12 (3) ◽  
pp. 283-302 ◽  
Author(s):  
James Allen Fill

The elementary problem of exhaustively sampling a finite population without replacement is used as a nonreversible test case for comparing two recently proposed MCMC algorithms for perfect sampling, one based on backward coupling and the other on strong stationary duality. The backward coupling algorithm runs faster in this case, but the duality-based algorithm is interruptible: its output remains unbiased even if an impatient user aborts long runs. An interesting by-product of the analysis is a new and simple stochastic interpretation of a mixing-time result for the move-to-front rule.
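
For readers unfamiliar with backward coupling, the sketch below implements coupling from the past with a grand coupling for a small finite-state chain; the three-state transition matrix is an assumed toy example, not the without-replacement chain analyzed in the paper.

```python
# Coupling-from-the-past (backward coupling) sketch for a small finite chain.
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4]])             # assumed toy transition matrix
cum = np.cumsum(P, axis=1)

def random_map(u):
    """Grand coupling: one uniform u drives the move of every state."""
    return np.array([min(np.searchsorted(cum[s], u), P.shape[0] - 1)
                     for s in range(P.shape[0])])

def cftp():
    """Backward coupling; returns one exact draw from the stationary law."""
    us = []                                 # us[k] drives the step at time -(k+1)
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())         # fresh randomness only for earlier times
        states = np.arange(P.shape[0])      # launch every state at time -T
        for k in range(T - 1, -1, -1):      # steps at times -T, ..., -1 (randomness reused)
            states = random_map(us[k])[states]
        if np.all(states == states[0]):     # coalescence at time 0
            return int(states[0])
        T *= 2

samples = [cftp() for _ in range(20_000)]
print(np.bincount(samples, minlength=3) / len(samples))  # ~ stationary vector of P
```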


2018 ◽  
Vol 98 (1) ◽  
Author(s):  
Daniel Schmidtke ◽  
Lars Knipschild ◽  
Michele Campisi ◽  
Robin Steinigeweg ◽  
Jochen Gemmer

2018 ◽  
Vol 146 (12) ◽  
pp. 4079-4098 ◽  
Author(s):  
Thomas M. Hamill ◽  
Michael Scheuerer

Hamill et al. described a multimodel ensemble precipitation postprocessing algorithm that is used operationally by the U.S. National Weather Service (NWS). This article describes further changes that produce improved, reliable, and skillful probabilistic quantitative precipitation forecasts (PQPFs) for single- or multimodel prediction systems. For multimodel systems, final probabilities are produced through a linear combination of the PQPFs from the constituent models. The new methodology is applied to each prediction system. Prior to adjustment of the forecasts, parametric cumulative distribution functions (CDFs) of model and analyzed climatologies are generated using the previous 60 days’ forecasts and analyses and supplemental locations. The CDFs, which can be stored with minimal disk space, are then used for quantile mapping to correct state-dependent bias for each member. In this stage, the ensemble is also enlarged using a stencil of forecast values from the 5 × 5 surrounding grid points. Different weights and dressing distributions are assigned to the sorted, quantile-mapped members, with generally larger weights for outlying members and broader dressing distributions for members with heavier precipitation. Probability distributions are generated from the weighted sum of the dressing distributions. The NWS Global Ensemble Forecast System (GEFS), the Canadian Meteorological Centre (CMC) global ensemble, and the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble forecast data are postprocessed for April–June 2016. Single prediction system postprocessed forecasts are generally reliable and skillful. Multimodel PQPFs are roughly as skillful as those from the ECMWF system alone. Postprocessed guidance was generally more skillful than guidance using the Gamma distribution approach of Scheuerer and Hamill, with coefficients generated from data pooled across the United States.
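
The core bias correction is the quantile-mapping step: a raw member forecast is mapped to its quantile in the model's own climatology and then read off at the same quantile of the analyzed climatology. The sketch below illustrates this with empirical CDFs and made-up gamma-distributed climatologies; the operational method instead uses fitted parametric CDFs built from the previous 60 days and supplemental locations.

```python
# Schematic quantile mapping: corrected = F_analysis^{-1}(F_forecast(raw)).
import numpy as np

def quantile_map(raw_forecast, forecast_climo, analyzed_climo):
    """Correct state-dependent bias by matching climatological quantiles."""
    fc = np.sort(np.asarray(forecast_climo))
    an = np.sort(np.asarray(analyzed_climo))
    # empirical non-exceedance probability of the raw value in the model climatology
    q = np.clip(np.searchsorted(fc, raw_forecast, side="right") / len(fc), 0.0, 1.0)
    # same quantile read off the analyzed climatology
    return np.quantile(an, q)

# toy example: the model climatology is wetter than the analyses, so a 12 mm
# forecast is mapped to a smaller corrected amount
rng = np.random.default_rng(3)
forecast_climo = rng.gamma(shape=0.8, scale=8.0, size=600)   # assumed wet bias
analyzed_climo = rng.gamma(shape=0.8, scale=6.0, size=600)
print(quantile_map(12.0, forecast_climo, analyzed_climo))
```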


2003 ◽  
Vol 40 (3) ◽  
pp. 821-825 ◽  
Author(s):  
Damian Clancy ◽  
Philip K. Pollett

For Markov processes on the positive integers with the origin as an absorbing state, Ferrari, Kesten, Martínez and Picco studied the existence of quasi-stationary and limiting conditional distributions by characterizing quasi-stationary distributions as fixed points of a transformation Φ on the space of probability distributions on {1, 2, …}. In the case of a birth–death process, the components of Φ(ν) can be written down explicitly for any given distribution ν. Using this explicit representation, we will show that Φ preserves likelihood ratio ordering between distributions. A conjecture of Kryscio and Lefèvre concerning the quasi-stationary distribution of the SIS logistic epidemic follows as a corollary.
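
As a point of reference (and not the fixed-point transformation Φ studied in the paper), the quasi-stationary distribution of the SIS logistic epidemic can be computed numerically as the normalized left eigenvector of the generator restricted to the transient states. The sketch below does this for illustrative parameter values.

```python
# Quasi-stationary distribution of the SIS logistic epidemic via the leading
# left eigenvector of the generator restricted to {1, ..., N}. Parameter
# values are illustrative.
import numpy as np

N, beta, mu = 50, 2.0, 1.0
birth = lambda i: beta * i * (N - i) / N      # infection rate with i infectives
death = lambda i: mu * i                      # recovery rate with i infectives

# generator restricted to the transient states 1..N (state 0 is absorbing)
Q = np.zeros((N, N))
for k, i in enumerate(range(1, N + 1)):
    if i < N:
        Q[k, k + 1] = birth(i)
    if i > 1:
        Q[k, k - 1] = death(i)
    Q[k, k] = -(birth(i) + death(i))          # diagonal accounts for absorption

eigvals, eigvecs = np.linalg.eig(Q.T)         # left eigenvectors of Q
idx = np.argmax(eigvals.real)                 # eigenvalue closest to zero
qsd = np.abs(eigvecs[:, idx].real)
qsd /= qsd.sum()
print("quasi-stationary mode at", 1 + int(np.argmax(qsd)), "infectives")
```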


2012 ◽  
Vol 529 ◽  
pp. 585-589
Author(s):  
Wei Shao ◽  
Guo Qing Zhao ◽  
Yu Jie Gai

The Gibbs sampler is widely used in Bayesian analysis, but it is often difficult to sample from the full conditional distributions, and this greatly weakens the efficiency of the Gibbs sampler. In this paper, we propose to use a mixture normal distribution within the Gibbs sampler. The mixture normal distribution approximates the target distribution, and because it carries more information about the target, it substantially improves the efficiency of the Gibbs sampler. Furthermore, when combined with the mixture normal method, the Hit-and-Run algorithm also produces more efficient sampling results. Simulation results show that the Gibbs sampler with a mixture normal distribution outperforms other sampling algorithms. The Gibbs sampler with a mixture normal distribution can also be applied to exploring the surface of a single crystal.
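
One unbiased way to exploit such a normal-mixture approximation is to use it as an independence proposal inside a Metropolis-within-Gibbs update, so the accept/reject step corrects for the approximation error. The sketch below does this for an assumed bimodal full conditional and a hand-picked two-component mixture; it illustrates the idea rather than the exact scheme of the paper.

```python
# Independence Metropolis-within-Gibbs step driven by a normal-mixture
# approximation of an awkward full conditional (all choices are illustrative).
import numpy as np

rng = np.random.default_rng(4)

def log_conditional(x):
    """Unnormalized log full conditional (assumed bimodal, hard to sample)."""
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2 / 0.5)

# two-component normal mixture chosen by hand to approximate it
weights, means, sds = np.array([0.6, 0.4]), np.array([2.0, -2.0]), np.array([1.0, 0.8])

def mixture_sample():
    k = rng.choice(2, p=weights)
    return rng.normal(means[k], sds[k])

def mixture_logpdf(x):
    comps = -0.5 * ((x - means) / sds) ** 2 - np.log(sds * np.sqrt(2 * np.pi))
    return np.logaddexp.reduce(np.log(weights) + comps)

def update(x):
    """One independence Metropolis step; exact despite the approximation."""
    prop = mixture_sample()
    log_alpha = (log_conditional(prop) - log_conditional(x)
                 + mixture_logpdf(x) - mixture_logpdf(prop))
    return prop if np.log(rng.random()) < log_alpha else x

x, draws = 0.0, []
for _ in range(50_000):
    x = update(x)
    draws.append(x)
print("mean of draws:", np.mean(draws))
```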


2001 ◽  
Vol 9 (2) ◽  
pp. 243-256 ◽  
Author(s):  
Thomas Schell ◽  
Stefan Wegenkittl

Viewing the selection process in a genetic algorithm as a two-step procedure consisting of the assignment of selection probabilities and the sampling according to this distribution, we employ the χ² measure as a tool for the analysis of the stochastic properties of the sampling. We are thereby able to compare different selection schemes even in the case that their probability distributions coincide. Introducing a new sampling algorithm with adjustable accuracy and employing two-level test designs enables us to further reveal the intrinsic correlation structures of well-known sampling algorithms. Our methods apply well to integral methods like tournament selection and can be automated.
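
Concretely, the two-step view can be exercised by fixing a fitness vector, computing selection probabilities, running a sampling scheme, and measuring the χ² discrepancy between the realized counts and the assigned probabilities. The short sketch below compares roulette-wheel selection with stochastic universal sampling in this way; the fitness values are an assumed example.

```python
# chi-squared comparison of two sampling schemes under identical selection
# probabilities (assumed fitness vector, 1000 selections).
import numpy as np

rng = np.random.default_rng(5)
fitness = np.array([5.0, 3.0, 1.5, 0.5])
p = fitness / fitness.sum()                    # step 1: selection probabilities
n = 1_000                                      # offspring to select

def roulette(p, n):
    """Independent roulette-wheel spins."""
    return np.bincount(rng.choice(len(p), size=n, p=p), minlength=len(p))

def sus(p, n):
    """Stochastic universal sampling: n equally spaced pointers, one spin."""
    pointers = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(p), pointers), len(p) - 1)
    return np.bincount(idx, minlength=len(p))

def chi2(counts, p, n):
    expected = n * p
    return np.sum((counts - expected) ** 2 / expected)

print("roulette chi2:", chi2(roulette(p, n), p, n))   # fluctuates around len(p) - 1
print("SUS      chi2:", chi2(sus(p, n), p, n))        # much smaller sampling variance
```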


Optics ◽  
2021 ◽  
Vol 2 (4) ◽  
pp. 236-250
Author(s):  
Mahesh N. Jayakody ◽  
Asiri Nanayakkara ◽  
Eliahu Cohen

We theoretically analyze the case of noisy quantum walks (QWs) by introducing four qubit decoherence models into the coin degree of freedom of linear and cyclic QWs. These models include flipping channels (bit flip, phase flip, and bit-phase flip), the depolarizing channel, the phase damping channel, and the generalized amplitude damping channel. Explicit expressions for the probability distribution of QWs on a line and on a cyclic path are derived under localized and delocalized initial states. We show that QWs which begin from a delocalized state generate mixture probability distributions, which could give rise to useful algorithmic applications related to data encoding schemes. Specifically, we show how the combination of delocalized initial states and decoherence can be used for computing the binomial transform of a given set of numbers. However, the sensitivity of QWs to noisy environments may negatively affect various other applications based on QWs.
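
To make the setting concrete, the sketch below simulates a discrete-time Hadamard walk on a line with a bit-flip channel acting on the coin after every toss, using a density-matrix (Kraus operator) update. The step count, noise strength, and localized initial state are illustrative choices, not parameters from the paper.

```python
# Density-matrix simulation of a Hadamard walk on a line with a bit-flip
# channel of strength p on the coin after each coin toss.
import numpy as np

steps, p = 20, 0.05
n_pos = 2 * steps + 1                       # positions -steps .. steps
dim = 2 * n_pos                             # basis |position> (x) |coin>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
I_pos = np.eye(n_pos)

coin_op = np.kron(I_pos, H)
K0 = np.sqrt(1 - p) * np.eye(dim)           # Kraus operators of the bit-flip channel
K1 = np.sqrt(p) * np.kron(I_pos, X)

# conditional shift: coin |0> moves left, coin |1> moves right
S = np.zeros((dim, dim))
for j in range(n_pos):
    if j > 0:
        S[2 * (j - 1) + 0, 2 * j + 0] = 1.0
    if j < n_pos - 1:
        S[2 * (j + 1) + 1, 2 * j + 1] = 1.0

# localized initial state: central site, coin |0>
psi = np.zeros(dim)
psi[2 * steps + 0] = 1.0
rho = np.outer(psi, psi)

for _ in range(steps):
    rho = coin_op @ rho @ coin_op.conj().T                     # coin toss
    rho = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T      # bit-flip noise
    rho = S @ rho @ S.conj().T                                 # conditional shift

prob = np.real(np.diag(rho)).reshape(n_pos, 2).sum(axis=1)
positions = np.arange(-steps, steps + 1)
print("total probability:", prob.sum())     # ~1: boundary never reached in `steps` steps
print("peak positions   :", positions[np.argsort(prob)[-2:]])
```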


2015 ◽  
Vol 52 (3) ◽  
pp. 811-825
Author(s):  
Yves Atchadé ◽  
Yizao Wang

In this paper we study the mixing time of certain adaptive Markov chain Monte Carlo (MCMC) algorithms. Under some regularity conditions, we show that the convergence rate of importance resampling MCMC algorithms, measured in terms of the total variation distance, is O(n^{-1}). By means of an example, we establish that, in general, this algorithm does not converge at a faster rate. We also study the interacting tempering algorithm, a simplified version of the equi-energy sampler, and establish that its convergence rate is of order O(n^{-1/2}).
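
The importance-resampling ingredient shared by these algorithms can be sketched as follows: an auxiliary chain targets a flattened (tempered) version of the target, and states are drawn from its history with importance weights proportional to π/μ. The toy bimodal target, tempering power, and random-walk proposals below are all assumptions for illustration, not the algorithms as analyzed in the paper.

```python
# Schematic importance-resampling step driven by an auxiliary tempered chain.
import numpy as np

rng = np.random.default_rng(6)

def log_pi(x):                       # target: well-separated Gaussian mixture
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

beta = 0.3                           # tempering power; mu = pi**beta is flatter
log_mu = lambda x: beta * log_pi(x)

def rw_step(x, logp, scale):
    """One random-walk Metropolis step for an arbitrary log density."""
    prop = x + scale * rng.standard_normal()
    return prop if np.log(rng.random()) < logp(prop) - logp(x) else x

n = 50_000
aux_hist = np.empty(n)
y = 0.0
for i in range(n):                   # auxiliary chain explores mu easily
    y = rw_step(y, log_mu, scale=3.0)
    aux_hist[i] = y

# importance resampling from the auxiliary history with weights pi / mu
log_w = log_pi(aux_hist) - log_mu(aux_hist)
w = np.exp(log_w - log_w.max())
w /= w.sum()
main_draws = rng.choice(aux_hist, size=10_000, p=w)
print("fraction of mass near +4:", np.mean(main_draws > 0))   # should be ~0.5
```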

