Markov Chain Confidence Intervals and Biases

2021 ◽  
Vol 11 (1) ◽  
pp. 29
Author(s):  
Yu Hang Jiang ◽  
Tong Liu ◽  
Zhiya Lou ◽  
Jeffrey S. Rosenthal ◽  
Shanshan Shangguan ◽  
...  

We derive explicit asymptotic confidence intervals for any Markov chain Monte Carlo (MCMC) algorithm with finite asymptotic variance, started at any initial state, without requiring a Central Limit Theorem, reversibility, geometric ergodicity, or any bias bound. We also derive explicit non-asymptotic confidence intervals assuming bounds on the bias or first moment, or alternatively that the chain starts in stationarity. We relate those non-asymptotic bounds to properties of MCMC bias, and show that polynomial ergodicity implies certain bias bounds. We also apply our results to several numerical examples. It is our hope that these results will provide simple and useful tools for estimating errors of MCMC algorithms when CLTs are not available.
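
The intervals in the paper are designed for settings where a CLT may fail; for contrast, the sketch below shows the standard batch-means interval that practitioners usually fall back on, which does assume a CLT. It is a generic illustration, not the construction derived in the paper, and the function name and defaults are ours.

```python
import numpy as np

def batch_means_ci(samples, num_batches=30, z=1.96):
    """Standard batch-means confidence interval for an MCMC mean estimate.

    Splits the chain output into contiguous batches, treats the batch means
    as roughly independent, and forms a normal-theory interval; unlike the
    intervals in the paper above, this does assume a CLT holds.
    """
    samples = np.asarray(samples, dtype=float)
    batch_size = len(samples) // num_batches
    trimmed = samples[: batch_size * num_batches]
    means = trimmed.reshape(num_batches, batch_size).mean(axis=1)
    estimate = trimmed.mean()
    # Estimate of the asymptotic variance: batch size times the sample
    # variance of the batch means.
    sigma2_hat = batch_size * means.var(ddof=1)
    half_width = z * np.sqrt(sigma2_hat / len(trimmed))
    return estimate - half_width, estimate + half_width

# Example on an i.i.d. chain: the interval should cover 0.
print(batch_means_ci(np.random.default_rng(0).standard_normal(20000)))
```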

2004 ◽  
Vol 29 (4) ◽  
pp. 461-488 ◽  
Author(s):  
Sandip Sinharay

There is an increasing use of Markov chain Monte Carlo (MCMC) algorithms for fitting statistical models in psychometrics, especially in situations where the traditional estimation techniques are very difficult to apply. One of the disadvantages of using an MCMC algorithm is that it is not straightforward to determine whether the algorithm has converged. Using the output of an MCMC algorithm that has not converged may lead to incorrect inferences about the problem at hand. The convergence is not to a single point but of the distribution of the generated values to another distribution, and hence is not easy to assess; no diagnostic tool is guaranteed to determine the convergence of an MCMC algorithm in general. This article examines the convergence of MCMC algorithms using a number of convergence diagnostics for two real data examples from psychometrics. Findings from this research have the potential to be useful to researchers using the algorithms. For both examples, the number of iterations suggested by the diagnostics as needed to be reasonably confident that the MCMC algorithm has converged may be larger than what many practitioners consider to be safe.
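
The abstract does not name the diagnostics used; one widely applied example is the Gelman-Rubin potential scale reduction factor computed from several parallel chains. The sketch below is a minimal single-parameter version and is only illustrative of this class of diagnostics.

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor for one scalar parameter.

    `chains` is an (m, n) array: m parallel chains of n draws each. Values
    close to 1 suggest the chains have mixed; values well above 1 indicate
    the algorithm has not yet converged.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    chain_vars = chains.var(axis=1, ddof=1)
    B = n * chain_means.var(ddof=1)        # between-chain variance
    W = chain_vars.mean()                  # within-chain variance
    var_hat = (n - 1) / n * W + B / n      # pooled variance estimate
    return np.sqrt(var_hat / W)

# Example: four chains drawn from the same distribution give a value near 1.
rng = np.random.default_rng(0)
print(gelman_rubin(rng.standard_normal((4, 5000))))
```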


Water ◽  
2021 ◽  
Vol 13 (15) ◽  
pp. 2092
Author(s):  
Songbai Song ◽  
Yan Kang ◽  
Xiaoyan Song ◽  
Vijay P. Singh

The choice of a probability distribution function and the confidence interval of estimated design values have long been of interest in flood frequency analysis. Although the four-parameter exponential gamma (FPEG) distribution has been developed for application in hydrology, its maximum likelihood estimation (MLE)-based parameter estimation method and the asymptotic variance of its quantiles have not been well documented. In this study, the MLE method was used to estimate the parameters and confidence intervals of quantiles of the FPEG distribution. This method entails parameter estimation and asymptotic variances of quantile estimators. Parameter estimation reduced to a set of four equations which, after algebraic simplification, were solved using a three-dimensional Levenberg-Marquardt algorithm. Based on the sample information matrix and Fisher's expected information matrix, derivatives of the design quantile with respect to the parameters were derived. The method of estimation was applied to annual precipitation data from the Weihe watershed, China, and confidence intervals for quantiles were determined. Results showed that the FPEG distribution was a good candidate for modeling annual precipitation data and can provide guidance for estimating design values.
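
The FPEG density, its Fisher information, and the four estimating equations are not reproduced in the abstract, so the sketch below only illustrates the generic delta-method step: differentiate the design quantile with respect to the parameters and combine the gradient with the inverse information matrix. A normal model is used as a stand-in for the FPEG distribution, and all numbers are hypothetical.

```python
import numpy as np

def delta_method_quantile_ci(theta_hat, quantile_fn, fisher_info, z=1.96, eps=1e-6):
    """Delta-method confidence interval for an estimated design quantile.

    quantile_fn(theta) maps the parameter vector to the design quantile, and
    fisher_info is the information matrix evaluated at theta_hat; the gradient
    of the quantile is approximated by central differences.
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    grad = np.zeros_like(theta_hat)
    for i in range(len(theta_hat)):
        step = np.zeros_like(theta_hat)
        step[i] = eps
        grad[i] = (quantile_fn(theta_hat + step) - quantile_fn(theta_hat - step)) / (2 * eps)
    var_q = grad @ np.linalg.inv(fisher_info) @ grad   # delta-method variance
    q_hat = quantile_fn(theta_hat)
    half_width = z * np.sqrt(var_q)
    return q_hat - half_width, q_hat + half_width

# Stand-in illustration with a normal model (not the FPEG distribution): the
# T = 50-year design value is mu + z_T * sigma with z_T ~ 2.054, and the
# expected information for n observations is n * diag(1/sigma^2, 2/sigma^2).
n, z_T = 50, 2.054
mu_hat, sigma_hat = 600.0, 120.0                      # hypothetical numbers
info = n * np.diag([1.0 / sigma_hat**2, 2.0 / sigma_hat**2])
print(delta_method_quantile_ci([mu_hat, sigma_hat],
                               lambda th: th[0] + z_T * th[1], info))
```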


Author(s):  
Michael Hynes

A ubiquitous problem in physics is to determine expectation values of observables associated with a system. This problem is typically formulated as an integration of some likelihood over a multidimensional parameter space. In Bayesian analysis, numerical Markov Chain Monte Carlo (MCMC) algorithms are employed to solve such integrals using a fixed number of samples in the Markov chain. In general, MCMC algorithms are computationally expensive for large datasets and have difficulty sampling from multimodal parameter spaces. An MCMC implementation that is robust and inexpensive for researchers is desired. Distributed computing systems have shown the potential to act as virtual supercomputers, as in the SETI@home project in which millions of private computers participate. We propose that a clustered peer-to-peer (P2P) computer network serves as an ideal structure on which to run Markovian state exchange algorithms such as Parallel Tempering (PT). PT overcomes the difficulty of sampling from multimodal distributions by running multiple chains in parallel with different target distributions and exchanging their states in a Markovian manner. To demonstrate the feasibility of peer-to-peer Parallel Tempering (P2P PT), a simple two-dimensional dataset consisting of two Gaussian signals separated by a region of low probability was used in a Bayesian parameter fitting algorithm. A small connected peer-to-peer network was constructed using separate processes on a Linux kernel, and P2P PT was applied to the dataset. These sampling results were compared with those obtained from sampling the parameter space with a single chain. It was found that the single chain was unable to sample both modes effectively, while the P2P PT method explored the target distribution well, visiting both modes approximately equally. Future work will involve scaling to many dimensions and large networks, and studying convergence conditions when the computing capabilities of members within the network are highly heterogeneous.
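
As a reference for the sampling scheme itself (leaving the peer-to-peer layer aside), the sketch below runs parallel tempering on a one-dimensional bimodal toy target within a single process; the temperatures, step size, and target are illustrative choices, not those used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Bimodal toy target: mixture of two well-separated Gaussians."""
    return np.logaddexp(-0.5 * (x + 4.0) ** 2, -0.5 * (x - 4.0) ** 2)

def parallel_tempering(n_iter=20000, betas=(1.0, 0.5, 0.2, 0.05), step=1.0):
    """Single-process parallel tempering sketch (no peer-to-peer layer).

    Each temperature runs random-walk Metropolis on beta * log_target; after
    every sweep, one randomly chosen adjacent pair of temperatures attempts a
    state swap with the usual Metropolis swap acceptance probability.
    """
    betas = np.asarray(betas)
    x = np.zeros(len(betas))                 # one state per temperature
    cold_samples = np.empty(n_iter)
    for t in range(n_iter):
        # Within-temperature random-walk Metropolis updates.
        proposals = x + step * rng.standard_normal(len(betas))
        log_alpha = betas * (log_target(proposals) - log_target(x))
        accept = np.log(rng.random(len(betas))) < log_alpha
        x = np.where(accept, proposals, x)
        # Markovian state exchange between adjacent temperatures.
        i = rng.integers(len(betas) - 1)
        log_swap = (betas[i] - betas[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
        if np.log(rng.random()) < log_swap:
            x[i], x[i + 1] = x[i + 1], x[i]
        cold_samples[t] = x[0]               # keep only the beta = 1 chain
    return cold_samples

samples = parallel_tempering()
print("fraction of cold-chain samples in the right-hand mode:", np.mean(samples > 0))
```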


Author(s):  
Aisha Fayomi ◽  
Hamdah Al-Shammari

This paper deals with the problem of parameter estimation for the Exponential-Geometric (EG) distribution based on progressive type-II censored data. It turns out that the maximum likelihood estimators of the distribution parameters have no closed form; therefore, the EM algorithm is used instead. The asymptotic variances of the MLEs of the target parameters under progressive type-II censoring are computed along with the asymptotic confidence intervals. Finally, a simple numerical example is given to illustrate the obtained results.
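
The progressively censored EG likelihood and the EM updates are not given in the abstract; the sketch below only illustrates the final step of forming Wald-type asymptotic confidence intervals from a numerically computed observed information matrix. The demonstration assumes the standard complete-data EG density f(x) = beta(1-p)e^(-beta x)(1 - p e^(-beta x))^(-2) as a stand-in, not the censored likelihood treated in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def numerical_hessian(f, theta, eps=1e-4):
    """Observed information: numerical Hessian of a negative log-likelihood."""
    theta = np.asarray(theta, dtype=float)
    k = len(theta)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei, ej = np.eye(k)[i] * eps, np.eye(k)[j] * eps
            H[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                       - f(theta - ei + ej) + f(theta - ei - ej)) / (4 * eps**2)
    return H

def wald_intervals(neg_loglik, theta_hat, z=1.96):
    """Asymptotic (Wald) intervals from the inverse observed information."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    cov = np.linalg.inv(numerical_hessian(neg_loglik, theta_hat))
    se = np.sqrt(np.diag(cov))
    return np.column_stack([theta_hat - z * se, theta_hat + z * se])

# Stand-in demonstration with the complete-data EG log-likelihood; the
# progressively censored likelihood treated in the paper is more involved.
rng = np.random.default_rng(1)
beta_true, p_true = 0.5, 0.6
u = rng.random(500)
x = -np.log((1 - u) / (1 - p_true * u)) / beta_true    # inverse-CDF sampling

def neg_loglik(theta):
    beta, p = theta
    if beta <= 0 or not 0 < p < 1:
        return np.inf
    return -np.sum(np.log(beta) + np.log(1 - p) - beta * x
                   - 2 * np.log(1 - p * np.exp(-beta * x)))

mle = minimize(neg_loglik, x0=[1.0, 0.5], method="Nelder-Mead").x
print(wald_intervals(neg_loglik, mle))
```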


Author(s):  
N. Thompson Hobbs ◽  
Mevin B. Hooten

This chapter explains how to implement Bayesian analyses using Markov chain Monte Carlo (MCMC), a set of methods for Bayesian analysis made popular by the seminal paper of Gelfand and Smith (1990). It begins with a heuristic, high-level treatment of the algorithm, describing its operation in simple terms with a minimum of formalism. In this first part, the chapter explains the algorithm so that all readers can gain an intuitive understanding of how to find the posterior distribution by sampling from it. Next, the chapter offers a somewhat more formal treatment of how MCMC is implemented mathematically. Finally, the chapter discusses implementation of Bayesian models via two routes: by using software and by writing one's own algorithm.
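
For readers taking the "write your own algorithm" route, a minimal random-walk Metropolis sampler (one of the simplest MCMC algorithms, not the Gibbs sampler of Gelfand and Smith) might look like the sketch below; names and defaults are illustrative.

```python
import numpy as np

def metropolis(log_posterior, theta0, n_iter=10000, step=0.5, seed=0):
    """Minimal random-walk Metropolis sampler for a scalar parameter.

    Proposes a Gaussian jump from the current value and accepts it with
    probability min(1, posterior ratio); otherwise the chain stays put.
    """
    rng = np.random.default_rng(seed)
    theta, current = theta0, log_posterior(theta0)
    draws = np.empty(n_iter)
    for t in range(n_iter):
        proposal = theta + step * rng.standard_normal()
        candidate = log_posterior(proposal)
        if np.log(rng.random()) < candidate - current:
            theta, current = proposal, candidate
        draws[t] = theta
    return draws

# Example: sampling a standard normal "posterior"; mean and sd should be near 0 and 1.
draws = metropolis(lambda th: -0.5 * th**2, theta0=0.0)
print(draws.mean(), draws.std())
```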


1976 ◽  
Vol 13 (01) ◽  
pp. 49-56 ◽  
Author(s):  
W. D. Ray ◽  
F. Margo

The equilibrium probability distribution over the set of absorbing states of a reducible Markov chain is specified a priori, and it is required to obtain the constrained sub-space, or feasible region, of all possible initial probability distributions over the set of transient states. This is called the inverse problem. It is shown that a feasible region exists for the choice of equilibrium distribution. Two different cases are studied: Case I, where the number of transient states exceeds that of the absorbing states, and Case II, the converse. The approach is via the use of generalised inverses, and numerical examples are given.
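
A minimal numerical sketch of the forward computation and of one particular solution of the inverse problem is given below, using the Moore-Penrose generalized inverse; the paper's characterization of the full feasible region is richer than this, and the transition blocks here are hypothetical.

```python
import numpy as np

def initial_distribution_for_target(Q, R, target_pi):
    """One particular solution of the inverse problem for an absorbing chain.

    Q is the transient-to-transient block and R the transient-to-absorbing
    block of the transition matrix. B = (I - Q)^{-1} R gives the absorption
    probabilities, and we seek an initial distribution p over the transient
    states with p @ B = target_pi. The Moore-Penrose generalized inverse
    yields one candidate; the full feasible region (all nonnegative solutions
    summing to one) is what the paper characterizes.
    """
    Q, R = np.asarray(Q, dtype=float), np.asarray(R, dtype=float)
    B = np.linalg.solve(np.eye(Q.shape[0]) - Q, R)     # absorption probabilities
    p = np.asarray(target_pi, dtype=float) @ np.linalg.pinv(B)
    feasible = bool(np.all(p >= -1e-12) and np.isclose(p.sum(), 1.0))
    return p, feasible

# Toy example (hypothetical numbers): three transient and two absorbing states.
Q = [[0.2, 0.3, 0.1],
     [0.1, 0.4, 0.2],
     [0.3, 0.1, 0.2]]
R = [[0.3, 0.1],
     [0.1, 0.2],
     [0.2, 0.2]]
print(initial_distribution_for_target(Q, R, target_pi=[0.5, 0.5]))
```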

