The White House Problem: The Beta-Binomial Conjugate

Author(s):  
Therese M. Donovan ◽  
Ruth M. Mickey

This chapter introduces the beta-binomial conjugate. There are special cases where a Bayesian prior probability distribution for an unknown parameter of interest can be quickly updated to a posterior distribution of the same form as the prior. In the “White House Problem,” a beta distribution is used to set the priors for all hypotheses of p, the probability that a famous person can get into the White House without an invitation. Binomial data are then collected, providing the number of times a famous person gained entry out of a fixed number of attempts. The prior distribution is updated to a posterior distribution (also a beta distribution) in light of this new information. In short, a beta prior distribution for the unknown parameter p + binomial data → beta posterior distribution for the unknown parameter p. The beta distribution is said to be “conjugate to” the binomial distribution.
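The conjugate update described in this abstract can be sketched in a few lines. The prior parameters and data below are illustrative assumptions, not numbers from the chapter:

```python
from scipy import stats

# Illustrative numbers only: a uniform Beta(1, 1) prior on p and
# 3 successful entries out of 10 attempts (made-up binomial data).
a, b = 1, 1
n, y = 10, 3

# Conjugate update: Beta(a, b) prior + binomial data -> Beta(a + y, b + n - y)
a_post, b_post = a + y, b + (n - y)
posterior = stats.beta(a_post, b_post)

print(a_post, b_post)              # 4 8
print(round(posterior.mean(), 4))  # posterior mean (a + y) / (a + b + n) = 1/3 ≈ 0.3333
```

Because the posterior is again a beta distribution, no numerical integration is needed; the update is pure parameter arithmetic.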

Author(s):  
Therese M. Donovan ◽  
Ruth M. Mickey

This chapter introduces the gamma-Poisson conjugate. Many Bayesian analyses consider alternative parameter values as hypotheses. The prior distribution for an unknown parameter can be represented by a continuous probability density function when the number of hypotheses is infinite. There are special cases where a Bayesian prior probability distribution for an unknown parameter of interest can be quickly updated to a posterior distribution of the same form as the prior. In the “Shark Attack Problem,” a gamma distribution is used as the prior distribution of λ, the mean number of shark attacks in a given year. Poisson data are then collected to determine the number of attacks in a given year. The prior distribution is updated to the posterior distribution in light of this new information. In short, a gamma prior distribution + Poisson data → gamma posterior distribution. The gamma distribution is said to be “conjugate to” the Poisson distribution.
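A minimal sketch of the gamma-Poisson update, using an assumed Gamma(shape = 2, rate = 1) prior and made-up attack counts (not the chapter's data):

```python
from scipy import stats

# Illustrative numbers only: Gamma(shape=2, rate=1) prior on lambda and
# made-up Poisson counts of shark attacks over three observed years.
alpha, beta = 2.0, 1.0
attacks = [3, 1, 4]

# Conjugate update: Gamma prior + Poisson data -> Gamma(alpha + sum(y), beta + n)
alpha_post = alpha + sum(attacks)
beta_post = beta + len(attacks)

# scipy parameterizes the gamma by shape and scale, so scale = 1 / rate.
posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
print(posterior.mean())  # (alpha + sum(y)) / (beta + n) = 10 / 4 = 2.5
```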


Author(s):  
Therese M. Donovan ◽  
Ruth M. Mickey

In this chapter, Bayesian methods are used to estimate the two parameters that identify a normal distribution, μ and σ. Many Bayesian analyses consider alternative parameter values as hypotheses. The prior distribution for an unknown parameter can be represented by a continuous probability density function when the number of hypotheses is infinite. In the “Maple Syrup Problem,” a normal distribution is used as the prior distribution of μ, the mean amount of maple syrup (in millions of gallons) produced in Vermont in a year. The amount of syrup produced in multiple years is determined and assumed to follow a normal distribution with known σ. The prior distribution is updated to the posterior distribution in light of this new information. In short, a normal prior distribution + normally distributed data → normal posterior distribution.
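The normal-normal update with known σ can be sketched on the precision (inverse-variance) scale. The prior, σ, and yearly observations below are illustrative assumptions only:

```python
import numpy as np

# Illustrative numbers only: prior mu ~ N(1.0, 0.5^2) for mean syrup
# production (millions of gallons), known data sd sigma = 0.2,
# and three made-up yearly observations.
mu0, tau0 = 1.0, 0.5
sigma = 0.2
data = np.array([0.9, 1.1, 1.3])
n = len(data)

# Conjugate update: posterior precision = prior precision + n / sigma^2,
# and the posterior mean is a precision-weighted average of prior and data.
post_prec = 1 / tau0**2 + n / sigma**2
mu_post = (mu0 / tau0**2 + data.sum() / sigma**2) / post_prec
sd_post = post_prec ** -0.5

print(round(mu_post, 4), round(sd_post, 4))  # → 1.0949 0.1125
```

Note how the data, with their smaller variance, dominate the prior: the posterior mean sits much closer to the sample mean (1.1) than to the prior mean (1.0).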


Author(s):  
Therese M. Donovan ◽  
Ruth M. Mickey

The “White House Problem” of Chapter 10 is revisited in this chapter. Markov Chain Monte Carlo (MCMC) is used to build the posterior distribution of the unknown parameter p, the probability that a famous person could gain access to the White House without an invitation. The chapter highlights the Metropolis–Hastings algorithm in MCMC analysis, describing the process step by step. The posterior distribution generated in Chapter 10 using the beta-binomial conjugate is compared with the MCMC posterior distribution to show how successful the MCMC method can be. By the end of this chapter, the reader will have a firm understanding of the following concepts: Monte Carlo, Markov chain, Metropolis–Hastings algorithm, Metropolis–Hastings random walk, and Metropolis–Hastings independence sampler.


2021 ◽  
Vol 10 (3) ◽  
pp. 413-422
Author(s):  
Nur Azizah ◽  
Sugito Sugito ◽  
Hasbi Yasin

Hospital service facilities cannot be separated from queuing events. Queues are an unavoidable part of life, but they can be minimized with a good system. The purpose of this study was to determine how the queuing system at Dr. Kariadi Hospital performs. The Bayesian method is used to combine previous research with this research in order to obtain new information. The sample distribution and the prior distribution obtained from previous studies are combined with the sample likelihood function to obtain a posterior distribution. After calculating the posterior distribution, it was found that the queuing model in the outpatient installation at Dr. Kariadi Hospital Semarang is (G/G/c):(GD/∞/∞), where each polyclinic has met steady-state conditions and the level of busyness is greater than the idle rate, so the queuing system at Dr. Kariadi Hospital is categorized as good, except in the internal medicine polyclinic.


Author(s):  
X. Shi ◽  
Q. H. Zhao

Image segmentation methods based on the Gaussian Mixture Model (GMM) suffer from two problems: (1) the number of components is usually fixed, i.e., a fixed number of classes, and (2) GMM is sensitive to image noise. This paper proposes a remote sensing (RS) image segmentation method that combines GMM with reversible jump Markov Chain Monte Carlo (RJMCMC). In the proposed algorithm, GMM models the distribution of pixel intensities in the RS image, and the number of components is treated as a random variable. A prior distribution is built for each parameter; to improve noise resistance, a Gibbs function is used to model the prior distribution of the GMM weight coefficients. The posterior distribution is constructed according to Bayes' theorem, and RJMCMC is used to simulate the posterior distribution and estimate its parameters. Finally, an optimal segmentation of the RS image is obtained. Experimental results show that the proposed algorithm converges to the optimal number of classes and produces an ideal segmentation result.


2012 ◽  
Vol 44 (3) ◽  
pp. 842-873 ◽  
Author(s):  
Zhiyi Chi

Nonnegative infinitely divisible (i.d.) random variables form an important class of random variables. However, when this type of random variable is specified via Lévy densities that have infinite integrals on (0, ∞), except for some special cases, exact sampling is unknown. We present a method that can sample a rather wide range of such i.d. random variables. A basic result is that, for any nonnegative i.d. random variable X with its Lévy density explicitly specified, if its distribution conditional on X ≤ r can be sampled exactly, where r > 0 is any fixed number, then X can be sampled exactly using rejection sampling, without knowing the explicit expression of the density of X. We show that variations of the result can be used to sample various nonnegative i.d. random variables.


2019 ◽  
Author(s):  
Johnny van Doorn ◽  
Dora Matzke ◽  
Eric-Jan Wagenmakers

Sir Ronald Fisher's venerable experiment "The Lady Tasting Tea'' is revisited from a Bayesian perspective. We demonstrate how a similar tasting experiment, conducted in a classroom setting, can familiarize students with several key concepts of Bayesian inference, such as the prior distribution, the posterior distribution, the Bayes factor, and sequential analysis.
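One of the key concepts named here, the Bayes factor, can be sketched for a tasting experiment of this kind. The counts below are made up, and the alternative model (a uniform Beta(1, 1) prior on the success probability) is an assumption of this sketch, not necessarily the paper's choice:

```python
from scipy import special

# Illustrative data: 6 correct identifications in 8 cups (not from the paper).
n, y = 8, 6

# Marginal likelihood under H0 (pure guessing, p = 0.5).
m0 = special.comb(n, y) * 0.5**n
# Marginal likelihood under H1 (p ~ Beta(1, 1)): integrating the binomial
# likelihood over the uniform prior gives C(n, y) * B(y + 1, n - y + 1).
m1 = special.comb(n, y) * special.beta(y + 1, n - y + 1)

bf10 = m1 / m0  # Bayes factor for H1 (taster has skill) over H0 (guessing)
print(round(bf10, 3))  # 256/252 ≈ 1.016: essentially no evidence either way
```

With such a small sample, the Bayes factor hovers near 1, which is itself a useful classroom lesson about evidence and sample size.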


1978 ◽  
Vol 3 (2) ◽  
pp. 179-188
Author(s):  
Robert K. Tsutakawa

The comparison of two regression lines is often meaningful or of interest over a finite interval I of the independent variable. When the prior distribution of the parameters is a natural conjugate, the posterior distribution of the distances between two regression lines at the end points of I is bivariate t. The posterior probability that one regression line lies above the other uniformly over I is numerically evaluated using this distribution.


Author(s):  
Edward P. Herbst ◽  
Frank Schorfheide

This chapter talks about the most widely used method to generate draws from posterior distributions of a DSGE model: the random walk Metropolis–Hastings (RWMH) algorithm. The DSGE model likelihood function in combination with the prior distribution leads to a posterior distribution that has a fairly regular elliptical shape. In turn, the draws from a simple RWMH algorithm can be used to obtain an accurate numerical approximation of posterior moments. However, in many other applications, particularly those involving medium- and large-scale DSGE models, the posterior distributions can be very non-elliptical. Irregularly shaped posterior distributions are often caused by identification problems or misspecification. In light of the difficulties caused by irregularly shaped posterior surfaces, the chapter reviews various alternative MH samplers, which use alternative proposal distributions.


Author(s):  
Therese M. Donovan ◽  
Ruth M. Mickey

This chapter introduces Markov Chain Monte Carlo (MCMC) with Gibbs sampling, revisiting the “Maple Syrup Problem” of Chapter 12, where the goal was to estimate the two parameters of a normal distribution, μ and σ. Chapter 12 used the normal-normal conjugate to derive the posterior distribution for the unknown parameter μ; the parameter σ was assumed to be known. This chapter uses MCMC with Gibbs sampling to estimate the joint posterior distribution of both μ and σ. Gibbs sampling is a special case of the Metropolis–Hastings algorithm. The chapter describes MCMC with Gibbs sampling step by step, which requires (1) computing the posterior distribution of a given parameter, conditional on the value of the other parameter, and (2) drawing a sample from the posterior distribution. In this chapter, Gibbs sampling makes use of the conjugate solutions to decompose the joint posterior distribution into full conditional distributions for each parameter.

