The Shark Attack Problem: The Gamma-Poisson Conjugate

Author(s): Therese M. Donovan, Ruth M. Mickey

This chapter introduces the gamma-Poisson conjugate. Many Bayesian analyses consider alternative parameter values as hypotheses; when the number of hypotheses is infinite, the prior distribution for an unknown parameter can be represented by a continuous probability density function. In special cases, a Bayesian prior probability distribution for an unknown parameter of interest can be quickly updated to a posterior distribution of the same form as the prior. In the “Shark Attack Problem,” a gamma distribution is used as the prior distribution of λ, the mean number of shark attacks per year. Poisson data, the observed number of attacks in a given year, are then collected. The prior distribution is updated to the posterior distribution in light of this new information. In short, a gamma prior distribution + Poisson data → gamma posterior distribution. The gamma distribution is said to be “conjugate to” the Poisson distribution.
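The conjugate update described in the abstract reduces to simple arithmetic. A minimal sketch, with hypothetical prior parameters and attack counts chosen only to illustrate the gamma-Poisson update:

```python
# Gamma(alpha, beta) prior for lambda, in the rate parameterization
# (prior mean = alpha / beta). These values are hypothetical.
alpha_prior, beta_prior = 2.1, 0.4

# Poisson data: observed attack counts for several years (hypothetical).
attacks = [5, 3, 6]

# Conjugate update: gamma prior + Poisson data -> gamma posterior.
# Add the total count to alpha and the number of observations to beta.
alpha_post = alpha_prior + sum(attacks)
beta_post = beta_prior + len(attacks)

posterior_mean = alpha_post / beta_post
print(alpha_post, beta_post, round(posterior_mean, 3))
```

The posterior mean, alpha_post / beta_post, is a precision-weighted compromise between the prior mean and the sample mean of the counts.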

Author(s): Therese M. Donovan, Ruth M. Mickey

In this chapter, Bayesian methods are used to estimate the two parameters that identify a normal distribution, μ and σ. Many Bayesian analyses consider alternative parameter values as hypotheses; when the number of hypotheses is infinite, the prior distribution for an unknown parameter can be represented by a continuous probability density function. In the “Maple Syrup Problem,” a normal distribution is used as the prior distribution of μ, the mean amount of maple syrup (in millions of gallons) produced in Vermont in a year. The amount of syrup produced in each of several years is recorded and assumed to follow a normal distribution with known σ. The prior distribution is updated to the posterior distribution in light of this new information. In short, a normal prior distribution + normally distributed data → normal posterior distribution.
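The normal-normal update with known σ can be sketched in a few lines; the prior values and yearly data below are made up for illustration and are not from the chapter:

```python
# Normal prior on mu, the mean syrup production in millions of gallons;
# sigma is treated as known. All numbers are hypothetical.
mu0, tau0 = 1.5, 0.5         # prior mean and prior standard deviation of mu
sigma = 0.2                  # known standard deviation of the data
data = [1.8, 1.6, 1.9, 1.7]  # yearly production (hypothetical)

n = len(data)
ybar = sum(data) / n

# Conjugate update, written in precision (1/variance) form:
# posterior precision is the sum of prior and data precisions, and the
# posterior mean is the precision-weighted average of mu0 and ybar.
prec_prior = 1 / tau0 ** 2
prec_data = n / sigma ** 2
prec_post = prec_prior + prec_data
mu_post = (prec_prior * mu0 + prec_data * ybar) / prec_post
tau_post = prec_post ** -0.5
print(round(mu_post, 4), round(tau_post, 4))
```

With four precise observations the data precision dominates, so the posterior mean sits close to the sample mean rather than the prior mean.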


Author(s): Therese M. Donovan, Ruth M. Mickey

This chapter introduces the beta-binomial conjugate. There are special cases where a Bayesian prior probability distribution for an unknown parameter of interest can be quickly updated to a posterior distribution of the same form as the prior. In the “White House Problem,” a beta distribution is used to set the priors for all hypotheses of p, the probability that a famous person can get into the White House without an invitation. Binomial data are then collected, and provide the number of times a famous person gained entry out of a fixed number of attempts. The prior distribution is updated to a posterior distribution (also a beta distribution) in light of this new information. In short, a beta prior distribution for the unknown parameter + binomial data → beta posterior distribution for the unknown parameter, p. The beta distribution is said to be “conjugate to” the binomial distribution.
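The beta-binomial update amounts to adding counts to the prior parameters. A minimal sketch, with a hypothetical uniform prior and made-up data:

```python
# Beta(a, b) prior for p; Beta(1, 1) is the uniform prior (a hypothetical choice).
a_prior, b_prior = 1, 1

# Binomial data: successes out of a fixed number of attempts (hypothetical).
attempts, successes = 10, 3

# Conjugate update: add successes to a and failures to b.
a_post = a_prior + successes
b_post = b_prior + (attempts - successes)

posterior_mean = a_post / (a_post + b_post)
print(a_post, b_post, round(posterior_mean, 3))
```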


2021, Vol 10 (3), pp. 413-422
Author(s): Nur Azizah, Sugito Sugito, Hasbi Yasin

Hospital service facilities cannot be separated from queuing events. Queues are an unavoidable part of life, but they can be minimized with a good system. The purpose of this study was to determine how the queuing system at Dr. Kariadi Hospital performs. The Bayesian method is used to combine previous research with this research in order to obtain new information. The sample distribution and the prior distribution obtained from previous studies are combined with the sample likelihood function to obtain a posterior distribution. After calculating the posterior distribution, it was found that the queuing model in the outpatient installation at Dr. Kariadi Hospital Semarang is (G/G/c):(GD/∞/∞), where each polyclinic has met steady-state conditions and the level of busyness is greater than the idle rate, so the queuing system at Dr. Kariadi Hospital is categorized as good, except in the internal medicine polyclinic.


Author(s): Bashiru Omeiza Sule, Taiwo Mobolaji Adegoke

Aims: This study aimed to obtain the shape parameter of the Exponential Inverted Exponential distribution using different prior distributions under different loss functions. Methodology: Bayes' theorem was used to obtain the posterior distribution of the shape parameter of the Exponential Inverted Exponential distribution for both non-informative priors (the Jeffreys prior, the Hartigan prior, and the uniform prior) and informative priors (the gamma distribution and the chi-square distribution). Different loss functions (the entropy loss function, the squared error loss function, Al-Bayyati's loss function, and the precautionary loss function) were employed to obtain estimates of the shape parameter under the assumption that the scale parameter is known. Results: The posterior distribution of the shape parameter of the Exponential Inverted Exponential distribution follows a gamma distribution for every prior distribution in the study. Bayes estimates were also obtained for the simulated datasets and a real-life dataset. Conclusion: The Bayes estimates for the different prior distributions under different loss functions are close to the true value of the shape parameter. The estimators are compared in terms of their mean squared error (MSE), computed using the R programming language. We deduce that the MSE decreases as the sample size (n) increases.
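Since the abstract reports a gamma posterior for every prior considered, the Bayes estimators under the named losses have standard closed forms for a gamma posterior. A sketch under assumed posterior parameters (the values of a and b below are hypothetical, and the formulas are textbook results for a Gamma(a, b) posterior in the rate parameterization, not figures from the paper):

```python
import math

# Hypothetical Gamma(a, b) posterior for the shape parameter
# (rate parameterization: E[theta] = a / b).
a, b = 12.0, 3.0

# Standard Bayes estimators for a gamma posterior under common losses:
est_squared_error = a / b                       # squared error loss: posterior mean
est_precautionary = math.sqrt(a * (a + 1)) / b  # precautionary loss: sqrt of E[theta^2]
est_entropy = (a - 1) / b                       # entropy loss: 1 / E[1/theta], needs a > 1
print(est_squared_error, round(est_precautionary, 3), round(est_entropy, 3))
```

The three estimators bracket one another (entropy < squared error < precautionary here), which is why the paper can meaningfully compare their MSEs.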


2019
Author(s): Johnny van Doorn, Dora Matzke, Eric-Jan Wagenmakers

Sir Ronald Fisher's venerable experiment "The Lady Tasting Tea'' is revisited from a Bayesian perspective. We demonstrate how a similar tasting experiment, conducted in a classroom setting, can familiarize students with several key concepts of Bayesian inference, such as the prior distribution, the posterior distribution, the Bayes factor, and sequential analysis.
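A classroom version of the sequential Bayes factor computation can be sketched as follows. The Beta(1, 1) prior, the point null p = 0.5, and the outcome sequence are illustrative assumptions, not details taken from the paper:

```python
from math import lgamma, exp, log

def log_beta(a, b):
    # log of the Beta function via log-gamma, for numerical stability
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bf10(successes, failures, a=1, b=1):
    """Bayes factor for H1: p ~ Beta(a, b) against H0: p = 0.5 (binomial data)."""
    n = successes + failures
    log_m1 = log_beta(a + successes, b + failures) - log_beta(a, b)  # marginal lik. under H1
    log_m0 = n * log(0.5)                                            # likelihood under H0
    return exp(log_m1 - log_m0)

# Sequential analysis with hypothetical outcomes (1 = cup identified correctly):
# the Bayes factor is updated after every trial, as the abstract describes.
outcomes = [1, 1, 1, 0, 1, 1, 1, 1]
s = f = 0
for y in outcomes:
    s += y
    f += 1 - y
    print(s + f, round(bf10(s, f), 3))
```

After a single trial the data carry no evidence (BF = 1 under the uniform prior); the factor then grows as correct identifications accumulate.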


1978, Vol 3 (2), pp. 179-188
Author(s): Robert K. Tsutakawa

The comparison of two regression lines is often meaningful or of interest over a finite interval I of the independent variable. When the prior distribution of the parameters is a natural conjugate, the posterior distribution of the distances between two regression lines at the end points of I is bivariate t. The posterior probability that one regression line lies above the other uniformly over I is numerically evaluated using this distribution.
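The key simplification in the abstract is that the difference between two straight lines is itself linear in x, so it is positive everywhere on I exactly when it is positive at both endpoints. A Monte Carlo sketch of that probability, using a hypothetical bivariate t posterior (degrees of freedom, location, and scale matrix below are illustrative, and simulation stands in for the paper's numerical evaluation):

```python
import math, random

random.seed(4)

# Hypothetical posterior for (d1, d2), the distances between the two regression
# lines at the two endpoints of I: bivariate t with nu degrees of freedom,
# location mu, and scale matrix S. All numbers are illustrative.
nu = 10
mu = (0.4, 0.9)
S = [[0.25, 0.10], [0.10, 0.36]]

# Cholesky factor of the 2x2 scale matrix: S = L L^T.
l11 = math.sqrt(S[0][0])
l21 = S[1][0] / l11
l22 = math.sqrt(S[1][1] - l21 ** 2)

def draw_bivariate_t():
    # multivariate t = location + sqrt(nu / chi2(nu)) * (correlated normal)
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    w = random.gammavariate(nu / 2, 2)  # chi-square(nu) draw
    s = math.sqrt(nu / w)
    return mu[0] + s * l11 * z1, mu[1] + s * (l21 * z1 + l22 * z2)

# One line lies above the other uniformly over I iff both endpoint distances
# are positive, so the desired probability is P(d1 > 0 and d2 > 0).
n = 20000
hits = sum(1 for _ in range(n) if min(draw_bivariate_t()) > 0)
prob_above = hits / n
print(round(prob_above, 3))
```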


Author(s): Edward P. Herbst, Frank Schorfheide

This chapter discusses the most widely used method for generating draws from the posterior distribution of a DSGE model: the random walk Metropolis-Hastings (RWMH) algorithm. The DSGE model likelihood function, in combination with the prior distribution, leads to a posterior distribution that has a fairly regular elliptical shape. In turn, the draws from a simple RWMH algorithm can be used to obtain an accurate numerical approximation of posterior moments. However, in many other applications, particularly those involving medium- and large-scale DSGE models, the posterior distribution can be very non-elliptical. Irregularly shaped posterior distributions are often caused by identification problems or misspecification. In light of the difficulties caused by irregularly shaped posterior surfaces, the chapter reviews several alternative MH samplers that use alternative proposal distributions.
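The RWMH algorithm on a well-behaved elliptical posterior can be sketched with a toy target: a correlated bivariate normal log-density standing in for a DSGE posterior (all numbers below are illustrative, not from the chapter):

```python
import math, random

random.seed(1)

# Toy elliptical target: correlated bivariate normal log-density
# (unit variances, correlation rho), constants dropped.
def log_post(x, y, rho=0.8):
    return -(x ** 2 - 2 * rho * x * y + y ** 2) / (2 * (1 - rho ** 2))

# Random walk Metropolis-Hastings with a symmetric Gaussian proposal.
def rwmh(n_draws, step=0.5):
    x, y = 0.0, 0.0
    lp = log_post(x, y)
    draws, accepted = [], 0
    for _ in range(n_draws):
        xp, yp = x + random.gauss(0, step), y + random.gauss(0, step)
        lpp = log_post(xp, yp)
        # MH acceptance: accept with probability min(1, posterior ratio)
        if math.log(random.random()) < lpp - lp:
            x, y, lp, accepted = xp, yp, lpp, accepted + 1
        draws.append((x, y))
    return draws, accepted / n_draws

draws, acc_rate = rwmh(20000)
mean_x = sum(d[0] for d in draws) / len(draws)
print(round(acc_rate, 2), round(mean_x, 2))
```

On a target this regular, posterior moments (here the mean of x, which is 0) are recovered accurately; the chapter's point is that this breaks down when the posterior surface is irregular.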


Author(s): Therese M. Donovan, Ruth M. Mickey

This chapter introduces Markov Chain Monte Carlo (MCMC) with Gibbs sampling, revisiting the “Maple Syrup Problem” of Chapter 12, where the goal was to estimate the two parameters of a normal distribution, μ and σ. Chapter 12 used the normal-normal conjugate to derive the posterior distribution for the unknown parameter μ; the parameter σ was assumed to be known. This chapter uses MCMC with Gibbs sampling to estimate the joint posterior distribution of both μ and σ. Gibbs sampling is a special case of the Metropolis–Hastings algorithm. The chapter describes MCMC with Gibbs sampling step by step, which requires (1) computing the posterior distribution of a given parameter, conditional on the value of the other parameter, and (2) drawing a sample from the posterior distribution. In this chapter, Gibbs sampling makes use of the conjugate solutions to decompose the joint posterior distribution into full conditional distributions for each parameter.
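The two-step recipe (compute a full conditional, then draw from it) can be sketched for a normal model with semi-conjugate priors. The data, prior values, and the choice of a normal prior on μ with an inverse-gamma prior on σ² are illustrative assumptions, not the chapter's numbers:

```python
import random

random.seed(2)

# Hypothetical data (millions of gallons per year) and priors.
data = [1.8, 1.6, 1.9, 1.7, 2.0, 1.5]
n = len(data)
ybar = sum(data) / n

m0, s0sq = 1.5, 1.0  # normal prior on mu: mean and variance
a0, b0 = 2.0, 0.1    # inverse-gamma prior on sigma^2: shape and scale

mu, sigma_sq = ybar, 0.1  # starting values
mu_draws, var_draws = [], []
for _ in range(5000):
    # Step 1a: full conditional of mu given sigma^2 is normal (conjugate form).
    prec = 1 / s0sq + n / sigma_sq
    mean = (m0 / s0sq + n * ybar / sigma_sq) / prec
    mu = random.gauss(mean, prec ** -0.5)
    # Step 1b: full conditional of sigma^2 given mu is inverse-gamma,
    # sampled as the reciprocal of a gamma draw.
    a_n = a0 + n / 2
    b_n = b0 + 0.5 * sum((y - mu) ** 2 for y in data)
    sigma_sq = 1 / random.gammavariate(a_n, 1 / b_n)
    # Step 2: record the joint draw.
    mu_draws.append(mu)
    var_draws.append(sigma_sq)

mean_mu = sum(mu_draws) / len(mu_draws)
print(round(mean_mu, 3))
```

Alternating the two conditional draws produces a Markov chain whose stationary distribution is the joint posterior of (μ, σ²).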


Author(s): Therese M. Donovan, Ruth M. Mickey

In this chapter, the “Shark Attack Problem” (Chapter 11) is revisited. Markov Chain Monte Carlo (MCMC) is introduced as another way to determine a posterior distribution of λ, the mean number of shark attacks per year. The MCMC approach is so versatile that it can be used to solve almost any kind of parameter estimation problem. The chapter highlights the Metropolis algorithm in detail and illustrates its application, step by step, for the “Shark Attack Problem.” The posterior distribution generated in Chapter 11 using the gamma-Poisson conjugate is compared with the MCMC posterior distribution to show how successful the MCMC method can be. By the end of the chapter, the reader should also understand the following concepts: tuning parameter, MCMC inference, traceplot, and moment matching.
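The comparison the chapter makes can be sketched directly: run the Metropolis algorithm on the gamma prior plus Poisson likelihood and check the MCMC mean against the exact conjugate answer. The prior values, the data, and the tuning parameter (the proposal step size) below are hypothetical:

```python
import math, random

random.seed(3)

# Hypothetical setup: Gamma(alpha, beta) prior for lambda, one year with y attacks.
alpha, beta = 2.1, 0.4
y = 5

def log_post(lam):
    if lam <= 0:
        return float("-inf")  # lambda must be positive
    # log gamma prior + log Poisson likelihood, constants dropped
    return (alpha - 1) * math.log(lam) - beta * lam + y * math.log(lam) - lam

# Metropolis algorithm: symmetric normal proposal; step is the tuning parameter.
lam, step = 3.0, 1.0
draws = []
for _ in range(30000):
    prop = lam + random.gauss(0, step)
    if math.log(random.random()) < log_post(prop) - log_post(lam):
        lam = prop
    draws.append(lam)

mcmc_mean = sum(draws) / len(draws)
conjugate_mean = (alpha + y) / (beta + 1)  # exact gamma-Poisson posterior mean
print(round(mcmc_mean, 2), round(conjugate_mean, 2))
```

With a reasonable step size the MCMC mean lands close to the conjugate value, which is the check the chapter performs against the Chapter 11 posterior.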

