Bayesian Comparison of Two Regression Lines

1978 ◽  
Vol 3 (2) ◽  
pp. 179-188
Author(s):  
Robert K. Tsutakawa

The comparison of two regression lines is often meaningful or of interest over a finite interval I of the independent variable. When the prior distribution of the parameters is a natural conjugate, the posterior distribution of the distances between two regression lines at the end points of I is bivariate t. The posterior probability that one regression line lies above the other uniformly over I is numerically evaluated using this distribution.
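
Since the difference between two regression lines is itself linear in the independent variable, it is positive over all of I exactly when it is positive at both end points, so the probability of uniform dominance reduces to an orthant probability under the bivariate t posterior. A minimal Monte Carlo sketch of that reduction follows; the posterior location, scale matrix, and degrees of freedom are made-up placeholders, not values from the paper.

```python
# Hypothetical Monte Carlo sketch (not Tsutakawa's numerical method): estimate the
# posterior probability that one regression line lies above the other uniformly
# over I = [a, b], i.e. P(d(a) > 0, d(b) > 0) under a bivariate t posterior.
import numpy as np
from scipy.stats import multivariate_t

# Illustrative posterior for (d(a), d(b)); these numbers are placeholders.
loc = np.array([0.8, 1.5])          # posterior means of the end-point distances
shape = np.array([[0.25, 0.10],
                  [0.10, 0.40]])    # posterior scale matrix
df = 12                             # posterior degrees of freedom

rng = np.random.default_rng(0)
draws = multivariate_t(loc=loc, shape=shape, df=df).rvs(size=200_000, random_state=rng)
prob_uniform_dominance = np.mean((draws[:, 0] > 0) & (draws[:, 1] > 0))
print(f"P(line 1 above line 2 on all of I) ~ {prob_uniform_dominance:.3f}")
```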

2012 ◽  
Vol 49 (1) ◽  
pp. 114-136 ◽  
Author(s):  
Bruno Jedynak ◽  
Peter I. Frazier ◽  
Raphael Sznitman

We consider the problem of twenty questions with noisy answers, in which we seek to find a target by repeatedly choosing a set, asking an oracle whether the target lies in this set, and obtaining an answer corrupted by noise. Starting with a prior distribution on the target's location, we seek to minimize the expected entropy of the posterior distribution. We formulate this problem as a dynamic program and show that any policy optimizing the one-step expected reduction in entropy is also optimal over the full horizon. Two such Bayes optimal policies are presented: one generalizes the probabilistic bisection policy due to Horstein and the other asks a deterministic set of questions. We study the structural properties of the latter, and illustrate its use in a computer vision application.
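
A minimal sketch of the probabilistic bisection idea, assuming a discretized prior on the unit interval and a binary symmetric noise channel with known error rate; this illustrates the flavor of the policy, not the paper's exact formulation.

```python
# Probabilistic bisection (Horstein-style) on a discretized prior, assuming the
# oracle answers correctly with probability 1 - eps.  Illustrative sketch only.
import numpy as np

def probabilistic_bisection(target, grid, n_queries=30, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    posterior = np.full(len(grid), 1.0 / len(grid))   # uniform prior on the grid
    for _ in range(n_queries):
        # Query the posterior median: "is the target <= m?"
        cdf = np.cumsum(posterior)
        m = grid[np.searchsorted(cdf, 0.5)]
        truth = target <= m
        answer = truth if rng.random() > eps else not truth   # noisy oracle
        # Bayes update: P(answer "yes" | target at x) is 1 - eps left of m, eps right of m.
        like = np.where(grid <= m, 1.0 - eps, eps)
        if not answer:
            like = 1.0 - like
        posterior *= like
        posterior /= posterior.sum()
    return grid[np.argmax(posterior)], posterior

grid = np.linspace(0.0, 1.0, 1001)
estimate, post = probabilistic_bisection(target=0.374, grid=grid)
print("posterior mode:", estimate)
```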


2021 ◽  
Author(s):  
Ernest Chiedoziam Agwamba ◽  
Lawal G. Hassan ◽  
Achor Muhammad ◽  
Abdullahi M. Sokoto ◽  
Eric N. Agwamba

Abstract This investigation studies the independent variables that influence the Young's modulus of thermoplastic mango starch (TPS) as the dependent response factor. The experiment was designed using the Taguchi orthogonal technique with four independent variables: plasticiser type (glycerol (G) and triethanolamine (TEA) (T)), percentage plasticiser (40 and 120 %), percentage carboxymethyl cellulose (CMC) (10 and 50 %), and concentration of HCl (0.05 and 0.15 M). The main effect plots for the mean indicated that gTPS-CMC1, at 268.85 MPa, is a better outcome than gTPS-CMC3, at 280.31 MPa: no significant difference was observed between them, and gTPS-CMC1 requires less CMC, making it more cost effective to produce at its optimum conditions. The interaction plots of the independent variables showed that, for plasticiser type, glycerol (G) gave a higher Young's modulus than TEA (T) and interacted with TEA (T) only at 0.015 M HCl; 10 % CMC gave a higher response than 50 % CMC and showed no interaction as the other independent variables varied, and a similar effect was observed for percentage plasticiser. The study concluded that the predicted mean Young's modulus is substantially consistent with the experimental observations (R² = 0.6283).


2019 ◽  
Author(s):  
Johnny van Doorn ◽  
Dora Matzke ◽  
Eric-Jan Wagenmakers

Sir Ronald Fisher's venerable experiment "The Lady Tasting Tea" is revisited from a Bayesian perspective. We demonstrate how a similar tasting experiment, conducted in a classroom setting, can familiarize students with several key concepts of Bayesian inference, such as the prior distribution, the posterior distribution, the Bayes factor, and sequential analysis.
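
A classroom-style sketch of such an analysis, using a beta-binomial model with an illustrative count of correct identifications (not the authors' materials): the conjugate posterior and the Bayes factor against chance performance can be computed in a few lines.

```python
# Beta-binomial sketch in the spirit of the classroom tasting experiment.
# Counts and prior are illustrative placeholders.
import numpy as np
from scipy import stats
from scipy.special import betaln, comb

k, n = 9, 12            # hypothetical correct identifications out of n cups
a, b = 1.0, 1.0         # Beta(1, 1) prior on the taster's success probability theta

# Posterior under H1 (theta unknown): conjugate update Beta(a + k, b + n - k).
posterior = stats.beta(a + k, b + n - k)
print("posterior mean of theta:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))

# Bayes factor BF10 comparing H1: theta ~ Beta(a, b) against H0: theta = 0.5.
log_m1 = np.log(comb(n, k)) + betaln(a + k, b + n - k) - betaln(a, b)
log_m0 = np.log(comb(n, k)) + n * np.log(0.5)
print("BF10:", np.exp(log_m1 - log_m0))
```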


2020 ◽  
Vol 2 (2) ◽  
pp. 233-248
Author(s):  
Abednego Stephen ◽  
Athluna Canthika ◽  
Davin Subrata ◽  
Devina Veronika ◽  
...  

Advertisement is one of the most common ways to promote a product and create awareness of it. However, the effect of advertisement, especially on customers' buying decisions, remains difficult to measure. The objective of this paper is to identify how much advertisement impacts consumers' buying decisions. The research uses quantitative analysis of online survey data gathered from 280 respondents across Jabodetabek (Jakarta, Bogor, Depok, Tangerang, Bekasi). Statistical methods including correlation analysis, descriptive statistics, and regression analysis were applied, and the results from 244 valid respondents showed that the independent variables brand recall and stimulation have an impact on consumers' buying decisions, while the other three variables, necessity, pleasure, and dominance, do not.


Author(s):  
Edward P. Herbst ◽  
Frank Schorfheide

This chapter talks about the most widely used method to generate draws from posterior distributions of a DSGE model: the random walk MH (RWMH) algorithm. The DSGE model likelihood function in combination with the prior distribution leads to a posterior distribution that has a fairly regular elliptical shape. In turn, the draws from a simple RWMH algorithm can be used to obtain an accurate numerical approximation of posterior moments. However, in many other applications, particularly those involving medium- and large-scale DSGE models, the posterior distributions can be very non-elliptical. Irregularly shaped posterior distributions are often caused by identification problems or misspecification. In light of the difficulties caused by irregularly shaped posterior surfaces, the chapter reviews various alternative MH samplers, which use alternative proposal distributions.
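
A generic random walk Metropolis-Hastings sketch with a Gaussian proposal; the toy elliptical log-posterior below stands in for a DSGE posterior kernel and is not the authors' model.

```python
# Random walk Metropolis-Hastings with a Gaussian proposal on a toy target.
import numpy as np

def log_posterior(theta):
    # Placeholder elliptical target: zero-mean bivariate normal, correlation 0.8.
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return -0.5 * theta @ np.linalg.solve(cov, theta)

def rwmh(log_post, theta0, n_draws=50_000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    draws, accepts = np.empty((n_draws, theta.size)), 0
    for i in range(n_draws):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:    # accept with prob min(1, ratio)
            theta, lp = proposal, lp_prop
            accepts += 1
        draws[i] = theta
    return draws, accepts / n_draws

draws, acc_rate = rwmh(log_posterior, theta0=[0.0, 0.0])
print("acceptance rate:", acc_rate, "posterior mean:", draws.mean(axis=0))
```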


Author(s):  
Therese M. Donovan ◽  
Ruth M. Mickey

This chapter introduces Markov Chain Monte Carlo (MCMC) with Gibbs sampling, revisiting the "Maple Syrup Problem" of Chapter 12, where the goal was to estimate the two parameters of a normal distribution, μ and σ. Chapter 12 used the normal-normal conjugate to derive the posterior distribution for the unknown parameter μ; the parameter σ was assumed to be known. This chapter uses MCMC with Gibbs sampling to estimate the joint posterior distribution of both μ and σ. Gibbs sampling is a special case of the Metropolis–Hastings algorithm. The chapter describes MCMC with Gibbs sampling step by step, which requires (1) computing the posterior distribution of a given parameter, conditional on the value of the other parameter, and (2) drawing a sample from the posterior distribution. In this chapter, Gibbs sampling makes use of the conjugate solutions to decompose the joint posterior distribution into full conditional distributions for each parameter.
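
A minimal sketch of such a Gibbs sampler for a normal mean and variance, assuming a normal prior on μ and an inverse-gamma prior on σ²; the priors and simulated data are illustrative, not the book's maple syrup example.

```python
# Gibbs sampling for (mu, sigma^2) of a normal sample via conjugate full conditionals.
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=10.0, scale=2.0, size=50)     # simulated observations
n, ybar = len(y), y.mean()

mu0, tau2_0 = 0.0, 100.0      # N(mu0, tau2_0) prior on mu (vague)
a0, b0 = 2.0, 2.0             # InvGamma(a0, b0) prior on sigma^2

n_iter = 5_000
mu, sigma2 = ybar, y.var()
samples = np.empty((n_iter, 2))
for t in range(n_iter):
    # Full conditional for mu | sigma^2, y : normal.
    var_n = 1.0 / (1.0 / tau2_0 + n / sigma2)
    mean_n = var_n * (mu0 / tau2_0 + n * ybar / sigma2)
    mu = rng.normal(mean_n, np.sqrt(var_n))
    # Full conditional for sigma^2 | mu, y : inverse gamma.
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * np.sum((y - mu) ** 2)
    sigma2 = 1.0 / rng.gamma(a_n, 1.0 / b_n)
    samples[t] = (mu, sigma2)

burned = samples[1000:]                           # discard burn-in
print("posterior mean of mu:", burned[:, 0].mean())
print("posterior mean of sigma^2:", burned[:, 1].mean())
```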


2011 ◽  
pp. 63-69
Author(s):  
James R. Munis

We tend to assume that when 2 things are associated with each other, one must be causing the other. Nothing could be further from the truth, though. Because we're used to seeing the independent variable (‘cause’) plotted on the x-axis and the dependent variable (‘effect’) on the y-axis, this equation and graph suggest that the pressure gradient causes the paddle wheel flow rate. That, of course, is nonsense. This type of specious thinking is intended to warn you away from assuming that relationships necessarily imply causality. As you've learned already, pressure is not the same thing as energy, and pressure by itself cannot perform work or generate flow. However, flow generated by pressure-volume work (either by the heart or a mechanical pump) certainly can create pressure gradients. In this sort of chicken (flow) or egg (pressure) question, if the only energy-containing term is flow, then I'll say that the chicken came first.


1975 ◽  
Vol 12 (03) ◽  
pp. 466-476
Author(s):  
V. Barnett

Prompted by a rivulet model for the flow of liquid through packed columns we consider a simple random walk on parallel axes moving at different rates. A particle may make one of three transitions at each time instant: to the right or to the left on the axis it was on at the previous time instant, or across to the other axis. Results are obtained for the unrestricted walk, and for the walk with absorbing, or reflecting, end-points.
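
A simple simulation sketch of the unrestricted walk, with illustrative transition probabilities; the differing axis speeds of the rivulet model are not modelled here.

```python
# Unrestricted random walk on two parallel axes: at each time instant the particle
# steps right or left on its current axis, or crosses to the other axis.
# Transition probabilities are illustrative choices, not values from the paper.
import numpy as np

def simulate_walk(n_steps=1_000, p_right=0.4, p_left=0.4, p_switch=0.2, seed=0):
    assert abs(p_right + p_left + p_switch - 1.0) < 1e-9
    rng = np.random.default_rng(seed)
    axis, position = 0, 0                 # start at the origin on axis 0
    path = [(axis, position)]
    for _ in range(n_steps):
        u = rng.random()
        if u < p_right:
            position += 1                 # step right on the current axis
        elif u < p_right + p_left:
            position -= 1                 # step left on the current axis
        else:
            axis = 1 - axis               # cross over to the other axis
        path.append((axis, position))
    return path

path = simulate_walk()
print("final state (axis, position):", path[-1])
```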


2020 ◽  
Vol 30 (1) ◽  
pp. 44-61 ◽  
Author(s):  
B. Jacobs

Abstract A desired closure property in Bayesian probability is that an updated posterior distribution be in the same class of distributions – say Gaussians – as the prior distribution. When the updating takes place via a statistical model, one calls the class of prior distributions the 'conjugate priors' of the model. This paper gives (1) an abstract formulation of this notion of conjugate prior, using channels, in a graphical language, (2) a simple abstract proof that such conjugate priors yield Bayesian inversions and (3) an extension to multiple updates. The theory is illustrated with several standard examples.
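
The closure property being abstracted can be illustrated concretely with the standard beta-binomial example (not the paper's channel-based formalism): the posterior stays in the Beta family, and two successive updates compose into a single batched update.

```python
# Closure and composition of conjugate updates for the beta-binomial model.
def beta_update(alpha, beta, successes, failures):
    """Conjugate Bayesian update of a Beta(alpha, beta) prior on binomial data."""
    return alpha + successes, beta + failures

prior = (2.0, 2.0)
step1 = beta_update(*prior, successes=7, failures=3)
step2 = beta_update(*step1, successes=4, failures=6)
batched = beta_update(*prior, successes=7 + 4, failures=3 + 6)
print(step2, "==", batched)      # sequential and batched updates agree
```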

