conjugate prior
Recently Published Documents

TOTAL DOCUMENTS: 64 (five years: 11)
H-INDEX: 11 (five years: 1)

2022, Vol. 4
Author(s): Ying-Ying Zhang, Teng-Zhong Rong, Man-Man Li

For the normal model with a known mean, Bayes estimation of the variance parameter under the conjugate prior is studied in Lehmann and Casella (1998) and Mao and Tang (2012). However, these works only calculate the Bayes estimator with respect to a conjugate prior under the squared error loss function. Zhang (2017) calculates the Bayes estimator of the variance parameter of the normal model with a known mean with respect to the conjugate prior under Stein’s loss function, which penalizes gross overestimation and gross underestimation equally, together with the corresponding Posterior Expected Stein’s Loss (PESL). Motivated by these works, we calculate the Bayes estimators of the variance parameter with respect to the noninformative (Jeffreys’s, reference, and matching) priors under Stein’s loss function, and the corresponding PESLs. Moreover, we calculate the Bayes estimators of the scale parameter with respect to the conjugate and noninformative priors under Stein’s loss function, and the corresponding PESLs. The quantities (prior, posterior, three posterior expectations, two Bayes estimators, and two PESLs) and their expressions for the variance and scale parameters of the model under the conjugate and noninformative priors are summarized in two tables. Numerical simulations are then carried out to exemplify the theoretical findings. Finally, we calculate the Bayes estimators and PESLs of the variance and scale parameters of the S&P 500 monthly simple returns for the conjugate and noninformative priors.
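As a minimal illustration of the kind of closed forms this abstract describes, the sketch below assumes an inverse-gamma IG(a, b) conjugate prior on the variance of a normal model with known mean; the hyperparameters and data are illustrative choices, not the article's values.

```python
# Sketch: conjugate-prior Bayes estimators of a normal variance (known mean)
# under squared error loss and under Stein's loss, plus the PESL.
import numpy as np
from scipy.special import digamma
from scipy.stats import invgamma

rng = np.random.default_rng(1)
mu, sigma2_true, n = 0.0, 2.0, 50
x = rng.normal(mu, np.sqrt(sigma2_true), n)

a0, b0 = 2.0, 2.0                        # illustrative prior hyperparameters
a_n = a0 + n / 2                         # conjugacy: posterior is IG(a_n, b_n)
b_n = b0 + np.sum((x - mu) ** 2) / 2

# Bayes estimator under squared error loss: posterior mean E[sigma^2 | x]
delta_se = b_n / (a_n - 1)
# Bayes estimator under Stein's loss L(s2, d) = d/s2 - log(d/s2) - 1: 1 / E[1/sigma^2 | x]
delta_stein = b_n / a_n
# Posterior Expected Stein's Loss at delta_stein:
# PESL = E[log sigma^2 | x] + log E[1/sigma^2 | x] = log(a_n) - digamma(a_n)
pesl = np.log(a_n) - digamma(a_n)

# Monte Carlo check of the closed forms against posterior draws.
s2 = invgamma.rvs(a_n, scale=b_n, size=200_000, random_state=rng)
print(delta_se, s2.mean())               # should be close
print(delta_stein, 1 / np.mean(1 / s2))  # should be close
print(pesl, np.mean(delta_stein / s2 - np.log(delta_stein / s2) - 1))
```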


2020, Vol. 16 (3), pp. 382
Author(s): Ferra Yanuar, Cici Saputri

The purpose of this study is to determine the best estimator for the shape parameter of the Pareto distribution when the scale parameter is known. The parameter is estimated using the Gamma distribution as the conjugate prior and the Uniform distribution as the non-conjugate prior. The two priors are compared through simulation studies with various sample sizes. The best estimator is the one that produces the smallest posterior variance and absolute bias and the narrowest Bayes confidence interval. This study shows that the Bayes estimator under the conjugate prior yields smaller values for all of these goodness indicators than the non-conjugate prior. Thus it can be concluded that the estimator with the conjugate prior produces better estimates than the one with the non-conjugate prior.
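A minimal sketch of this comparison is given below, assuming a Gamma(a0, b0) conjugate prior (rate parameterization) and a Uniform(0, c) non-conjugate prior on the Pareto shape; the hyperparameters, sample size, and the uniform upper bound are assumptions for the sketch.

```python
# Sketch: posterior mean and variance for the Pareto shape parameter alpha
# (known scale x_m) under a conjugate Gamma prior versus a Uniform prior.
import numpy as np

rng = np.random.default_rng(2)
alpha_true, x_m, n = 3.0, 1.0, 30
x = (rng.pareto(alpha_true, n) + 1.0) * x_m   # classical Pareto(alpha, x_m) samples
T = np.sum(np.log(x / x_m))                   # sufficient statistic

# Conjugate Gamma(a0, b0) prior: posterior is Gamma(a0 + n, b0 + T).
a0, b0 = 2.0, 1.0
a_n, b_n = a0 + n, b0 + T
post_mean_conj, post_var_conj = a_n / b_n, a_n / b_n**2

# Non-conjugate Uniform(0, c) prior: posterior proportional to alpha^n * exp(-alpha*T)
# on (0, c); no closed form in the same family, so evaluate on a grid (log space).
c = 20.0
grid = np.linspace(1e-6, c, 20_000)
log_post = n * np.log(grid) - grid * T
w = np.exp(log_post - log_post.max())
w /= w.sum()
post_mean_unif = np.sum(grid * w)
post_var_unif = np.sum((grid - post_mean_unif) ** 2 * w)

print("conjugate prior:", post_mean_conj, post_var_conj)
print("uniform prior  :", post_mean_unif, post_var_unif)
```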


2020, Vol. 56 (1), pp. 208-225
Author(s): Karl Granstrom, Maryam Fatemi, Lennart Svensson

2020, Vol. 30 (1), pp. 44-61
Author(s): B. Jacobs

A desired closure property in Bayesian probability is that an updated posterior distribution be in the same class of distributions – say Gaussians – as the prior distribution. When the updating takes place via a statistical model, one calls the class of prior distributions the ‘conjugate priors’ of the model. This paper gives (1) an abstract formulation of this notion of conjugate prior, using channels, in a graphical language, (2) a simple abstract proof that such conjugate priors yield Bayesian inversions and (3) an extension to multiple updates. The theory is illustrated with several standard examples.
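The sketch below shows a concrete instance of the closure property the paper abstracts over: for the Bernoulli model, Beta priors are conjugate, so every update stays in the Beta family and repeated updates compose by adding counts. This is only an illustrative example, not the paper's channel-based formalism.

```python
# Sketch: Beta priors are conjugate for the Bernoulli model, so each update
# (Bayesian inversion) returns another Beta, and multiple updates compose.
from dataclasses import dataclass

@dataclass(frozen=True)
class Beta:
    a: float  # pseudo-count of successes
    b: float  # pseudo-count of failures

    def update(self, observation: int) -> "Beta":
        """Update on a single Bernoulli observation; Beta in, Beta out."""
        return Beta(self.a + observation, self.b + (1 - observation))

prior = Beta(1.0, 1.0)              # uniform prior
data = [1, 0, 1, 1, 0, 1]
posterior = prior
for y in data:
    posterior = posterior.update(y)  # closure: still a Beta after each step
print(posterior)                     # Beta(a=5.0, b=3.0)
```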


2019, Vol. 4 (3), pp. 82
Author(s): Ferra Yanuar, Hazmira Yozza, Ratna Vrima Rescha

This study aims to conduct Bayesian inference for the scale parameter of the Weibull distribution. The prior distributions chosen in this study are the conjugate prior, namely the inverse gamma, and a non-informative prior, namely Jeffreys’ prior. This research also studies several theoretical properties of the posterior distribution under each prior, implements them on generated data, and compares the two Bayes estimators. The best estimator is evaluated based on the smallest Mean Square Error (MSE). Based on both analytic and simulation results, this study shows that the Bayes estimator under the conjugate prior gives better estimates than the one under the non-informative prior, producing smaller MSE values, when the scale parameter is greater than one. For scale parameter values less than one, it does not yield good estimates.
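A small simulation in the spirit of this comparison is sketched below, assuming a Weibull model with known shape k, the inverse-gamma conjugate prior placed on theta = scale^k, and Jeffreys' prior proportional to 1/theta; the hyperparameters, sample size, and parameterization are assumptions, not the article's setup.

```python
# Sketch: MSE comparison of posterior-mean estimators of theta = scale^k of a
# Weibull (known shape k) under a conjugate inverse-gamma prior vs Jeffreys' prior.
import numpy as np

rng = np.random.default_rng(3)
k, scale_true, n, reps = 1.5, 2.0, 20, 5000
theta_true = scale_true**k
a0, b0 = 3.0, 2.0 * theta_true             # illustrative conjugate IG(a0, b0) prior

est_conj, est_jeff = [], []
for _ in range(reps):
    x = scale_true * rng.weibull(k, n)     # Weibull(shape=k, scale=scale_true)
    s = np.sum(x**k)                       # sufficient statistic for theta
    # Conjugacy: theta | x ~ IG(a0 + n, b0 + s); posterior mean = (b0+s)/(a0+n-1)
    est_conj.append((b0 + s) / (a0 + n - 1))
    # Jeffreys' prior 1/theta: theta | x ~ IG(n, s); posterior mean = s/(n-1)
    est_jeff.append(s / (n - 1))

mse = lambda e: np.mean((np.array(e) - theta_true) ** 2)
print("MSE, conjugate prior:", mse(est_conj))
print("MSE, Jeffreys prior :", mse(est_jeff))
```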


Author(s): Silvia Miranda-Agrippino, Giovanni Ricco

Vector autoregressions (VARs) are linear multivariate time-series models able to capture the joint dynamics of multiple time series. Bayesian inference treats the VAR parameters as random variables and provides a framework to estimate the “posterior” probability distribution of the model parameters by combining the information in a sample of observed data with prior information derived from a variety of sources, such as other macro or micro datasets, theoretical models, other macroeconomic phenomena, or introspection. In empirical work in economics and finance, informative prior probability distributions are often adopted. These are intended to summarize stylized representations of the data generating process. For example, “Minnesota” priors, among the most commonly adopted macroeconomic priors for the VAR coefficients, express the belief that an independent random-walk model for each variable in the system is a reasonable “center” for beliefs about its time-series behavior. Other commonly adopted priors, the “single-unit-root” and “sum-of-coefficients” priors, are used to enforce beliefs about relations among the VAR coefficients, such as the existence of co-integrating relationships among variables, or of independent unit roots. Priors for macroeconomic variables are often adopted as “conjugate prior distributions” (that is, distributions that yield a posterior distribution in the same family as the prior p.d.f.), in the form of Normal-Inverse-Wishart distributions, which are conjugate priors for the likelihood of a VAR with normally distributed disturbances. Conjugate priors allow direct sampling from the posterior distribution and fast estimation. When this is not possible, numerical techniques such as Gibbs and Metropolis-Hastings sampling algorithms are adopted. Bayesian techniques allow for the estimation of an ever-expanding class of sophisticated autoregressive models that includes conventional fixed-parameter VAR models; large VARs incorporating hundreds of variables; panel VARs, which permit analyzing the joint dynamics of multiple time series of heterogeneous and interacting units; and VAR models that relax the assumption of fixed coefficients, such as time-varying-parameter, threshold, and Markov-switching VARs.
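The sketch below illustrates the direct (non-MCMC) posterior sampling that conjugacy makes possible in a VAR with a Normal-Inverse-Wishart prior centered, Minnesota-style, on independent random walks. The simulated data, prior hyperparameters, and lag order are illustrative assumptions, not taken from the article.

```python
# Sketch: conjugate Normal-Inverse-Wishart posterior for a Bayesian VAR,
# with direct sampling from the closed-form posterior.
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)

# Simulate a small bivariate VAR(1) as stand-in data.
T, n, p = 200, 2, 1
A_true = np.array([[0.5, 0.1], [0.0, 0.8]])
y = np.zeros((T, n))
for t in range(1, T):
    y[t] = y[t - 1] @ A_true.T + rng.normal(scale=0.5, size=n)

Y = y[p:]                                           # (T-p) x n
X = np.hstack([y[p - 1:-1], np.ones((T - p, 1))])   # first lag + intercept
k = X.shape[1]

# Prior: Sigma ~ IW(S0, nu0), vec(B) | Sigma ~ N(vec(B0), Sigma kron Omega0).
B0 = np.zeros((k, n))
B0[:n, :] = np.eye(n)          # center on a random walk per variable (Minnesota-style)
Omega0 = 10.0 * np.eye(k)      # prior coefficient covariance (per equation), illustrative
S0, nu0 = np.eye(n), n + 2

# Conjugacy: the posterior is again Normal-Inverse-Wishart, in closed form.
Omega0_inv = np.linalg.inv(Omega0)
Omega_n = np.linalg.inv(Omega0_inv + X.T @ X)
B_n = Omega_n @ (Omega0_inv @ B0 + X.T @ Y)
S_n = S0 + Y.T @ Y + B0.T @ Omega0_inv @ B0 - B_n.T @ np.linalg.inv(Omega_n) @ B_n
nu_n = nu0 + (T - p)

def draw():
    """One direct draw (Sigma, B) from the Normal-Inverse-Wishart posterior."""
    Sigma = invwishart.rvs(df=nu_n, scale=S_n)
    Z = rng.normal(size=(k, n))
    B = B_n + np.linalg.cholesky(Omega_n) @ Z @ np.linalg.cholesky(Sigma).T
    return B, Sigma

draws = [draw() for _ in range(1000)]
post_mean_B = np.mean([d[0] for d in draws], axis=0)
print(post_mean_B.round(2))
```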


2019, Vol. 3 (2), pp. 59-65
Author(s):

The aim of this study is to predict the next-day PM10 concentration using Bayesian regression models with a noninformative prior and with a conjugate prior. A descriptive analysis of PM10, temperature, relative humidity, nitrogen dioxide (NO2), sulphur dioxide (SO2), carbon monoxide (CO), and ozone (O3) is also included. The case study used two years of air quality monitoring data at three (3) monitoring stations to predict the future PM10 concentration from seven parameters (PM10, temperature, relative humidity, NO2, SO2, CO, and O3). The descriptive analysis showed that at Klang station the highest mean PM10 concentration occurred in 2011 (71.30 µg/m3), followed by 2012 (68.82 µg/m3); at Nilai the highest mean PM10 concentration occurred in 2012 (68.86 µg/m3), followed by 2011 (66.29 µg/m3). The results showed that the Bayesian regression model using a conjugate normal-gamma prior was a good model for predicting the PM10 concentration at most study stations (R2 = 0.67 at Jerantut station, R2 = 0.61 at Nilai station, and R2 = 0.66 at Klang station) compared to the noninformative prior.
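A minimal sketch of a Bayesian regression with the conjugate normal-gamma (normal-inverse-gamma) prior is given below. The synthetic predictors, prior hyperparameters, and prediction step are illustrative assumptions standing in for the monitoring-station data, not the study's actual dataset or settings.

```python
# Sketch: conjugate normal-gamma Bayesian linear regression with a closed-form
# posterior, used here to produce a point prediction for a new observation.
import numpy as np

rng = np.random.default_rng(4)
n_obs, n_pred = 300, 7     # stand-ins for the seven monitoring variables as predictors
X = np.column_stack([np.ones(n_obs), rng.normal(size=(n_obs, n_pred))])
beta_true = rng.normal(size=n_pred + 1)
y = X @ beta_true + rng.normal(scale=2.0, size=n_obs)   # stand-in for next-day PM10

# Conjugate prior: beta | sigma^2 ~ N(m0, sigma^2 * V0), 1/sigma^2 ~ Gamma(a0, b0).
m0 = np.zeros(n_pred + 1)
V0 = 100.0 * np.eye(n_pred + 1)
a0, b0 = 2.0, 2.0

# Closed-form posterior (same normal-gamma family, so no MCMC needed).
V0_inv = np.linalg.inv(V0)
V_n = np.linalg.inv(V0_inv + X.T @ X)
m_n = V_n @ (V0_inv @ m0 + X.T @ y)
a_n = a0 + n_obs / 2
b_n = b0 + 0.5 * (y @ y + m0 @ V0_inv @ m0 - m_n @ np.linalg.inv(V_n) @ m_n)

# Point prediction for a new day: posterior-mean coefficients times new predictors.
x_new = np.r_[1.0, rng.normal(size=n_pred)]
print("predicted response:", x_new @ m_n)
print("posterior mean of sigma^2:", b_n / (a_n - 1))
```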

