Bayesian inference of bivariate Weibull geometric model based on LINEX and quadratic loss functions

Author(s):  
Mehdi Basikhasteh ◽  
Iman Makhdoom


2014 ◽  
Vol 2014 ◽  
pp. 1-21
Author(s):  
Navid Feroz

This paper is concerned with Bayesian estimation of the parameter of the Burr type VIII distribution from censored samples. The Bayes estimators and their associated risks are derived under five priors and three loss functions, and the estimators are compared in terms of posterior risk. A simulation study is conducted to assess and compare their performance. The study recommends the inverse Levy prior combined with the quadratic loss function for Bayes estimation of this parameter.
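The loss functions compared above admit closed-form Bayes estimators given posterior draws: under quadratic (squared-error) loss the Bayes estimator is the posterior mean, and under LINEX loss it is a log-transformed expectation. The sketch below illustrates both rules on a hypothetical Gamma posterior chosen purely for illustration; it does not reproduce the paper's Burr type VIII posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for a positive parameter theta
# (a Gamma(5, 0.4) posterior is assumed only for illustration).
theta = rng.gamma(shape=5.0, scale=0.4, size=100_000)

# Quadratic (squared-error) loss: the Bayes estimator is the posterior mean.
est_quadratic = theta.mean()

# LINEX loss L(d, theta) = exp(a*(d - theta)) - a*(d - theta) - 1:
# the Bayes estimator is -(1/a) * log E[exp(-a*theta)].
a = 1.0  # asymmetry parameter; a > 0 penalizes over-estimation more heavily
est_linex = -np.log(np.mean(np.exp(-a * theta))) / a

print(est_quadratic, est_linex)
```

For a > 0 the LINEX estimator is pulled below the posterior mean (by Jensen's inequality), which is how the asymmetric penalty manifests; as a → 0 the two estimators coincide.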


2020 ◽  
Vol 62 ◽  
pp. 102117
Author(s):  
Yuyang Qian ◽  
Kaiming Yang ◽  
Yu Zhu ◽  
Wei Wang ◽  
Chenhui Wan

2021 ◽  
Author(s):  
Dmytro Perepolkin ◽  
Benjamin Goodrich ◽  
Ullrika Sahlin

This paper extends indirect Bayesian inference to probability distributions defined in terms of quantiles of the observable quantities. Quantile-parameterized distributions are characterized by high shape flexibility and by the interpretability of their parameters, and are therefore useful for elicitation on observables. To encode uncertainty in the quantiles elicited from experts, we propose a Bayesian model based on the metalog distribution and a version of the Dirichlet prior. The resulting “hybrid” expert elicitation protocol, which characterizes uncertainty in parameters through questions about the observable quantities, is discussed and contrasted with parametric and predictive elicitation.
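A defining feature of the metalog is that its quantile function is linear in its coefficients, so elicited probability–quantile pairs determine the coefficients by solving a linear system. The sketch below fits a 3-term metalog to three hypothetical elicited quantiles; the probability levels and values are invented for illustration, and the paper's Dirichlet-prior uncertainty layer is not modeled here.

```python
import numpy as np

# Hypothetical elicited probability-quantile pairs (invented for illustration).
p = np.array([0.10, 0.50, 0.90])
q = np.array([8.0, 12.0, 20.0])

# Basis of the 3-term metalog quantile function:
#   M(y) = a1 + a2 * ln(y / (1 - y)) + a3 * (y - 0.5) * ln(y / (1 - y))
logit = np.log(p / (1 - p))
Y = np.column_stack([np.ones_like(p), logit, (p - 0.5) * logit])

# With exactly three pairs, the coefficients solve the system exactly.
a = np.linalg.solve(Y, q)

def metalog_quantile(y):
    """Evaluate the fitted 3-term metalog quantile function at levels y."""
    ly = np.log(y / (1 - y))
    return a[0] + a[1] * ly + a[2] * (y - 0.5) * ly

print(metalog_quantile(p))
```

Because the fit is exact here, the fitted quantile function reproduces the elicited pairs; with more elicited pairs than terms, `np.linalg.lstsq` would give the least-squares metalog instead.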


2019 ◽  
Vol 52 (8) ◽  
pp. 130-135 ◽  
Author(s):  
Hojoon Lee ◽  
Heungseok Chae ◽  
Kyongsu Yi

1996 ◽  
Vol 8 (7) ◽  
pp. 1391-1420 ◽  
Author(s):  
David H. Wolpert

This is the second of two papers that use off-training-set (OTS) error to investigate the assumption-free relationship between learning algorithms. The first paper discusses a particular set of ways to compare learning algorithms, according to which there are no distinctions between them. This second paper concentrates on different ways of comparing learning algorithms and on the associated a priori distinctions that do exist. In particular, it is shown, loosely speaking, that for loss functions other than zero-one (e.g., quadratic loss) there are a priori distinctions between algorithms. However, even for such loss functions, any algorithm is equivalent on average to its “randomized” version, and so still has no first-principles justification in terms of average error. Nonetheless, as this paper discusses, it may be that cross-validation has better head-to-head minimax properties than “anti-cross-validation” (choosing the learning algorithm with the largest cross-validation error). This may be true even for zero-one loss, a loss function for which the notion of “randomization” is not relevant. This paper also analyzes averages over hypotheses rather than over targets. Such analyses hold for all possible priors over targets; accordingly they prove, as a particular example, that cross-validation cannot be justified as a Bayesian procedure. In fact, for a very natural restriction of the class of learning algorithms, one should use anti-cross-validation rather than cross-validation (!).
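The cross-validation versus anti-cross-validation contrast above can be made concrete: both procedures compute the same fold-averaged errors and differ only in taking the argmin or the argmax. The toy sketch below compares two hypothetical learning algorithms (low- and high-degree polynomial fits, chosen only for illustration) under both rules; it demonstrates the procedures, not the paper's theorems.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a noisy linear relationship.
x = rng.uniform(-1, 1, size=60)
y = 2.0 * x + rng.normal(scale=0.5, size=60)

def fit_poly(deg):
    """Return a learning 'algorithm': fit a degree-`deg` polynomial."""
    def algo(xtr, ytr):
        coefs = np.polyfit(xtr, ytr, deg)
        return lambda xs: np.polyval(coefs, xs)
    return algo

def cv_error(algo, x, y, k=5):
    """Mean squared held-out error over k folds."""
    idx = np.arange(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        model = algo(x[train], y[train])
        errs.append(np.mean((model(x[fold]) - y[fold]) ** 2))
    return np.mean(errs)

algos = {"degree-1": fit_poly(1), "degree-10": fit_poly(10)}
errors = {name: cv_error(a, x, y) for name, a in algos.items()}

cv_choice = min(errors, key=errors.get)       # cross-validation: smallest error
anti_cv_choice = max(errors, key=errors.get)  # anti-cross-validation: largest error
print(cv_choice, anti_cv_choice)
```

The two rules always select opposite members of the pair; the paper's point is that which rule is better on average depends on assumptions about the targets, not on the fold-error computation itself.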

