Bayesian Analysis of Weibull-Lindley Distribution Using Different Loss Functions

Author(s):  
Innocent Boyle Eraikhuemen ◽  
Olateju Alao Bamigbala ◽  
Umar Alhaji Magaji ◽  
Bassa Shiwaye Yakura ◽  
Kabiru Ahmed Manju

In the present paper, a three-parameter Weibull-Lindley distribution is considered for Bayesian analysis. The estimate of the shape parameter of the Weibull-Lindley distribution is obtained using both classical and Bayesian methods. Bayesian estimators are obtained using Jeffreys' prior, a uniform prior and a gamma prior under the squared error, quadratic and precautionary loss functions. Estimation by the method of maximum likelihood is also discussed. These methods are compared in terms of mean squared error through a simulation study with varying parameter values and sample sizes.
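Under the three loss functions named above, the Bayes estimators have standard closed forms that can be approximated from posterior draws: the posterior mean for squared error loss, E[θ⁻¹]/E[θ⁻²] for the quadratic loss ((θ−d)/θ)², and √E[θ²] for the precautionary loss (d−θ)²/d. A minimal sketch, assuming a placeholder gamma posterior for the shape parameter (illustrative only, not the paper's actual Weibull-Lindley posterior):

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder posterior: pretend the shape parameter's posterior is Gamma(5, scale=0.5).
theta = rng.gamma(shape=5.0, scale=0.5, size=100_000)  # posterior draws

# Bayes estimators under the three loss functions (standard results):
self_est = theta.mean()                                 # squared error loss -> posterior mean
quad_est = np.mean(1 / theta) / np.mean(1 / theta**2)   # quadratic loss -> E[1/θ] / E[1/θ²]
prec_est = np.sqrt(np.mean(theta**2))                   # precautionary loss -> sqrt(E[θ²])

print(self_est, quad_est, prec_est)
```

For a right-skewed posterior these estimators order as quadratic < squared error < precautionary, which is one reason the preferred loss function can differ between decreasing and increasing parameter values.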

Author(s):  
Terna G. Ieren ◽  
Pelumi E. Oguntunde

We considered the Bayesian analysis of a shape parameter of the Weibull-Exponential distribution in this paper. We assumed a class of non-informative priors in deriving the corresponding posterior distributions. In particular, the Bayes estimators and associated risks were calculated under three different loss functions. The performance of the Bayes estimators was evaluated and compared with that of the method of maximum likelihood in a comprehensive simulation study. It was discovered that, to estimate the said parameter, the quadratic loss function under both the uniform and Jeffreys' priors should be used for decreasing parameter values, while the precautionary loss function is preferred for increasing parameter values, irrespective of variations in sample size.


Author(s):  
Terna Godfrey Ieren ◽  
Angela Unna Chukwu

In this paper, we estimate a shape parameter of the Weibull-Frechet distribution by considering the Bayesian approach under two non-informative priors using three different loss functions. We derive the corresponding posterior distributions for the shape parameter of the Weibull-Frechet distribution, assuming that the other three parameters are known. The Bayes estimators and associated posterior risks have also been derived using the three different loss functions. The performance of the Bayes estimators is evaluated and compared using a comprehensive simulation study and a real-life application, to find the combination of loss function and prior with the minimum Bayes risk and hence the best results. In conclusion, this study reveals that, to estimate the parameter in question, we should use the quadratic loss function under either of the two non-informative priors used in this study.


2017 ◽  
Vol 5 (2) ◽  
pp. 141
Author(s):  
Wajiha Nasir

In this study, the Frechet distribution has been studied using Bayesian analysis. Posterior distributions have been derived using gamma and exponential priors. Bayes estimators and their posterior risks have been derived under five different loss functions. Elicitation of the hyperparameters has been done using prior predictive distributions. A simulation study is carried out to study the behavior of the posterior distributions. The quasi-quadratic loss function with the exponential prior is found to perform best among all combinations.


2014 ◽  
Vol 2014 ◽  
pp. 1-21
Author(s):  
Navid Feroz

This paper is concerned with estimation of the parameter of the Burr type VIII distribution under a Bayesian framework using censored samples. The Bayes estimators and associated risks have been derived under the assumption of five priors and three loss functions. The comparison among the performance of the different estimators has been made in terms of posterior risks. A simulation study has been conducted in order to assess and compare the performance of the different estimators. The study proposes the use of the inverse Lévy prior with the quadratic loss function for Bayes estimation of the said parameter.
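The posterior risk used as the comparison criterion here (and in several of the papers above) also has simple closed forms at the corresponding Bayes estimators: the posterior variance under squared error loss, 1 − (E[θ⁻¹])²/E[θ⁻²] under the quadratic loss ((θ−d)/θ)², and 2(√E[θ²] − E[θ]) under the precautionary loss (d−θ)²/d. A minimal sketch from posterior draws, with a placeholder gamma posterior rather than the paper's Burr-type posterior:

```python
import numpy as np

rng = np.random.default_rng(4)
# Placeholder posterior for the parameter of interest (illustrative only).
theta = rng.gamma(shape=6.0, scale=0.25, size=100_000)

# Posterior risk of the Bayes estimator under each loss (standard identities):
risk_self = theta.var()                                     # squared error: posterior variance
risk_quad = 1 - np.mean(1 / theta)**2 / np.mean(1 / theta**2)  # quadratic loss
risk_prec = 2 * (np.sqrt(np.mean(theta**2)) - theta.mean())    # precautionary loss

print(risk_self, risk_quad, risk_prec)
```

Ranking estimators by these quantities, over many simulated samples, is exactly the kind of comparison the abstracts in this collection report.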


2001 ◽  
Vol 03 (02n03) ◽  
pp. 203-211
Author(s):  
K. HELMES ◽  
C. SRINIVASAN

Let Y(t), t∈[0,1], be a stochastic process modelled as dY(t)=θ(t)dt+dW(t), where W(t) denotes a standard Wiener process and θ(t) is an unknown function assumed to belong to a given set Θ⊂L2[0,1]. We consider the problem of estimating the value ℒ(θ), where ℒ is a continuous linear function defined on Θ, using linear estimators of the form ⟨m,Y⟩=∫m(t)dY(t), m∈L2[0,1]. The distance between the quantity ℒ(θ) and the estimated value is measured by a loss function. In this paper, we take the loss function to be an arbitrary even power function. We provide a characterisation of the best linear minimax estimator for a general power function, which implies the characterisation for two special cases previously considered in the literature, viz. the case of a quadratic loss function and the case of a quartic loss function.
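In the quadratic-loss special case, the risk of a linear estimator splits into squared bias plus variance, since ⟨m,Y⟩ has mean ⟨m,θ⟩ and variance ‖m‖² under the model; this is the trade-off the minimax criterion balances. A sketch of this step in LaTeX (with ⟨·,·⟩ and ‖·‖ denoting the L2[0,1] inner product and norm):

```latex
\mathbb{E}_{\theta}\!\left[\bigl(\langle m, Y\rangle - \mathcal{L}(\theta)\bigr)^{2}\right]
  = \bigl(\langle m, \theta\rangle - \mathcal{L}(\theta)\bigr)^{2} + \|m\|^{2},
\qquad
m^{*} = \operatorname*{arg\,min}_{m \in L^{2}[0,1]}
        \sup_{\theta \in \Theta}
        \left[\bigl(\langle m, \theta\rangle - \mathcal{L}(\theta)\bigr)^{2} + \|m\|^{2}\right].
```

For a general even power loss the same bias/variance quantities enter, but through higher moments of the Gaussian term ∫m dW, which is what the paper's characterisation handles.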


1997 ◽  
Vol 9 (6) ◽  
pp. 1211-1243 ◽  
Author(s):  
David H. Wolpert

This article presents several additive corrections to the conventional quadratic-loss bias-plus-variance formula. One of these corrections is appropriate when both the target is not fixed (as in Bayesian analysis) and training sets are averaged over (as in the conventional bias-plus-variance formula). Another additive correction casts conventional fixed-training-set Bayesian analysis directly in terms of bias plus variance. Another correction is appropriate for measuring full generalization error over a test set rather than (as with conventional bias plus variance) error at a single point. Yet another correction can help explain the recent counterintuitive bias-variance decomposition of Friedman for zero-one loss. After presenting these corrections, this article discusses some other loss-function-specific aspects of supervised learning. In particular, there is a discussion of the fact that if the loss function is a metric (e.g., zero-one loss), then there is a bound on the change in generalization error accompanying changing the algorithm's guess from h1 to h2, a bound that depends only on h1 and h2 and not on the target. This article ends by presenting versions of the bias-plus-variance formula appropriate for logarithmic and quadratic scoring, and then all the additive corrections appropriate to those formulas. All the correction terms presented are a covariance, between the learning algorithm and the posterior distribution over targets. Accordingly, in the (very common) contexts in which those terms apply, there is not a “bias-variance trade-off” or a “bias-variance dilemma,” as one often hears. Rather, there is a bias-variance-covariance trade-off.
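The conventional fixed-target, averaged-over-training-sets decomposition that these corrections extend can be checked numerically: for quadratic loss, the expected error of an algorithm's guess splits exactly into squared bias plus variance. A minimal sketch with a toy "learning algorithm" (a deliberately shrunk sample mean, so the bias term is visibly nonzero; all settings here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
target = 3.0          # fixed target: the conventional setting, before the corrections
n, trials = 20, 50_000

# Toy learning algorithm: a shrunk sample mean of each training set, giving nonzero bias.
guesses = np.array([0.9 * rng.normal(target, 1.0, n).mean() for _ in range(trials)])

mse = np.mean((guesses - target) ** 2)   # expected quadratic loss over training sets
bias2 = (guesses.mean() - target) ** 2   # squared bias of the algorithm
var = guesses.var()                      # variance of the algorithm's guesses

# Conventional formula: expected quadratic loss = bias^2 + variance (an exact identity).
print(mse, bias2 + var)
```

The covariance correction terms in the article appear once the target itself is a random variable that can co-vary with the algorithm's guess, which this fixed-target sketch deliberately excludes.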


Author(s):  
Maria Sivak ◽  
Vladimir Timofeev

The paper considers the problem of building robust neural networks using different robust loss functions. Applying such neural networks is reasonable when working with noisy data, and it can serve as an alternative to data preprocessing and to making the neural network architecture more complex. In order to work adequately, the error back-propagation algorithm requires the loss function to be continuously or twice differentiable. According to this requirement, five robust loss functions were chosen (Andrews, Welsch, Huber, Ramsey and Fair). Using the above-mentioned functions in the error back-propagation algorithm instead of the quadratic one yields an entirely new class of neural networks. To investigate the properties of the built networks, a number of computational experiments were carried out. Different values of the outlier fraction and various numbers of epochs were considered. The first stage consisted of tuning the obtained neural networks, i.e. choosing the values of the internal loss function parameters that resulted in the highest accuracy of the neural network. To determine the ranges of parameter values, a preliminary study was pursued. The results of the first stage allowed giving recommendations on choosing the best parameter values for each of the loss functions under study. The second stage dealt with comparing the investigated robust networks with each other and with the classical one. The analysis of the results shows that using the robust technique leads to a significant increase in neural network accuracy and in learning rate.
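Of the five robust losses mentioned, the Huber loss is a convenient one to sketch: it is continuously differentiable, quadratic near zero and linear in the tails, so its gradient saturates on outliers instead of letting them dominate the update. A minimal illustration on a single-weight linear model fitted by gradient descent (a toy stand-in for the paper's networks, with made-up data):

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss: quadratic for |r| <= delta, linear beyond."""
    small = np.abs(r) <= delta
    return np.where(small, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta))

def huber_grad(r, delta=1.0):
    """Derivative w.r.t. the residual; continuous, so usable in back-propagation."""
    return np.clip(r, -delta, delta)

# Fit y ≈ w*x by gradient descent, with one gross outlier in the targets.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x
y[2] += 30.0  # outlier

w = 0.0
for _ in range(500):
    r = w * x - y
    w -= 0.01 * np.mean(huber_grad(r) * x)  # gradient of mean Huber loss w.r.t. w

print(w)  # stays near the true slope 2; a quadratic loss would be dragged far upward
```

Swapping `huber_grad` for the derivative of the Welsch, Andrews, Ramsey or Fair functions gives the other variants the paper compares; the `delta`-style internal parameter is exactly what the paper's first tuning stage selects.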


2011 ◽  
Vol 2011 ◽  
pp. 1-17
Author(s):  
Sanku Dey ◽  
Sudhansu S. Maiti

The Bayes estimators of the shape parameter of the exponentiated family of distributions have been derived by considering an extension of Jeffreys' noninformative prior as well as conjugate priors under different scale-invariant loss functions, namely, the weighted quadratic loss function, the squared-log error loss function and the general entropy loss function. The risk functions of these estimators have been studied. We have also considered the highest posterior density (HPD) intervals for the parameter and the equal-tail and HPD prediction intervals for a future observation. Finally, we analyze one data set for illustration.
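Two of these scale-invariant losses give well-known closed-form Bayes estimators that are easy to approximate from posterior draws: exp(E[ln θ]) for the squared-log error loss (ln d − ln θ)², and (E[θ⁻ᶜ])^(−1/c) for the general entropy loss with parameter c. A minimal sketch assuming a placeholder gamma posterior (illustrative only, not the exponentiated-family posterior derived in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
# Placeholder posterior for the shape parameter (illustrative, not the paper's).
theta = rng.gamma(shape=4.0, scale=1.0, size=100_000)

# Squared-log error loss (ln d - ln θ)^2  ->  Bayes estimator exp(E[ln θ]).
sle_est = np.exp(np.mean(np.log(theta)))

# General entropy loss with parameter c  ->  Bayes estimator (E[θ^-c])^(-1/c).
c = 1.0
gel_est = np.mean(theta ** -c) ** (-1 / c)

print(sle_est, gel_est)
```

Both estimators fall below the posterior mean for a right-skewed posterior, reflecting the scale-invariant (relative-error) character of these losses.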


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Afrah Al-Bossly

The main contribution of this work is the development of a compound LINEX loss function (CLLF) to estimate the shape parameter of the Lomax distribution (LD). Weights are merged into the CLLF to generate a new loss function called the weighted compound LINEX loss function (WCLLF). The WCLLF is then used to estimate the LD shape parameter through Bayesian and expected Bayesian (E-Bayesian) estimation. Subsequently, we discuss six different types of loss functions: the squared error loss function (SELF), the LINEX loss function (LLF), the asymmetric loss function (ASLF), the entropy loss function (ENLF), the CLLF and the WCLLF. In addition, to check the performance of the proposed loss function, the Bayesian and E-Bayesian estimators under the WCLLF are evaluated through Monte Carlo simulations. The Bayesian and E-Bayesian estimators under the proposed loss function are compared with other methods, including maximum likelihood estimation (MLE) and Bayesian and E-Bayesian estimators under the other loss functions. The simulation results show that the Bayes estimator under the WCLLF and the E-Bayesian estimator under the WCLLF proposed in this work have the best performance in estimating the shape parameter, based on the smallest mean squared error.
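The building block of the compound loss, the LINEX loss exp(aΔ) − aΔ − 1 with Δ = d − θ, has the well-known Bayes estimator −(1/a) ln E[exp(−aθ)]. A minimal sketch from posterior draws, using a placeholder gamma posterior rather than the paper's Lomax posterior:

```python
import numpy as np

rng = np.random.default_rng(3)
# Placeholder posterior draws for the Lomax shape parameter (illustrative only).
theta = rng.gamma(shape=3.0, scale=0.5, size=200_000)

def linex_bayes(theta, a):
    """Bayes estimator under LINEX loss exp(a*Δ) - a*Δ - 1:  -(1/a) ln E[exp(-a*θ)]."""
    return -np.log(np.mean(np.exp(-a * theta))) / a

# a > 0 penalises over-estimation more heavily, pulling the estimate
# below the posterior mean (the squared-error Bayes estimator).
print(theta.mean(), linex_bayes(theta, a=1.0))
```

A compound or weighted version, as in this paper, mixes such LINEX terms with different shape constants, so the resulting estimator interpolates between the asymmetric pulls of its components.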

