E-Bayesian Estimation of Failure Probability under Zero-Failure Data with Double Hyper Parameters

2012 ◽  
Vol 190-191 ◽  
pp. 977-981 ◽  
Author(s):  
Xian Bin Wu

This paper presents a Bayesian analysis of zero-failure data with double hyper-parameters a and b. The prior distribution of the failure probability p_i is taken to be its conjugate distribution, Beta(p_{i-1}, 1; 1, b), with the hyper-parameter b uniformly distributed on (1, c). Under quadratic loss, if p_i ∈ (p_{i-1}, 1), the E-Bayesian estimate of p_i is obtained. When 0 < c < s_i, the estimates satisfy conditions (I) and (II). Properties of the E-Bayesian estimation are given, and a simulation example is discussed, which shows that the method is both efficient and easy to operate.
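For orientation, a minimal numerical sketch of the basic E-Bayesian calculation under quadratic loss, assuming the untruncated conjugate prior Beta(1, b) for p_i with b ~ Uniform(1, c); this is not the paper's truncated double-hyper-parameter prior, and the function name, sample size, and grid size below are illustrative:

```python
import numpy as np

def e_bayes_zero_failure(s_i: int, c: float, n_grid: int = 10_000) -> float:
    """E-Bayesian estimate of the failure probability for a zero-failure sample.

    Assumes a Beta(1, b) conjugate prior on p_i and b ~ Uniform(1, c).
    With s_i trials and zero failures the posterior is Beta(1, b + s_i),
    so under quadratic loss the Bayes estimate is the posterior mean
    1 / (s_i + b + 1); the E-Bayesian estimate averages it over b.
    """
    b = np.linspace(1.0, c, n_grid)      # grid over the hyper-parameter b
    bayes = 1.0 / (s_i + b + 1.0)        # posterior mean of p_i for each b
    return bayes.mean()                  # average w.r.t. Uniform(1, c)

if __name__ == "__main__":
    s_i, c = 50, 4.0                     # hypothetical zero-failure sample size and c
    print(e_bayes_zero_failure(s_i, c))
    # Closed form of the same average for comparison:
    print(np.log((s_i + c + 1.0) / (s_i + 2.0)) / (c - 1.0))
```

The closed-form line follows from integrating the posterior mean in b, which is why the grid average and the logarithmic expression agree to numerical precision.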

Author(s):  
Elizabeth Cudney ◽  
Bonnie Paris

Using the quadratic loss function is one way to quantify a fundamental value in the provision of health care services: we must provide the best care and best service to every patient, every time. Sole reliance on specification limits leads to a focus on “acceptable” performance rather than “ideal” performance. This paper presents the application of the quadratic loss function to quantify improvement opportunities in the healthcare industry.
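As a rough illustration of the idea (the notation, target value, and cost constant below are ours, not the paper's), a Taguchi-style quadratic loss penalizes any deviation from the target, so results that are merely inside the specification limits still carry a cost:

```python
def quadratic_loss(y: float, target: float, k: float = 1.0) -> float:
    """Taguchi-style quadratic loss: cost grows with the square of the
    deviation from the target, so 'acceptable' results are not free."""
    return k * (y - target) ** 2

# Hypothetical example: door-to-treatment time with a 30-minute target.
for minutes in (30, 35, 45, 60):
    print(minutes, quadratic_loss(minutes, target=30, k=0.5))
```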


1997 ◽  
Vol 9 (6) ◽  
pp. 1211-1243 ◽  
Author(s):  
David H. Wolpert

This article presents several additive corrections to the conventional quadratic-loss bias-plus-variance formula. One of these corrections is appropriate when both the target is not fixed (as in Bayesian analysis) and training sets are averaged over (as in the conventional bias-plus-variance formula). Another additive correction casts conventional fixed-training-set Bayesian analysis directly in terms of bias plus variance. Another correction is appropriate for measuring full generalization error over a test set rather than (as with conventional bias plus variance) error at a single point. Yet another correction can help explain the recent counterintuitive bias-variance decomposition of Friedman for zero-one loss. After presenting these corrections, this article discusses some other loss-function-specific aspects of supervised learning. In particular, there is a discussion of the fact that if the loss function is a metric (e.g., zero-one loss), then there is a bound on the change in generalization error accompanying changing the algorithm's guess from h1 to h2, a bound that depends only on h1 and h2 and not on the target. This article ends by presenting versions of the bias-plus-variance formula appropriate for logarithmic and quadratic scoring, and then all the additive corrections appropriate to those formulas. All the correction terms presented are covariances between the learning algorithm and the posterior distribution over targets. Accordingly, in the (very common) contexts in which those terms apply, there is not a "bias-variance trade-off" or a "bias-variance dilemma," as one often hears. Rather there is a bias-variance-covariance trade-off.
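For context, a minimal sketch of the conventional fixed-target, averaged-over-training-sets quadratic-loss decomposition that the article corrects (the polynomial learner, test point, and constants below are illustrative, not from the paper; the covariance correction terms are not shown):

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Fixed target function; expected squared error at x0 decomposes as
    # noise + bias^2 + variance when averaging over training sets.
    return np.sin(x)

def fit_and_predict(x_train, y_train, x0, degree=3):
    """Fit a small polynomial learner and predict at the test point x0."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.polyval(coeffs, x0)

x0, noise_sd, n_sets, n_train = 1.0, 0.3, 2000, 20
preds = np.empty(n_sets)
for i in range(n_sets):
    x_train = rng.uniform(0, np.pi, n_train)
    y_train = target(x_train) + rng.normal(0, noise_sd, n_train)
    preds[i] = fit_and_predict(x_train, y_train, x0)

bias_sq = (preds.mean() - target(x0)) ** 2   # squared bias at x0
variance = preds.var()                       # variance of the guesses over training sets
print("bias^2:", bias_sq, "variance:", variance, "noise:", noise_sd ** 2)
```

This reproduces only the standard single-point decomposition; Wolpert's point is that when the target is itself random, an additional covariance term between the learner and the posterior over targets must be added.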


2015 ◽  
Vol 26 (6) ◽  
pp. 1537-1545 ◽  
Author(s):  
Jooyong Shim ◽  
Malsuk Kim ◽  
Kyungha Seok
