Weibull analysis of component failure data from accelerated testing

1987 · Vol 19 (3) · pp. 237-243
Author(s): Martin Shaw


Author(s): M. Xie, T.N. Goh

In this paper the problem of system-level reliability growth estimation using component-level failure data is studied. It is suggested that system failure data be broken down into component, or subsystem, failure data when such problems have occurred during the system testing phase. The proposed approach is especially useful when the system does not remain unchanged over time, when some subsystems are improved more than others, or when testing has been concentrated on different components at different times. These situations commonly arise in practice, and the approach can remain useful even when system-level failure data are available. Two data sets are used to illustrate this simple approach: in one, component failure data are available for all subsystems tested from the same starting time; in the other, the starting times differ across subsystems.
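The abstract does not reproduce the estimation equations, so the sketch below is only a rough illustration of the per-subsystem idea under an assumed model: a Crow-AMSAA (power-law NHPP) growth model is fitted to each subsystem's failure times separately, and the system failure intensity is taken as the sum of the subsystem intensities. The subsystem names, failure times, and test lengths are made up for illustration and are not the paper's data.

# Hedged sketch: assumes a Crow-AMSAA (power-law NHPP) growth model per subsystem;
# the paper's actual estimation method may differ.
import math

def crow_amsaa_mle(failure_times, T):
    # MLE for a time-terminated test on (0, T] with intensity
    # rho(t) = lam * beta * t**(beta - 1).
    n = len(failure_times)
    beta_hat = n / sum(math.log(T / t) for t in failure_times)
    lambda_hat = n / T ** beta_hat
    return lambda_hat, beta_hat

def intensity(lam, beta, t):
    return lam * beta * t ** (beta - 1)

# Hypothetical subsystem failure data (hours); subsystems may have different test lengths.
subsystems = {
    "A": {"times": [40, 110, 270, 600, 1400], "T": 2000.0},
    "B": {"times": [15, 35, 80, 150, 260, 430, 900], "T": 2000.0},
}

system_rho = 0.0
for name, d in subsystems.items():
    lam, beta = crow_amsaa_mle(d["times"], d["T"])
    rho = intensity(lam, beta, d["T"])
    print(f"subsystem {name}: beta = {beta:.2f}, current intensity = {rho:.4f} failures/hr")
    system_rho += rho

print(f"estimated system intensity: {system_rho:.4f} failures/hr "
      f"(system MTBF ~ {1.0 / system_rho:.0f} hr)")

A fitted beta below 1 for a subsystem indicates a decreasing failure intensity, i.e. reliability growth for that subsystem, which is how uneven improvement across subsystems shows up in this kind of breakdown.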


Author(s): Dan Ling, Hong-Zhong Huang, Qiang Miao, Bo Yang

The Weibull distribution is widely used in life testing and reliability studies. Weibull analysis is the process of discovering trends in product or system failure data and using them to predict future failures in similar situations. Support vector regression is a machine learning method based on statistical learning theory that has been applied successfully to forecasting problems in many fields. In this paper, support vector regression is used to build a parameter estimation model for the Weibull distribution. Numerical examples are presented to show the good performance of this method.
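The abstract does not describe the feature construction or training scheme, so the sketch below only illustrates the general idea under assumed choices: an SVR model (scikit-learn) is trained on order statistics of simulated Weibull samples with known parameters, then used to estimate the shape parameter of a new sample; the scale is recovered afterwards from the usual closed-form MLE given the shape. Sample size, kernel, and parameter ranges are illustrative, not the paper's.

# Hedged sketch: SVR trained on simulated Weibull order statistics; the paper's
# actual model construction is not reproduced here.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 30          # assumed fixed sample size for every data set
n_train = 2000  # number of simulated training samples

# Training set: features = sorted sample divided by its mean (removes the scale),
# target = the known shape parameter used to generate that sample.
X_train, y_shape = [], []
for _ in range(n_train):
    shape = rng.uniform(0.5, 5.0)
    sample = rng.weibull(shape, n)
    X_train.append(np.sort(sample) / sample.mean())
    y_shape.append(shape)

shape_model = SVR(kernel="rbf", C=10.0).fit(np.array(X_train), np.array(y_shape))

# "Observed" failure data with known parameters, to check the estimate.
true_shape, true_scale = 2.0, 100.0
obs = true_scale * rng.weibull(true_shape, n)
shape_hat = shape_model.predict([np.sort(obs) / obs.mean()])[0]

# With the shape fixed, the scale follows from the sample moments:
# scale = (mean of t**shape)**(1/shape).
scale_hat = np.mean(obs ** shape_hat) ** (1.0 / shape_hat)
print(f"estimated shape = {shape_hat:.2f} (true {true_shape}), "
      f"scale = {scale_hat:.1f} (true {true_scale})")

Dividing the sorted sample by its mean makes the feature vector scale-free, so a single regressor suffices for the shape regardless of the scale used to generate the data.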


Author(s): Steve J. Murray, Rose M. Ray, Helene L. Grossman

Weibull analysis is a powerful predictive tool for studying failure trends of engineering systems [1]. One noted shortcoming is that traditional techniques require the size of the susceptible population to be known. The method described in this paper allows the size of the susceptible population to be estimated from failure data alone, with no assumptions about the total population size or the susceptible portion. In the analysis of failures of mass-produced products, a large amount of failure data may be available, yet all the conditions that define the susceptible population may never be known. For example, units with a particular usage condition may be expected to fail over time following a Weibull model, but the number of units subjected to that usage condition may never be known. Assuming that the entire population is susceptible to the failure mode would greatly over-predict future failures, and the model could not be used to guide decision-making. By a least squares fit to the trend of failures versus time, a Weibull model can be fitted to the data and then used to estimate the total number of susceptible units expected in the population. The ability to accurately estimate the size of the susceptible sub-population from failure data will be explored as a function of the size of the data set used, for known sets of failure data. For example, for a failure distribution that has increased, peaked, and then decreased to zero, almost the entire population has failed, so an estimate of the size of the susceptible population from these data is likely to be accurate. In contrast, for only a few data points showing an increasing failure rate over time, little can be determined. Monte Carlo simulations will be used to estimate the error associated with this technique. Our analysis will show that predictions of the total susceptible population become close to the actual susceptible population when the predicted mean time to failure (MTTF) from the observed data is shorter than the observation time. In effect, predictions become accurate when it is clear to the observer that the number of failures per unit time has peaked.
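As a minimal sketch of the least-squares idea described above, the code below fits N * F(t; shape, scale), a Weibull CDF scaled by an unknown susceptible-population size N, to the cumulative failure count versus time. The simulated data, parameter values, and starting guesses are illustrative and are not the paper's.

# Hedged sketch: nonlinear least squares fit of a scaled Weibull CDF to cumulative
# failures, with the susceptible population size N as a free parameter.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import weibull_min

def cum_failures(t, N, shape, scale):
    # Expected cumulative failures by time t when N units are susceptible.
    return N * weibull_min.cdf(t, shape, scale=scale)

# Simulated field data: 400 susceptible units, shape 2.5, scale 36 months,
# observed for 48 months (well past the peak in failures per unit time).
rng = np.random.default_rng(1)
true_N, true_shape, true_scale, T_obs = 400, 2.5, 36.0, 48.0
fail_times = true_scale * rng.weibull(true_shape, true_N)
t_grid = np.arange(1.0, T_obs + 1.0)
observed = np.array([(fail_times <= t).sum() for t in t_grid], dtype=float)

popt, pcov = curve_fit(cum_failures, t_grid, observed,
                       p0=[observed[-1] * 2, 1.5, 24.0])
N_hat, shape_hat, scale_hat = popt
print(f"estimated susceptible population: {N_hat:.0f} (true {true_N}), "
      f"shape = {shape_hat:.2f}, scale = {scale_hat:.1f}")

In this simulated setting the implied MTTF (roughly 32 months for the assumed true parameters) is shorter than the 48-month observation window, which is the regime in which the abstract expects the population estimate to be reliable; repeating the fit over many simulated data sets is one way to carry out the Monte Carlo error assessment mentioned above.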

