Random wear models in reliability theory

1971 ◽  
Vol 3 (2) ◽  
pp. 229-248 ◽  
Author(s):  
David S. Reynolds ◽  
I. Richard Savage

Gaver (1963) and Antelman and Savage (1965) have proposed models for the distribution of the time to failure of a simple device exposed to a randomly varying environment. Each model represents cumulative wear as a specified function of a non-negative stochastic process with independent increments, and assumes that the reliability of the device is conditioned upon realizations of this process. From these models are derived the corresponding unconditional joint distributions for the random failure time vector of n independent, identical devices exposed to the same realization of the wear process. It is shown that the identical failure time distribution for one component can arise from each model. In the Gaver model simultaneous failure times occur with positive probability. The probabilities of specific tie configurations are developed. For an interesting class of Gaver models involving a time scale parameter, the maximum likelihood estimates from several devices in one environment are examined. In that case the tie configuration probability does not depend on the parameter. For the corresponding Antelman-Savage models a consistent sequence of estimators is obtained; the maximum likelihood theory did not appear tractable.
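As a hedged illustration of how simultaneous failures acquire positive probability (this is a generic jump-wear sketch, not the Gaver or Antelman-Savage specification itself; all parameter values and function names are invented), suppose n devices share one realization of a compound Poisson wear process and each fails the instant shared wear crosses its own random threshold. Two thresholds crossed by the same jump produce identical failure times:

```python
import random

def simulate_shared_wear(n_devices=3, shock_rate=1.0, mean_jump=1.0,
                         mean_threshold=2.0, horizon=50.0, seed=1):
    """Simulate n devices exposed to ONE realization of a compound
    Poisson wear process; each device fails when the shared cumulative
    wear first exceeds its own exponential threshold."""
    rng = random.Random(seed)
    thresholds = [rng.expovariate(1.0 / mean_threshold) for _ in range(n_devices)]
    t, wear = 0.0, 0.0
    failure_times = [None] * n_devices
    while t < horizon and None in failure_times:
        t += rng.expovariate(shock_rate)           # next environmental shock
        wear += rng.expovariate(1.0 / mean_jump)   # shock adds random wear
        for i, th in enumerate(thresholds):
            if failure_times[i] is None and wear > th:
                failure_times[i] = t               # fails at the shock instant
    return failure_times

def tie_probability(trials=2000):
    """Fraction of runs in which at least two devices fail simultaneously."""
    ties = 0
    for s in range(trials):
        observed = [x for x in simulate_shared_wear(seed=s) if x is not None]
        if len(observed) != len(set(observed)):
            ties += 1
    return ties / trials
```

Because wear advances in discrete jumps, two thresholds frequently fall inside the same jump, so `tie_probability()` is strictly positive, mirroring the positive tie probability noted for the Gaver model.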


2008 ◽  
Vol 25 (06) ◽  
pp. 847-864 ◽  
Author(s):  
TAE HYOUNG KANG ◽  
SANG WOOK CHUNG ◽  
WON YOUNG YUN

An analytical model is developed for accelerated performance degradation tests. The performance degradations of products at a specified exposure time are assumed to follow a normal distribution. It is assumed that the relationship between the location parameter of the normal distribution and the exposure time is linear, that the slope coefficient of the linear relationship has an Arrhenius dependence on temperature, and that the scale parameter of the normal distribution is constant and independent of temperature and exposure time. The method of maximum likelihood estimation is used to estimate the parameters involved. The likelihood function for the accelerated performance degradation data is derived. The approximate variance-covariance matrix is also derived for calculating approximate confidence intervals of the maximum likelihood estimates. Finally, we use two real examples for estimating the failure-time distribution, technically defined as the time when performance degrades below a specified level.
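As a hedged sketch of the estimation setup (a two-stage least-squares approximation on synthetic data, not the paper's full likelihood machinery or its variance-covariance matrix; the constants and data below are invented for illustration), one can fit the linear-in-time location parameter at each temperature and then recover the Arrhenius constants from the fitted slopes:

```python
import math, random

def ols(x, y):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope

def fit_arrhenius_degradation(data):
    """data: {temperature_K: [(time, degradation), ...]}.
    Stage 1: OLS slope of degradation vs. time at each temperature.
    Stage 2: regress log(slope) on 1/T, so slope(T) = exp(c0 - c1 / T)."""
    inv_T, log_b = [], []
    for T, pts in data.items():
        _, b = ols([p[0] for p in pts], [p[1] for p in pts])
        inv_T.append(1.0 / T)
        log_b.append(math.log(b))
    c0, neg_c1 = ols(inv_T, log_b)
    return c0, -neg_c1

# Synthetic check: generate data from known constants and recover them.
rng = random.Random(0)
true_c0, true_c1 = 12.0, 5000.0
data = {}
for T in (320.0, 340.0, 360.0):
    b = math.exp(true_c0 - true_c1 / T)
    data[T] = [(t, b * t + rng.gauss(0.0, 0.01)) for t in range(1, 21)]
c0_hat, c1_hat = fit_arrhenius_degradation(data)
```

With normal errors and constant scale, the stage-1 OLS slope coincides with the per-temperature MLE of the linear part; the two-stage Arrhenius fit is only an approximation to joint maximum likelihood.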


2019 ◽  
Vol 23 (2) ◽  
pp. 251-268
Author(s):  
Ruixuan Liu ◽  
Zhengfei Yu

We study accelerated failure time models in which the survivor function of the additive error term is log-concave. The log-concavity assumption covers large families of commonly used distributions and also represents the aging or wear-out phenomenon of the baseline duration. For right-censored failure time data, we construct semiparametric maximum likelihood estimates of the finite-dimensional parameter and establish the large sample properties. The shape restriction is incorporated via a nonparametric maximum likelihood estimator of the hazard function. Our approach guarantees the uniqueness of a global solution for the estimating equations and delivers semiparametric efficient estimates. Simulation studies and empirical applications demonstrate the usefulness of our method.


Author(s):  
Chandra Shekhar ◽  
Neeraj Kumar ◽  
Madhu Jain ◽  
Amit Gupta

In this paper, we investigate the reliability and queueing performance indices of a fault-tolerant computing network having a finite number of unreliable operating components with the provision of warm standby components. Operating and standby components are governed by dedicated software, which is also prone to random failure. On failure of an operating component, an available standby component may switch from the standby state to the operating state with negligible switchover time. The switchover process may also fail due to some automation hindrance. The computing network is also subject to common-cause failure arising from external causes. The studied redundant fault-tolerant computing network is framed as a Markovian machine interference model with exponentially distributed inter-failure times and service times. For reliability prediction of the computing network, various performance measures, namely mean time to failure (MTTF), reliability/availability, failure frequency, etc., have been formulated in terms of transient-state probabilities, which we obtain using the spectral method. To show the practicability of the developed model, numerical simulation has been performed. A sensitivity analysis of reliability and other indices of the computing network with respect to different network parameters is presented, and the results are summarized in tables and graphs. Finally, the future scope and concluding remarks are included.
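The spectral idea behind the transient-state probabilities can be seen in the smallest possible case: a single repairable unit whose 2x2 generator has eigenvalues 0 and -(lam+mu), giving a closed-form transient availability. This is a minimal sketch under assumed failure rate `lam` and repair rate `mu`, not the paper's multi-component network with standbys and common-cause failure:

```python
import math

def availability(t, lam, mu):
    """Transient availability of one repairable unit (up -> down at rate
    lam, down -> up at rate mu), started up at t = 0. Spectral solution
    of the 2-state generator, whose eigenvalues are 0 and -(lam + mu):
    A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu) * t)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

def availability_numeric(t, lam, mu, steps=200_000):
    """Independent check: forward-Euler integration of dp/dt = p Q."""
    p_up, p_down = 1.0, 0.0
    h = t / steps
    for _ in range(steps):
        p_up, p_down = (p_up + h * (-lam * p_up + mu * p_down),
                        p_down + h * (lam * p_up - mu * p_down))
    return p_up
```

As t grows, `availability(t, lam, mu)` decays to the steady-state value mu/(lam+mu); the transient term is exactly the contribution of the nonzero eigenvalue, which is what the spectral method generalizes to larger state spaces.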


2002 ◽  
Vol 39 (2) ◽  
pp. 296-311 ◽  
Author(s):  
Jie Mi

Suppose that there is a sequence of programs or jobs that are scheduled to be executed one after another on a computer. A program may terminate its execution because of the failure of the computer, which will obliterate all work the computer has accomplished, and the program has to be run all over again. Hence, it is common to save the work just completed after the computer has been working for a certain amount of time, say y units. It is assumed that it takes a certain time to perform a save. During the saving process, the computer is still subject to random failure. No matter when the computer failure occurs, it is assumed that the computer will be repaired completely and the repair time will be negligible. If saving is successful, then the computer will continue working from the end of the last saved work; if the computer fails during the saving process, then only unsaved work needs to be repeated. This paper discusses the optimal work size y under which the long-run average amount of work saved is maximized. In particular, the case of an exponential failure time distribution is studied in detail. The properties of the optimal age-replacement policy are also derived when the work size y is fixed.
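Under simplifying assumptions (exponential failures with rate `lam`, a fixed save duration `s`, instant repair, and the entire current segment lost whenever failure strikes before its save completes), a renewal-reward argument gives a closed-form saved-work rate, r(y) = lam * y * exp(-lam(y+s)) / (1 - exp(-lam(y+s))), which can be maximized numerically. This is an illustrative simplification of the checkpointing trade-off, not Mi's exact formulation; `lam` and `s` values are assumptions:

```python
import math

def saved_work_rate(y, lam=0.1, s=0.5):
    """Long-run saved work per unit time under a simplified renewal model:
    each cycle attempts y units of work plus a save of length s; with
    exponential failures (rate lam) and instant repair, an attempt succeeds
    with probability exp(-lam*(y+s)) and consumes min(failure time, y+s)."""
    c = y + s
    p_success = math.exp(-lam * c)
    expected_cycle = (1.0 - math.exp(-lam * c)) / lam   # E[min(T, y+s)]
    return y * p_success / expected_cycle

def best_save_size(lam=0.1, s=0.5):
    """Grid search for the work size y maximizing the saved-work rate."""
    grid = [0.01 * k for k in range(1, 5000)]
    return max(grid, key=lambda y: saved_work_rate(y, lam, s))
```

For small failure rates the optimizer lands near sqrt(2*s/lam), the classical first-order checkpoint-interval approximation, which offers a quick sanity check on the grid search.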


Author(s):  
Alexander Galenko ◽  
Elmira Popova ◽  
Ernie Kee ◽  
Rick Grantom

We analyze a system of N components with dependent failure times. The goal is to obtain the optimal block replacement interval (different for each component) over a finite horizon that minimizes the expected total maintenance cost. In addition, we allow each preventive maintenance action to change the future joint failure time distribution. We illustrate our methodology with an example from South Texas Project Nuclear Operating Company.


2003 ◽  
Vol 35 (03) ◽  
pp. 755-772
Author(s):  
Thierry Duchesne ◽  
Jeffrey S. Rosenthal

In this paper we derive conditions on the internal wear process under which the resulting time to failure model will be of the simple collapsible form when the usage accumulation history is available. We suppose that failure occurs when internal wear crosses a certain threshold or a traumatic event causes the item to fail. We model the infinitesimal increment in internal wear as a function of time, accumulated internal wear, and usage history, and we derive conditions on this function to get a collapsible model for the distribution of time to failure given the usage history. We reach the conclusion that collapsible models form the subset of accelerated failure time models with time-varying covariates for which the time transformation function satisfies certain simple properties.


2009 ◽  
Vol 09 (02) ◽  
pp. 369-381
Author(s):  
SARALEES NADARAJAH ◽  
SAMUEL KOTZ

For systems with parallel components, the variable of primary importance is the maximum of the failure times of the different components. In this paper, we study the exact probability distribution of the maximum failure time. Explicit expressions are derived for the cumulative distribution function, probability density function, hazard rate function, moment-generating function, nth moment, variance, skewness, kurtosis, mean deviation, Shannon entropy, and the order statistics. Estimation procedures are derived by the methods of moments and maximum likelihood. We expect that these results could be useful for performance assessment of parallel systems.
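For the special case of n iid exponential components (a generic parallel-system sketch with assumed parameters, not tied to the paper's exact distribution families), these quantities reduce to simple closed forms: independence gives F_max(t) = F(t)^n, and the mean is the harmonic number H_n divided by the rate:

```python
import math

def cdf_max_exp(t, n, lam):
    """CDF of the max of n iid Exp(lam) failure times:
    independence gives F_max(t) = F(t) ** n."""
    return (1.0 - math.exp(-lam * t)) ** n

def mean_max_exp(n, lam):
    """E[max] = H_n / lam: by memorylessness, the gap between the k-th
    and (k+1)-th order statistic is Exp((n - k) * lam)."""
    return sum(1.0 / k for k in range(1, n + 1)) / lam

def mean_via_survival(n, lam, upper=100.0, steps=100_000):
    """Numeric check: E[max] equals the integral of the survival
    function, computed here by the midpoint rule."""
    h = upper / steps
    return sum((1.0 - cdf_max_exp((k + 0.5) * h, n, lam)) * h
               for k in range(steps))
```

The density, hazard rate, and moments in the paper follow by differentiating or integrating this CDF; the harmonic-number mean is a convenient closed-form cross-check for the moment expressions.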


2019 ◽  
Vol 52 (2) ◽  
pp. 151-171
Author(s):  
FIAZ AHMAD BHATTI

In this paper, a flexible distribution with increasing, bathtub, and inverted-bathtub hazard rates, called the Modified Burr III-Power (MBIII-Power) distribution, is developed on the basis of the generalized Pearson differential equation. The density function of the MBIII-Power distribution can be arc-shaped, exponential-shaped, or positively skewed. Descriptive measures such as quantiles, moments, incomplete moments, inequality measures, residual life functions, and reliability measures are theoretically established. The MBIII-Power distribution is characterized via different techniques. Parameters of the MBIII-Power distribution are estimated using the maximum likelihood method. A simulation study is performed to illustrate the performance of the maximum likelihood estimates (MLEs). The potential use of the MBIII-Power distribution is demonstrated by its application to two data sets: the serum-reversal time (in days) of children born to HIV-infected mothers and the failure times of device data.

