Software reliability estimation of gamma failure time models

Author(s): B. Roopashri Tantri, N. N. Murulidhar
2017, Vol 73 (5)
Author(s): G. Krishna Mohan, R. Satyaprasad, N. V. K. Stanley Raju

2021, Vol 11 (15), pp. 6998
Author(s): Qiuying Li, Hoang Pham

Many NHPP software reliability growth models (SRGMs) have been proposed over the past 40 years to assess software reliability, but most of them model only the fault detection process (FDP) and treat fault correction in one of two ways. The first is to ignore the fault correction process (FCP) altogether, i.e., to assume that a fault is removed instantaneously once the failure it causes is detected. In real software development this assumption is not realistic, because fault removal takes time: the faults causing failures cannot always be removed at once, and detected failures become harder to correct as testing progresses. The second way is to model the fault correction process through the time delay between fault detection and fault correction, where the delay has been assumed to be a constant, a time-dependent function, or a random variable following some distribution. In this paper, some useful approaches to modeling the dual fault detection and correction processes are discussed. Instead of a fault-correction time delay, the dependencies between the fault amounts of the two processes are considered. A model is proposed that integrates the fault-detection and fault-correction processes and incorporates a fault introduction rate and a testing coverage rate into the software reliability evaluation. The model parameters are estimated using the least squares estimation (LSE) method. The descriptive and predictive performance of the proposed model and of existing NHPP SRGMs is investigated on three real data sets using four criteria. The results show that the new model yields significantly better reliability estimation and prediction.
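The abstract does not reproduce the paper's mean value function or its LSE equations, so the sketch below only illustrates the general fitting step: a least-squares fit of a standard NHPP mean value function (the Goel-Okumoto form m(t) = a(1 − e^(−bt)), used here purely as a stand-in for the proposed model) to cumulative fault counts. The data, starting values, and function names are assumptions for demonstration, not taken from the paper.

```python
# Minimal sketch: least-squares fit of an NHPP mean value function to
# cumulative fault counts. The Goel-Okumoto form m(t) = a * (1 - exp(-b*t))
# stands in for the paper's model; the data below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    """Expected cumulative number of faults detected by time t."""
    return a * (1.0 - np.exp(-b * t))

# Illustrative (not real) testing data: week index and cumulative faults found.
weeks = np.arange(1, 11, dtype=float)
cum_faults = np.array([12, 21, 28, 34, 39, 43, 46, 48, 50, 51], dtype=float)

# LSE: minimize the sum of squared deviations between observed and predicted counts.
(a_hat, b_hat), _ = curve_fit(mean_value, weeks, cum_faults, p0=(60.0, 0.2))

# Descriptive fit quality (sum of squared errors) and a one-step-ahead prediction.
sse = np.sum((cum_faults - mean_value(weeks, a_hat, b_hat)) ** 2)
print(f"a = {a_hat:.1f} total faults, b = {b_hat:.3f} per week, SSE = {sse:.2f}")
print(f"predicted cumulative faults at week 11: {mean_value(11.0, a_hat, b_hat):.1f}")
```

The SSE computed at the end is the kind of descriptive criterion used to compare candidate SRGMs across data sets.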


Sadhana, 2009, Vol 34 (2), pp. 235-241
Author(s): S. Chatterjee, S. S. Alam, R. B. Misra

Author(s): D. DAMODARAN, B. RAVIKUMAR, VELIMUTHU RAMACHANDRAN

Reliability statistics is divided into two mutually exclusive camps: Bayesian and classical. The classical statistician treats all distribution parameters as fixed values, whereas the Bayesian treats parameters as random variables with distributions of their own. The Bayesian approach has been applied to software failure data, and as a result several Bayesian software reliability models have been formulated over the last three decades. A Bayesian approach to software reliability measurement was taken by Littlewood and Verrall [A Bayesian reliability growth model for computer software, Appl. Stat. 22 (1973) 332–346], who modeled the hazard rate as a random variable. In this paper, a new Bayesian software reliability model is proposed by combining two prior distributions for predicting the total number of failures and the next failure time of the software. The popular and realistic Jelinski and Moranda (J&M) model is taken as the base, and the Bayesian approach is applied to it. It is assumed that one parameter of the JM model, the number of faults in the software N, follows a uniform prior distribution, and that the failure rate parameter Φi follows a gamma prior distribution. The joint prior p(N, Φi) is obtained by combining these two priors. In this Bayesian model, the times between failures follow an exponential distribution whose failure rate parameter is stochastically decreasing over successive failure time intervals. The reasoning behind this assumption is that the software tester intends to improve software quality by correcting each failure. With the Bayesian approach, the predictive distribution is obtained by combining the exponential times between failures (TBFs) with the joint prior p(N, Φi). For parameter estimation, the maximum likelihood estimation (MLE) method is adopted. The proposed Bayesian software reliability model has been applied to two sets of actual software failure data, and the predicted failure times under the proposed model are observed to be closer to the actual failure times. The predicted failure times based on the Littlewood–Verrall (LV) model are also computed. The sum of squared errors (SSE) criterion is used to compare the actual and predicted times between failures under the proposed model and the LV model.
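The abstract names the Jelinski-Moranda model as the base and MLE as the estimation method but does not give the full Bayesian predictive distribution, so the following is only a minimal sketch of MLE for the underlying JM likelihood together with the SSE-style comparison mentioned above. The time-between-failure data and the grid bound on N are illustrative assumptions, not the paper's data or procedure.

```python
# Minimal sketch: MLE for the Jelinski-Moranda (JM) model that the proposed
# Bayesian model builds on, using illustrative (not real) time-between-failure
# data. In JM, the i-th inter-failure time is exponential with rate
# phi * (N - i + 1); for a fixed N the MLE of phi is n / sum((N-i+1) * t_i),
# so we profile the likelihood over integer values of N.
import numpy as np

tbf = np.array([9.0, 12.0, 11.0, 18.0, 25.0, 21.0, 33.0, 40.0])  # illustrative TBFs
n = len(tbf)
i = np.arange(1, n + 1)

def profile_loglik(N):
    """Log-likelihood at (N, phi_hat(N)) for the JM model."""
    remaining = N - i + 1                       # faults still present before failure i
    S = np.sum(remaining * tbf)
    phi_hat = n / S
    return n * np.log(phi_hat) + np.sum(np.log(remaining)) - phi_hat * S, phi_hat

candidates = range(n, n + 200)                  # N is at least the n failures observed
best_N, (_, best_phi) = max(
    ((N, profile_loglik(N)) for N in candidates), key=lambda x: x[1][0]
)

# Predicted next inter-failure time (mean of the exponential for interval n+1),
# and an SSE criterion comparing actual and fitted times between failures.
pred_next = 1.0 / (best_phi * (best_N - n)) if best_N > n else float("inf")
pred_tbf = 1.0 / (best_phi * (best_N - i + 1))
sse = np.sum((tbf - pred_tbf) ** 2)
print(f"N_hat = {best_N}, phi_hat = {best_phi:.4f}, next TBF ~ {pred_next:.1f}, SSE = {sse:.1f}")
```

The same SSE quantity, computed for two competing models' predicted TBFs against the actual TBFs, is the comparison criterion the abstract describes for the proposed model versus the LV model.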

