A Software Reliability Model with a Weibull Fault Detection Rate Function Subject to Operating Environments

Author(s):  
Kwang Yoon Song ◽  
In Hong Chang ◽  
Hoang Pham

The main focus when developing software is to improve the reliability and stability of a software system. When software systems are introduced, they are often used in field environments that are the same as or close to the development-testing environment; however, they may also be used in many locations that differ from the environment in which they were developed and tested. In this paper, we propose a new software reliability model that takes into account the uncertainty of operating environments. The explicit mean value function solution for the proposed model is presented. Examples based on two sets of failure data collected from software applications illustrate the goodness-of-fit of the proposed model and of several existing non-homogeneous Poisson process (NHPP) models, along with confidence intervals for all models. The results show that the proposed model fits the data significantly more closely than the existing NHPP models.
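The abstract does not reproduce the explicit mean value function, so the following Python sketch only illustrates the general shape such a model can take. It assumes Pham's generic form for NHPP models under a random operating environment, m(t) = N[1 - (beta/(beta + B(t)))^alpha] with B(t) = integral of b(s) from 0 to t, and plugs in a Weibull-type fault detection rate b(t) = a*b*t^(b-1), giving B(t) = a*t^b; the parameter names N, alpha, beta, a, b are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def weibull_nhpp_mean(t, N, alpha, beta, a, b):
    """Illustrative mean value function m(t) for an NHPP model with a
    Weibull-type fault detection rate under a random operating environment.

    Assumes the generic form m(t) = N * [1 - (beta / (beta + B(t)))**alpha],
    where B(t) = integral_0^t a*b*s**(b-1) ds = a * t**b.
    Parameter names and values are illustrative, not taken from the paper.
    """
    B = a * np.power(t, b)  # cumulative Weibull detection rate up to time t
    return N * (1.0 - (beta / (beta + B)) ** alpha)

# Example: expected cumulative failures over 20 time units
t = np.linspace(0.0, 20.0, 5)
print(weibull_nhpp_mean(t, N=100, alpha=2.0, beta=5.0, a=0.3, b=1.5))
```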

Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 521 ◽  
Author(s):  
Song ◽  
Chang ◽  
Pham

Non-homogeneous Poisson process (NHPP) software reliability models play a crucial role in computer systems. Furthermore, software is used in various environments: it is developed and tested in a controlled environment, while real-world operating environments may differ. Accordingly, the uncertainty of the operating environment must be considered. Moreover, predicting software failures is an important subject of study, not only for software developers but also for companies and research institutes. Software reliability models can measure and predict the number of software failures, software failure intervals, software reliability, and failure rates. In this paper, we propose a new model with an inflection factor of the fault detection rate function, considering the uncertainty of operating environments, and we analyze how the predictions of the proposed model differ from those of other models. We compare the proposed model with several existing NHPP software reliability models using real software failure datasets based on ten criteria. The results show that the proposed new model has significantly better goodness-of-fit and predictability than the other models.
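The abstract does not spell out the inflection-factor fault detection rate or the ten comparison criteria, so the Python sketch below only illustrates the general fit-and-compare workflow: a placeholder inflection S-shaped mean value function is fitted to hypothetical cumulative failure counts by least squares, and two common criteria (MSE and R^2) are computed. The function form, data, and parameter names are assumptions for illustration, not the paper's model.

```python
import numpy as np
from scipy.optimize import curve_fit

def inflection_s_shaped(t, a, b, psi):
    """Placeholder inflection S-shaped mean value function
    m(t) = a*(1 - exp(-b*t)) / (1 + psi*exp(-b*t)); used only to
    illustrate the fitting/comparison workflow, not the paper's model."""
    return a * (1.0 - np.exp(-b * t)) / (1.0 + psi * np.exp(-b * t))

# Hypothetical cumulative failure counts observed at the end of each week
t_obs = np.arange(1, 13, dtype=float)
y_obs = np.array([3, 7, 14, 22, 31, 38, 44, 48, 51, 53, 54, 55], dtype=float)

popt, _ = curve_fit(inflection_s_shaped, t_obs, y_obs, p0=[60.0, 0.3, 2.0])
y_hat = inflection_s_shaped(t_obs, *popt)

# Two of the usual comparison criteria: mean squared error and R^2
mse = np.mean((y_obs - y_hat) ** 2)
r2 = 1.0 - np.sum((y_obs - y_hat) ** 2) / np.sum((y_obs - np.mean(y_obs)) ** 2)
print(f"fitted parameters: {popt}, MSE={mse:.3f}, R^2={r2:.4f}")
```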


Author(s):  
D. DAMODARAN ◽  
B. RAVIKUMAR ◽  
VELIMUTHU RAMACHANDRAN

Reliability statistics is divided into two mutually exclusive camps: Bayesian and classical. The classical statistician believes that all distribution parameters are fixed values, whereas Bayesians treat parameters as random variables with distributions of their own. The Bayesian approach has been applied to software failure data, and as a result several Bayesian software reliability models have been formulated over the last three decades. A Bayesian approach to software reliability measurement was taken by Littlewood and Verrall [A Bayesian reliability growth model for computer software, Appl. Stat. 22 (1973) 332–346], who modeled the hazard rate as a random variable. In this paper, a new Bayesian software reliability model is proposed by combining two prior distributions for predicting the total number of failures and the next failure time of the software. The popular and realistic Jelinski–Moranda (J&M) model is taken as the base for deriving this model by applying the Bayesian approach. It is assumed that one parameter of the J&M model, N, the number of faults in the software, follows a uniform prior distribution, and that the failure rate parameter Φi follows a gamma prior distribution. The joint prior p(N, Φi) is obtained by combining these two prior distributions. In this Bayesian model, the times between failures follow an exponential distribution whose failure rate parameter is stochastically decreasing over successive failure time intervals. The reasoning behind this assumption is that the software tester intends to improve software quality through the correction of each failure. Using the Bayesian approach, the predictive distribution is obtained by combining the exponential times between failures (TBFs) and the joint prior p(N, Φi). For parameter estimation, the maximum likelihood estimation (MLE) method is adopted. The proposed Bayesian software reliability model has been applied to two sets of actual software failure data, and it is observed that the failure times predicted by the proposed model are closer to the actual failure times. The failure times predicted by the Littlewood–Verrall (LV) model are also computed. The sum of squared errors (SSE) criterion is used to compare the actual and predicted times between failures for the proposed model and the LV model.
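As a companion to the description above, the following Python sketch shows the classical MLE step for the base Jelinski–Moranda model, in which the i-th time between failures is exponential with rate Φ(N - i + 1). It covers only the J&M likelihood maximization mentioned in the abstract, not the paper's joint uniform–gamma prior or its Bayesian predictive distribution; the inter-failure times are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def jm_neg_log_likelihood(params, tbf):
    """Negative log-likelihood of the Jelinski-Moranda model, in which the
    i-th time between failures is exponential with rate phi*(N - i + 1).
    This sketches only the classical MLE step of the base J&M model, not the
    paper's Bayesian predictive distribution."""
    N, phi = params
    n = len(tbf)
    if phi <= 0 or N <= n:  # keep parameters in the feasible region
        return np.inf
    rates = phi * (N - np.arange(1, n + 1) + 1)
    return -np.sum(np.log(rates) - rates * tbf)

# Hypothetical times between successive failures (in hours)
tbf = np.array([9.0, 12.0, 11.0, 18.0, 25.0, 31.0, 40.0, 55.0])

res = minimize(jm_neg_log_likelihood, x0=[20.0, 0.01], args=(tbf,),
               method="Nelder-Mead")
N_hat, phi_hat = res.x
print(f"estimated fault count N ~ {N_hat:.1f}, rate parameter phi ~ {phi_hat:.4f}")
```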


Mathematics ◽  
2019 ◽  
Vol 7 (5) ◽  
pp. 450 ◽  
Author(s):  
Kwang Yoon Song ◽  
In Hong Chang ◽  
Hoang Pham

We have been attempting to evaluate software quality and improve its reliability, and research on software reliability models has been part of this effort. Currently, software is used in various fields and environments; hence, quantitative confidence standards must be provided when using software. Therefore, we consider testing coverage and the uncertainty, or randomness, of the operating environment. In this paper, we propose a new testing coverage model based on NHPP software reliability with the uncertainty of operating environments, and we provide a sensitivity analysis to study the impact of each parameter of the proposed model. We examine the goodness-of-fit of the new testing coverage model and of other existing models based on two datasets. The comparative results for goodness-of-fit show that the proposed model performs significantly better than the existing models. In addition, the results of the sensitivity analysis show how the parameters of the proposed model affect the mean value function.
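The abstract reports a sensitivity analysis of each parameter's impact on the mean value function without giving the model's closed form, so the Python sketch below demonstrates only the one-at-a-time perturbation idea on a simple Goel–Okumoto-style placeholder m(t) = a(1 - e^(-bt)); the function, parameter names, and values are assumptions, not the proposed testing coverage model.

```python
import numpy as np

def mean_value(t, a, b):
    """Placeholder Goel-Okumoto-style mean value function m(t) = a*(1 - e^{-b t});
    stands in for the paper's testing-coverage model purely to show the
    one-at-a-time sensitivity workflow."""
    return a * (1.0 - np.exp(-b * t))

def one_at_a_time_sensitivity(t, base, rel_step=0.10):
    """Perturb each parameter by +/- rel_step while holding the others fixed
    and report the resulting change in m(t)."""
    m0 = mean_value(t, **base)
    report = {}
    for name in base:
        for sign in (+1, -1):
            perturbed = dict(base)
            perturbed[name] *= 1.0 + sign * rel_step
            report[(name, sign * rel_step)] = mean_value(t, **perturbed) - m0
    return report

base_params = {"a": 120.0, "b": 0.08}  # illustrative values only
for key, delta in one_at_a_time_sensitivity(t=30.0, base=base_params).items():
    print(key, f"change in m(30) = {delta:+.2f}")
```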


Author(s):  
LEV V. UTKIN ◽  
SERGEY V. GUROV ◽  
MAXIM I. SHUBINSKY

A fuzzy software reliability model is proposed in which the time intervals between software failures are treated as fuzzy variables governed by a membership function. The model takes into account the following assumptions: new faults may be introduced into the software during the debugging process, the number of faults removed after a failure may be greater than one, and human experience grows during debugging. The model can be considered an extension of the model developed by Cai, Wen, and Zhang. An efficient algorithm is presented for estimating the parameters of the model. Numerical examples validate the proposed model.
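For readers unfamiliar with fuzzy variables, the short Python sketch below shows how a membership function can grade how well an observed inter-failure time matches a fuzzy value such as "about 12 hours". The triangular shape and the numbers are assumptions for illustration; the paper's membership function and estimation algorithm are not reproduced here.

```python
def triangular_membership(x, low, mode, high):
    """Triangular membership function mu(x) for a fuzzy time-between-failures
    value believed to lie around `mode`; the triangular shape is an assumed
    illustration, not the membership function used in the paper."""
    if x <= low or x >= high:
        return 0.0
    if x <= mode:
        return (x - low) / (mode - low)
    return (high - x) / (high - mode)

# Degree to which an observed interval of 14 hours matches a fuzzy
# "about 12 hours" time between failures
print(triangular_membership(14.0, low=8.0, mode=12.0, high=20.0))
```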


2020 ◽  
Vol 30 (3) ◽  
pp. 273-288
Author(s):  
Rajat Arora ◽  
Anu Aggarwal

In today's world, to meet the demand for high-quality and reliable software systems, it is imperative to perform comprehensive testing and debugging of the software code. The fault detection and removal rates may change over time; the time point after which the rates change is termed the change point. In practice, the failure count may not coincide with the total count of faults removed from the system, and their ratio is measured by the Fault Reduction Factor (FRF). Here, we propose a Weibull testing-effort-dependent Software Reliability Growth Model with a logistic FRF and a change point for assessing the failure phenomenon of a software system. The model has been validated on two real software fault datasets. The parameters are estimated using least squares, and various criteria are employed to check goodness of fit. A comparison with existing models in the literature is also provided to demonstrate that the proposed model has better performance.
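Since the abstract names the ingredients but not the equations, the Python sketch below assumes the standard cumulative Weibull testing-effort function W(t) = alpha(1 - exp(-beta*t^gamma)) and a simplified effort-dependent mean value function m(t) = a(1 - exp(-r*W(t))), fitted by least squares in two steps; the logistic FRF and the change point are omitted for brevity, and all data and parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_testing_effort(t, alpha, beta, gamma):
    """Cumulative Weibull testing-effort function W(t) = alpha*(1 - exp(-beta*t**gamma))."""
    return alpha * (1.0 - np.exp(-beta * np.power(t, gamma)))

# Hypothetical cumulative testing effort (person-hours) and fault counts per week
t_obs = np.arange(1, 11, dtype=float)
effort = np.array([30, 58, 83, 105, 124, 140, 153, 164, 173, 180], dtype=float)
faults = np.array([5, 11, 19, 26, 33, 38, 42, 45, 47, 48], dtype=float)

# Step 1: least-squares fit of the Weibull testing-effort curve to the effort data
(alpha_hat, beta_hat, gamma_hat), _ = curve_fit(
    weibull_testing_effort, t_obs, effort, p0=[200.0, 0.1, 1.2], maxfev=20000)

# Step 2: fit a simplified effort-dependent SRGM m(t) = a*(1 - exp(-r*W(t))),
# omitting the paper's logistic FRF and change point
def srgm_mean(t, a, r):
    W = weibull_testing_effort(t, alpha_hat, beta_hat, gamma_hat)
    return a * (1.0 - np.exp(-r * W))

(a_hat, r_hat), _ = curve_fit(srgm_mean, t_obs, faults, p0=[60.0, 0.01], maxfev=20000)
print(f"effort params: alpha={alpha_hat:.1f}, beta={beta_hat:.3f}, gamma={gamma_hat:.2f}")
print(f"SRGM params:   a={a_hat:.1f}, r={r_hat:.4f}")
```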

