A software reliability model with optimal selection of failure data applying a mean square error criterion

1993
Author(s):
Norman Schneidewind


2021
Vol 3 (1)
Author(s):
Kamlesh Kumar Raghuvanshi
Arun Agarwal
Khushboo Jain
V. B. Singh

Abstract: In this work, we propose a time-variant software reliability model (SRM) which considers fault detection and the maximum number of faults in the software. A time-variant genetic algorithm is implemented to estimate the SRM parameters. The proposed model is built on a non-homogeneous Poisson process (NHPP) and incorporates fault-dependent detection, software failure intensity, and the unremoved errors in the software. We consider programmer proficiency, software complexity, organization hierarchy, and perfect debugging as the determining factors for the SRM. A dataset collected from 74 software projects is used to calibrate the proposed software reliability model and validate its goodness of fit. Data are collected over a period that begins with the start of each project and is continuously monitored until its completion. Several parameters are analyzed, and 115 attributes are recorded over 11 different time frames in terms of product and process characteristics. A total of 383 persons were involved in software design, with a total issue count of 255. The proposed time-variant fault-detection SRM is implemented on the Jira data and compared with existing reliability models presented in the literature. It is observed that the proposed fault-detection SRM performs better in terms of mean square error (MSE), root mean square error (RMSE), and R-squared (R²). Time-varying fault detection is measured by considering response count, coding and non-coding deliverables, and the number of bugs in the software. The proposed software reliability model shows improvement over existing algorithms: the residual errors are reduced, and prediction accuracy is higher in terms of cumulative fault detection.
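As a rough illustration of the kind of NHPP-based fitting and error metrics described above, the following Python sketch assumes a Goel-Okumoto-style mean value function m(t) = a(1 − e^(−bt)) and hypothetical cumulative fault counts, and uses an evolutionary optimizer as a stand-in for the paper's time-variant genetic algorithm; the actual model form, data, and GA settings are not reproduced here.

```python
# Illustrative sketch only: the paper's exact mean value function and GA settings
# are not given here, so a Goel-Okumoto-style NHPP form m(t) = a*(1 - exp(-b*t))
# is assumed, fitted with SciPy's evolutionary optimizer as a stand-in for the
# time-variant genetic algorithm described in the abstract.
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical cumulative fault counts observed at 11 successive time frames.
t = np.arange(1, 12, dtype=float)
observed = np.array([20, 55, 90, 120, 150, 175, 200, 220, 235, 247, 255], dtype=float)

def mean_value(params, t):
    a, b = params                      # a: total expected faults, b: detection rate
    return a * (1.0 - np.exp(-b * t))

def mse(params):
    return np.mean((observed - mean_value(params, t)) ** 2)

result = differential_evolution(mse, bounds=[(1, 1000), (1e-4, 5)], seed=0)
pred = mean_value(result.x, t)

mse_val = np.mean((observed - pred) ** 2)
rmse_val = np.sqrt(mse_val)
r2 = 1.0 - np.sum((observed - pred) ** 2) / np.sum((observed - observed.mean()) ** 2)
print(f"a={result.x[0]:.1f}, b={result.x[1]:.3f}, "
      f"MSE={mse_val:.2f}, RMSE={rmse_val:.2f}, R2={r2:.3f}")
```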


Author(s):  
Nguyen Cao Thang ◽  
Luu Xuan Hung

The paper presents a performance analysis of the global-local mean square error criterion of stochastic linearization for some nonlinear oscillators. This criterion of stochastic linearization for nonlinear oscillators is based on a dual conception of the local mean square error criterion (LOMSEC). The algorithm is formulated in general form for multi-degree-of-freedom (MDOF) nonlinear oscillators. The performance analysis is then carried out for two applications: a rolling ship oscillation and a two-degree-of-freedom system. The improved accuracy of the proposed criterion is shown in comparison with the conventional Gaussian equivalent linearization (GEL).
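For context, the sketch below illustrates the conventional Gaussian equivalent linearization (GEL) that serves as the baseline for comparison, applied to an assumed single-degree-of-freedom Duffing oscillator under Gaussian white noise; the dual global-local criterion itself is not reproduced, and all parameter values are illustrative.

```python
# Minimal sketch of conventional Gaussian equivalent linearization (GEL) for a
# Duffing oscillator x'' + 2*zeta*omega0*x' + omega0**2*x + eps*x**3 = W(t)
# under white noise with two-sided spectral density S0. Parameters are assumed.
import numpy as np

omega0, zeta, eps, S0 = 1.0, 0.05, 0.5, 0.1   # illustrative oscillator parameters

def stationary_variance(k_eq):
    # Stationary displacement variance of the linearized oscillator
    # x'' + 2*zeta*omega0*x' + (omega0**2 + k_eq)*x = W(t)
    return np.pi * S0 / (2.0 * zeta * omega0 * (omega0**2 + k_eq))

# Fixed-point iteration: replacing eps*x**3 by k_eq*x, the classical mean square
# error criterion gives k_eq = 3*eps*E[x^2] under the Gaussian assumption.
k_eq = 0.0
for _ in range(200):
    k_new = 3.0 * eps * stationary_variance(k_eq)
    if abs(k_new - k_eq) < 1e-10:
        break
    k_eq = k_new

print(f"equivalent stiffness k_eq = {k_eq:.5f}, "
      f"stationary variance = {stationary_variance(k_eq):.5f}")
```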


Author(s):  
FAROKH B. BASTANI ◽  
ING-RAY CHEN ◽  
TA-WEI TSAO

In this paper we develop a software reliability model for Artificial Intelligence (AI) programs. We show that conventional software reliability models must be modified to incorporate certain special characteristics of AI programs, such as (1) failures due to intrinsic faults, e.g., limitations due to heuristics and other basic AI techniques, (2) fuzzy correctness criterion, i.e., difficulty in accurately classifying the output of some AI programs as correct or incorrect, (3) planning-time versus execution-time tradeoffs, and (4) reliability growth due to an evolving knowledge base. We illustrate the approach by modifying the Musa-Okumoto software reliability growth model to incorporate failures due to intrinsic faults and to accept fuzzy failure data. The utility of the model is exemplified with a robot path-planning problem.
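The sketch below fits the baseline (unmodified) Musa-Okumoto logarithmic Poisson execution time model to hypothetical failure times by maximum likelihood; the intrinsic-fault term and the fuzzy failure data handling described in the abstract are not reproduced.

```python
# Sketch of the baseline Musa-Okumoto logarithmic Poisson model that the paper
# modifies: failure intensity lambda(t) = lam0 / (lam0*theta*t + 1) and mean
# value function mu(t) = ln(lam0*theta*t + 1) / theta. Failure times are hypothetical.
import numpy as np
from scipy.optimize import minimize

failure_times = np.array([3., 9., 17., 28., 42., 60., 83., 112., 150., 200.])

def neg_log_likelihood(params):
    lam0, theta = params
    if lam0 <= 0 or theta <= 0:
        return np.inf
    T = failure_times[-1]
    # NHPP log-likelihood: sum of log intensities at failure times minus mu(T)
    intensity = lam0 / (lam0 * theta * failure_times + 1.0)
    mean_value_T = np.log(lam0 * theta * T + 1.0) / theta
    return -(np.sum(np.log(intensity)) - mean_value_T)

fit = minimize(neg_log_likelihood, x0=[0.5, 0.05], method="Nelder-Mead")
lam0_hat, theta_hat = fit.x
print(f"initial intensity lambda0 = {lam0_hat:.4f}, decay theta = {theta_hat:.4f}")
```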


Author(s):  
D. DAMODARAN ◽  
B. RAVIKUMAR ◽  
VELIMUTHU RAMACHANDRAN

Reliability statistics is divided into two mutually exclusive camps: Bayesian and classical. Classical statisticians treat all distribution parameters as fixed values, whereas Bayesians treat parameters as random variables with distributions of their own. The Bayesian approach has been applied to software failure data, and as a result several Bayesian software reliability models have been formulated over the last three decades. A Bayesian approach to software reliability measurement was taken by Littlewood and Verrall [A Bayesian reliability growth model for computer software, Appl. Stat. 22 (1973) 332–346], who modeled the hazard rate as a random variable. In this paper, a new Bayesian software reliability model is proposed by combining two prior distributions for predicting the total number of failures and the next failure time of the software. The popular and realistic Jelinski and Moranda (J&M) model is taken as the base for deriving this model with a Bayesian approach. It is assumed that one of the JM model parameters, N, the number of faults in the software, follows a uniform prior distribution, and that the failure rate parameter Φi follows a gamma prior distribution. The joint prior p(N, Φi) is obtained by combining these two prior distributions. In this Bayesian model, the times between failures follow exponential distributions whose failure rate parameter is stochastically decreasing over successive failure time intervals; the rationale for this assumption is that the software tester intends to improve software quality by correcting each failure. With the Bayesian approach, the predictive distribution is obtained by combining the exponential times between failures (TBFs) and the joint prior p(N, Φi). For parameter estimation, the maximum likelihood estimation (MLE) method is adopted. The proposed Bayesian software reliability model has been applied to two sets of actual software failure data, and it is observed that the failure times predicted by the proposed model are closer to the actual failure times. The predicted failure times based on the Littlewood–Verrall (LV) model are also computed. The sum of squared errors (SSE) criterion is used to compare the actual and predicted times between failures for the proposed model and the LV model.
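A minimal sketch of this kind of construction, assuming illustrative failure data and prior hyperparameters and using a simple grid evaluation of the posterior rather than the paper's exact derivation, might look like the following.

```python
# Illustrative sketch: Jelinski-Moranda likelihood for times between failures,
# combined with a uniform prior on N and a gamma prior on the rate parameter Phi.
# Failure data, hyperparameters, and the grid-based posterior are assumptions.
import numpy as np
from scipy.stats import gamma

tbf = np.array([7., 11., 8., 10., 15., 22., 17., 33., 51., 75.])  # hypothetical TBFs
n = len(tbf)

N_grid = np.arange(n, 61)                           # uniform prior over fault counts
phi_grid = np.linspace(1e-4, 0.05, 400)             # grid for the rate parameter
prior_phi = gamma.pdf(phi_grid, a=2.0, scale=0.01)  # assumed gamma prior on Phi

def log_likelihood(N, phi):
    # JM model: the i-th time between failures is exponential with rate phi*(N - i + 1)
    rates = phi * (N - np.arange(n))
    return np.sum(np.log(rates) - rates * tbf)

post = np.zeros((N_grid.size, phi_grid.size))
for i, N in enumerate(N_grid):
    for j, phi in enumerate(phi_grid):
        post[i, j] = np.exp(log_likelihood(N, phi)) * prior_phi[j]
post /= post.sum()

N_hat = N_grid[post.sum(axis=1).argmax()]
phi_hat = phi_grid[post.sum(axis=0).argmax()]
print(f"posterior mode: N = {N_hat}, Phi = {phi_hat:.4f}")
```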

