Bayesian Software Reliability Prediction Based on Yamada Delayed S-Shaped Model

2014 ◽  
Vol 490-491 ◽  
pp. 1267-1278 ◽  
Author(s):  
Tean Quay Lee ◽  
Chun Wu Yeh ◽  
Chih Chiang Fang

Software Reliability Growth Models (SRGMs) provide techniques to predict future failure behavior from known characteristics of the software testing work. However, software developers often lack sufficient historical data to estimate the corresponding reliability and the expected testing cost, especially for a newly developed software project, so the results obtained from analytical models may not be reliable. In such situations, Bayesian analysis is a reasonable approach that additionally takes experts' opinions into account for better decision making. In this paper, we utilize the Yamada Delayed S-shaped Model with Bayesian analysis to predict software reliability and expected testing costs and to determine an optimal release time for software systems. The failure process of the software is assumed to follow a non-homogeneous Poisson process (NHPP), and the parameters of the proposed model are assumed to be mutually independent and Gamma distributed. Finally, a numerical example is given to verify the effectiveness of the proposed approach, and sensitivity and risk analyses are performed on the basis of this example.
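
For reference, the delayed S-shaped model characterizes the cumulative number of detected faults through the mean value function of an NHPP. A minimal sketch of that standard function together with the Gamma-prior assumption stated above (the hyperparameters are placeholders, not values from the paper):

```latex
% Yamada delayed S-shaped mean value function (standard form):
% N(t) = cumulative failures, a = expected total fault content, b = fault detection rate
m(t) = a\left[1 - (1 + bt)\,e^{-bt}\right], \qquad N(t) \sim \mathrm{Poisson}\bigl(m(t)\bigr)
% Mutually independent Gamma priors on the parameters, as assumed in the paper
a \sim \mathrm{Gamma}(\alpha_a, \beta_a), \qquad b \sim \mathrm{Gamma}(\alpha_b, \beta_b)
```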

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rama Rao Narvaneni ◽  
K. Suresh Babu

Purpose – Software reliability growth models (SRGMs) are used to assess and predict the reliability of a software system. Many of these models are effective in predicting future failures unless the software evolves.

Design/methodology/approach – The objective of this paper is to identify the best path for rectifying the bug fixing time (BFT) and bug fixing rate (BFR). Moreover, a flexible software project is examined while materializing the BFR. To enhance the BFR, the traceability of bugs is reduced by attaching a version tag to every software deliverable component. The release time of a software build is optimized using mathematical optimization mechanisms such as software reliability growth and non-homogeneous Poisson process methods.

Findings – In the current market scenario, this is most essential. The automation and variation of builds is also resolved in this contribution. The software developed is free from bugs or defects, and software quality is enhanced by increasing the BFR.

Originality/value – In the current market scenario, this is most essential. The automation and variation of builds is also resolved in this contribution. The software developed is free from bugs or defects, and software quality is enhanced by increasing the BFR.


2016 ◽  
Vol 2016 ◽  
pp. 1-13
Author(s):  
Fan Li ◽  
Ze-Long Yi

Software reliability growth models (SRGMs) based on a nonhomogeneous Poisson process (NHPP) are widely used to describe the stochastic failure behavior and assess the reliability of software systems. For these models, the testing-effort effect and fault interdependency play significant roles. Considering a power-law function of testing effort and the interdependency of multigeneration faults, we propose a modified SRGM to reconsider the reliability of open source software (OSS) systems and validate the model’s performance on several real-world data sets. Our empirical experiments show that the model fits the failure data well and presents a high-level prediction capability. We also formally examine the optimal software release policy, considering both the testing cost and the reliability requirement. By conducting sensitivity analysis, we find that if the testing-effort effect or the fault interdependency were ignored, the best time to release the software would be seriously delayed and more resources would be wasted in testing the software.
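
As an illustration only (the authors' exact formulation is not reproduced here), testing-effort-dependent SRGMs are commonly written by replacing calendar time with cumulative testing effort W(t); with a power-law effort function such a model might take the form:

```latex
% Illustrative testing-effort-dependent NHPP mean value function (not the paper's exact model)
W(t) = \alpha t^{\beta} \quad \text{(power-law cumulative testing effort)}
m(t) = a\left[1 - e^{-b\,W(t)}\right] = a\left[1 - e^{-b\alpha t^{\beta}}\right]
```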


Author(s):  
Ompal Singh ◽  
Saurabh Panwar ◽  
P. K. Kapur

In the software engineering literature, numerous software reliability growth models have been designed to evaluate and predict the reliability of software products and to measure the optimal time-to-market of software systems. Most existing studies on software release time assessment assume that the testing process terminates when the software is released. In practice, however, the testing team releases the software product first and continues the testing process for an added period in the operational phase. Therefore, in this study, a coherent reliability growth model is developed to predict the expected reliability of the software product. The debugging process is considered imperfect, as new faults can be introduced into the software during each fault removal. The proposed model assumes that the fault observation rate of the testing team changes after the software release. The release time of the software is therefore regarded as the change-point. It has been established that incorporating change-point theory improves the predictive accuracy of growth models. A unified approach is utilized to model the debugging process, wherein both testers and users simultaneously identify faults in the post-release testing phase. A joint optimization problem is formulated based on two decision criteria: cost and reliability. In order to assimilate the manager’s preferences over these two criteria, a multi-criteria decision-making technique known as multi-attribute utility theory is employed. A numerical illustration is further presented using actual data sets from a software project to determine the optimal software time-to-market and testing termination time.
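
A hedged sketch of the two ingredients described above, with illustrative functional forms rather than the authors' exact ones: a fault detection rate that changes at the release time τ (the change-point), and an additive multi-attribute utility that weighs the cost and reliability criteria according to the manager's preferences:

```latex
% Illustrative change-point detection rate (tau = software release time)
b(t) = \begin{cases} b_1, & t \le \tau \\ b_2, & t > \tau \end{cases}
% Illustrative additive multi-attribute utility over cost C(T) and reliability R(T),
% with manager-specified weights w_1 + w_2 = 1
U(T) = w_1\, U_{\mathrm{cost}}\bigl(C(T)\bigr) + w_2\, U_{\mathrm{rel}}\bigl(R(T)\bigr)
```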


Author(s):  
Md. Asraful Haque ◽  
Nesar Ahmad

Background: Software Reliability Growth Models (SRGMs) are the most widely used mathematical models to monitor, predict and assess software reliability. They play an important role in industry in estimating the release time of a software product. Since the 1970s, researchers have suggested a large number of SRGMs to forecast software reliability based on certain assumptions. They all explain how system reliability changes over time by analyzing failure data sets collected throughout the testing process. However, none of the models is universally accepted or applicable to all kinds of software. Objective: The objective of this paper is to highlight the limitations of SRGMs and to suggest a novel approach towards their improvement. Method: We present the mathematical basis, parameters and assumptions of software reliability models and analyze five popular models, namely the Jelinski-Moranda (J-M) Model, the Goel-Okumoto NHPP Model, the Musa-Okumoto Log Poisson Model, the Gompertz Model and the Enhanced NHPP Model. Conclusion: The paper focuses on the main challenges of using SRGMs, such as flexibility issues, restrictive assumptions and uncertainty factors. It emphasizes the need to consider all influencing factors in the reliability calculation. A possible approach is outlined at the end of the paper.
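
For context, the standard textbook forms of three of the models analyzed above are:

```latex
% Jelinski-Moranda: hazard rate after (i-1) faults have been removed
% (N = initial number of faults, phi = per-fault failure intensity)
\lambda_i = \phi\,\bigl[N - (i - 1)\bigr]
% Goel-Okumoto NHPP mean value function
m(t) = a\left(1 - e^{-bt}\right)
% Musa-Okumoto logarithmic Poisson mean value function
m(t) = \frac{1}{\theta}\,\ln\!\left(\lambda_0 \theta t + 1\right)
```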


This paper surveys some aspects of the state of the art of software reliability modelling. By far the greatest effort to date has been expended on the problem of assessing and predicting the reliability growth which takes place as faults are found and fixed, so the greater part of the paper addresses this problem. We begin with a simple conceptual model of the software failure process in order to set the scene and motivate the detailed stochastic models which follow. This conceptual model suggests certain minimal characteristics which all growth models for software should possess. There are now several detailed models which aim to represent software reliability growth, but their accuracy of prediction seems to vary greatly from one application to another. As it is not possible to decide a priori which will give the most accurate answers for a particular context, the potential user is faced with a dilemma. There seems to be no alternative to analysing the predictive accuracy on the data source under examination and selecting for the current prediction that model which has demonstrated greatest accuracy on earlier predictions for that data. Some ways in which this selection can be effected are described in the paper. It turns out that examination of accuracy of past predictions can be used to improve future predictions by a simple recalibration procedure. Sometimes this technique works dramatically well, and results are shown for some real software failure data. Finally, there is a brief discussion of some wider issues which are not covered by a simple reliability growth study. These include cost modelling, the evaluation of software engineering methodologies, the relationship between testing and reliability, and the important issues of ultra-high reliability and safety-critical systems. On the last point, a warning note is sounded on the wisdom of building systems which depend on software having a very high reliability; this will be very hard to achieve and even harder to demonstrate.
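
One way the recalibration idea mentioned above can be sketched is via the probability-integral transforms (u-values) of past one-step-ahead predictions: if past predictive distributions were well calibrated, the u-values would be uniform, and their empirical distribution G can be composed with the next raw prediction to correct systematic bias. The following Python sketch is illustrative; the function names and the step-function estimate of G are assumptions, not the survey's exact procedure.

```python
import numpy as np

def recalibrate(past_cdfs, past_times, next_cdf):
    """Illustrative sketch of recalibrating a predictive CDF from past accuracy.

    past_cdfs:  list of callables, the one-step-ahead predictive CDFs already issued
    past_times: the inter-failure times actually observed for those predictions
    next_cdf:   the raw predictive CDF for the next inter-failure time
    Returns the recalibrated CDF t -> G(next_cdf(t)), where G is the empirical
    distribution of the past probability-integral transforms u_i = F_i(t_i).
    """
    u = np.sort([F(t) for F, t in zip(past_cdfs, past_times)])

    def empirical_G(p):
        # Fraction of past u-values not exceeding p (step-function estimate of G).
        return np.searchsorted(u, p, side="right") / len(u)

    return lambda t: empirical_G(next_cdf(t))
```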


Author(s):  
Vishal Pradhan ◽  
Ajay Kumar ◽  
Joydip Dhar

The fault reduction factor (FRF) is a significant parameter for controlling software reliability growth. It is the ratio of the net number of faults corrected to the number of failures encountered. In the literature, many factors affect the behaviour of the FRF, namely fault dependency, debugging time-lag, human learning behaviour and imperfect debugging. In addition, several distributions, for example the inflection S-shaped, Weibull and Exponentiated-Weibull, have been used as the FRF. However, these standard distributions are not flexible enough to describe the observed behaviour of FRFs. This paper proposes three different software reliability growth models (SRGMs) that incorporate a three-parameter generalized inflection S-shaped (GISS) distribution as the FRF. To model realistic SRGMs, time lags between the fault detection and fault correction processes are also incorporated. Two of the proposed models address single-release software, whereas the third is designed for multi-release software. Moreover, the first model assumes perfect debugging, while the other two assume an imperfect debugging environment. Extensive experiments are conducted for the proposed models on six single-release and one multi-release data sets. The choice of the GISS distribution as the FRF improves software reliability evaluation in comparison with existing models in the literature. Finally, the development cost and optimal release time are calculated in a perfect debugging environment.
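
As a point of reference only (the exact three-parameter GISS form used by the authors is not reproduced here), the two-parameter inflection S-shaped distribution that it generalizes, and one common way a time-varying FRF r(t) enters an NHPP-type growth model, are:

```latex
% Two-parameter inflection S-shaped distribution (beta = inflection parameter);
% the three-parameter GISS form adds a further shape parameter for extra flexibility
F(t) = \frac{1 - e^{-bt}}{1 + \beta e^{-bt}}
% One common way a time-varying fault reduction factor r(t) modulates fault removal
% (a = total fault content, b = fault detection rate, m(t) = mean value function)
\frac{dm(t)}{dt} = r(t)\, b\,\bigl[a - m(t)\bigr]
```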


2018 ◽  
Vol 8 (9) ◽  
pp. 1483 ◽  
Author(s):  
Da Lee ◽  
In Chang ◽  
Hoang Pham ◽  
Kwang Song

The goal of software developers is to deliver high-quality, reliable software products. Over the past decades, software has become increasingly complex, making it difficult to develop stable software products. Software failures often cause serious social or economic losses, and software reliability is therefore considered important. Software reliability growth models (SRGMs) have been used to estimate software reliability. In this work, we introduce a new software reliability model and compare it with several non-homogeneous Poisson process (NHPP) models. In addition, we compare the goodness of fit of existing SRGMs on actual data sets using eight criteria. The results allow us to determine which model is optimal.
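
As a minimal, self-contained illustration of the kind of fitting and goodness-of-fit scoring described above (the data points, starting values and the choice of the Goel-Okumoto form are placeholders, not the paper's data sets or proposed model):

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative number of failures by time t (classical NHPP form)."""
    return a * (1.0 - np.exp(-b * t))

# Placeholder failure data: testing weeks and cumulative failure counts.
t = np.arange(1, 11, dtype=float)
y = np.array([5, 9, 14, 17, 20, 22, 24, 25, 26, 27], dtype=float)

# Fit the mean value function and score the fit with two common criteria.
params, _ = curve_fit(goel_okumoto, t, y, p0=[30.0, 0.2])
y_hat = goel_okumoto(t, *params)
mse = np.mean((y - y_hat) ** 2)
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"a={params[0]:.2f}, b={params[1]:.3f}, MSE={mse:.3f}, R^2={r2:.3f}")
```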


Author(s):  
SWAPNA S. GOKHALE

Reliability of a software application, its failure intensity and the residual number of faults are three important metrics that provide a quantitative assessment of the failure characteristics of an application. Ultimately, it is also necessary, based on these metrics, to determine an optimal release time at which costs justify the stop-test decision. Typically, one of the many stochastic models known as software reliability growth models (SRGMs) is used to characterize the failure behavior of an application and to provide estimates of the failure intensity, residual number of faults, reliability, and optimal release time and cost. To ensure analytical tractability, SRGMs assume instantaneous repair, and thus the estimates of these metrics obtained using SRGMs tend to be optimistic. In practice, repair activity consumes a non-trivial amount of time and resources. Also, repair may be conducted according to many policies which reflect the schedule and budget constraints of a project. The few efforts which incorporate repair into SRGMs are restrictive, since they consider only some SRGMs, model the repair process using a constant repair rate, and provide an estimate of only the residual number of faults. These efforts do not address the issue of estimating the failure intensity, reliability and optimal release time and cost in the presence of repair. In this paper we present a generic framework based on the rate-based simulation technique to incorporate repair policies into the finite-failure non-homogeneous Poisson process (NHPP) class of SRGMs. We describe a methodology to compute the failure intensity and reliability in the presence of repair, and apply it to four popular finite-failure NHPP models. We also present an economic cost model which considers explicit repair in providing estimates of optimal release time and cost. We illustrate the potential of the framework to quantify the impact of the parameters of the repair policies on the above metrics using examples. Through these examples we discuss how the framework could be used to guide the allocation of resources to achieve the desired reliability target in a cost-effective manner.
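
A minimal sketch of the rate-based simulation idea in Python, assuming a Goel-Okumoto intensity, Lewis-Shedler thinning and a single-server constant-rate repair policy; the parameter values and policy details are illustrative assumptions, not the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(1)

def go_intensity(t, a, b):
    """Failure intensity of the Goel-Okumoto finite-failure NHPP."""
    return a * b * np.exp(-b * t)

def simulate_failures(a, b, horizon):
    """Rate-based (thinning) simulation of failure times on [0, horizon]."""
    lam_max = go_intensity(0.0, a, b)   # intensity is decreasing, so this bounds it
    t, failures = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > horizon:
            return np.array(failures)
        if rng.random() < go_intensity(t, a, b) / lam_max:
            failures.append(t)

def simulate_repairs(failure_times, repair_rate):
    """Constant-rate repair policy: each detected fault takes an exponential
    repair time and repairs are handled one at a time (FIFO queue)."""
    finish, completions = 0.0, []
    for ft in failure_times:
        start = max(ft, finish)          # wait for the single repair server
        finish = start + rng.exponential(1.0 / repair_rate)
        completions.append(finish)
    return np.array(completions)

failures = simulate_failures(a=30.0, b=0.25, horizon=20.0)
repairs = simulate_repairs(failures, repair_rate=2.0)
residual = np.sum(repairs > 20.0)        # faults detected but not yet repaired
print(f"detected={len(failures)}, unrepaired at t=20: {residual}")
```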

