Two-Dimensional Multi-Release Software Reliability Modeling for Fault Detection and Fault Correction Processes

Author(s):  
Vijay Kumar
Paridhi Mathur
Ramita Sahni
Mohit Anand

With growing competition and rising customer demand, a software organization must regularly upgrade and add features to the existing version of its software. For the organization, these upgrades increase the complexity of the software, which in turn increases the number of faults. In addition, faults left undetected in the previous version must be addressed in this phase. Many software reliability growth models have been proposed to model multi-release problems using two-stage failure observation and correction processes. The model proposed in this paper partitions the fault removal process into two stages, fault detection and fault correction, and considers the joint effect of premeditated release pressure and resource restrictions using the well-known Cobb–Douglas production function for the multi-release problem. Faults detected in the operational phase of the previous release, or whose removal was left incomplete, are carried over to the next release. A generalized framework for the multi-release problem, in which fault detection follows an exponential distribution and fault correction follows a Gamma distribution, is proposed and validated on a real data set of four software releases. The estimated parameters and comparison criteria are also reported.
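As an illustration of the two-stage structure described above, the following sketch combines exponential detection with Gamma-distributed correction, using an integer-shape (Erlang) Gamma CDF so everything stays in closed form, and a Cobb–Douglas term to merge testing time and resources. This is not the paper's actual model; all parameters are hypothetical.

```python
import math

def cobb_douglas(t, s, alpha):
    """Combined effect of testing time t and resources s (0 < alpha < 1)."""
    return (t ** alpha) * (s ** (1.0 - alpha))

def detected_mean(x, a, b):
    """Expected faults detected by effort x: exponential detection, a(1 - e^{-bx})."""
    return a * (1.0 - math.exp(-b * x))

def erlang_cdf(x, k, lam):
    """Gamma CDF with integer shape k (Erlang) and rate lam, in closed form."""
    partial = sum((lam * x) ** n / math.factorial(n) for n in range(k))
    return 1.0 - math.exp(-lam * x) * partial

def corrected_mean(x, a, k, lam):
    """Expected faults corrected by effort x: Gamma-distributed correction lagging detection."""
    return a * erlang_cdf(x, k, lam)

# Illustrative parameters (not fitted to any release data)
a, b, k, alpha = 100.0, 0.05, 2, 0.5
for t, s in ((10, 40), (50, 50), (100, 60)):
    x = cobb_douglas(t, s, alpha)
    print(round(x, 1), round(detected_mean(x, a, b), 1), round(corrected_mean(x, a, k, b), 1))
```

Because the Erlang CDF is stochastically later than the exponential CDF with the same rate, the corrected-fault curve always trails the detected-fault curve, matching the two-stage intuition.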

2021
Vol 11 (15)
pp. 6998
Author(s):
Qiuying Li
Hoang Pham

Many NHPP software reliability growth models (SRGMs) have been proposed to assess software reliability over the past 40 years, but most of them model the fault detection process (FDP) in one of two ways. The first is to ignore the fault correction process (FCP), i.e., to assume that faults are removed instantaneously once the failures they cause are detected. In real software development this assumption rarely holds: fault removal takes time, faults cannot always be removed at once, and detected failures become increasingly difficult to correct as testing progresses. The second way is to model the fault correction process through a time delay between fault detection and fault correction, where the delay is assumed to be a constant, a function of time, or a random variable following some distribution. In this paper, some useful approaches to modeling dual fault detection and correction processes are discussed. The dependencies between the fault counts of the dual processes are considered instead of a fault correction time delay. A model is proposed that integrates the fault detection and fault correction processes and incorporates a fault introduction rate and a testing coverage rate into the software reliability evaluation. The model parameters are estimated using the Least Squares Estimation (LSE) method. The descriptive and predictive performance of the proposed model and other existing NHPP SRGMs are investigated on three real data sets using four criteria. The results show that the new model yields significantly better reliability estimation and prediction.
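A minimal sketch of the LSE step the abstract mentions, fitting the classic Goel-Okumoto mean value function (a stand-in for the proposed model) to hypothetical weekly cumulative fault counts by coarse grid search:

```python
import math

def m(t, a, b):
    # Goel-Okumoto mean value function: expected cumulative faults by time t
    return a * (1.0 - math.exp(-b * t))

# Hypothetical cumulative fault counts at weekly test intervals (illustrative only)
data = [(1, 12), (2, 21), (3, 28), (4, 34), (5, 38), (6, 41), (7, 43)]

def sse(a, b):
    # Least-squares objective: sum of squared errors between model and observations
    return sum((m(t, a, b) - y) ** 2 for t, y in data)

# Coarse grid search as a stand-in for a numerical LSE solver
best = min(((a, b) for a in range(40, 81) for b in [i / 100 for i in range(5, 61)]),
           key=lambda p: sse(*p))
print("a =", best[0], "b =", best[1], "SSE =", round(sse(*best), 2))
```

In practice the minimization would use a proper nonlinear least-squares routine rather than a grid, but the objective is the same.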


Author(s):  
P. K. KAPUR
SUNIL K. KHATRI
MASHAALLAH BASIRZADEH

With the growing demand for zero defects, predicting the reliability of software products is gaining importance. Software Reliability Growth Models (SRGMs) are used to estimate the reliability of a software product. A large number of SRGMs exist; however, none of them works across all environments. Recently, artificial neural networks have been applied to software reliability assessment and software reliability growth prediction. Most existing research in the literature assumes that a similar testing effort is required for each debugging effort. In practice, however, different amounts of testing effort may be required to detect and remove faults of different types, depending on their complexity. Consequently, faults are classified into three categories by complexity: simple, hard, and complex. In this paper, we apply neural network methods to build SRGMs that account for faults of different complexity. A logistic learning function, reflecting the expertise gained by the testing team, is used in the proposed model. The model assumes that for simple faults the growth of the removal process is uniform, whereas for hard and complex faults the removal process follows a logistic growth curve, because the removal team's learning grows as testing progresses. The proposed model has been validated, evaluated, and compared with other NHPP models on two failure/fault removal data sets drawn from real software development projects. The results show that the proposed model with a logistic function provides improved goodness-of-fit for software failure/fault removal data.
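The simple-versus-hard/complex distinction can be sketched as follows; the functional forms and parameters here are illustrative, not the paper's neural-network formulation:

```python
import math

def simple_removal(t, a, b):
    # Simple faults: uniform (exponential) removal growth
    return a * (1.0 - math.exp(-b * t))

def logistic_removal(t, a, b, beta):
    # Hard/complex faults: logistic (S-shaped) growth as the removal team learns
    return a * (1.0 - math.exp(-b * t)) / (1.0 + beta * math.exp(-b * t))

# Illustrative split of a = 100 faults into simple/hard/complex categories
a1, a2, a3 = 50.0, 30.0, 20.0
b, beta_hard, beta_complex = 0.1, 3.0, 6.0  # larger beta => slower early removal

def total_removed(t):
    return (simple_removal(t, a1, b)
            + logistic_removal(t, a2, b, beta_hard)
            + logistic_removal(t, a3, b, beta_complex))

for t in (10, 30, 60):
    print(t, round(total_removed(t), 1))
```

The logistic curves start slower than the exponential one with the same rate, reflecting the learning effect for harder faults, and all three categories converge to their respective fault totals.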


Filomat
2018
Vol 32 (17)
pp. 5931-5947
Author(s):
Hatami Mojtaba
Alamatsaz Hossein

In this paper, we propose a new transformation of circular random variables based on circular distribution functions, which we call the inverse distribution function (idf) transformation. We show that the Möbius transformation is a special case of our idf transformation. Very general results are provided for the properties of the proposed family of idf transformations, including their trigonometric moments, maximum entropy, random variate generation, finite mixture, and modality properties. In particular, we focus on the subfamily obtained when the idf transformation is based on the cardioid circular distribution function, and investigate its modality and shape properties. In addition, we obtain further statistical properties of the distribution that results from applying the idf transformation to a random variable following a von Mises distribution. This yields the Cardioid-von Mises (CvM) distribution, whose parameters we estimate by the maximum likelihood method. Finally, an application of the CvM family and its inferential methods is illustrated using a real data set of times of gun crimes in Pittsburgh, Pennsylvania.


2005
Vol 30 (4)
pp. 369-396
Author(s):  
Eisuke Segawa

Multi-indicator growth models were formulated as special three-level hierarchical generalized linear models to analyze the growth of a latent trait variable measured by ordinal items. Items are nested within time points, and time points are nested within subjects. These models are special because they include a factor-analytic structure. The model can analyze not only data with item- and time-level missing observations, but also data with time points specified freely across subjects. Furthermore, features useful for longitudinal analyses were included: a first-order autoregressive error structure for the trait residuals and estimated time scores. The approach is Bayesian, using Markov chain Monte Carlo, and the model is implemented in WinBUGS. The models are illustrated with two simulated data sets and one real data set with planned missing items within a scale.


Author(s):  
Anshul Tickoo
Ajit K. Verma
Sunil K. Khatri
P. K. Kapur

Across the globe, almost every organization depends on information technology to increase its business efficiency. This has led to huge demand for reliable, good-quality software. Innovation is essential to success in the software industry, so software companies must keep bringing out upgrades or add-ons to compete in the market. In the present framework, we propose generalized mathematical modeling for multiple software releases. We examine the collective effect of testing time and resources, combined through the Cobb–Douglas production function, in defining the failure phenomenon of a software reliability growth model (SRGM). We also consider the practical scenario in which the fault detection rate may change. The fault detection rate can be affected by factors such as the testing environment, testing strategy, and allocation of resources; a change in these factors during the testing phase can increase or decrease the failure intensity function. The time point at which an abrupt fluctuation in the fault detection rate takes place is known as the change point. A generalized framework for developing a two-dimensional SRGM with change point for multiple software releases is discussed, and various existing change point models are derived from the proposed framework. The developed models have been validated on a real data set.
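The change-point idea in a two-dimensional setting can be sketched as follows, with a Cobb–Douglas effort term and a single change point at which the detection rate switches; the functional form and all parameters are illustrative, not taken from the paper:

```python
import math

def effort(t, s, nu):
    """Cobb-Douglas combination of testing time t and resources s (0 < nu < 1)."""
    return (t ** nu) * (s ** (1.0 - nu))

def mvf_change_point(x, a, b1, b2, tau):
    """Mean value function with one change point tau on the effort scale:
    detection rate b1 before tau and b2 after, continuous at tau."""
    if x <= tau:
        return a * (1.0 - math.exp(-b1 * x))
    return a * (1.0 - math.exp(-b1 * tau - b2 * (x - tau)))

# Illustrative parameters: the detection rate drops after the change point
a, b1, b2, tau, nu = 120.0, 0.08, 0.03, 20.0, 0.6
for t, s in ((10, 15), (30, 40), (60, 80)):
    x = effort(t, s, nu)
    print(round(x, 1), round(mvf_change_point(x, a, b1, b2, tau), 1))
```

Writing the exponent piecewise keeps the mean value function continuous at the change point while letting the failure intensity jump, which is exactly the abrupt-fluctuation behavior described above.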


2014
Vol 2014
pp. 1-8
Author(s):  
Bijamma Thomas
Midhu Narayanan Nellikkattu
Sankaran Godan Paduthol

We study a class of software reliability models defined through quantile functions. Various distributional properties of the class are studied, and its reliability characteristics are discussed. Inference procedures for the model parameters based on L-moments are presented. We apply the proposed model to a real data set.
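A minimal sketch of the L-moment-based inference step: computing unbiased sample estimates of the first two L-moments (location and scale) from hypothetical inter-failure times. The paper's specific quantile-function model is not reproduced here.

```python
def sample_l_moments(data):
    """Unbiased sample estimators of the first two L-moments (l1, l2),
    via probability-weighted moments b0 and b1 of the order statistics."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(j * x[j] for j in range(n)) / (n * (n - 1))
    return b0, 2.0 * b1 - b0

# Hypothetical inter-failure times in hours (illustrative, not the paper's data set)
times = [3.0, 7.5, 1.2, 9.8, 4.4, 6.1, 2.9, 8.3]
l1, l2 = sample_l_moments(times)
print(round(l1, 3), round(l2, 3))
```

L-moment estimators are linear in the order statistics, which is what makes them more robust to outliers than conventional moment estimators and attractive for reliability data.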


Author(s):  
Abhishek Tandon
Anu G. Aggarwal
Nidhi Nijhawan

In an environment of intense competition, software upgrades have become a necessity for survival in the software industry. In this paper, the authors propose a discrete Software Reliability Growth Model (SRGM) for software with successive releases, under the realistic assumption that the Fault Removal Rate (FRR) may not remain constant during testing: it changes with the severity of the faults detected and with the strategies adopted by the testing team. The time point at which the FRR changes is called the change point. Many researchers have developed SRGMs incorporating the change point concept for single-release software; the proposed model presents multi-release software reliability modeling with a change point. A discrete logistic distribution function is used to model the relationship between feature enhancement and fault removal, which helps develop a flexible, S-shaped SRGM. To evaluate the proposed SRGM, parameters are estimated on a real-life data set for software with four releases, and the goodness-of-fit of the model is analyzed.
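The change-point mechanism in a discrete SRGM can be sketched as a recurrence in the test-occasion index, where the removal rate follows a logistic (learning) curve and its level switches at the change point; the rate function and parameters below are illustrative, not the authors' exact formulation:

```python
def discrete_srgm(a, b1, b2, beta, cp, n_max):
    """Discrete SRGM: m(n+1) = m(n) + r(n) * (a - m(n)), where the removal
    rate r(n) rises along a logistic learning curve toward its level b,
    and the level switches from b1 to b2 at the change point cp."""
    m, series = 0.0, []
    for n in range(n_max):
        b = b1 if n < cp else b2
        r = b / (1.0 + beta * (1.0 - b) ** n)  # learning: r approaches b as n grows
        m += r * (a - m)
        series.append(m)
    return series

# Illustrative parameters: a = 80 faults, removal speeds up after change point n = 10
series = discrete_srgm(a=80.0, b1=0.05, b2=0.12, beta=4.0, cp=10, n_max=40)
print([round(v, 1) for v in (series[9], series[19], series[39])])
```

The recurrence form is what makes the model "discrete": fault removal is tracked per test occasion rather than in continuous time, while the logistic rate produces the S-shaped cumulative curve mentioned above.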


Author(s):  
Md. Asraful Haque
Nesar Ahmad

Background: Software Reliability Growth Models (SRGMs) are among the most widely used mathematical models to monitor, predict, and assess software reliability. They play an important role in industry in estimating the release time of a software product. Since the 1970s, researchers have proposed a large number of SRGMs to forecast software reliability based on certain assumptions, all of which explain how system reliability changes over time by analyzing failure data collected throughout the testing process. However, none of the models is universally accepted or applicable to all kinds of software. Objective: The objective of this paper is to highlight the limitations of SRGMs and to suggest a novel approach toward improvement. Method: We present the mathematical basis, parameters, and assumptions of software reliability models and analyze five popular models, namely the Jelinski-Moranda (J-M) model, the Goel-Okumoto NHPP model, the Musa-Okumoto log-Poisson model, the Gompertz model, and the Enhanced NHPP model. Conclusion: The paper focuses on the many challenges of using SRGMs, such as flexibility issues, assumptions, and uncertainty factors, and emphasizes considering all affecting factors in reliability calculation. A possible approach is outlined at the end of the paper.
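Three of the five models named above have simple closed forms that can be compared directly; the sketch below uses illustrative parameters to contrast their assumptions (finite fault count in J-M and Goel-Okumoto versus an unbounded logarithmic mean in Musa-Okumoto):

```python
import math

def jm_intensity(i, N, phi):
    """Jelinski-Moranda: hazard before the i-th failure is phi times the
    number of faults remaining, phi * (N - (i - 1))."""
    return phi * (N - (i - 1))

def go_mean(t, a, b):
    """Goel-Okumoto NHPP mean value function: a(1 - e^{-bt}), bounded by a."""
    return a * (1.0 - math.exp(-b * t))

def musa_okumoto_mean(t, lam0, theta):
    """Musa-Okumoto logarithmic Poisson mean: ln(1 + lam0*theta*t)/theta, unbounded."""
    return math.log(1.0 + lam0 * theta * t) / theta

# Illustrative parameters (not fitted to any data set)
print([jm_intensity(i, 30, 0.02) for i in (1, 15, 30)])
print(round(go_mean(100.0, 80.0, 0.02), 1), round(musa_okumoto_mean(100.0, 5.0, 0.05), 1))
```

The contrast in limiting behavior is one concrete instance of the flexibility issue the paper raises: which model is appropriate depends on assumptions about whether the fault population is finite.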

