Modeling Software Fault-Detection and Fault-Correction Processes by Considering the Dependencies between Fault Amounts

2021, Vol 11 (15), pp. 6998
Author(s): Qiuying Li, Hoang Pham

Many NHPP software reliability growth models (SRGMs) have been proposed to assess software reliability during the past 40 years, but most of them focus on the fault detection process (FDP) and treat the fault correction process (FCP) in one of two ways. The first is to ignore the FCP, i.e., to assume that faults are removed instantaneously once the failures they cause are detected. In real software development this assumption rarely holds: fault removal takes time, detected faults cannot always be removed at once, and faults become progressively harder to correct as testing proceeds. The second is to model the time delay between fault detection and fault correction, where the delay has been assumed to be a constant, a function of time, or a random variable following some distribution. In this paper, some useful approaches to modeling the dual fault detection and correction processes are discussed. Instead of a correction time delay, the dependencies between the fault amounts of the two processes are considered. A model is proposed that integrates the fault-detection and fault-correction processes and incorporates a fault introduction rate and a testing coverage rate into the software reliability evaluation. The model parameters are estimated using the Least Squares Estimation (LSE) method. The descriptive and predictive performance of the proposed model and other existing NHPP SRGMs are investigated on three real data sets using four criteria. The results show that the new model yields significantly better reliability estimation and prediction.
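The paper's full mean value function (with the fault introduction and testing coverage terms) is not reproduced in the abstract. As a minimal sketch of NHPP fitting by least squares, the classical Goel-Okumoto form can be estimated from cumulative fault counts; the grid-search fitter here is only a stand-in for a proper nonlinear LSE solver, and all parameter ranges are illustrative:

```python
import math

def mean_value(t, a, b):
    # Goel-Okumoto NHPP mean value function: expected cumulative
    # faults detected by time t (a = total fault content, b = detection rate)
    return a * (1.0 - math.exp(-b * t))

def sse(a, b, data):
    # Least-squares objective over observed (time, cumulative count) pairs
    return sum((mean_value(t, a, b) - m) ** 2 for t, m in data)

def lse_fit(data, a_grid, b_grid):
    # Crude grid search standing in for a nonlinear least-squares solver
    return min(((a, b) for a in a_grid for b in b_grid),
               key=lambda p: sse(p[0], p[1], data))
```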

Author(s): Vijay Kumar, Paridhi Mathur, Ramita Sahni, Mohit Anand

With growing competition and customer demand, a software organization must regularly upgrade and add features to the existing version of its software. For the organization, these upgrades increase the complexity of the software, which in turn increases the number of faults. Faults left undetected in the previous version must also be addressed in this phase. Many software reliability growth models have been proposed to model multi-release problems using two-stage failure observation and correction processes. The model proposed in this paper partitions the fault removal process into two stages, fault detection and fault removal, and captures the joint effect of premeditated release pressure and resource restrictions using the well-known Cobb–Douglas production function for the multi-release problem. Faults detected in the operational phase of the previous release, or left uncorrected, are carried into the next release. A generalized framework for the multi-release problem, in which fault detection follows an exponential distribution and fault correction follows a Gamma distribution, is proposed and verified on a real data set of four software releases. The estimated parameters and comparison criteria are also given.
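The two-stage exponential/Gamma structure can be sketched without the Cobb–Douglas coupling or the paper's release carry-over terms. For tractability this sketch assumes the Gamma repair time has integer shape k and shares the detection rate b, so detection-plus-repair per fault is Erlang(k + 1, b) distributed:

```python
import math

def erlang_cdf(t, k, rate):
    # CDF of Erlang(k, rate): a Gamma distribution with integer shape k
    s = sum((rate * t) ** n / math.factorial(n) for n in range(k))
    return 1.0 - s * math.exp(-rate * t)

def detected_mvf(t, a, b):
    # Stage 1: exponential fault-detection mean value function
    return a * (1.0 - math.exp(-b * t))

def corrected_mvf(t, a, b, k):
    # Stage 2: each detected fault waits a Gamma(k, b) repair time, so
    # the corrected mean value function is a times the Erlang(k + 1, b) CDF
    return a * erlang_cdf(t, k + 1, b)
```

By construction the correction curve lags the detection curve at every point in time, which is the qualitative behaviour the two-stage model is meant to capture.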


2020, Vol 9 (1), pp. 61-81
Author(s): Lazhar Benkhelifa

A new lifetime model, with four positive parameters, called the Weibull Birnbaum-Saunders distribution is proposed. The proposed model extends the Birnbaum-Saunders distribution and provides great flexibility in modeling data in practice. Some mathematical properties of the new distribution are obtained including expansions for the cumulative and density functions, moments, generating function, mean deviations, order statistics and reliability. Estimation of the model parameters is carried out by the maximum likelihood estimation method. A simulation study is presented to show the performance of the maximum likelihood estimates of the model parameters. The flexibility of the new model is examined by applying it to two real data sets.


Author(s): Vijay Kumar, Sunil Kumar Khatri, Hitesh Dua, Manisha Sharma, Paridhi Mathur

Software testing involves verification and validation of the software to meet the requirements elicited from customers in the earlier phases and thereby increase software reliability. Around half of the resources, such as manpower and CPU time, are consumed and a major portion of the total development cost is incurred in the testing phase, making it the most crucial and time-consuming phase of the software development lifecycle (SDLC). The fault detection process (FDP) and fault correction process (FCP) are central to this phase. A number of software reliability growth models (SRGMs) have been proposed over the last four decades to capture the time lag between detected and corrected faults, but most of them assume a static environment. The purpose of this paper is to allocate resources optimally so as to minimize the cost during the testing phase, using the FDP and FCP under a dynamic environment. An optimization policy based on optimal control theory is proposed for resource allocation with the objective of minimizing cost. A genetic algorithm is then applied to obtain the optimal detection and correction efforts. A numerical example is given in support of the theoretical results. The experimental results help the project manager identify the contribution of the model parameters and their weights.
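The paper's cost model and optimal-control formulation are not reproduced here. As an illustration of the genetic-algorithm step only, this sketch splits a fixed effort budget between detection and correction under a hypothetical cost with an exponential remaining-fault penalty; all rates and unit costs are invented for the example:

```python
import math
import random

def cost(w_d, w_c, unit_d=5.0, unit_c=8.0, penalty=200.0,
         a=100.0, b=0.05, g=0.04):
    # Hypothetical testing cost: effort expenditure plus a penalty on
    # the faults expected to remain given detection/correction effort
    remaining = a * math.exp(-b * w_d - g * w_c)
    return unit_d * w_d + unit_c * w_c + penalty * remaining

def ga_allocate(budget=100.0, pop_size=40, generations=60, seed=1):
    # Minimal genetic algorithm: an individual is the detection share
    # w_d of the budget; correction effort gets the remainder
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, budget) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: cost(w, budget - w))
        parents = pop[: pop_size // 2]                      # truncation selection
        children = [min(budget, max(0.0, rng.choice(parents) + rng.gauss(0.0, 2.0)))
                    for _ in range(pop_size - len(parents))]  # Gaussian mutation
        pop = parents + children
    best = min(pop, key=lambda w: cost(w, budget - w))
    return best, budget - best
```

A real application would replace the scalar individual with the full vector of time-varying detection and correction efforts from the control formulation.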


2015, Vol 764-765, pp. 979-982
Author(s): Jung Hua Lo

Many software reliability growth models (SRGMs) have been developed to estimate useful measures such as the mean value function, the number of remaining faults, and the failure detection rate. Most of these models focus on the failure detection process and do not give equal priority to modeling the fault correction process. However, latent software faults may remain uncorrected for a long time even after they are detected, which increases their impact, and remaining faults are one of the main sources of software unreliability. We therefore develop a general framework for modeling the failure detection and fault correction processes together. Traditional SRGMs further assume that a detected fault is immediately and perfectly repaired, with no new faults introduced. In reality, it is impossible to remove all faults and keep the debugging process itself fault-free. To relax this perfect debugging assumption, we introduce the possibility of imperfect debugging. Finally, numerical examples illustrate the unified approach for integrating the detection and correction processes under imperfect debugging.
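A unified detection/correction pair under imperfect debugging can be sketched as coupled rate equations: detection draws down a growing fault content, only a fraction p of correction attempts succeed, and each successful correction introduces new faults at rate alpha. All numeric rates below are illustrative, not the paper's calibrated values:

```python
def simulate(a0=100.0, b=0.1, p=0.9, alpha=0.05, horizon=50.0, dt=0.01):
    # Euler integration of a detection/correction pair under imperfect
    # debugging: detection rate b acts on remaining faults, correction
    # succeeds with probability p, and corrections introduce new faults
    detected = corrected = 0.0
    content = a0                  # total fault content, grows via introduction
    for _ in range(int(horizon / dt)):
        d_rate = b * (content - detected)          # detect what remains
        c_rate = p * b * (detected - corrected)    # work down the backlog
        detected += d_rate * dt
        corrected += c_rate * dt
        content += alpha * c_rate * dt             # imperfect-repair introduction
    return detected, corrected, content
```

The invariant corrected < detected < content holds throughout the integration, mirroring the lag between the two processes that the framework models.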


Author(s): Sandeep Chopra, Lata Nautiyal, Preeti Malik, Mangey Ram, Mahesh K. Sharma

The reliability of a software system is the probability that it performs its functions adequately for a stated time period under specified environmental conditions. In component-based software development, reliability estimation is a crucial factor. Existing reliability estimation models fall into two broad categories: parametric and non-parametric. Parametric models approximate the model parameters based on assumptions about the underlying distributions; non-parametric models estimate the parameters of software reliability growth models without any such assumptions. We propose a novel non-parametric approach for survival analysis of components. Failure data is collected, from which we calculate the failure rate and reliability of the software. The failure rate increases with time, whereas reliability decreases with time.
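The abstract does not name the estimator used. A standard non-parametric survival tool consistent with the description is the Kaplan-Meier product-limit estimate, shown here without censoring; it needs no distributional assumption, only the observed component failure times:

```python
def kaplan_meier(failure_times):
    # Non-parametric product-limit survival estimate over observed
    # failure times (uncensored case): returns (time, survival) pairs
    times = sorted(set(failure_times))
    at_risk = len(failure_times)
    surv, curve = 1.0, []
    for t in times:
        d = failure_times.count(t)        # failures observed at time t
        surv *= 1.0 - d / at_risk         # product-limit update
        curve.append((t, surv))
        at_risk -= d                      # shrink the risk set
    return curve
```

The survival curve is non-increasing by construction, matching the abstract's observation that reliability decreases as the failure rate climbs.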


Author(s): Parmod Kumar Kapur, V. S. Sarma Yadavalli, Sunil Kumar Khatri, Mashaallah Basirzadeh

Modeling of software reliability has gained much importance in recent years. The use of software in critical applications has led to a tremendous increase in the amount of work on software reliability growth modeling. A number of analytic software reliability growth models (SRGMs) exist in the literature. They are based on certain assumptions, and none of them works well across different environments. The current software reliability literature is inconclusive as to which models and techniques are best, and some researchers believe that each organization needs to try several approaches to determine what works best for it. Data-driven artificial neural network (ANN) based models, on the other hand, can provide better software reliability estimation. In this paper we present a new approach that builds an ensemble of different ANNs to improve estimation accuracy for complex software architectures. The model has been validated on two data sets cited from the literature. The results show a fair improvement in forecasting software reliability over individual neural-network-based models.
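The combination step of such an ensemble can be sketched independently of how the member networks are trained: weight each member's prediction inversely to its validation error, so better-fitting networks dominate the combined estimate. The interface below is hypothetical; the `(predict_fn, validation_error)` pairs stand in for trained ANNs:

```python
def ensemble_predict(members, t):
    # members: list of (predict_fn, validation_error) pairs with error > 0;
    # lower-error members get proportionally more weight in the estimate
    weights = [1.0 / err for _, err in members]
    total = sum(weights)
    return sum(w * fn(t) for (fn, _), w in zip(members, weights)) / total
```

Error-based weighting is one of several reasonable combination rules; simple averaging and stacking (training a combiner network) are common alternatives.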


2018, Vol 62 (9), pp. 1301-1312
Author(s): Jinyong Wang, Xiaoping Mi

Software reliability assessment methods have shifted from closed source software to open source software (OSS). Although numerous new approaches for improving OSS reliability have been formulated, they are not used in practice due to their inaccuracy. A new model considering the decreasing trend of the fault detection rate is developed in this study to effectively improve OSS reliability assessment. We analyse the changes of the instantaneous fault detection rate over time using real-world fault count data from two actual OSS projects, Apache and GNOME, to validate the proposed model's performance. The results show that the proposed model, with its decreasing fault detection rate, has better fitting and predictive performance than traditional closed source software models and other OSS reliability models. The proposed model can accurately fit and predict the failure process and can thus help improve the quality of OSS systems in real-world projects.
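One simple way to encode a decreasing fault detection rate (the exact functional form in the paper may differ) is b(t) = b0 / (1 + c·t); solving dm/dt = b(t)·(a − m(t)) with m(0) = 0 then gives a closed-form mean value function:

```python
def fdr(t, b0, c):
    # Fault detection rate that decays over time (illustrative form)
    return b0 / (1.0 + c * t)

def mvf(t, a, b0, c):
    # Closed-form solution of dm/dt = fdr(t) * (a - m(t)), m(0) = 0:
    # integrating fdr gives (b0/c) * ln(1 + c*t), hence the power form
    return a * (1.0 - (1.0 + c * t) ** (-b0 / c))
```

Because the exponent −b0/c is fixed, the curve still saturates at the total fault content a, but it approaches it more slowly than the constant-rate exponential model.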


2021, Vol 9 (3), pp. 23-41
Author(s): Nesar Ahmad, Aijaz Ahmad, Sheikh Umar Farooq

Software reliability growth models (SRGMs) are employed to help predict and estimate reliability during the software development process. Many SRGMs proposed in the past claim to improve on previous models. While some earlier research raised concerns about the delayed S-shaped SRGM, later work indicated that the model performs well when an appropriate testing-effort function (TEF) is used. This paper proposes and evaluates an approach that incorporates the log-logistic (LL) testing-effort function into delayed S-shaped SRGMs with imperfect debugging, based on a non-homogeneous Poisson process (NHPP). The model parameters are estimated by the weighted least squares estimation (WLSE) and maximum likelihood estimation (MLE) methods. Experimental results obtained by applying the model to real data sets, together with the statistical methods used for analysis, are presented. The results suggest that the proposed model performs better than other existing models, and the authors conclude that the log-logistic TEF is appropriate for incorporation into delayed S-shaped software reliability growth models.
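The effort-driven structure can be sketched as follows: a log-logistic TEF supplies the cumulative testing effort W(t), which then drives the delayed S-shaped mean value function. The paper's imperfect-debugging terms are omitted in this sketch, and all parameter values in the test are illustrative:

```python
import math

def loglogistic_effort(t, N, lam, kappa):
    # Cumulative testing effort W(t) under a log-logistic TEF:
    # N = total effort, lam = scale, kappa = shape
    x = (lam * t) ** kappa
    return N * x / (1.0 + x)

def delayed_s_mvf(t, a, b, N, lam, kappa):
    # Delayed S-shaped mean value function driven by the effort W(t)
    # (imperfect-debugging terms omitted in this sketch)
    w = loglogistic_effort(t, N, lam, kappa)
    return a * (1.0 - (1.0 + b * w) * math.exp(-b * w))
```

Since W(t) saturates at N, the fault count saturates below a·(1 − (1 + bN)·e^(−bN)) rather than at the full fault content a, which is the characteristic effect of a bounded testing-effort function.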


Author(s): Maskura Nafreen, Lance Fiondella

Researchers have proposed several software reliability growth models, many of which possess complex parametric forms. In practice, software reliability growth models should exhibit a balance between predictive accuracy and other statistical measures of goodness of fit, yet past studies have not always performed such balanced assessment. This paper proposes a framework for software reliability growth models possessing a bathtub-shaped fault detection rate and derives stable and efficient expectation conditional maximization algorithms to enable the fitting of these models. The stages of the bathtub are interpreted in the context of the software testing process. The illustrations compare multiple bathtub-shaped and reduced model forms, including classical models with respect to predictive and information theoretic measures. The results indicate that software reliability growth models possessing a bathtub-shaped fault detection rate outperformed classical models on both types of measures. The proposed framework and models may therefore be a practical compromise between model complexity and predictive accuracy.
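The paper's specific bathtub parameterization and its expectation conditional maximization updates are not reproduced here. One illustrative bathtub-shaped rate sums a decaying burn-in term, a useful-life constant, and a late linear increase; the NHPP mean value function then follows from the integrated rate H(t):

```python
import math

def bathtub_rate(t, d0, d1, c, r):
    # Illustrative bathtub: early burn-in decay + useful-life constant
    # + late increase (a stand-in for the paper's parametric form)
    return d0 * math.exp(-d1 * t) + c + r * t

def bathtub_mvf(t, a, d0, d1, c, r):
    # NHPP mean value function m(t) = a * (1 - exp(-H(t))), where H(t)
    # is the integral of bathtub_rate from 0 to t
    H = (d0 / d1) * (1.0 - math.exp(-d1 * t)) + c * t + 0.5 * r * t * t
    return a * (1.0 - math.exp(-H))
```

Differentiating m(t) recovers dm/dt = rate(t)·(a − m(t)), so the three bathtub stages show up directly as a fast early detection phase, a slow middle, and a late pickup, matching the testing-process interpretation in the abstract.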

