AN ASSESSMENT OF TESTING COST WITH EFFORT-DEPENDENT FDP AND FCP UNDER LEARNING EFFECT: A GENETIC ALGORITHM APPROACH

Author(s):  
VIJAY KUMAR ◽  
SUNIL KUMAR KHATRI ◽  
HITESH DUA ◽  
MANISHA SHARMA ◽  
PARIDHI MATHUR

Software testing involves verification and validation of the software to meet the requirements elicited from customers in the earlier phases and to subsequently increase software reliability. Around half of the resources, such as manpower and CPU time, are consumed and a major portion of the total cost of developing the software is incurred in the testing phase, making it the most crucial and time-consuming phase of the software development lifecycle (SDLC). The fault detection process (FDP) and fault correction process (FCP) are likewise important processes in the SDLC. A number of software reliability growth models (SRGMs) have been proposed over the last four decades to capture the time lag between detected and corrected faults, but most of these models assume a static environment. The purpose of this paper is to allocate resources optimally so as to minimize cost during the testing phase using the FDP and FCP under a dynamic environment. An optimization policy based on optimal control theory is proposed for resource allocation with the objective of minimizing cost. Further, a genetic algorithm is applied to obtain the optimal detection and correction efforts that minimize the cost. A numerical example is given in support of the theoretical results. The experimental results help the project manager identify the contribution of the model parameters and their weights.
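The abstract does not give the cost model or the genetic-algorithm details, so the following is only a minimal sketch of the general idea: a hypothetical testing-cost function over detection effort and correction effort (exponential NHPP-style detection and correction curves, with an illustrative penalty for uncorrected faults), minimized by a simple genetic algorithm with selection, arithmetic crossover, and Gaussian mutation. All parameter names and values are assumptions for illustration, not the authors' formulation.

```python
import math
import random

def cost(w_d, w_c, a=100.0, b=0.05, c=0.04,
         c_d=2.0, c_c=3.0, c_p=50.0):
    # Hypothetical cost: per-unit prices for detection effort w_d and
    # correction effort w_c, plus a penalty proportional to the
    # expected number of faults left uncorrected.
    detected = a * (1 - math.exp(-b * w_d))
    corrected = detected * (1 - math.exp(-c * w_c))
    return c_d * w_d + c_c * w_c + c_p * (a - corrected)

def ga_minimize(n_pop=40, n_gen=60, bounds=(0.0, 200.0), seed=1):
    # Minimal real-coded genetic algorithm over (w_d, w_c).
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [(rng.uniform(lo, hi), rng.uniform(lo, hi)) for _ in range(n_pop)]
    for _ in range(n_gen):
        pop.sort(key=lambda p: cost(*p))
        elite = pop[: n_pop // 4]            # selection: keep best quarter
        children = []
        while len(elite) + len(children) < n_pop:
            p1, p2 = rng.sample(elite, 2)
            alpha = rng.random()             # arithmetic crossover
            child = [alpha * p1[i] + (1 - alpha) * p2[i] for i in range(2)]
            if rng.random() < 0.3:           # Gaussian mutation, clamped
                i = rng.randrange(2)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 10)))
            children.append(tuple(child))
        pop = elite + children
    best = min(pop, key=lambda p: cost(*p))
    return best, cost(*best)
```

Under these assumed parameters the GA drives the cost well below the no-testing baseline `cost(0, 0)`; the same skeleton applies once the paper's actual effort-dependent FDP/FCP cost function is substituted.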

2021 ◽  
Vol 11 (15) ◽  
pp. 6998
Author(s):  
Qiuying Li ◽  
Hoang Pham

Many NHPP software reliability growth models (SRGMs) have been proposed to assess software reliability during the past 40 years, but most of them have modeled the fault detection process (FDP) in one of two ways. The first is to ignore the fault correction process (FCP), i.e., to assume that faults are removed instantaneously once the failure they cause is detected. In real software development this assumption is unrealistic: fault removal takes time, faults cannot always be removed at once, and detected failures become increasingly difficult to correct as testing progresses. The second is to model the FCP through a time delay between fault detection and fault correction, where the delay has been assumed to be a constant, a function of time, or a random variable following some distribution. In this paper, some useful approaches to modeling the dual fault detection and correction processes are discussed. The dependence between the fault counts of the two processes is considered instead of a fault-correction time delay. A model is proposed that integrates the fault detection and fault correction processes and incorporates a fault introduction rate and testing coverage rate into the software reliability evaluation. The model parameters are estimated using the Least Squares Estimation (LSE) method. The descriptive and predictive performance of the proposed model and of existing NHPP SRGMs is investigated on three real data sets against four criteria. The results show that the new model yields significantly better reliability estimation and prediction.
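The proposed model itself is not given in the abstract, so the sketch below only illustrates the basic ingredients of a dual-process NHPP fit with LSE: a Goel-Okumoto-style mean value function for detection paired with a delayed S-shaped mean value function for correction (a classic pairing, not necessarily this paper's), with both processes fitted jointly by least squares over a coarse parameter grid. Data and grids are illustrative assumptions.

```python
import math

def m_detect(t, a, b):
    # Exponential (Goel-Okumoto style) mean value function for detection.
    return a * (1 - math.exp(-b * t))

def m_correct(t, a, b):
    # Delayed S-shaped mean value function: corrections lag detections.
    return a * (1 - (1 + b * t) * math.exp(-b * t))

def lse_fit(times, detected, corrected, a_grid, b_grid):
    # Joint least squares over both processes on a coarse parameter grid;
    # a real fit would use a nonlinear solver instead.
    best = None
    for a in a_grid:
        for b in b_grid:
            sse = sum((m_detect(t, a, b) - d) ** 2 +
                      (m_correct(t, a, b) - c) ** 2
                      for t, d, c in zip(times, detected, corrected))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

# Noiseless synthetic data generated from a = 120, b = 0.15 for illustration.
times = [float(t) for t in range(1, 21)]
detected = [m_detect(t, 120.0, 0.15) for t in times]
corrected = [m_correct(t, 120.0, 0.15) for t in times]
a_hat, b_hat = lse_fit(times, detected, corrected,
                       [100.0, 110.0, 120.0, 130.0],
                       [0.10, 0.15, 0.20])
```

Because the synthetic data are noiseless and the grid contains the true parameters, the fit recovers them exactly; with real failure data the residual would be minimized rather than driven to zero.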


2021 ◽  
Vol 9 (3) ◽  
pp. 23-41
Author(s):  
Nesar Ahmad ◽  
Aijaz Ahmad ◽  
Sheikh Umar Farooq

Software reliability growth models (SRGMs) are employed to predict and estimate reliability during the software development process. Many SRGMs proposed in the past claim to improve on previous models. While some earlier research raised concerns about the use of the delayed S-shaped SRGM, researchers later showed that the model performs well when an appropriate testing-effort function (TEF) is used. This paper proposes and evaluates an approach that incorporates the log-logistic (LL) testing-effort function into delayed S-shaped SRGMs with imperfect debugging, based on a non-homogeneous Poisson process (NHPP). The model parameters are estimated by weighted least squares estimation (WLSE) and maximum likelihood estimation (MLE). Experimental results obtained by applying the model to real data sets, together with the statistical methods used for analysis, are presented. The results suggest that the proposed model outperforms other existing models, and the authors conclude that the log-logistic TEF is well suited for incorporation into delayed S-shaped software reliability growth models.
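As a rough illustration of the idea of driving an SRGM by a testing-effort function, the sketch below uses a commonly cited log-logistic cumulative-effort form, W(t) = N(λt)^κ / (1 + (λt)^κ), and substitutes W(t) for calendar time in the delayed S-shaped mean value function. The exact functional forms and parameters used in the paper may differ; everything here is an assumed, simplified (perfect-debugging) version.

```python
import math

def tef_loglogistic(t, N, lam, kappa):
    # Cumulative log-logistic testing effort consumed by time t;
    # N is the total effort eventually expended.
    x = (lam * t) ** kappa
    return N * x / (1 + x)

def m_delayed_s(t, a, b, N, lam, kappa):
    # Delayed S-shaped mean value function driven by the TEF:
    # fault detection depends on effort W(t) rather than raw time.
    w = tef_loglogistic(t, N, lam, kappa)
    return a * (1 - (1 + b * w) * math.exp(-b * w))

# Illustrative parameters: a = total faults, b = detection rate per effort.
curve = [m_delayed_s(t, 100.0, 0.08, 60.0, 0.2, 2.0)
         for t in range(0, 50)]
```

The resulting curve starts at zero, rises in the characteristic S shape, and saturates below the total fault content a, since only the finite effort N is ever expended.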


Author(s):  
Kuldeep CHAUDHARY ◽  
P. C. JHA

In this paper, we discuss software reliability growth models for modular software systems using testing effort and study the optimal testing-effort intensity for each module. The main goal is to minimize the cost of software development under a given budget constraint on testing expenditure. We describe the dynamics of fault removal, incorporating the idea of leading/independent and dependent faults in a modular software system under the assumption that each module is tested independently. The problem is formulated as an optimal control problem, and a solution is obtained using the Pontryagin Maximum Principle.
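The full optimal control formulation is beyond an abstract, but the underlying budget-allocation trade-off can be sketched numerically: assuming (hypothetically) that the faults remaining in a module decay exponentially in the effort spent on it, an exhaustive search over discretized splits of the budget between two independently tested modules shows where the marginal returns balance. The module parameters and decay form are illustrative assumptions, not the paper's model.

```python
import math

def remaining_faults(effort, a, b):
    # Assumed exponential removal: a faults initially, removed at
    # per-unit-effort rate b, leaving a*exp(-b*effort).
    return a * math.exp(-b * effort)

def allocate(budget, modules, step=1.0):
    # Exhaustive search over discretized splits of the budget between
    # two modules; enough to illustrate the allocation trade-off that
    # the optimal control formulation resolves analytically.
    best = None
    x = 0.0
    while x <= budget:
        total = (remaining_faults(x, *modules[0]) +
                 remaining_faults(budget - x, *modules[1]))
        if best is None or total < best[0]:
            best = (total, x, budget - x)
        x += step
    return best

# Module 1 has more faults but a lower removal rate than module 2.
modules = [(100.0, 0.05), (60.0, 0.08)]
best = allocate(100.0, modules)
```

With these assumed numbers the search assigns the larger share of the budget to module 1, since its larger fault content outweighs its slower removal rate at the optimum.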


Author(s):  
Vishal Pradhan ◽  
Ajay Kumar ◽  
Joydip Dhar

The fault reduction factor (FRF) is a significant parameter for controlling software reliability growth. It is the ratio of net fault corrections to the number of failures encountered. In the literature, many factors affect the behaviour of the FRF, namely fault dependency, debugging time lag, human learning behaviour and imperfect debugging. Besides this, several distributions, for example the inflection S-shaped, Weibull and Exponentiated-Weibull, have been used as FRFs. However, these standard distributions are not flexible enough to describe the observed behaviour of FRFs. This paper proposes three different software reliability growth models (SRGMs) that incorporate a three-parameter generalized inflection S-shaped (GISS) distribution as the FRF. To make the SRGMs realistic, time lags between the fault detection and fault correction processes are also incorporated. The study proposes two models for single-release software, whereas the third model is designed for multi-release software. The first model assumes perfect debugging, while the other two assume an imperfect debugging environment. Extensive experiments are conducted for the proposed models on six single-release and one multi-release data sets. The choice of the GISS distribution as the FRF improves software reliability evaluation in comparison with existing models in the literature. Finally, the development cost and optimal release time are calculated in a perfect debugging environment.
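The abstract does not state the GISS parameterization, so the sketch below uses one plausible three-parameter generalization of the classic inflection S-shaped CDF, adding a shape exponent c inside the exponential (the classic form is recovered at c = 1). This is an assumed form for illustration only; the paper's GISS distribution may be parameterized differently.

```python
import math

def giss_frf(t, b, beta, c):
    # Assumed three-parameter generalized inflection S-shaped curve used
    # as a time-varying fault reduction factor in [0, 1): b is a scale
    # parameter, beta the inflection parameter, and c an extra shape
    # parameter generalizing the classic inflection S-shaped CDF (c = 1).
    e = math.exp(-((b * t) ** c))
    return (1 - e) / (1 + beta * e)

# Illustrative FRF trajectory: starts at 0 and rises toward 1,
# capturing a learning effect during debugging.
frf = [giss_frf(t, 0.3, 2.0, 1.5) for t in range(0, 20)]
```

Any curve of this family is monotonically increasing and bounded in [0, 1), which is what makes it usable as an FRF multiplying the failure intensity in the detection/correction equations.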

