Measuring software reliability under the influence of an infected patch

Author(s):  
Jasmine Kaur ◽  
Adarsh Anand ◽  
Ompal Singh ◽  
Vijay Kumar

Patching gives software firms a way to deal with leftover bugs and thereby keep track of their product after release. More and more software firms are making use of this concept of prolonged testing. But this framework of releasing not-fully-tested software to the market carries substantial risk. Vendors' haste in releasing a software patch can be dangerous, since there is a chance that a firm releases an infected patch. An infected patch may increase the bug occurrence and error count and may make the software more vulnerable. The current work presents an understanding of such situations through a mathematical modeling framework, wherein the distinct behavior of testers (during in-house testing and field testing) and users is described. The proposed model has been validated on two software failure data sets: Tandem Computers and the Brazilian Electronic Switching System, TROPICO R-1500.

Author(s):  
D. DAMODARAN ◽  
B. RAVIKUMAR ◽  
VELIMUTHU RAMACHANDRAN

Reliability statistics is divided into two mutually exclusive camps: Bayesian and classical. The classical statistician believes that all distribution parameters are fixed values, whereas Bayesians believe that parameters are random variables with distributions of their own. The Bayesian approach has been applied to software failure data, and as a result several Bayesian software reliability models have been formulated over the last three decades. A Bayesian approach to software reliability measurement was taken by Littlewood and Verrall [A Bayesian reliability growth model for computer software, Appl. Stat. 22 (1973) 332–346], who modeled the hazard rate as a random variable. In this paper, a new Bayesian software reliability model is proposed by combining two prior distributions for predicting the total number of failures and the next failure time of the software. The popular and realistic Jelinski and Moranda (J&M) model is taken as the base for this model by applying the Bayesian approach. It is assumed that one parameter of the J&M model, N, the number of faults in the software, follows a uniform prior distribution, and that the failure rate parameter Φi follows a gamma prior distribution. The joint prior p(N, Φi) is obtained by combining these two priors. In this Bayesian model, the times between failures follow an exponential distribution whose failure rate parameter is stochastically decreasing over successive failure time intervals; the reasoning behind this assumption is that the software tester intends to improve software quality through the correction of each failure. With the Bayesian approach, the predictive distribution is arrived at by combining the exponential times between failures (TBFs) with the joint prior p(N, Φi). For parameter estimation, the maximum likelihood estimation (MLE) method has been adopted. The proposed model has been applied to two sets of actual software failure data, and it has been observed that the failure times predicted by the proposed model are closer to the actual failure times. The predicted failure times based on the Littlewood–Verrall (LV) model are also computed. The sum of squared errors (SSE) criterion has been used to compare the actual and predicted times between failures under the proposed model and the LV model.
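
As a hedged illustration of the construction described above, the sketch below evaluates the joint posterior of the J&M parameters on a grid, with a uniform prior on N and a gamma prior on the rate parameter, and uses it to predict the next failure time. The inter-failure times and prior hyperparameters are hypothetical, and the grid evaluation stands in for the paper's MLE-based estimation step.

```python
import numpy as np
from scipy import stats

# Hypothetical inter-failure times; the paper's two data sets are not reproduced here.
tbf = np.array([9.0, 12.0, 11.0, 4.0, 7.0, 2.0, 5.0, 8.0, 5.0, 7.0])
k = len(tbf)

# Assumed priors: N ~ Uniform{k+1, ..., 199}, phi ~ Gamma(shape=2, scale=0.02).
N_grid = np.arange(k + 1, 200)
phi_grid = np.linspace(1e-4, 0.2, 400)

log_post = np.empty((N_grid.size, phi_grid.size))
for row, N in enumerate(N_grid):
    # J&M hazard for the i-th failure (1-indexed): lambda_i = phi * (N - i + 1).
    rates = phi_grid[None, :] * (N - np.arange(k))[:, None]
    log_post[row] = np.sum(np.log(rates) - rates * tbf[:, None], axis=0) \
                    + stats.gamma.logpdf(phi_grid, a=2.0, scale=0.02)

post = np.exp(log_post - log_post.max())
post /= post.sum()

# Predictive mean of the next time between failures: E[1 / (phi * (N - k))].
pred = np.sum(post / (phi_grid[None, :] * (N_grid[:, None] - k)))
print(f"posterior-mean prediction of the next TBF: {pred:.2f}")
```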


Author(s):  
P. ROY ◽  
G. S. MAHAPATRA ◽  
K. N. DEY

In this paper, we propose a non-homogeneous Poisson process (NHPP) based software reliability growth model (SRGM) in the presence of modified imperfect debugging and fault generation phenomena. Owing to the complexity of software systems and an incomplete understanding of the software, the testing team may not be able to remove a fault perfectly on observing a failure, and the original fault may remain or be replaced by another fault, causing error generation. We propose an exponentially increasing fault content function and a constant fault detection rate. Under the proposed model, the total fault content of the software increases rapidly at the beginning of the testing process and grows only gradually toward its end, because the efficiency of the testing team increases with testing time. We use the maximum likelihood estimation method to estimate the unknown parameters of the proposed model. The applicability of the proposed model and comparisons with established models in terms of goodness of fit and predictive validity are presented using five known software failure data sets. Experimental results show that the proposed model fits the real failure data sets better and predicts the future behavior of software development more accurately than traditional SRGMs.
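
A concrete functional form is not fixed by the abstract. Assuming fault content a(t) = a·e^{αt} and a constant detection rate b, the differential equation dm/dt = b(a(t) − m(t)), m(0) = 0, yields m(t) = ab/(α+b)·(e^{αt} − e^{−bt}). The sketch below fits this assumed form by maximizing the NHPP log-likelihood on hypothetical failure times; it is an illustration, not the paper's exact model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical cumulative failure times (the paper's five data sets are not reproduced).
t = np.array([5.0, 9.0, 16.0, 22.0, 31.0, 42.0, 51.0, 66.0, 83.0, 104.0, 130.0, 162.0])
T = t[-1]

def mean_value(t, a, b, alpha):
    # m(t) solving dm/dt = b*(a*exp(alpha*t) - m), m(0) = 0: an assumed concrete form
    # consistent with "exponentially increasing fault content" and "constant detection rate".
    return a * b / (alpha + b) * (np.exp(alpha * t) - np.exp(-b * t))

def intensity(t, a, b, alpha):
    # Failure intensity lambda(t) = m'(t).
    return a * b / (alpha + b) * (alpha * np.exp(alpha * t) + b * np.exp(-b * t))

def neg_loglik(theta):
    a, b, alpha = np.exp(theta)  # optimize on the log scale to keep parameters positive
    # NHPP log-likelihood: sum(log lambda(t_i)) - m(T); return its negative.
    return mean_value(T, a, b, alpha) - np.sum(np.log(intensity(t, a, b, alpha)))

res = minimize(neg_loglik, x0=np.log([20.0, 0.01, 0.001]), method="Nelder-Mead")
a, b, alpha = np.exp(res.x)
print(f"a={a:.1f}, b={b:.4f}, alpha={alpha:.5f}, fitted m(T)={mean_value(T, a, b, alpha):.1f}")
```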


2017 ◽  
Vol 2017 ◽  
pp. 1-6 ◽  
Author(s):  
Subburaj Ramasamy ◽  
Indhurani Lakshmanan

Reliability is one of the quantifiable software quality attributes. Software reliability growth models (SRGMs) are used to assess the reliability achieved at different times during testing. Traditional time-based SRGMs may not be accurate enough in situations where test effort varies with time. To overcome this lacuna, test effort has been used instead of time in SRGMs. In the past, finite test effort functions were proposed, which may not be realistic, since as testing time grows without bound the cumulative test effort should also grow without bound. Hence, in this paper we propose an infinite test effort function in conjunction with a classical nonhomogeneous Poisson process (NHPP) model. We use an artificial neural network (ANN) to train the proposed model with software failure data. Here it is possible to obtain many sets of weights for the same model that describe the past failure data equally well; we use a machine learning approach to select the set of weights that describes both the past and the future data well. We compare the performance of the proposed model with that of an existing model using practical software failure data sets. The proposed log-power TEF-based SRGM describes all types of failure data equally well, improves the accuracy of parameter estimation compared with existing TEFs, and can also be used for software release time determination.
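
The sketch below shows one plausible reading of the approach: a log-power test effort function W(t) = a·(ln(1+t))^b, which is unbounded in t, driving a Goel-Okumoto-type NHPP mean value function. The weekly failure counts and starting values are hypothetical, and least squares is used here in place of the paper's ANN-based training.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical weekly cumulative failure counts; the paper's data sets are not reproduced.
weeks = np.arange(1, 13)
cum_failures = np.array([4, 9, 15, 20, 26, 30, 33, 36, 38, 40, 41, 42])

def test_effort(t, a, b):
    # Assumed log-power test effort function: W(t) -> infinity as t -> infinity.
    return a * np.log1p(t) ** b

def mean_value(t, N, r, a, b):
    # Goel-Okumoto-style NHPP driven by cumulative test effort instead of calendar time.
    return N * (1.0 - np.exp(-r * test_effort(t, a, b)))

def sse(theta):
    N, r, a, b = np.exp(theta)  # log scale keeps all parameters positive
    return np.sum((cum_failures - mean_value(weeks, N, r, a, b)) ** 2)

res = minimize(sse, x0=np.log([50.0, 0.05, 5.0, 1.2]), method="Nelder-Mead")
N, r, a, b = np.exp(res.x)
print(f"N={N:.1f}, r={r:.4f}, TEF a={a:.2f}, b={b:.2f}")
```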


Author(s):  
P. K. KAPUR ◽  
ANSHU GUPTA ◽  
P. C. JHA

Since the early 1970s, numerous software reliability growth models (SRGMs) have been proposed in the literature to estimate software reliability measures such as the remaining number of faults, the failure rate, and reliability growth during the testing phase. These models are applied to the software failure data collected during the testing phase and are then often used to predict software failures in the operational phase. In practice, it is difficult to simulate a testing environment that mirrors the diverse conditions of the operational environment, so the environment simulated during the testing phase may not resemble the conditions that exist in the operational phase. During the testing phase, testing is performed under a controlled environment, whereas during the operational phase the failure phenomenon depends on the operational environment and the usage of the software. Therefore, an SRGM developed for the testing phase is not suitable for estimating reliability growth during the operational phase. In this paper, we propose a generalized software reliability growth model that can be used to estimate the number of faults during the testing phase and can easily be extended to the operational phase. In the testing phase, it is appropriate to estimate reliability growth with respect to the amount of testing resources spent, whereas in the operational phase the effort to be spent on removing a fault reported by a user is fixed by the developer; the number of failures detected, and hence the reliability growth, during the user phase depends on the usage of the software. The proposed model appropriately incorporates these changes. Further, we categorize software into two categories, (a) project-type and (b) product-type software, and link an appropriate usage function to each. To describe the fault removal phenomenon, an imperfect debugging environment is incorporated into the model building. The paper highlights an interdisciplinary mathematical modeling approach spanning software reliability engineering and marketing. The proposed model is validated for both phases using software failure data sets obtained from different sources, and it describes the failure phenomenon for these data sets fairly well.
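
As a hedged sketch of the two-phase idea, the code below pairs an effort-driven mean value function for the testing phase with a usage-driven one for the operational phase, using a Bass diffusion curve as an assumed usage function for product-type software (reflecting the marketing link mentioned in the abstract). All functional forms and parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def m_testing(W, a, b):
    # Testing phase: exponential reliability growth in cumulative testing effort W
    # (a Goel-Okumoto-type form; an assumed instance of the generalized model).
    return a * (1.0 - np.exp(-b * W))

def usage_product(t, M, p, q):
    # Assumed usage function for product-type software: cumulative adopters
    # from a Bass diffusion curve (M = market size, p/q = innovation/imitation).
    e = np.exp(-(p + q) * t)
    return M * (1.0 - e) / (1.0 + (q / p) * e)

def m_operational(t, a_left, c, M, p, q):
    # Operational phase: detection of the faults left after testing is driven
    # by software usage rather than by testing effort.
    return a_left * (1.0 - np.exp(-c * usage_product(t, M, p, q)))

# Example: faults found after 100 units of testing effort, then 24 months in the field.
a, b = 120.0, 0.03                      # hypothetical parameter values
found_in_test = m_testing(100.0, a, b)
left = a - found_in_test
print(f"found in testing: {found_in_test:.1f}, remaining: {left:.1f}")
print(f"found after 24 months of use: {m_operational(24.0, left, 0.002, 5000.0, 0.01, 0.3):.1f}")
```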


2013 ◽  
Vol 11 (1) ◽  
pp. 2161-2168
Author(s):  
Sridevi Gutta ◽  
Satya R Prasad

The reliability of the software process can be monitored efficiently using statistical process control (SPC). SPC is the application of statistical techniques to control a process: it studies the best ways of describing and analyzing data and then draws conclusions or inferences from the available data. With the help of SPC, the software development team can monitor the software failure process and identify the actions to be taken to assure better software reliability. This paper provides a control mechanism based on cumulative observations of interval domain data using the mean value function of the Pareto Type IV distribution, which is based on a non-homogeneous Poisson process (NHPP). The unknown parameters of the model are estimated using the maximum likelihood estimation approach. It also presents an analysis of failure data sets at a particular point and compares the Pareto Type II and Pareto Type IV models.
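
Below is a minimal sketch of such a control mechanism, assuming the Pareto Type IV mean value function m(t) = a·F(t) and the conventional SPC probability limits at the 0.00135 and 0.99865 quantiles. The parameter estimates and failure times are hypothetical, and the paper's chart construction may differ in detail.

```python
import numpy as np

def pareto4_cdf(t, mu, sigma, gamma, alpha):
    # Pareto Type IV CDF: F(t) = 1 - [1 + ((t - mu)/sigma)^(1/gamma)]^(-alpha), t > mu.
    z = np.maximum((t - mu) / sigma, 0.0)
    return 1.0 - (1.0 + z ** (1.0 / gamma)) ** (-alpha)

def mean_value(t, a, mu, sigma, gamma, alpha):
    # NHPP mean value function m(t) = a * F(t).
    return a * pareto4_cdf(t, mu, sigma, gamma, alpha)

# Hypothetical MLE estimates and failure times (the estimation step is not shown).
a, mu, sigma, gamma, alpha = 45.0, 0.0, 300.0, 1.0, 2.0
fail_times = np.array([30.0, 80.0, 140.0, 190.0, 260.0, 330.0, 420.0, 530.0, 810.0])

# Probability limits at the conventional 3-sigma-equivalent SPC quantiles.
ucl, cl, lcl = 0.99865 * a, 0.5 * a, 0.00135 * a
for t_i, m_i in zip(fail_times, mean_value(fail_times, a, mu, sigma, gamma, alpha)):
    status = "in control" if lcl <= m_i <= ucl else "out of control"
    print(f"t={t_i:6.0f}  m(t)={m_i:6.2f}  {status}")
```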


Author(s):  
Shinji Inoue ◽  
Shigeru Yamada

We discuss software reliability modeling that reflects the actual situation in a testing phase, based on a Markovian software reliability modeling framework. Concretely, we discuss Markovian imperfect debugging modeling for software reliability assessment with multiple changes of the testing environment. The testing time at which the testing environment changes is called a change-point. Taking the effect of change-points into account in software reliability growth modeling is expected to improve the accuracy of software reliability assessment, because the stochastic characteristics of the software failure-occurrence or fault-detection phenomenon are often observed to change during an actual testing phase. Numerical examples of software reliability assessment based on our proposed approach are shown using actual software failure-occurrence time data. Further, we discuss the usefulness of considering the effects of imperfect debugging and multiple change-points in software reliability modeling by comparing the estimated behavior of the mean time between software failures under our model and under existing related models.
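
The sketch below illustrates only the change-point idea, not the full Markovian imperfect debugging framework: it assumes exponential times between failures with a single change-point and selects the change-point and segment failure rates by maximum likelihood on hypothetical data.

```python
import numpy as np

# Hypothetical times between failures; a single change-point is assumed here,
# while the paper treats multiple change-points in a Markovian framework.
tbf = np.array([12.0, 9.0, 15.0, 11.0, 14.0, 6.0, 5.0, 7.0, 4.0, 6.0, 3.0, 5.0])

def loglik(rate, xs):
    # Log-likelihood of i.i.d. exponential times between failures.
    return len(xs) * np.log(rate) - rate * np.sum(xs)

best = None
for cp in range(2, len(tbf) - 1):                            # candidate change-points
    r1, r2 = 1.0 / tbf[:cp].mean(), 1.0 / tbf[cp:].mean()    # MLE rate on each segment
    ll = loglik(r1, tbf[:cp]) + loglik(r2, tbf[cp:])
    if best is None or ll > best[0]:
        best = (ll, cp, r1, r2)

ll, cp, r1, r2 = best
print(f"change-point after failure {cp}: MTBF {1/r1:.1f} -> {1/r2:.1f} (loglik {ll:.2f})")
```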


2013 ◽  
Vol 462-463 ◽  
pp. 1097-1101
Author(s):  
Jun Ai ◽  
Jing Wei Shang ◽  
Yang Liu

The technology of software reliability quantitative assessment (SRQA) is based on failure data collected in software reliability tests or in actual use. However, software reliability testing has a long test cycle and it is difficult to collect enough failure data, which limits the use of SRQA in actual projects. A large number of software failures found during software growth testing cannot be used, because that process has nothing to do with actual use or because the failure times were not recorded. In this paper, a software reliability virtual testing technology based on conventional software failure data is presented. Based on the internal association between the input space of software reliability testing and the failure data found in conventional software testing, a data matching algorithm is proposed to obtain possible failure times in software reliability testing by matching conventional failure data against the input space. Finally, a simulated engine control software system is used as the experimental subject to verify the feasibility and effectiveness of the method.
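
The abstract does not give the matching algorithm in detail; the sketch below is one loose reading of it, in which test inputs drawn from a hypothetical operational profile are matched against the input classes that triggered conventional failures, yielding virtual failure times.

```python
import random

random.seed(1)

# Hypothetical operational profile over input classes (probabilities sum to 1).
profile = {"startup": 0.1, "cruise": 0.6, "transient": 0.2, "shutdown": 0.1}

# Failures recorded in conventional testing, keyed by the input class that triggers
# them (the "internal data association"); their failure times were never recorded.
conventional_failures = {"transient": 3, "shutdown": 1}   # remaining trigger counts

# Virtual reliability test: draw inputs per the profile; when a drawn input matches
# the trigger class of an unmatched conventional failure, emit a virtual failure time.
virtual_failure_times = []
classes, weights = zip(*profile.items())
for step in range(1, 2001):                # each step = one test case / one time unit
    c = random.choices(classes, weights)[0]
    if conventional_failures.get(c, 0) > 0:
        conventional_failures[c] -= 1
        virtual_failure_times.append(step)

print("virtual failure times:", virtual_failure_times)
```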


Mathematics ◽  
2019 ◽  
Vol 7 (12) ◽  
pp. 1215 ◽  
Author(s):  
Hoang Pham

Selecting the best model from a set of candidates for a given set of data is obviously not an easy task. In this paper, we propose a new criterion that, in addition to minimizing the sum of squared errors, imposes a larger penalty when too many coefficients (or estimated parameters) are added to the model from too small a sample in the presence of too much noise. We discuss several real applications that illustrate the proposed criterion and compare its results with those of some existing criteria on a simulated data set and several real data sets, including advertising budget data, newly collected heart blood pressure health data sets, and software failure data.
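
The new criterion itself is not reproduced here. As a stand-in, the sketch below sets up the comparison the paper argues for on simulated data: with a small, noisy sample, an over-parameterized fit lowers the SSE, and the classical AIC/BIC penalties (shown for contrast) push back against the extra coefficients.

```python
import numpy as np

# Hypothetical data: noisy linear truth, with a line and a quartic as candidates.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12)                     # deliberately small sample
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, x.size)

def fit_and_score(degree):
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    n, k = x.size, degree + 1
    sse = float(np.sum(resid ** 2))
    aic = n * np.log(sse / n) + 2 * k          # Gaussian-error AIC, up to constants
    bic = n * np.log(sse / n) + k * np.log(n)  # Gaussian-error BIC, up to constants
    return sse, aic, bic

for d in (1, 4):
    sse, aic, bic = fit_and_score(d)
    print(f"degree {d}: SSE={sse:.3f}  AIC={aic:.2f}  BIC={bic:.2f}")
```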


Author(s):  
NORMAN SCHNEIDEWIND

Feedback control systems are used in many walks of life, including automobiles, airplanes, and nuclear reactors. These are all physical systems, albeit with a considerable dose of software. It occurred to us that there is no reason feedback control could not be applied to the software process, specifically to reliability analysis, test, and prediction. Thus, we constructed a model of such a system and analyzed whether feedback control, in the form of error signals representing deviations from desired behavior, could bring observed behavior into conformance with specifications. To conduct the experiment, we used NASA Space Shuttle software failure data and analyzed the feedback when no faults were removed versus when faults were removed. In making this evaluation, two software reliability models were used: the Musa Logarithmic Model and the Schneidewind Model. In general, feedback based on fault removal allowed the software reliability process to provide more accurate predictions and, hence, finer control over the process.
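
Below is a minimal sketch of the feedback idea, assuming the Musa-Okumoto logarithmic intensity lambda(t) = lambda0/(lambda0*theta*t + 1) and a proportional controller that allocates test time in response to the error signal; all parameter values and the controller gain are hypothetical.

```python
# Proportional feedback on test time using the Musa-Okumoto logarithmic model;
# the parameter values below are hypothetical, not the Shuttle data's estimates.
lambda0, theta = 10.0, 0.05     # initial failure intensity, failure decay parameter
target = 0.5                    # desired failure intensity (failures per unit time)

def intensity(t):
    return lambda0 / (lambda0 * theta * t + 1.0)

t, gain = 0.0, 20.0
for step in range(200):
    error = intensity(t) - target       # error signal: deviation from desired behavior
    if abs(error) <= 0.005:
        break
    t = max(0.0, t + gain * error)      # feedback: adjust test time toward the target

print(f"test time {t:.1f} reaches intensity {intensity(t):.3f} (target {target})")
```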


2013 ◽  
Vol 634-638 ◽  
pp. 3998-4003
Author(s):  
Qiu Ying Li ◽  
Hui Qi Zhang

Software reliability failure data is the foundation of quantitative software reliability evaluation, and it has an important influence on the accuracy of the evaluation. However, there is always noise in the original software reliability failure data, which affects the accuracy of the reliability evaluation. Based on an analysis of the importance and sources of failure data in software reliability testing and a classification of software failure data, this paper puts forward a collection method for reliability failure data and a data preprocessing method comprising data cleaning and data analysis. Finally, an example demonstrates the reduction of data noise and the improvement in data quality produced by the preprocessing methods.
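
Here is a minimal data-cleaning sketch in the spirit of the preprocessing described above, on hypothetical raw records; the specific cleaning rules (dropping duplicates, invalid timestamps, and out-of-order entries) are generic illustrations rather than the paper's method.

```python
import numpy as np

# Hypothetical raw cumulative failure times with typical noise: duplicates,
# out-of-order records, and an impossible non-positive timestamp.
raw = [12.0, 30.0, 30.0, 25.0, 47.0, -1.0, 61.0, 61.0, 88.0]

def clean_failure_times(records):
    ts = sorted(t for t in set(records) if t > 0)   # drop duplicates and invalid times
    tbf = np.diff([0.0] + ts)                       # derive times between failures
    return ts, tbf

times, tbf = clean_failure_times(raw)
print("cleaned cumulative times:", times)
print("times between failures:  ", tbf)
```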

