SRGM Dependent Learning based Testing Effort for Refinement

Author(s):  
S. Rumana Firdose

Abstract: During the development of software code there is a pressing need to remove faults and improve software reliability. To obtain accurate results, assessment must take place in every phase of the software development cycle, so that bugs are detected early and accuracy is maintained at each level. Academic institutions and industry are enhancing software engineering development techniques and thereby performing regular testing to find faults in software programs during development. New programs are composed by mutating the original code, with a bias toward statements that arise in failing (negative) execution paths. The proposed method uses a fault localization technique to indicate the position of each fault. Both the experimental and the regression-based equations show that the soft computing techniques produce better results than the other techniques. An evaluation of the soft computing techniques showed that the accuracy of the ANN model is superior to that of the other models. Databases were collected for the training and testing stages, and the soft computing techniques had lower computational errors than the empirical equations. The conclusion is that the soft computing models outperform the regression models. Hence, finding and correcting a serious software fault early is preferable to recalling thousands of products, especially in the automotive sector. The success of an SRGM relies mainly on gathering accurate failure information, since the functions of the software reliability growth model are predicted only in terms of the information gathered. Compared with SRGM techniques in the literature, the model gives a reasonably good fit to actual software failure data. Therefore, this model can in future be applied to a wide range of software and its applications.

Keywords: SRGM, FDP, FCP
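
The abstract does not specify which fault localization technique is used. As a minimal, hypothetical illustration of spectrum-based fault localization over failing and passing execution paths, the sketch below ranks statements with the classic Tarantula suspiciousness metric; all coverage data and statement identifiers are invented.

```python
def tarantula_suspiciousness(failed_cov, passed_cov, total_failed, total_passed):
    """Rank statements: a higher score means more likely faulty."""
    scores = {}
    for stmt in set(failed_cov) | set(passed_cov):
        f = failed_cov.get(stmt, 0) / total_failed if total_failed else 0.0
        p = passed_cov.get(stmt, 0) / total_passed if total_passed else 0.0
        scores[stmt] = f / (f + p) if (f + p) > 0 else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented coverage data: statement id -> number of tests executing it.
failed = {"s1": 3, "s2": 3, "s3": 1}   # out of 3 failing tests
passed = {"s1": 1, "s2": 5, "s3": 4}   # out of 6 passing tests
print(tarantula_suspiciousness(failed, passed, total_failed=3, total_passed=6))
```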

Symmetry
2019
Vol 11 (4)
pp. 521
Author(s):  
Song ◽  
Chang ◽  
Pham

Software reliability models based on the non-homogeneous Poisson process (NHPP) play a crucial role in computer systems. Furthermore, software is used in varied environments: it is developed and tested in a controlled environment, while real-world operating environments may differ. Accordingly, the uncertainty of the operating environment must be considered. Moreover, predicting software failures is an important area of study, not only for software developers but also for companies and research institutes. A software reliability model can measure and predict the number of software failures, software failure intervals, software reliability, and failure rates. In this paper, we propose a new model with an inflection factor in the fault detection rate function that accounts for the uncertainty of operating environments, and we analyze how the predictions of the proposed model differ from those of other models. We compare the proposed model with several existing NHPP software reliability models on real software failure datasets using ten criteria. The results show that the proposed model has significantly better goodness-of-fit and predictability than the other models.
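
The paper's exact mean value function is not given in the abstract. The sketch below assumes Ohba's inflection S-shaped NHPP model as a stand-in for a model with an inflection factor in the fault detection rate, fits it to hypothetical cumulative failure counts, and scores the fit with MSE, one common comparison criterion.

```python
import numpy as np
from scipy.optimize import curve_fit

def inflection_s_shaped(t, a, b, beta):
    """Ohba's model: m(t) = a (1 - e^{-b t}) / (1 + beta e^{-b t})."""
    return a * (1 - np.exp(-b * t)) / (1 + beta * np.exp(-b * t))

# Hypothetical data: test week vs. cumulative failures observed.
t = np.arange(1, 13, dtype=float)
y = np.array([3, 8, 15, 25, 38, 50, 60, 67, 72, 75, 77, 78], dtype=float)

params, _ = curve_fit(inflection_s_shaped, t, y, p0=[80.0, 0.3, 2.0], maxfev=10000)
mse = np.mean((inflection_s_shaped(t, *params) - y) ** 2)
print("a, b, beta =", params, "  MSE =", mse)
```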


Author(s):  
D. DAMODARAN ◽  
B. RAVIKUMAR ◽  
VELIMUTHU RAMACHANDRAN

Reliability statistics is divided into two mutually exclusive camps: Bayesian and classical. The classical statistician believes that all distribution parameters are fixed values, whereas Bayesians believe that parameters are random variables with distributions of their own. The Bayesian approach has been applied to software failure data, and as a result several Bayesian software reliability models have been formulated over the last three decades. A Bayesian approach to software reliability measurement was taken by Littlewood and Verrall [A Bayesian reliability growth model for computer software, Appl. Stat. 22 (1973) 332–346], who modeled the hazard rate as a random variable. In this paper, a new Bayesian software reliability model is proposed by combining two prior distributions to predict the total number of failures and the next failure time of the software. The popular and realistic Jelinski and Moranda (J&M) model is taken as the base for deriving this model via the Bayesian approach. It is assumed that one parameter of the J&M model, N (the number of faults in the software), follows a uniform prior distribution, and that the failure rate parameter Φi follows a gamma prior distribution. The joint prior p(N, Φi) is obtained by combining these two priors. In this Bayesian model, the times between failures follow an exponential distribution whose failure rate parameter is stochastically decreasing over successive failure time intervals. The reasoning behind this assumption is that the software tester intends to improve software quality through the correction of each failure. With the Bayesian approach, the predictive distribution is arrived at by combining the exponential times between failures (TBFs) and the joint prior p(N, Φi). For parameter estimation, the maximum likelihood estimation (MLE) method has been adopted. The proposed model has been applied to two sets of actual software failure data, and it has been observed that the failure times predicted by the proposed model are closer to the actual failure times. The predicted failure times based on the Littlewood–Verrall (LV) model were also computed. The sum of squared errors (SSE) criterion was used to compare the actual and predicted times between failures for the proposed model and the LV model.
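
Restating the model structure described above in compact form (the notation is assumed, with λi denoting the J&M hazard after i − 1 fault corrections):

```latex
% Assumed notation: lambda_i is the J&M hazard after i-1 fault corrections.
\begin{align*}
  \lambda_i &= \Phi\,(N - i + 1), & T_i \mid N, \Phi &\sim \mathrm{Exp}(\lambda_i),\\
  N &\sim \mathrm{Uniform}\{1,\dots,N_{\max}\}, & \Phi &\sim \mathrm{Gamma}(\alpha,\beta),\\
  p(N,\Phi \mid t_{1:n}) &\propto \Bigl[\,\prod_{i=1}^{n} \lambda_i e^{-\lambda_i t_i}\Bigr]\, p(N)\, p(\Phi). &&
\end{align*}
```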


Author(s):  
Jasmine Kaur ◽  
Adarsh Anand ◽  
Ompal Singh ◽  
Vijay Kumar

Patching provides software firms an option to deal with leftover bugs and thereby helps them keep track of their product. More and more software firms are making use of this concept of prolonged testing. But this framework of releasing under-prepared software to market involves huge risk. The haste of vendors in releasing a software patch can at times be dangerous, as there is a chance that a firm releases an infected patch. Infected patches may lead to a hike in bug occurrence and error count and may make the software more vulnerable. The current work presents an understanding of this situation through a mathematical modeling framework in which the distinct behaviors of testers (during in-house and field testing) and users are described. The proposed model has been validated on two software failure data sets: Tandem Computers and the Brazilian Electronic Switching System, TROPICO R-1500.
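
The abstract gives no equations. Purely as an illustrative sketch of how an infected patch can be captured in an SRGM, one could use a changepoint mean value function in which the patch released at time τ injects b additional faults; all symbols below are assumptions, not the authors' model.

```latex
% Illustrative only: a = initial fault content, tau = patch release time,
% b = faults injected by an infected patch, F_j = fault detection c.d.f.s
% (e.g., F_j(t) = 1 - e^{-r_j t}).
\begin{equation*}
  m(t) =
  \begin{cases}
    a\,F_1(t), & 0 \le t \le \tau,\\[4pt]
    a\,F_1(\tau) + \bigl(a - a\,F_1(\tau) + b\bigr)\,F_2(t-\tau), & t > \tau.
  \end{cases}
\end{equation*}
```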


2013
Vol 462-463
pp. 1097-1101
Author(s):  
Jun Ai ◽  
Jing Wei Shang ◽  
Yang Liu

The technology of software reliability quantitative assessment (SRQA) is based on failure data collected in software reliability testing or actual use. However, software reliability testing has a long test cycle, and it is difficult to collect enough failure data, which limits the use of SRQA in actual projects. A large number of software failures found in software growth testing cannot be used, because that process either has nothing to do with actual use or lacks records of failure times. In this paper, a software reliability virtual testing technology based on conventional software failure data is presented. Based on the internal data association between the input space of software reliability testing and the failure data found in conventional software testing, a data matching algorithm is proposed to obtain possible failure times in software reliability testing by matching conventional failure data against the input space. Finally, a simulated engine control software system is used as the experimental subject to verify the feasibility and effectiveness of the method.
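
The matching rule itself is not described in the abstract. The following hypothetical sketch illustrates the general idea of deriving virtual failure times by replaying an operational-profile input stream and recording the steps at which it draws an input class that failed in conventional testing; the input classes and profile weights are invented.

```python
import random

# Input classes that triggered failures in conventional testing (invented).
conventional_failures = {"overspeed_cmd", "null_sensor", "neg_throttle"}

def virtual_failure_times(input_classes, weights, n_steps, seed=1):
    """Sample inputs per the operational profile; a step whose input class
    matches a conventional failure record becomes a virtual failure time."""
    rng = random.Random(seed)
    times = []
    for step in range(1, n_steps + 1):
        drawn = rng.choices(input_classes, weights=weights, k=1)[0]
        if drawn in conventional_failures:
            times.append(step)
    return times

classes = ["nominal", "overspeed_cmd", "null_sensor", "neg_throttle"]
profile = [0.94, 0.02, 0.02, 0.02]  # assumed operational profile
print(virtual_failure_times(classes, profile, n_steps=500))
```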


Author(s):  
NORMAN SCHNEIDEWIND

Feedback control systems are used in many walks of life, including automobiles, airplanes, and nuclear reactors. These are all physical systems, albeit with a considerable dose of software. It occurred to us that there is no reason feedback control could not be applied to the software process itself, specifically to reliability analysis, test, and prediction. Thus, we constructed a model of such a system and analyzed whether feedback control, in the form of error signals representing deviations from desired behavior, could bring observed behavior into conformance with specifications. To conduct the experiment, we used NASA Space Shuttle software failure data and analyzed the feedback when no faults were removed versus when faults were removed. In making this evaluation, two software reliability models were used: the Musa Logarithmic Model and the Schneidewind Model. In general, feedback based on fault removal allowed the software reliability process to provide more accurate predictions and, hence, finer control over the process.
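
As a minimal sketch of the feedback idea, the loop below takes the error signal to be the gap between observed cumulative failures and a Musa logarithmic prediction, with a simple proportional correction when the error exceeds a tolerance; the correction rule and failure counts are assumptions, not the paper's control law or data.

```python
import math

def musa_log(t, lam0, theta):
    """Musa logarithmic mean failures: (1/theta) * ln(1 + lam0 * theta * t)."""
    return (1.0 / theta) * math.log(1.0 + lam0 * theta * t)

lam0, theta, tol = 10.0, 0.05, 2.0
observed = [(1, 9), (2, 17), (3, 23), (4, 29), (5, 33)]  # invented (t, cum. failures)

for t, actual in observed:
    predicted = musa_log(t, lam0, theta)
    error = actual - predicted            # feedback error signal
    if abs(error) > tol:                  # out of tolerance: correct the model
        lam0 *= actual / predicted        # assumed proportional adjustment
    print(f"t={t}: predicted={predicted:.1f}, actual={actual}, error={error:+.1f}")
```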


2013
Vol 634-638
pp. 3998-4003
Author(s):  
Qiu Ying Li ◽  
Hui Qi Zhang

Software reliability failure data is the foundation of quantitative software reliability evaluation based on failure data, and it has an important influence on the accuracy of the evaluation. But there is always noise in the original failure data, which degrades the accuracy of the reliability evaluation. Based on an analysis of the importance and sources of failure data in software reliability testing and a classification of software failure data, this paper puts forward a method for collecting reliability failure data together with data preprocessing methods, including data cleaning and data analysis. Finally, an example demonstrates the reduction in data noise and the improvement in data quality produced by the preprocessing methods.
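
The paper's specific cleaning rules are not given in the abstract. This sketch assumes three common preprocessing steps for a cumulative failure-time log: de-duplication, ordering, and a robust (median + k·MAD) filter on the time-between-failures gaps; the data are hypothetical.

```python
import statistics

def clean_failure_times(times, k=10.0):
    """De-duplicate and sort a failure-time log, then drop points whose gap
    from the last kept point is a gross outlier (median + k * MAD rule)."""
    times = sorted(set(times))
    if len(times) < 3:
        return times
    gaps = [b - a for a, b in zip(times, times[1:])]
    med = statistics.median(gaps)
    mad = statistics.median(abs(g - med) for g in gaps)
    kept = [times[0]]
    for t in times[1:]:
        if t - kept[-1] <= med + k * max(mad, 1e-9):  # robust noise filter
            kept.append(t)
    return kept

raw = [5, 12, 12, 20, 20, 31, 500, 45, 58]  # invented noisy log; 500 is a spike
print(clean_failure_times(raw))             # -> [5, 12, 20, 31, 45, 58]
```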


Resources
2019
Vol 8 (3)
pp. 156
Author(s):  
Oluwaseun Oyebode ◽  
Desmond Eseoghene Ighravwe

Previous studies have shown that soft computing models are excellent predictive models for demand management problems. However, their applications in solving water demand forecasting problems have been scantily reported. In this study, feedforward artificial neural networks (ANNs) and a support vector machine (SVM) were used to forecast water consumption. Two ANN models were trained using different algorithms: differential evolution (DE) and conjugate gradient (CG). The performance of these soft computing models was investigated with real-world data sets from the City of Ekurhuleni, South Africa, and compared with conventionally used exponential smoothing (ES) and multiple linear regression (MLR). The results obtained showed that the ANN model that was trained with DE performed better than the CG-trained ANN and other predictive models (SVM, ES and MLR). This observation further demonstrates the robustness of evolutionary computation techniques amongst soft computing techniques.
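
As a rough sketch of the DE-trained ANN idea, the example below finds the weights of a tiny feedforward network with SciPy's differential evolution instead of gradient descent; the network size, lag structure, and demand series are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
demand = 100 + 10 * np.sin(np.arange(60) / 6.0) + rng.normal(0, 1, 60)  # invented series
z = (demand - demand.mean()) / demand.std()   # standardize for the network

lags, n_hidden = 3, 4
X = np.array([z[i:i + lags] for i in range(len(z) - lags)])
y = z[lags:]
n_w = lags * n_hidden + n_hidden + n_hidden + 1   # W1, b1, w2, b2

def forward(w, X):
    W1 = w[:lags * n_hidden].reshape(lags, n_hidden)
    b1 = w[lags * n_hidden:lags * n_hidden + n_hidden]
    w2, b2 = w[-n_hidden - 1:-1], w[-1]
    return np.tanh(X @ W1 + b1) @ w2 + b2         # one hidden layer, linear output

def mse(w):
    return np.mean((forward(w, X) - y) ** 2)

res = differential_evolution(mse, bounds=[(-3, 3)] * n_w, seed=1, maxiter=300, tol=1e-7)
forecast_z = forward(res.x, z[-lags:][None, :])[0]
print("training MSE (standardized):", res.fun)
print("next-step demand forecast:", forecast_z * demand.std() + demand.mean())
```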

