defect prediction
Recently Published Documents

TOTAL DOCUMENTS: 1232 (FIVE YEARS: 582)
H-INDEX: 56 (FIVE YEARS: 11)

Shiqi Tang, Song Huang, Changyou Zheng, Erhu Liu, Cheng Zong, et al. (2022). Vol. 27 (1), pp. 41-57.

Davide Falessi, Aalok Ahluwalia, Massimiliano Di Penta (2022). Vol. 31 (1), pp. 1-26.

Defect prediction models can help prioritize testing, analysis, and code review activities; they have been the subject of substantial effort in academia and of some applications in industrial contexts. A necessary precondition when creating a defect prediction model is the availability of defect data from the history of projects. If this data is noisy, the resulting defect prediction model may be unreliable. One cause of noise in defect datasets is the presence of “dormant defects,” i.e., defects discovered several releases after their introduction. A dormant defect can cause a class to be labeled as defect-free when it is not; such a class is therefore “snoring.” In this article, we investigate the impact of snoring on classifiers’ accuracy and the effectiveness of a possible countermeasure: dropping too-recent data from the training set. We analyze the accuracy of 15 machine learning defect prediction classifiers on data from more than 4,000 defects and 600 releases of 19 open source projects from the Apache ecosystem. Our results show that, on average across projects, (i) the presence of dormant defects decreases the recall of defect prediction classifiers, and (ii) removing from the training set the classes that in the last release are labeled as not defective significantly improves the accuracy of the classifiers. In summary, this article provides insights on how to create defect datasets while mitigating the negative effect of dormant defects on defect prediction.
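The countermeasure described here — removing from the training set the classes that the most recent release labels as not defective, since those labels may be noisy — can be sketched as follows. This is a minimal illustration under assumed data shapes, not the authors' implementation; the `Instance` record and `drop_snoring_candidates` helper are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class Instance:
    name: str        # fully qualified class name
    release: int     # release in which this snapshot was taken
    defective: bool  # label mined from the issue tracker (may be noisy)

def drop_snoring_candidates(train):
    """Drop instances that the most recent training release labels as
    non-defective: their defects may still be dormant ("snoring")."""
    last = max(i.release for i in train)
    return [i for i in train
            if not (i.release == last and not i.defective)]

train = [
    Instance("Foo", 1, True),
    Instance("Bar", 2, False),  # recent and "clean" -> possibly snoring
    Instance("Baz", 2, True),
]
filtered = drop_snoring_candidates(train)
```

Instances labeled defective are kept even in the last release, since a positive label is direct evidence rather than a possibly premature "clean" verdict.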


Mahesha Pandit, Deepali Gupta, Divya Anand, Nitin Goyal, Hani Moaiteq Aljahdali, et al. (2022). Vol. 12 (1), p. 493.

Using artificial intelligence (AI)-based software defect prediction (SDP) techniques in the software development process helps isolate defective software modules, count the number of software defects, and identify risky code changes. However, software development teams are often unaware of SDP and do not have easy access to relevant models and techniques. The major reason for this problem appears to be the fragmentation of SDP research and SDP practice. To unify SDP research and practice, this article introduces a cloud-based, global, unified AI framework for SDP called DePaaS (Defects Prediction as a Service). The article describes the usage context, use cases, and detailed architecture of DePaaS and presents the first response of industry practitioners to DePaaS. In a first-of-its-kind survey, the article captures practitioners’ belief in SDP and in the ability of DePaaS to solve some of the known challenges of the field of software defect prediction. This article also provides a novel process for SDP, a detailed description of the structure and behaviour of the DePaaS architecture components, the six best SDP models offered by DePaaS, a description of algorithms that recommend SDP models, feature sets, and tunable parameters, and a rich set of challenges in building, using, and sustaining DePaaS. With the contributions of this article, SDP research and practice could be unified, enabling more pragmatic defect prediction models and, in turn, more efficient software testing.


Mohammad Sh. Daoud, Shabib Aftab, Munir Ahmad, Muhammad Adnan Khan, Ahmed Iqbal, et al. (2022). Vol. 31 (2), pp. 1287-1300.

Qingan Huang, Le Ma, Siyu Jiang, Guobin Wu, Hengjie Song, et al. (2021). IET Software.

Nayeem Ahmad Bhat, Sheikh Umar Farooq (2021). pp. 1-15.

Prediction approaches used for cross-project defect prediction (CPDP) are usually impractical because of high false alarm rates or low detection rates. Instance-based data filter techniques that improve CPDP performance are time-consuming, and each time a new test set arrives for prediction, the entire filter procedure must be repeated. We propose a local modeling approach to utilize the ever-increasing volume of cross-project data for CPDP. We cluster the cross-project data, train a prediction model per cluster, and predict the target test instances using the corresponding cluster models. On seven NASA datasets, we performed statistical performance comparisons between within-project prediction, cross-project prediction, and our local modeling approach. Compared to within-project prediction, cross-project prediction increased the probability of detection (PD), but at the cost of an increased probability of false alarm (PF) and a decreased overall Balance. Applying local modeling decreased PF, with an associated decrease in PD and an overall improvement in Balance. Moreover, compared to one state-of-the-art filter technique, the Burak filter, our approach is simple, fast, and comparable in performance, and it opens a new perspective on utilizing ever-increasing cross-project data for defect prediction. Therefore, when insufficient within-project data is available, we recommend training local cluster models rather than a single global model on cross-project datasets.
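The local modeling idea — cluster the pooled cross-project data, train one model per cluster, and predict each target instance with its nearest cluster's model — can be sketched as below. The 1-D two-means clustering on a single metric and the majority-label "model" are deliberately simplistic stand-ins for whatever clustering and learners the paper actually uses; all names are illustrative.

```python
def kmeans_1d(values, k=2, iters=10):
    """Tiny k-means on one numeric software metric (stand-in clustering)."""
    centroids = [min(values), max(values)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda c: abs(v - centroids[c]))
            groups[idx].append(v)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids

def nearest(v, centroids):
    return min(range(len(centroids)), key=lambda c: abs(v - centroids[c]))

def train_local_models(data, centroids):
    """data: (metric_value, defective) pairs pooled from many source
    projects. Per-cluster 'model' here is just the majority label --
    a placeholder for a real per-cluster classifier."""
    models = {}
    for c_idx in range(len(centroids)):
        members = [d for v, d in data if nearest(v, centroids) == c_idx]
        models[c_idx] = members.count(True) > len(members) / 2 if members else False
    return models

def predict(v, centroids, models):
    """Route the target instance to its nearest cluster's model."""
    return models[nearest(v, centroids)]

data = [(5, False), (7, False), (40, True), (55, True), (6, False), (50, True)]
centroids = kmeans_1d([v for v, _ in data])
models = train_local_models(data, centroids)
```

Because only the small per-cluster models are consulted at prediction time, new target instances do not trigger a re-run of any instance-level filtering, which is the efficiency argument made above.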


Md Alamgir Kabir, Jacky Keung, Burak Turhan, Kwabena Ebo Bennin (2021). Vol. 113, p. 107870.

Yu Tang, Qi Dai, Mengyuan Yang, Lifang Chen (2021).

Abstract In traditional ensemble learning algorithms for software defect prediction, the base predictor has too many parameters to optimize effectively, so the model's best performance cannot be obtained. We propose an ensemble learning algorithm for software defect prediction that uses an improved sparrow search algorithm to optimize an extreme learning machine; it is divided into three parts. First, the improved sparrow search algorithm (ISSA) is proposed to improve optimization ability and convergence speed, and its performance is tested on eight benchmark test functions. Second, ISSA is used to optimize an extreme learning machine (ISSA-ELM) to improve prediction ability. Finally, the optimized ensemble learning algorithm (ISSA-ELM-Bagging) embeds ISSA-ELM in the Bagging algorithm, which improves the prediction performance of ELM on software defect datasets. Experiments are carried out on six groups of software defect datasets. The experimental results show that the ISSA-ELM-Bagging ensemble learning algorithm is significantly better than the other four comparison algorithms on the six evaluation indexes of Precision, Recall, F-measure, MCC, Accuracy, and G-mean, and has better stability and generalization ability.
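The overall shape of this pipeline — a search-based optimizer tunes each base learner, and Bagging aggregates bootstrap-trained copies by majority vote — can be sketched as below. The random-search "optimizer" and single-threshold learner are deliberately simplistic placeholders for ISSA and the ELM; all names are illustrative.

```python
import random

def train_base(train, rng, trials=50):
    """Placeholder base learner: random search (standing in for the
    improved sparrow search) over a decision threshold on one metric."""
    best_t, best_err = None, float("inf")
    lo = min(v for v, _ in train)
    hi = max(v for v, _ in train)
    for _ in range(trials):
        t = rng.uniform(lo, hi)
        err = sum((v > t) != y for v, y in train)  # training misclassifications
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagging_fit(train, n_models=5, seed=0):
    """Train each base model on a bootstrap resample of the training set."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        boot = [rng.choice(train) for _ in train]  # sample with replacement
        models.append(train_base(boot, rng))
    return models

def bagging_predict(models, v):
    """Majority vote over the ensemble's per-model predictions."""
    votes = sum(v > t for t in models)
    return votes > len(models) / 2

# (metric_value, defective) pairs -- toy data, clearly separable
train = [(3, False), (4, False), (5, False), (20, True), (25, True), (30, True)]
models = bagging_fit(train)
```

The substitution worth noting: in the paper each bagged member is a full ISSA-optimized ELM, so the "tune, then bag" structure is the same but each `train_base` call would be far more expensive.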

