An Empirical Study of Heterogeneous Cross-Project Defect Prediction Using Various Statistical Techniques

2021 · Vol 17 (2) · pp. 55-71
Author(s): Rohit Vashisht, Syed Afzal Murtaza Rizvi

Cross-project defect prediction (CPDP) forecasts defects in a target project using defect prediction models (DPM) trained on defect data from another project. However, CPDP has a prevalent constraint: the two projects must share an identical feature set to describe themselves. This article focuses on heterogeneous CPDP (HCPDP) modeling, which does not require a common metric set between two applications; instead, it builds the DPM from metrics whose values show comparable distributions across a given pair of datasets. The paper evaluates HCPDP modeling empirically and theoretically. The approach comprises three main phases: feature ranking and selection, metric matching, and finally defect prediction in the target application. The experiments were conducted on 13 benchmark datasets from three open source projects. Results show that the performance of HCPDP is closely comparable to baseline within-project defect prediction (WPDP), and that the XGBoost classification model gives the best results when used in conjunction with Kendall's method of correlation, compared with the other classifiers evaluated.
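
As a concrete illustration of the pipeline, the sketch below pairs source and target metrics by Kendall's tau and then trains an XGBoost classifier on the matched source metrics. It is a minimal Python sketch, assuming NumPy, SciPy, and the xgboost package; the truncation of columns to a common length, the greedy one-to-one matching, and the 0.05 correlation cutoff are illustrative assumptions rather than the paper's exact procedure, and the feature ranking and selection phase is taken as already applied.

import numpy as np
from scipy.stats import kendalltau
from xgboost import XGBClassifier

def kendall_matrix(src_X, tgt_X):
    """Kendall's tau between every (source metric, target metric) pair,
    with both columns truncated to a common length."""
    n = min(len(src_X), len(tgt_X))
    taus = np.zeros((src_X.shape[1], tgt_X.shape[1]))
    for i in range(src_X.shape[1]):
        for j in range(tgt_X.shape[1]):
            tau, _ = kendalltau(src_X[:n, i], tgt_X[:n, j])
            taus[i, j] = 0.0 if np.isnan(tau) else tau
    return taus

def greedy_match(taus, cutoff=0.05):
    """Greedily pair each target metric with its best-correlated,
    still-unused source metric above the cutoff (an assumed heuristic)."""
    pairs, used = [], set()
    for j in np.argsort(-taus.max(axis=0)):      # strongest target metrics first
        for i in np.argsort(-taus[:, j]):
            if i not in used and taus[i, j] > cutoff:
                pairs.append((i, j))
                used.add(i)
                break
    return pairs

def hcpdp_predict(src_X, src_y, tgt_X):
    pairs = greedy_match(kendall_matrix(src_X, tgt_X))
    si = [i for i, _ in pairs]
    tj = [j for _, j in pairs]
    model = XGBClassifier(n_estimators=100)
    model.fit(src_X[:, si], src_y)               # train on matched source metrics
    return model.predict(tgt_X[:, tj])           # defect labels for the target project

Because the matching score is isolated in kendall_matrix, swapping Kendall's tau for another distribution-similarity measure changes one function only, which is what makes correlation methods easy to compare across classifiers.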

2020 · Vol 25 (6) · pp. 5047-5083
Author(s): Abdul Ali Bangash, Hareem Sahar, Abram Hindle, Karim Ali

Author(s): Faimison Porto, Adenilso Da Silva Simao

Defect prediction models can be a useful tool for organizing a project's test resources. The models can be built with two main goals: (1) to classify the software parts as defective or not; or (2) to rank the parts in decreasing order of defectiveness. However, not all companies maintain an appropriate set of historical defect data. In that case, a company can build a suitable dataset from known external projects, an approach called cross-project defect prediction (CPDP). CPDP models, however, tend to show low prediction performance due to the heterogeneity of the data. Recently, instance filtering methods were proposed to reduce this heterogeneity by selecting the most similar instances from the training dataset. Originally, the similarity is calculated over all available dataset features (or independent variables). We propose that using only the most relevant features in the similarity calculation can yield more accurate filtered datasets and better prediction performance. In this study, we extend our previous work and analyse both prediction goals, classification and ranking. We present an empirical evaluation of 41 different methods formed by associating instance filtering methods with feature selection methods, using 36 versions of 11 open source projects in the experiments. The results show similar evidence for both prediction goals. First, the defect prediction performance of CPDP models can be improved by associating feature selection and instance filtering. Second, no evaluated method performed best in general; the most appropriate method varies with the characteristics of the project being predicted.
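
The association the authors describe can be sketched as a two-step pipeline: rank features, then filter training instances by similarity measured only on the top-ranked features. The Python sketch below, assuming scikit-learn, uses mutual information for the ranking, a Burak-style nearest-neighbour filter, and a logistic regression learner; these choices, along with top_k=8 and k=10, are illustrative assumptions, not the 41 concrete methods evaluated in the paper.

import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression

def select_features(train_X, train_y, top_k=8):
    """Rank features by mutual information with the defect label and
    keep the top_k most relevant ones."""
    scores = mutual_info_classif(train_X, train_y, random_state=0)
    return np.argsort(-scores)[:top_k]

def burak_filter(train_X, test_X, k=10):
    """Burak-style instance filter: keep the k training instances
    nearest to each test instance (duplicates collapsed)."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_X)
    _, idx = nn.kneighbors(test_X)
    return np.unique(idx.ravel())

def cpdp_predict(train_X, train_y, test_X):
    feats = select_features(train_X, train_y)    # similarity uses relevant features only
    keep = burak_filter(train_X[:, feats], test_X[:, feats])
    model = LogisticRegression(max_iter=1000)
    model.fit(train_X[np.ix_(keep, feats)], train_y[keep])
    labels = model.predict(test_X[:, feats])               # classification goal
    scores = model.predict_proba(test_X[:, feats])[:, 1]   # ranking goal
    return labels, scores

The two returned outputs mirror the paper's two prediction goals: the hard labels answer the classification goal, while sorting modules by the predicted probability answers the ranking goal.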

