Object-oriented software metrics threshold values at quantitative acceptable risk level

2014 ◽  
Vol 2 (3) ◽  
pp. 191-205 ◽  
Author(s):  
Satwinder Singh ◽  
K. S. Kahlon


Author(s):  
Raed Shatnawi

BACKGROUND: Fault data is vital to predicting fault-proneness in large systems. Predicting faulty classes helps in allocating the appropriate testing resources for future releases. However, current fault data face challenges such as unlabeled instances and data imbalance. These challenges degrade the performance of the prediction models. Data imbalance happens because the majority of classes are labeled as not faulty whereas the minority of classes are labeled as faulty. AIM: The research proposes to improve fault prediction using software metrics in combination with threshold values. Statistical techniques are proposed to improve the quality of the datasets and therefore the quality of the fault prediction. METHOD: Threshold values of object-oriented metrics are used to label classes as faulty to improve the fault prediction models. The resulting datasets are used to build prediction models using five machine learning techniques. The use of threshold values is validated on ten large object-oriented systems. RESULTS: The models are built for the datasets with and without the use of thresholds. The combination of thresholds with machine learning has improved the fault prediction models significantly for the five classifiers. CONCLUSION: Threshold values can be used to label software classes as fault-prone and can be used to improve machine learners in predicting the fault-prone classes.
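The threshold-based labeling step described in this abstract can be sketched as follows. This is a minimal illustration, not the study's implementation: the metric names (WMC, CBO) and the threshold values are assumptions chosen for the example, not the values derived in the paper.

```python
# Hedged sketch of threshold-based labeling. Metric names and threshold
# values below are illustrative assumptions, not the paper's actual values.
def label_fault_prone(classes, thresholds):
    """Label a class fault-prone (1) if any metric exceeds its threshold."""
    labels = []
    for metrics in classes:
        faulty = any(metrics[name] > limit for name, limit in thresholds.items())
        labels.append(1 if faulty else 0)
    return labels

# Example: WMC (weighted methods per class) and CBO (coupling between objects)
thresholds = {"WMC": 20, "CBO": 9}
classes = [
    {"WMC": 25, "CBO": 4},   # exceeds WMC threshold -> fault-prone
    {"WMC": 10, "CBO": 3},   # below both thresholds -> not fault-prone
]
print(label_fault_prone(classes, thresholds))  # -> [1, 0]
```

The labels produced this way would then serve as the training target for the five classifiers mentioned in the abstract.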


2020 ◽  
Vol 17 (1) ◽  
pp. 181-203
Author(s):  
Tina Beranic ◽  
Marjan Hericko

Without reliable software metrics threshold values, efficient quality evaluation of software cannot be done. In order to derive reliable thresholds, we have to address several challenges that impact the final result. For instance, software metrics implementations vary across software metrics tools, as do the threshold values that result from different threshold derivation approaches. In addition, the programming language is another important aspect. In this paper, we present the results of an empirical study aimed at comparing systematically obtained threshold values for nine software metrics in four object-oriented programming languages (i.e., Java, C++, C#, and Python). We addressed challenges in the threshold derivation domain with adjustments to the benchmark-based threshold derivation approach. The data set was selected in a uniform way, allowing derivation repeatability, while input values were collected using a single software metric tool, enabling the comparison of derived thresholds among the chosen object-oriented programming languages. The performed empirical study reveals that threshold values differ between programming languages.
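Benchmark-based threshold derivation, as referenced in this abstract, is commonly done by taking a percentile of a metric's distribution over a benchmark corpus. The sketch below assumes a nearest-rank percentile and invented benchmark data; the percentile choice (90th here) and the data are assumptions for illustration, not the adjustments made in the study.

```python
# Hedged sketch: a common benchmark-based approach derives a threshold
# as a percentile of the metric's distribution over a benchmark corpus.
# The 90th-percentile choice and the sample data are assumptions here.
def percentile_threshold(values, pct):
    """Nearest-rank percentile: value below which pct percent of observations fall."""
    ordered = sorted(values)
    rank = max(1, int(round(pct / 100.0 * len(ordered))))
    return ordered[rank - 1]

wmc_benchmark = [3, 5, 7, 8, 10, 12, 15, 22, 30, 41]  # illustrative benchmark data
print(percentile_threshold(wmc_benchmark, 90))  # -> 30
```

Running the same derivation over per-language corpora collected with a single metrics tool is what enables the cross-language comparison the paper reports.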


Author(s):  
Kecia A. M. Ferreira ◽  
Mariza A. S. Bigonha ◽  
Roberto S. Bigonha ◽  
Heitor C. Almeida ◽  
Luiz F. O. Mendes

2012 ◽  
Vol 6 ◽  
pp. 420-427 ◽  
Author(s):  
Yeresime Suresh ◽  
Jayadeep Pati ◽  
Santanu Ku Rath

Author(s):  
Dalila Amara ◽  
Latifa Ben Arfa Rabai

Software measurement helps to quantify the quality and the effectiveness of software, to find areas of improvement, and to provide the information needed to make appropriate decisions. In recent studies, software metrics are widely used for quality assessment. These metrics are divided into two categories: syntactic and semantic. A literature review shows that syntactic ones are widely discussed and are generally used to measure software internal attributes like complexity. It also shows a lack of studies that focus on measuring external attributes like reliability using internal ones. This chapter presents a thorough analysis of most quality measurement concepts. Moreover, it makes a comparative study of object-oriented syntactic metrics to identify their effectiveness for quality assessment and in which phase of the development process these metrics may be used. As reliability is an external attribute, it cannot be measured directly. In this chapter, the authors discuss how reliability can be measured using its correlation with syntactic metrics.
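The correlation-based idea in this abstract, relating a directly measurable syntactic metric to a reliability proxy, can be sketched with a Pearson correlation. The per-class complexity values and fault counts below are invented for illustration; the chapter's actual data and correlation method may differ.

```python
# Hedged sketch: estimating how a syntactic metric correlates with a
# reliability proxy (observed fault counts). All data are illustrative.
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

complexity = [4, 7, 9, 12, 15]   # illustrative per-class complexity values
faults     = [0, 1, 1, 3, 4]     # illustrative fault counts per class
print(round(pearson_r(complexity, faults), 2))  # -> 0.97
```

A strong correlation like this is what would justify using the internal (syntactic) metric as an indirect indicator of the external attribute.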

