source code metrics
Recently Published Documents


TOTAL DOCUMENTS: 48 (five years: 14)

H-INDEX: 9 (five years: 1)

2021. Author(s): Sundarakrishnan Ganesh, Tobias Ohlsson, Francis Palma

Algorithms, 2021, Vol. 14 (10), p. 289. Author(s): Priyadarshni Suresh Sagar, Eman Abdulah AlOmar, Mohamed Wiem Mkaouer, Ali Ouni, Christian D. Newman

Understanding how developers refactor their code is critical to supporting the design-improvement process of software. This paper investigates to what extent code metrics are good indicators for predicting refactoring activity in source code. To do so, we formulated the prediction of refactoring operation types as a multi-class classification problem. Our solution measures metrics extracted from committed code changes to derive the features (i.e., metric variations) that best represent each class (i.e., refactoring type), and uses them to automatically predict, for a given commit, the method-level refactoring type being applied: Move Method, Rename Method, Extract Method, Inline Method, Pull-up Method, or Push-down Method. We compared the prediction performance of various classifiers on a dataset of 5004 commits extracted from 800 Java projects. Our main finding is that a random forest model trained with code metrics achieved the best average accuracy, 75%. However, results varied per class, which means that some refactoring types are harder to detect than others.
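The framing above, commit-level metric variations as features for a multi-class refactoring-type classifier, can be illustrated with a toy sketch. The feature names, deltas, and classes below are invented for illustration, and a simple nearest-centroid rule stands in for the random forest the paper actually trains:

```python
# Toy sketch: predict a method-level refactoring type from metric
# variations (deltas) observed in a commit, as multi-class classification.
# Features and training rows are hypothetical; the paper itself trains a
# random forest on real commit data.

def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def predict(sample, centroids):
    """Return the class whose centroid is nearest (Euclidean) to sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda cls: dist(sample, centroids[cls]))

# Hypothetical training data: [delta_LOC, delta_method_count, delta_coupling]
train = {
    "Extract Method": [[-30, 1, 0], [-25, 1, 1]],
    "Inline Method":  [[20, -1, 0], [15, -1, -1]],
    "Move Method":    [[0, 0, -2], [1, 0, -3]],
}
centroids = {cls: centroid(rows) for cls, rows in train.items()}

print(predict([-28, 1, 0], centroids))   # prints "Extract Method"
```

A real pipeline would compute such deltas from the before/after versions of each committed file and feed them to a stronger learner, but the feature-vector-per-commit shape is the same.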


2021, pp. 026-035. Author(s): A.M. Pokrovskyi

With the rapid development of software quality measurement methods, the need for efficient and versatile reengineering automation tools grows ever greater. This is even more apparent when a programming language and its coding practices evolve slowly alongside each other over a long period, while the legacy code base grows and remains highly relevant. In this paper, a source code metrics measurement tool for evaluating the quality of Fortran programs is developed. It is implemented as a code module for the Photran integrated development environment and is based on a set of syntax-tree-walking algorithms. The module uses Photran's built-in syntax analysis engine and the tree data structure it builds from the source code. The developed tool is also compared to existing source code analysis instruments. The results show that the tool is most effective when used in combination with Photran's built-in refactoring system, and that Photran's application programming interface makes it easy to scale the existing infrastructure by adding other code analysis methods.
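The core of such a module is a recursive walk over the syntax tree, accumulating metrics as it descends. The sketch below shows the idea with a stand-in Node class and two simple metrics (node count and maximum nesting depth); Photran's actual AST types and visitor API are in Java and differ in detail:

```python
# Minimal sketch of a syntax-tree-walking metric: visit every node
# recursively, counting nodes and tracking the deepest nesting level.
# Node is a hypothetical stand-in for a real parser's AST classes.

class Node:
    def __init__(self, kind, children=()):
        self.kind = kind
        self.children = list(children)

def walk_metrics(node, depth=0):
    """Return (node_count, max_depth) for the subtree rooted at node."""
    count, max_depth = 1, depth
    for child in node.children:
        c, d = walk_metrics(child, depth + 1)
        count += c
        max_depth = max(max_depth, d)
    return count, max_depth

# Toy tree for a program containing an IF that wraps a DO loop
tree = Node("program", [
    Node("if-construct", [
        Node("do-construct", [Node("assignment")]),
    ]),
    Node("assignment"),
])

print(walk_metrics(tree))   # prints (5, 3)
```

Other metrics (cyclomatic complexity, statement counts) follow the same pattern: a per-node-kind rule applied during the same traversal, which is why a single walking framework scales to new metrics easily.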


2021, pp. 263-276. Author(s): Sahithi Tummalapalli, Juhi Mittal, Lov Kumar, Lalitha Bhanu Murthy Neti, Santanu Kumar Rath

2020, Vol. 28 (4), pp. 1447-1506. Author(s): Rudolf Ferenc, Zoltán Tóth, Gergely Ladányi, István Siket, Tibor Gyimóthy

Abstract: Bug datasets have been created and used by many researchers to build and validate novel bug prediction models. In this work, our aim is to collect existing public source code metric-based bug datasets and unify their contents. Furthermore, we assess the plethora of collected metrics and the capabilities of the unified bug dataset for bug prediction. We considered five public datasets, downloaded the corresponding source code for each system, and performed source code analysis to obtain a common set of source code metrics. In this way, we produced a unified bug dataset at both class and file level. We investigated the divergence of metric definitions and values across the different bug datasets. Finally, we used a decision tree algorithm to demonstrate the dataset's bug prediction capabilities. We found statistically significant differences between the values of the original and the newly calculated metrics; furthermore, notations and definitions can differ severely. We compared the bug prediction capabilities of the original and the extended metric suites (within-project learning). Afterwards, we merged all classes (and files) into one large dataset consisting of 47,618 elements (43,744 for files) and evaluated a bug prediction model built on this large dataset as well. Finally, we also investigated the cross-project capabilities of the bug prediction models and datasets. We made the unified dataset publicly available for everyone. By using a public unified dataset as input for different bug prediction investigations, researchers can make their studies reproducible, and thus able to be validated and verified.
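The unification step described above, reconciling differing metric names and notations across datasets into one common schema, can be sketched as a per-dataset column mapping. The dataset names, column names, and values below are invented for illustration:

```python
# Sketch of dataset unification: two hypothetical bug datasets name the
# same class-level metrics differently; map each row onto a common
# schema so rows from both sources can be merged into one dataset.

# Per-dataset mapping: local column name -> common metric key
MAPPINGS = {
    "dataset_a": {"LinesOfCode": "loc", "WeightedMethods": "wmc", "bug": "bugs"},
    "dataset_b": {"loc_total": "loc", "wmc": "wmc", "defects": "bugs"},
}

def unify(source, row):
    """Rename one row's metrics to the common schema, tagging its origin."""
    mapping = MAPPINGS[source]
    out = {"source": source}
    for local_name, value in row.items():
        if local_name in mapping:          # drop columns with no common meaning
            out[mapping[local_name]] = value
    return out

merged = [
    unify("dataset_a", {"LinesOfCode": 120, "WeightedMethods": 14, "bug": 1}),
    unify("dataset_b", {"loc_total": 98, "wmc": 9, "defects": 0}),
]
print(merged[0])  # {'source': 'dataset_a', 'loc': 120, 'wmc': 14, 'bugs': 1}
```

The harder part, which the paper addresses by re-running a single analyzer over every system's source code, is that identically named metrics can still be computed with different definitions; recomputing them with one tool removes that ambiguity.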


PLoS ONE, 2020, Vol. 15 (1), e0226867. Author(s): Sarathkumar Rangarajan, Huai Liu, Hua Wang

2019, Vol. 2019 (2), pp. 117-126. Author(s): Chinmay Hota, Lov Kumar, Lalita Bhanu Murthy Neti
