A Framework for Measuring and Evaluating Program Source Code Quality

Author(s):  
Hironori Washizaki ◽  
Rieko Namiki ◽  
Tomoyuki Fukuoka ◽  
Yoko Harada ◽  
Hiroyuki Watanabe

2022 ◽  
Vol 31 (2) ◽  
pp. 1-23
Author(s):  
Jevgenija Pantiuchina ◽  
Bin Lin ◽  
Fiorella Zampetti ◽  
Massimiliano Di Penta ◽  
Michele Lanza ◽  
...  

Refactoring operations are behavior-preserving changes aimed at improving source code quality. While refactoring is largely considered a good practice, refactoring proposals in pull requests are often rejected after code review. Understanding the reasons behind the rejection of refactoring contributions can shed light on how such contributions can be improved, ultimately benefiting software quality. This article reports a study in which we manually coded rejection reasons inferred from 330 refactoring-related pull requests in 207 open-source Java projects. We then surveyed 267 developers to assess the perceived prevalence of the identified rejection reasons and to complement them with additional ones. Our study resulted in a comprehensive taxonomy consisting of 26 refactoring-related rejection reasons and 21 process-related rejection reasons. The taxonomy, accompanied by representative examples and highlighted implications, provides developers with valuable insights on how to weigh and polish their refactoring contributions, and indicates a number of directions researchers can pursue toward better refactoring recommenders.


2016 ◽  
Vol 6 (4) ◽  
pp. 137-150
Author(s):  
Doohwan Kim ◽  
YooJin Jung ◽  
Jang-Eui Hong

2020 ◽  
Vol 10 (20) ◽  
pp. 7088
Author(s):  
Luka Pavlič ◽  
Marjan Heričko ◽  
Tina Beranič

In scientific research, evidence is often based on empirical data. Scholars tend to rely on students as participants in experiments in order to validate their theses. Students are an obvious choice for scientific research: they are usually willing to participate and are often themselves pursuing an education in the experiment's domain. The software engineering domain is no exception. However, readers, authors, and reviewers do sometimes question the validity of experimental data gathered from students in controlled experiments. This is why we address a difficult-to-answer question: Are students a proper substitute for experienced professional engineers in a typical software engineering experiment? As we demonstrate in this paper, there is no simple "yes or no" answer. In some aspects, students were not outperformed by professionals, but in others, students not only gave different answers than professionals, but their answers also diverged more among themselves. In this paper we present and analyze the results of a controlled experiment in the source code quality domain, comparing student and professional responses. We show that authors have to be careful when employing students in experiments, especially when complex and advanced domains are addressed. However, students may be a proper substitute in cases where only non-advanced aspects are required.
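The two effects the abstract describes — students answering differently on average, and their answers diverging more among themselves — can be sketched with a rank-based comparison plus a dispersion measure. The Mann-Whitney-style U statistic and the 1–5 quality ratings below are illustrative assumptions, not the actual statistics or data used in the study:

```python
from statistics import stdev

def mann_whitney_u(a, b):
    """Rank-based U statistic: counts how often values in `a` exceed
    values in `b` (ties count as one half)."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical 1-5 code-quality ratings (invented, not from the study).
students      = [2, 5, 1, 4, 5, 2, 3, 1]
professionals = [3, 3, 4, 3, 4, 3, 3, 4]

u = mann_whitney_u(students, professionals)
n1, n2 = len(students), len(professionals)
print(f"U = {u} of {n1 * n2}; values near {n1 * n2 / 2} suggest similar central tendency")
print(f"spread: students {stdev(students):.2f} vs professionals {stdev(professionals):.2f}")
```

In this invented sample the two groups rate similarly on average, but the students' larger standard deviation reflects the kind of divergence the abstract mentions.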


Author(s):  
Amanda Damasceno Santana ◽  
Eduardo Figueiredo

When system evolution is not planned, developers may make decisions that degrade system quality. To cope with this problem, refactoring can be applied to the source code, aiming to increase code quality without modifying the software's external behavior. To know when to refactor, the concept of bad smells can be used. Bad smells are snippets of source code that suggest the need for refactoring. However, bad smells do not always appear in isolation. The aim of this study is to understand the impact of bad smell agglomerations on software quality by evaluating a large dataset of open-source systems. To achieve our goal, we plan to use data mining techniques complemented by correlation analysis of the dataset.
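A correlation analysis of this kind could be sketched, for instance, with Spearman's rank correlation between agglomeration size and a quality indicator. The pure-Python implementation and the numbers below are illustrative assumptions, not the study's dataset or its chosen metrics:

```python
def rank(values):
    """Average 1-based ranks, giving tied values their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean 1-based rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Invented example: smell-agglomeration size vs. defects per class.
agglomeration_size = [1, 2, 2, 4, 5, 7]
defect_density     = [0.1, 0.3, 0.2, 0.5, 0.4, 0.9]
print(f"Spearman rho = {spearman(agglomeration_size, defect_density):.2f}")
```

A rho near +1 would indicate that larger agglomerations tend to co-occur with worse quality; a real analysis would of course run this over the mined dataset rather than six hand-picked points.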

