Quantitative Metrics for Performance Monitoring of Software Code Analysis Accredited Testing Laboratories

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3660
Author(s):  
Wladmir Araujo Chapetta ◽  
Jailton Santos das Neves ◽  
Raphael Carlos Santos Machado

Modern sensors deployed in most Industry 4.0 applications are intelligent, meaning that they exhibit sophisticated behavior, usually due to embedded software, and offer network connectivity. For that reason, the task of calibrating an intelligent sensor now involves more than measuring physical quantities. Because the behavior of modern sensors depends on embedded software, a comprehensive assessment of such sensors necessarily demands an analysis of that software. Interlaboratory comparisons, in turn, are comparative analyses of the performance of the laboratories involved in such assessments. While interlaboratory comparison is a well-established practice in the physical, chemical, and biological sciences, it is a recent challenge for software assessment. Establishing quantitative metrics to compare the performance of accredited software analysis and testing labs is no trivial task: software is intangible, its requirements accommodate some ambiguity, inconsistency, or information loss, and software testing and analysis are highly human-dependent activities. In the present work, we investigate whether interlaboratory comparisons for software assessment based on quantitative performance measurement are feasible. The proposal was to evaluate each lab's competence in software code analysis activities using two quantitative metrics: code coverage and mutation score. Our results demonstrate the feasibility of establishing quantitative comparisons among accredited software analysis and testing laboratories. One of the comparison rounds was registered as a formal proficiency testing in the database, the first registered proficiency testing focused on code analysis.
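The two metrics named in the abstract have simple closed forms: code coverage is the fraction of production statements a test suite exercises, and mutation score is the fraction of non-equivalent mutants the suite kills. A minimal Python sketch of both computations follows; the function names and figures are purely illustrative, not taken from the study.

```python
# Hedged sketch of the two lab-comparison metrics named in the abstract.
# All names and numbers here are illustrative, not the study's data.

def statement_coverage(executed: int, total: int) -> float:
    """Code coverage: fraction of production statements the tests execute."""
    return executed / total

def mutation_score(killed: int, total_mutants: int, equivalent: int = 0) -> float:
    """Mutation score: fraction of non-equivalent mutants the tests kill."""
    return killed / (total_mutants - equivalent)

# A hypothetical lab covers 940 of 1,000 statements and kills 172 of
# 200 generated mutants, 10 of which turn out to be equivalent.
print(f"coverage:       {statement_coverage(940, 1000):.2%}")   # 94.00%
print(f"mutation score: {mutation_score(172, 200, 10):.2%}")    # 90.53%
```

Discounting equivalent mutants matters for a fair lab comparison: no test suite can kill a mutant that is behaviorally identical to the original program, so leaving them in the denominator would penalize every lab uniformly and compress the score range.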

2021 ◽  
Vol 26 (2) ◽  
Author(s):  
Fabiano Pecorelli ◽  
Fabio Palomba ◽  
Andrea De Lucia

Testing represents a crucial activity for ensuring software quality. Recent studies have shown that test-related factors (e.g., code coverage) can be reliable predictors of software code quality, as measured by post-release defects. While these studies provided initial compelling evidence on the relation between tests and post-release defects, they considered the different test-related factors separately; as a consequence, there is still a lack of knowledge as to whether these factors remain good predictors when considered together. In this paper, we propose a comprehensive case study on how test-related factors relate to production code quality in Apache systems. We first investigated how the presence of tests relates to post-release defects; then, we analyzed the role played by the test-related factors previously shown to be significantly related to post-release defects. The key finding of the study is that, when controlling for other metrics (e.g., the size of the production class), test-related factors have a limited connection to post-release defects.
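"Controlling for other metrics" here means including confounders such as production-class size in the same statistical model as the test-related factor, so the factor's apparent effect is not just size in disguise. The sketch below illustrates that idea with a logistic regression on a post-release defect flag; it is not the paper's actual analysis, and the data frame, its column names, and its values are invented for demonstration.

```python
# Illustrative sketch only: checking whether a test-related factor (coverage)
# still relates to post-release defects once class size is controlled for.
# The data frame and all values are invented for demonstration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "has_defect": [0, 1, 0, 1, 1, 0, 0, 1],                        # post-release defect flag
    "coverage":   [0.90, 0.30, 0.80, 0.85, 0.20, 0.40, 0.95, 0.50],
    "class_loc":  [120, 800, 150, 700, 950, 300, 650, 250],        # production class size
})

# If coverage's coefficient loses significance once class_loc enters the
# model, its connection to defects is limited, echoing the key finding.
model = smf.logit("has_defect ~ coverage + class_loc", data=df).fit(disp=0)
print(model.summary())
```

In a real replication one would fit such a model on hundreds of classes per system and compare nested models with and without the control variables, rather than reading coefficients off a toy sample like this one.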


Author(s):  
Raphael Machado ◽  
Wilson Melo ◽  
Lucila Bento ◽  
Sergio Camara ◽  
Vinicius da Hora ◽  
...  

2017 ◽  
Vol 42 (8) ◽  
pp. 3503-3519 ◽  
Author(s):  
Muthusamy Boopathi ◽  
Ramalingam Sujatha ◽  
Chandran Senthil Kumar ◽  
Srinivasan Narasimman
