Predicting Software Defect Density: A Case Study on Automated Static Code Analysis
Author(s): Artem Marchenko, Pekka Abrahamsson


Author(s): Natarajan Meghanathan, Alexander Roy Geoghegan

The high-level contribution of this book chapter is to illustrate how to conduct static code analysis of a software program and mitigate the vulnerabilities associated with the program. The automated tools used to test for software security are the Source Code Analyzer and Audit Workbench, developed by Fortify, Inc. The first two sections of the chapter comprise (i) an introduction to static code analysis and its usefulness in testing for software security and (ii) an introduction to the Source Code Analyzer and Audit Workbench tools and how to use them to conduct static code analysis. The authors then present a detailed case study of static code analysis conducted on a File Reader program (developed in Java) using these automated tools. The specific software vulnerabilities that are discovered, analyzed, and mitigated are: (i) Denial of Service, (ii) System Information Leak, (iii) Unreleased Resource (in the context of streams), and (iv) Path Manipulation. The authors discuss the potential risk of having each of these vulnerabilities in a software program and provide the solutions (and the Java code) to mitigate them. The proposed solutions for each of the four vulnerabilities are generic and could be applied to correct such vulnerabilities in software developed in any other programming language.
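To make two of these mitigations concrete, the following is a minimal Java sketch of a hypothetical file-reader method, not the chapter's actual code: it blocks Path Manipulation by canonicalizing the requested path and checking it against an allowed base directory (the directory name is an assumption), and it avoids an Unreleased Resource leak by closing the stream with try-with-resources, a newer idiom than the explicit finally-based cleanup such a chapter would typically show.

```java
// Minimal illustrative sketch (not the chapter's code) of a safe file reader.
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class SafeFileReader {

    // Hypothetical base directory the application is permitted to read from.
    private static final String ALLOWED_DIR = "/var/app/data";

    public static String readFile(String userSuppliedName) throws IOException {
        File requested = new File(ALLOWED_DIR, userSuppliedName);

        // Path Manipulation mitigation: canonicalization resolves "../"
        // sequences and symlinks; reject anything outside the allowed tree.
        String canonical = requested.getCanonicalPath();
        String allowedRoot = new File(ALLOWED_DIR).getCanonicalPath() + File.separator;
        if (!canonical.startsWith(allowedRoot)) {
            // The deliberately generic message also avoids a System
            // Information Leak: no internal path details are echoed back.
            throw new IOException("Access outside the permitted directory is denied");
        }

        // Unreleased Resource mitigation: try-with-resources guarantees the
        // stream is closed even if an exception occurs while reading.
        StringBuilder contents = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new FileReader(canonical))) {
            String line;
            while ((line = reader.readLine()) != null) {
                contents.append(line).append(System.lineSeparator());
            }
        }
        return contents.toString();
    }
}
```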


2020, Vol 10 (21), pp. 7800
Author(s): Andrew Walker, Dipta Das, Tomas Cerny

Microservice Architecture (MSA) is becoming the predominant architectural style for new cloud-based applications. Microservices bring many advantages, but also the downsides of an architecture more complex than a typical monolithic enterprise application. Beyond the ordinary poor coding practices and code smells of any application, microservice-specific code smells are difficult to discover within a distributed application setup. Many static code analysis tools exist for monolithic applications, but tools offering code-smell detection for microservice-based applications are lacking. This paper proposes a new approach to detecting code smells in distributed applications based on microservices. We develop MSANose, a tool that detects eleven different microservice-specific code smells, and share it as open source. We demonstrate the tool through a case study on two robust benchmark microservice applications and verify its accuracy. Our results show that it is possible to detect code smells within microservice applications using bytecode and/or source code analysis throughout the development process, even before deployment to production.
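As a flavor of what such a detector checks for, below is a minimal, illustrative Java sketch (not MSANose's implementation) that flags one commonly cited microservice smell, hard-coded endpoints: service URLs embedded in source code rather than resolved through configuration or service discovery. The file-walking strategy and the regular expression are simplifying assumptions.

```java
// Naive source-level check for hard-coded endpoints in a service's code base.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public class HardcodedEndpointScanner {

    // Flags string literals such as "http://orders-service:8080/api".
    private static final Pattern ENDPOINT = Pattern.compile("\"https?://[^\"]+\"");

    public static void scan(Path sourceRoot) throws IOException {
        // Walk the service's source tree and inspect every .java file.
        try (Stream<Path> files = Files.walk(sourceRoot)) {
            files.filter(p -> p.toString().endsWith(".java"))
                 .forEach(HardcodedEndpointScanner::reportMatches);
        }
    }

    private static void reportMatches(Path file) {
        try {
            List<String> lines = Files.readAllLines(file);
            for (int i = 0; i < lines.size(); i++) {
                if (ENDPOINT.matcher(lines.get(i)).find()) {
                    System.out.printf("%s:%d possible hard-coded endpoint%n", file, i + 1);
                }
            }
        } catch (IOException e) {
            System.err.println("Could not read " + file + ": " + e.getMessage());
        }
    }
}
```

A real detector would work on parsed ASTs or bytecode rather than raw text, as the paper's bytecode/source analysis does, but the reporting shape (file, location, smell name) is the same.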


Author(s): Danilo Nikolic, Darko Stefanovic, Dusanka Dakic, Srdan Sladojevic, Sonja Ristic

Technologies, 2020, Vol 9 (1), pp. 3
Author(s): Gábor Antal, Zoltán Tóth, Péter Hegedűs, Rudolf Ferenc

Bug prediction aims at finding source code elements in a software system that are likely to contain defects. Being aware of the most error-prone parts of the program, one can allocate the limited testing and code review resources efficiently. Bug prediction can therefore support software maintenance and evolution to a great extent. In this paper, we propose a function-level JavaScript bug prediction model based on static source code metrics, with the addition of hybrid (static and dynamic) code analysis metrics for the number of incoming and outgoing function calls (HNII and HNOI). Our motivation is that JavaScript is a highly dynamic scripting language for which static code analysis can be very imprecise; therefore, relying on purely static source code features for bug prediction might not be enough. Based on a study in which we extracted 824 buggy and 1943 non-buggy functions from the publicly available BugsJS dataset for the ESLint JavaScript project, we confirm the positive impact of hybrid code metrics on the prediction performance of the ML models. Depending on the ML algorithm, applied hyper-parameters, and target measures we consider, hybrid invocation metrics bring a 2–10% increase in model performance (i.e., precision, recall, F-measure). Interestingly, replacing the static NOI and NII metrics with their hybrid counterparts HNOI and HNII by itself improves model performance; however, using them all together yields the best results.
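The overall experimental recipe, training a classifier on per-function metric vectors and evaluating precision, recall, and F-measure, can be sketched in Java with Weka; the paper does not prescribe this library, and the dataset file functions.arff (one row per function holding the static and hybrid HNII/HNOI metrics plus a buggy/non-buggy label) is a hypothetical stand-in.

```java
// Minimal sketch of metrics-based bug prediction with Weka. Assumptions:
// Weka on the classpath and a hypothetical "functions.arff" whose last
// attribute is the nominal class {buggy, non-buggy}, "buggy" listed first.
import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class BugPredictionExperiment {
    public static void main(String[] args) throws Exception {
        // Load one instance per function: static metrics + HNII/HNOI + label.
        Instances data = new DataSource("functions.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        // Random forest stands in for the several ML algorithms compared.
        RandomForest model = new RandomForest();

        // 10-fold cross-validation with a fixed seed for reproducibility.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(model, data, 10, new Random(1));

        int buggy = 0; // index of the "buggy" class value (assumed first)
        System.out.printf("Precision: %.3f  Recall: %.3f  F-measure: %.3f%n",
                eval.precision(buggy), eval.recall(buggy), eval.fMeasure(buggy));
    }
}
```

Rerunning the same evaluation with and without the hybrid HNII/HNOI columns is then enough to reproduce the kind of 2–10% comparison the abstract reports.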

