On the correlation between testing effort and software complexity metrics

2018 ◽  
Author(s):  
Adnan Muslija ◽  
Eduard P Enoiu

Software complexity metrics, such as code size and cyclomatic complexity, have been used in the software engineering community to predict quality attributes such as maintainability, bug proneness and robustness. However, few studies have addressed the relationship between complexity metrics and software testing, and there is little experimental evidence to support the use of these code metrics in the estimation of test effort. We have investigated and evaluated the relationship between test effort (i.e., the number of test cases and the test execution time) and software complexity metrics for industrial control software used in an embedded system. We show how to measure different software complexity metrics, such as the number of elements, cyclomatic complexity, and information flow, for the Function Block Diagram (FBD) programming language, which is widely used in the safety-critical domain. In addition, we use test data and test suites created by experienced test engineers working at Bombardier Transportation Sweden AB to evaluate the correlation between several complexity measures and the testing effort. We found a moderate correlation between software complexity metrics and test effort. In addition, the results show that software size (i.e., the number of elements in the FBD program) has the highest correlation with both the number of test cases created and the test execution time. Our results suggest that software size and structure metrics, while useful for identifying more complicated parts of the system, should not be used on their own to identify the parts for which test engineers need to create more test cases. A potential explanation concerns the nature of testing, since other attributes, such as the level of testing thoroughness required and the size of the specifications, can also influence the creation of test cases. Finally, we used a linear regression model to estimate test effort from the software complexity measurement results.
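
As an illustration of the kind of analysis described above (not the authors' actual tooling or data), the following Python sketch computes rank correlations between hypothetical complexity measurements and test-effort proxies, and fits a linear regression to estimate the number of test cases; all values and metric columns are made up.

```python
# Hedged sketch: hypothetical per-program measurements, not the study's data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression

# Columns: [number of elements, cyclomatic complexity, information flow]
complexity = np.array([
    [12, 3, 5],
    [40, 9, 14],
    [75, 15, 30],
    [110, 22, 41],
    [150, 31, 60],
])
num_test_cases = np.array([4, 9, 15, 21, 30])               # test-effort proxy 1
exec_time_sec = np.array([12.0, 30.5, 55.0, 80.2, 120.7])   # test-effort proxy 2

# Correlation between each complexity metric and the number of test cases.
for i, name in enumerate(["elements", "cyclomatic", "info_flow"]):
    rho, p = spearmanr(complexity[:, i], num_test_cases)
    print(f"{name}: Spearman rho={rho:.2f} (p={p:.3f})")

# Simple linear regression estimating test effort from the complexity metrics.
model = LinearRegression().fit(complexity, num_test_cases)
print("predicted #test cases for a new program:", model.predict([[60, 12, 20]]))
```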


2020 ◽  
Vol 13 (4) ◽  
pp. 572-578 ◽  
Author(s):  
Mamdouh Alenezi ◽  
Mohammad Zarour ◽  
Mohammed Akour

Background: Software complexity affects its quality; complex software is not only harder to read and maintain and less efficient, it can also be less secure and contain many vulnerabilities. Complexity metrics, e.g., cyclomatic complexity and nesting levels, are commonly used to predict and benchmark software cost and efficiency. Complexity metrics are also used to decide whether code refactoring is needed. Objective: Software systems with high complexity need more time to develop and test and may suffer from poor understandability and more errors. Deep nesting in the target structure may result in more complex software, in what is called the nesting problem. The nesting problem should be mitigated by rewriting the code or breaking it into several functional procedures. Method: In this paper, the relationship between nesting levels, cyclomatic complexity, and lines of code (LOC) is measured across several software releases. To assess how strongly these factors relate to the nesting level, correlation coefficients are calculated. Moreover, to examine to what extent developers are aware of and tackle the nesting problem, the evolution of nesting levels over ten releases of five open-source systems is studied to see whether it improves over successive versions. Results: The results show that the nesting level has varying effects on cyclomatic complexity and LOC for the five studied systems. Conclusion: The nesting level tends to correlate positively with the other factors (cyclomatic complexity and LOC).
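
For intuition, here is a rough Python sketch (not the paper's measurement tool, which targets other codebases) of how a nesting level, a cyclomatic-complexity proxy, and LOC might be computed from source code using the standard ast module; the example function is hypothetical.

```python
# Hedged sketch: rough metric computations for Python source, for illustration only.
import ast

NESTING_NODES = (ast.If, ast.For, ast.While, ast.With, ast.Try)
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def max_nesting(tree: ast.AST, depth: int = 0) -> int:
    # Recursively track how deeply nesting constructs are stacked.
    deepest = depth
    for child in ast.iter_child_nodes(tree):
        child_depth = depth + 1 if isinstance(child, NESTING_NODES) else depth
        deepest = max(deepest, max_nesting(child, child_depth))
    return deepest

def cyclomatic(tree: ast.AST) -> int:
    # McCabe-style proxy: 1 + number of decision points.
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

source = """
def f(xs):
    total = 0
    for x in xs:
        if x > 0:
            if x % 2 == 0:
                total += x
    return total
"""
tree = ast.parse(source)
print("nesting level:", max_nesting(tree))      # 3 (for -> if -> if)
print("cyclomatic (approx.):", cyclomatic(tree))
print("LOC:", len([l for l in source.splitlines() if l.strip()]))
```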


2020 ◽  
Vol 3 (2) ◽  
Author(s):  
Ani Rahmani

Software testing is a crucial stage in software development; the success of the testing process helps ensure the quality of the software. In regression testing, one issue is that not all test cases in the test suite need to be executed: retesting all of them (retest-all) consumes massive resources and time. Regression testing techniques therefore seek ways to reduce test execution time. One such technique is test case selection, also known as regression test selection (RTS). This paper describes a study of babelRTS, an RTS algorithm, to assess its effectiveness. Effectiveness is measured by comparing the execution time of retest-all against that of babelRTS. Experiments were carried out on five software under test (SUTs) containing faults, with test cases designed for each SUT. The results show a reduction in execution time, with the effectiveness reaching a maximum of 32% and an average of 23%.
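
The effectiveness figure used above can be expressed as the relative reduction in execution time. Below is a minimal sketch with hypothetical timings chosen only to roughly echo the reported maximum and average; babelRTS itself is not shown.

```python
# Hedged sketch: effectiveness as relative time reduction; all timings are made up.

def rts_effectiveness(retest_all_seconds: float, selected_seconds: float) -> float:
    """Time reduction achieved by the selection, as a percentage."""
    return 100.0 * (retest_all_seconds - selected_seconds) / retest_all_seconds

# Hypothetical execution times for five SUTs: (retest-all, selected subset).
timings = [(120.0, 82.0), (95.0, 76.0), (200.0, 160.0), (60.0, 49.0), (310.0, 235.0)]

per_sut = [rts_effectiveness(full, selected) for full, selected in timings]
print("per-SUT effectiveness:", [f"{e:.0f}%" for e in per_sut])
print("max:", f"{max(per_sut):.0f}%", "avg:", f"{sum(per_sut) / len(per_sut):.0f}%")
```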


2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
Mourad Badri ◽  
Fadel Toure

The aim of this paper is to evaluate empirically the relationship between a new metric (Quality Assurance Indicator, Qi) and the testability of classes in object-oriented systems. The Qi metric captures the distribution of the control flow in a system. We addressed testability from the perspective of unit testing effort. We collected data from five open-source Java software systems for which JUnit test cases exist. To capture the testing effort of classes, we used different metrics to quantify the corresponding JUnit test cases. Classes were classified, according to the required testing effort, into two categories: high and low. To evaluate the capability of the Qi metric to predict the testability of classes, we used univariate logistic regression. The performance of the prediction model was evaluated using Receiver Operating Characteristic (ROC) analysis. The results indicate that the univariate model based on the Qi metric is able to accurately predict the unit testing effort of classes.
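
A minimal sketch of the evaluation approach described above, assuming synthetic data: a univariate logistic regression on a single stand-in predictor, assessed with ROC analysis via scikit-learn. It is not the authors' exact setup or dataset.

```python
# Hedged sketch: univariate logistic regression + ROC AUC on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
qi = rng.uniform(0.0, 1.0, size=200)          # hypothetical per-class Qi values
# Hypothetical labels: lower Qi tends to mean high testing effort (label 1), plus noise.
high_effort = (qi + rng.normal(0, 0.15, size=200) < 0.5).astype(int)

model = LogisticRegression().fit(qi.reshape(-1, 1), high_effort)
scores = model.predict_proba(qi.reshape(-1, 1))[:, 1]
print("ROC AUC:", round(roc_auc_score(high_effort, scores), 3))
```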


2018 ◽  
Vol 7 (2.7) ◽  
pp. 146
Author(s):  
Lakshmi Prasad Mudarakola ◽  
J K.R. Sastry ◽  
V Chandra Prakash

Thorough testing of embedded systems is required, especially when the systems monitor and control mission-critical and safety-critical applications. Embedded systems must be tested comprehensively, which includes testing the hardware, the software, and both together. Embedded systems are highly intelligent devices that are infiltrating our daily lives: the mobile phone in your pocket and the wireless infrastructure behind it, routers, home theatre systems, air traffic control stations, and so on. Software now makes up 90% of the value of these devices. In this paper, the authors present different methods to test an embedded system using test cases generated through combinatorial techniques. Experimental results for testing a TMCNRS (Temperature Monitoring and Controlling Nuclear Reactor System) using test cases generated from combinatorial methods are also shown.
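
As a hedged illustration of combinatorial test generation (not the authors' method), the sketch below greedily builds a pairwise (2-way) covering set for a few hypothetical parameters of a temperature monitoring and control system.

```python
# Hedged sketch: greedy pairwise covering set; parameters and values are hypothetical.
from itertools import combinations, product

parameters = {
    "sensor_state": ["ok", "faulty"],
    "temperature":  ["low", "normal", "high"],
    "coolant_pump": ["on", "off"],
    "alarm":        ["enabled", "disabled"],
}
names = list(parameters)

# All value pairs that must be covered at least once.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in parameters[a]
    for vb in parameters[b]
}

def pairs_of(test):
    # The value pairs exercised by one complete test case.
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

tests = []
while uncovered:
    # Greedily pick the full combination that covers the most uncovered pairs.
    best = max(product(*parameters.values()),
               key=lambda combo: len(pairs_of(dict(zip(names, combo))) & uncovered))
    test = dict(zip(names, best))
    uncovered -= pairs_of(test)
    tests.append(test)

print(f"{len(tests)} pairwise tests instead of "
      f"{len(list(product(*parameters.values())))} exhaustive combinations")
```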


2010 ◽  
Vol 7 (4) ◽  
pp. 769-787 ◽  
Author(s):  
Robertas Damasevicius ◽  
Vytautas Stuikys

The concept of complexity is used in many areas of computer science and software engineering. Software complexity metrics can be used to evaluate and compare the quality of software development and maintenance processes and their products. Complexity management and measurement are especially important in novel programming technologies and paradigms, such as aspect-oriented programming, generative programming, and metaprogramming, where complex multi-language and multi-aspect program specifications are developed and used. This paper analyzes complexity management and measurement techniques and proposes five complexity metrics (Relative Kolmogorov Complexity, Metalanguage Richness, Cyclomatic Complexity, Normalized Difficulty, Cognitive Difficulty) for measuring the complexity of metaprograms along the information, metalanguage, graph, algorithm, and cognitive dimensions.
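
Kolmogorov complexity is uncomputable, so compression-based approximations are commonly used in practice. The following sketch, which is only an illustration and not the paper's metric definitions, estimates a relative complexity as the compression ratio of a program text.

```python
# Hedged sketch: compression ratio as a crude stand-in for relative Kolmogorov complexity.
import zlib

def relative_kolmogorov_complexity(text: str) -> float:
    """Compressed size over original size; values closer to 1 mean less redundancy."""
    data = text.encode("utf-8")
    return len(zlib.compress(data, level=9)) / len(data)

boilerplate = "print('hello')\n" * 50                       # highly repetitive program text
varied = "".join(f"x{i} = {i * i}\n" for i in range(50))    # less repetitive program text

print("repetitive:", round(relative_kolmogorov_complexity(boilerplate), 2))
print("varied    :", round(relative_kolmogorov_complexity(varied), 2))
```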


2021 ◽  
Vol 12 (1) ◽  
pp. 111-130
Author(s):  
Ankita Bansal ◽  
Abha Jain ◽  
Abhijeet Anand ◽  
Swatantra Annk

Large and reputed software companies are expected to deliver quality products. However, the industry suffers losses of approximately $500 billion due to shoddy software quality. The quality of a product, in terms of its accuracy, efficiency, and reliability, can be improved through testing, by focusing on effective test case generation and prioritization. The authors propose a test case generation technique based on an iterative listener genetic algorithm that generates test cases automatically. The proposed technique exploits its adaptive nature to address issues such as redundant test cases, insufficient test coverage, high execution time, and increased computational complexity by maintaining the diversity of the population, which decreases redundancy among test cases. The performance of the technique is compared with four existing test case generation algorithms in terms of computational complexity, execution time, and coverage, and the proposed technique is observed to outperform them.
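
For context, a plain genetic algorithm for test-input generation might look like the sketch below; the paper's iterative listener GA and its diversity-preserving mechanism are not reproduced, and the function under test is a toy stand-in.

```python
# Hedged sketch: a basic GA evolving small test suites toward higher branch coverage.
import random

def function_under_test(a, b):
    branches = set()
    if a > b: branches.add("a_gt_b")
    else: branches.add("a_le_b")
    if a + b > 100: branches.add("sum_large")
    if a == b: branches.add("equal")
    return branches

def coverage(suite):
    # Number of distinct branches exercised by the whole suite.
    return len(set().union(*(function_under_test(a, b) for a, b in suite)))

def random_suite(size=4):
    return [(random.randint(-50, 100), random.randint(-50, 100)) for _ in range(size)]

def mutate(suite):
    new = list(suite)
    new[random.randrange(len(new))] = (random.randint(-50, 100), random.randint(-50, 100))
    return new

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

random.seed(1)
population = [random_suite() for _ in range(20)]
for _ in range(30):                                # generations
    population.sort(key=coverage, reverse=True)
    parents = population[:10]                      # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=coverage)
print("best suite:", best, "covers", coverage(best), "branches")
```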


Author(s):  
Salma Azzouzi ◽  
Sara Hsaini ◽  
My El Hassan Charaf

Conformance testing may be seen as a means of exercising an implementation under test (IUT) by executing test cases in order to observe whether the behavior of the IUT conforms to its specification. However, the development of distributed testing frameworks is more complex, and the implementation of parallel test components (PTCs) must take into account the mechanisms and functions required to support interaction during PTC communication. In this article, the authors present another way to control the test execution of PTCs by introducing synchronization messages into the local test sequences. They then propose an agent-based simulation to implement synchronized local test sequences and address the problem of control and synchronization.
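
A toy simulation of the idea, assuming a message-queue model that is not taken from the article: two PTCs execute their local test sequences and interleave them through explicit synchronization messages.

```python
# Hedged sketch: two PTCs coordinating their local test steps via sync messages on queues.
import threading
import queue

def ptc(name, local_sequence, inbox, peer_inbox):
    # Each step: (label, token to wait for or None, token to emit or None).
    for label, wait_for, emit in local_sequence:
        if wait_for is not None:
            token = inbox.get()                    # block until the peer signals
            assert token == wait_for, f"unexpected sync message {token}"
        print(f"{name}: executing {label}")
        if emit is not None:
            peer_inbox.put(emit)                   # synchronization message to the peer

to_ptc1, to_ptc2 = queue.Queue(), queue.Queue()
ptc1_seq = [("send_a", None, "a_sent"), ("recv_b", "b_sent", None)]
ptc2_seq = [("recv_a", "a_sent", None), ("send_b", None, "b_sent")]
t1 = threading.Thread(target=ptc, args=("PTC1", ptc1_seq, to_ptc1, to_ptc2))
t2 = threading.Thread(target=ptc, args=("PTC2", ptc2_seq, to_ptc2, to_ptc1))
t1.start(); t2.start(); t1.join(); t2.join()
```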

