PRIORITIZATION OF COMBINATORIAL TEST CASES BY INCREMENTAL INTERACTION COVERAGE

Author(s):  
RUBING HUANG ◽  
XIAODONG XIE ◽  
DAVE TOWEY ◽  
TSONG YUEH CHEN ◽  
YANSHENG LU ◽  
...  

Combinatorial interaction testing is a well-recognized testing method and has been widely applied in practice, often with the assumption that all test cases in a combinatorial test suite have the same fault detection capability. However, when testing resources are limited, an alternative assumption may be that some test cases are more likely to reveal failures, making the order in which test cases are executed critical. To improve testing cost-effectiveness, prioritization of combinatorial test cases is employed. The most popular approach is based on interaction coverage: it prioritizes combinatorial test cases by repeatedly choosing an unexecuted test case that covers the largest number of uncovered parameter-value combinations of a given strength (level of interaction among parameters). However, this approach, which fixes the strength in advance, suffers from some drawbacks. Based on previous observations that the majority of faults in practical systems can usually be triggered by parameter interactions of small strengths, we propose a new strategy that prioritizes combinatorial test cases by incrementally adjusting the strength value. Experimental results show that our method performs better than random prioritization and prioritization by test case generation order, and performs better than the interaction-coverage-based test prioritization technique in most cases.
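As an illustration of the greedy, interaction-coverage-based selection described above, here is a minimal Python sketch at a fixed strength of t = 2 (pairwise); the function names and data layout are illustrative assumptions, not code from the paper.

```python
from itertools import combinations

def pairs_of(test):
    """All parameter-value pairs (t = 2) covered by one test case."""
    return {((i, test[i]), (j, test[j]))
            for i, j in combinations(range(len(test)), 2)}

def prioritize_by_pair_coverage(suite):
    """Repeatedly pick the test covering the most not-yet-covered pairs."""
    remaining = list(suite)
    covered, ordered = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(pairs_of(t) - covered))
        remaining.remove(best)
        covered |= pairs_of(best)
        ordered.append(best)
    return ordered

# Example: three parameters; each test case is a tuple of chosen values.
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(prioritize_by_pair_coverage(suite))
```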

2019 ◽  
Vol 10 (2) ◽  
pp. 1-26 ◽  
Author(s):  
Munish Khanna ◽  
Naresh Chauhan ◽  
Dilip Kumar Sharma

Regression testing of evolving software is a critical constituent of the software development process. Due to resource constraints, test case prioritization is one of the strategies followed in regression testing, in which the test case that, in the tester's judgment, best satisfies predefined objectives is executed earliest. In this study, all the experiments were performed on three web applications consisting of 65 to 100 pages, with lines of code ranging from 5000 to 7000. Various state-of-the-art approaches, such as heuristic, greedy, and metaheuristic approaches, were applied to identify the prioritized test sequence that maximizes the average percentage of fault detection. The performance of these algorithms was compared using different parameters, and it was concluded that the Artificial Bee Colony algorithm performs better than all the others. Two novel greedy algorithms are also proposed in the study, whose goal is to intelligently resolve ties, that is, situations in which all the competing test cases are of equal significance in achieving the objective. It was also validated that these proposed algorithms perform better than the traditionally followed greedy approach most of the time.
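For reference, the average percentage of fault detection (APFD) metric used to compare prioritized orderings can be computed as in the following sketch; the fault-matrix layout (row i marks the faults detected by the test at position i in the ordering) is an assumption made for illustration.

```python
def apfd(fault_matrix, num_faults):
    """APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2n)."""
    n = len(fault_matrix)
    first_pos = []
    for f in range(num_faults):
        pos = next((i + 1 for i, row in enumerate(fault_matrix) if row[f]), None)
        if pos is None:  # an undetected fault makes APFD undefined in this sketch
            raise ValueError(f"fault {f} never detected")
        first_pos.append(pos)
    return 1 - sum(first_pos) / (n * num_faults) + 1 / (2 * n)

# Example: 4 tests (in the order under evaluation), 3 faults.
matrix = [
    [True,  False, False],
    [False, True,  False],
    [False, False, True],
    [True,  True,  False],
]
print(apfd(matrix, 3))  # 0.625
```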


2014 ◽  
Vol 2014 ◽  
pp. 1-16 ◽  
Author(s):  
Rubing Huang ◽  
Jinfu Chen ◽  
Yansheng Lu

Random testing (RT) is a fundamental testing technique to assess software reliability, by simply selecting test cases in a random manner from the whole input domain. As an enhancement of RT, adaptive random testing (ART) has better failure-detection capability and has been widely applied in different scenarios, such as numerical programs, some object-oriented programs, and mobile applications. However, not much work has been done on the effectiveness of ART for programs with combinatorial input domains (i.e., sets of categorical data). To extend ART to combinatorial input domains, we have adopted different similarity measures that are widely used for categorical data in data mining, and have proposed two similarity measures based on interaction coverage. We then propose ART-CID, an extension of ART to combinatorial input domains, which selects as the next test case the element of the categorical input domain with the lowest similarity to the already generated test cases. Experimental results show that ART-CID generally performs better than RT with respect to different evaluation metrics.
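The following is an illustrative sketch of the ART-CID idea for categorical inputs: from a random candidate set, select the candidate with the lowest similarity to the already generated test cases. The simple value-overlap measure below is a stand-in for the similarity measures studied in the paper, and all names are assumptions.

```python
import random

def overlap(a, b):
    """Number of positions where two categorical test cases agree."""
    return sum(x == y for x, y in zip(a, b))

def next_test(executed, domain, k=10):
    """Pick the candidate whose worst-case similarity to executed tests is lowest."""
    candidates = [tuple(random.choice(vals) for vals in domain) for _ in range(k)]
    return min(candidates,
               key=lambda c: max((overlap(c, e) for e in executed), default=0))

domain = [["red", "green"], ["on", "off"], [0, 1, 2]]  # categorical parameters
executed = [("red", "on", 0)]
print(next_test(executed, domain))
```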


Author(s):  
Michael Omari ◽  
Jinfu Chen ◽  
Robert French-Baidoo ◽  
Yunting Sun

Fixed Sized Candidate Set (FSCS) is the first of a series of methods, referred to as Adaptive Random Testing (ART) methods, proposed to enhance the effectiveness of random testing (RT). Since its inception, test case generation overhead has been a major drawback to the success of ART. In FSCS, the bulk of this cost lies in distance computations between a set of randomly generated candidate test cases and the previously executed, non-failure-revealing test cases. Consequently, FSCS is caught in a logical trap of probing the distances between every candidate and all executed test cases before the best candidate is determined. Using data mining, however, we discovered that about 50% of all valid test cases are encountered much earlier in the distance-computation process, but without the benefit of hindsight, FSCS is unable to recognize them. This paper uses this information to propose a new strategy that predictively and proactively selects valid candidates at any point during the distance computation, without vetting every candidate. Theoretical analysis, simulations, and experimental studies all led to the same conclusion: 25% of the distance computations are wasteful and can be discarded without any repercussion on effectiveness.
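To make the overhead concrete, here is a plain FSCS-ART sketch for a one-dimensional numeric input domain; it shows where the distance-computation cost arises, namely in comparing every candidate against every executed test case. All names and parameters are illustrative, and the sketch does not include the predictive selection proposed in the paper.

```python
import random

def fscs_next(executed, low=0.0, high=1.0, k=10):
    """Pick the candidate whose nearest executed test case is farthest away."""
    candidates = [random.uniform(low, high) for _ in range(k)]
    best, best_dist = None, -1.0
    for c in candidates:
        # This inner loop over all executed tests is the overhead discussed above.
        nearest = min((abs(c - e) for e in executed), default=float("inf"))
        if nearest > best_dist:
            best, best_dist = c, nearest
    return best

executed = [0.1, 0.5, 0.9]
print(fscs_next(executed))
```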


2020 ◽  
Author(s):  
Rubing Huang ◽  
Haibo Chen ◽  
Yunan Zhou ◽  
Tsong Yueh Chen ◽  
Dave Towey ◽  
...  

Combinatorial interaction testing (CIT) aims at constructing a covering array (CA) of all value combinations at a specific interaction strength, to detect faults that are caused by the interaction of parameters. CIT has been widely used in different applications, with many algorithms and tools having been proposed to support CA construction. To date, however, there appear to have been no studies comparing different CA constructors when only some of the CA test cases are executed. In this paper, we present an investigation of five popular CA constructors: ACTS, Jenny, PICT, CASA and TCA. We conducted empirical studies examining the five programs, focusing on interaction coverage and fault detection. The experimental results show that when there is no preference or special justification for using other CA constructors, Jenny is recommended, because in many cases it achieves better interaction coverage and fault detection than the other four constructors. Our results also show that when using ACTS or CASA, their CAs must be prioritized before testing: although these CAs can achieve considerable interaction coverage and fault detection once a large number of their test cases have been executed, they may also produce the lowest rates of fault detection and interaction coverage.
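The kind of measurement used in such a comparison, the 2-way interaction coverage reached after executing only a prefix of a covering array, can be sketched as follows; the helper names and example domains are assumptions for illustration.

```python
from itertools import combinations

def pairs_of(test):
    """All 2-way parameter-value combinations covered by one test case."""
    return {((i, test[i]), (j, test[j]))
            for i, j in combinations(range(len(test)), 2)}

def total_pairs(domain):
    """Total number of 2-way combinations for the given parameter domains."""
    return sum(len(domain[i]) * len(domain[j])
               for i, j in combinations(range(len(domain)), 2))

def coverage_after(prefix, domain):
    covered = set().union(*map(pairs_of, prefix)) if prefix else set()
    return len(covered) / total_pairs(domain)

domain = [[0, 1], [0, 1], [0, 1, 2]]
ca = [(0, 0, 0), (0, 1, 1), (1, 0, 2), (1, 1, 0)]
for k in range(1, len(ca) + 1):
    print(k, round(coverage_after(ca[:k], domain), 2))
```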


2019 ◽  
Vol 10 (3) ◽  
pp. 19-37
Author(s):  
Ram Gouda ◽  
Chandraprakash V.

The combinatorial strategy is useful for reducing a system's input parameters to a compact set based on combinations of those parameters. It can be used to test the behaviour that arises when events are executed in a particular order. In highly configurable software systems, the input configurations are subject to constraints, and constructing test suites under such constraints poses various difficulties. The proposed Jaya-Bat optimization algorithm generates combinatorial interaction test cases effectively in the presence of constraints; it integrates the Jaya optimization algorithm (JOA) and the Bat optimization algorithm (BA). Experiments were carried out in terms of average size and average time to demonstrate the effectiveness of the proposed algorithm. The results show that the proposed algorithm is capable of selecting test cases optimally, with better performance.


Author(s):  
Harish Kumar ◽  
Vedpal ◽  
Naresh Chauhan

Software organizations face many constraints when trying to complete projects on time and within budget, a task made more challenging by rapid advances in technology. In this paper, a hierarchical system test case prioritization (HSTCP) tool (Kumar, 2013) is presented, which implements a hierarchical system test case prioritization technique. The tool helps to prioritize system test cases and reduce the testing cost of the software. The HSTCP tool works at three levels: at the first level it prioritizes the requirements; at the second level it prioritizes the modules of the highest-priority requirements; and at the third level it prioritizes the test cases of those modules in the prioritized order. The HSTCP tool is implemented in Java.
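A minimal sketch of this three-level ordering follows; the data layout and priority fields are assumptions for illustration and are not the HSTCP tool's actual structures (the tool itself is implemented in Java).

```python
def hierarchical_order(requirements):
    ordered_tests = []
    # Level 1: requirements by priority (higher value = more important).
    for req in sorted(requirements, key=lambda r: r["priority"], reverse=True):
        # Level 2: modules of that requirement.
        for mod in sorted(req["modules"], key=lambda m: m["priority"], reverse=True):
            # Level 3: test cases of that module.
            ordered_tests += sorted(mod["tests"], key=lambda t: t["priority"], reverse=True)
    return ordered_tests

requirements = [
    {"priority": 2, "modules": [
        {"priority": 1, "tests": [{"id": "T3", "priority": 5}]},
    ]},
    {"priority": 5, "modules": [
        {"priority": 3, "tests": [{"id": "T1", "priority": 2}, {"id": "T2", "priority": 9}]},
    ]},
]
print([t["id"] for t in hierarchical_order(requirements)])  # ['T2', 'T1', 'T3']
```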


2022 ◽  
Vol 13 (1) ◽  
pp. 0-0

Software Product Lines (SPLs) cover a mixture of features for testing a Software Application Program (SPA). Reducing testing cost is a major goal of software testing. In combinatorial testing (CT), maximizing fault-type coverage and reducing the test suite play a key role in lowering the testing cost of an SPA. The metaheuristic Genetic Algorithm (GA) does not offer the best outcome for the test suite optimization problem, owing to its mutation operation, and requires more computational time. Therefore, a Fault-Type Coverage Based Ant Colony Optimization (FTCBACO) algorithm is offered for test suite reduction in CT. The FTCBACO algorithm starts with the test cases in the test suite and assigns a separate ant to each test case. Ants select the best test cases by updating pheromone trails and choosing higher-probability trails. The best test case path of the ant with the least time is taken as the optimal solution for performing CT. Hence, the FTCBACO technique improves the test suite reduction rate and minimizes the computational time of reducing test cases for CT.
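As a rough illustration of ant-colony-style test suite reduction (a generic sketch under assumed inputs, not the authors' FTCBACO), each ant below greedily builds a subset of tests that covers all fault types, guided by pheromone, and the best subset found reinforces its trail.

```python
import random

def aco_reduce(coverage, num_ants=10, iters=20, rho=0.1):
    """coverage: dict test_id -> set of fault types that test detects."""
    all_faults = set().union(*coverage.values())
    pheromone = {t: 1.0 for t in coverage}
    best = list(coverage)                      # trivial solution: every test case
    for _ in range(iters):
        for _ in range(num_ants):
            chosen, covered = [], set()
            tests = list(coverage)
            while covered < all_faults:
                # Prefer tests with stronger pheromone and more new coverage.
                weights = [pheromone[t] * (1 + len(coverage[t] - covered))
                           for t in tests]
                t = random.choices(tests, weights=weights)[0]
                tests.remove(t)
                if coverage[t] - covered:
                    chosen.append(t)
                    covered |= coverage[t]
            if len(chosen) < len(best):
                best = chosen
        # Evaporate, then deposit pheromone on the best subset found so far.
        for t in pheromone:
            pheromone[t] *= (1 - rho)
        for t in best:
            pheromone[t] += 1.0 / len(best)
    return best

coverage = {"t1": {"A", "B"}, "t2": {"B"}, "t3": {"C"}, "t4": {"A", "C"}}
print(aco_reduce(coverage))  # e.g. ['t1', 't3'] or ['t4', 't1']
```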


2018 ◽  
Vol 7 (4.1) ◽  
pp. 28
Author(s):  
Abdulkarim Bello ◽  
Abubakar Md Sultan ◽  
Abdul Azim Abdul Ghani ◽  
Hazura Zulzalil

Regression testing, performed to provide confidence in newly developed or updated software, is a resource-consuming process. To ease this process, various techniques have been developed. One such technique, test case prioritization, orders test cases with respect to given goals, such that the test cases most important in achieving those goals are scheduled earlier in the testing session. Among such performance goals, the rate of fault detection measures how quickly faults are detected throughout the regression testing process. Improved detection of dependencies among faults provides faster feedback to developers, giving them the chance to debug leading faults earlier. Another goal, the rate of fault severity detection, measures how quickly severe faults can be detected in the testing process. Previous works addressed these issues but assumed that the costs of executing test cases and the severities of detected faults are uniform; in practice, test costs and fault severities vary. Furthermore, they did not consider incorporating evolutionary processes, such as genetic algorithms, into their techniques. In this work, we propose an evolutionary cost-cognizant regression testing approach that uses genetic algorithms to prioritize test cases according to their rate of severity detection for test cases connected with dependent faults. The aim is to reveal more severe leading faults earlier, at the least cost of executing the test suite, and to measure the efficacy of the technique using APFDc.
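For reference, here is a hedged sketch of the cost-cognizant APFDc metric mentioned above, following the standard formulation with per-test costs and per-fault severities; the input layout (detects[i][f], costs, severities) is an assumption made for illustration.

```python
def apfdc(detects, costs, severities):
    """Cost-cognizant APFD: weight each fault by severity and by the test costs
    remaining from its first detecting test onward."""
    n, m = len(costs), len(severities)
    total = 0.0
    for f in range(m):
        tf = next(i for i in range(n) if detects[i][f])  # first detecting test
        total += severities[f] * (sum(costs[tf:]) - 0.5 * costs[tf])
    return total / (sum(costs) * sum(severities))

detects = [[True, False], [False, True], [True, True]]  # 3 tests, 2 faults
costs = [2.0, 1.0, 3.0]
severities = [5.0, 1.0]
print(round(apfdc(detects, costs, severities), 3))
```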


2018 ◽  
Vol 7 (4.5) ◽  
pp. 242
Author(s):  
John Bruce. E ◽  
T Sasi Prabha

Regression testing is testing software with the intention of confirming that changes made to one part of a module do not adversely affect other parts of the module. Test case prioritization helps to reduce regression testing cost by ordering the test cases in such a way that optimized results are produced. Code coverage and fault detection, the main factors behind prioritization, are addressed with techniques such as heuristic methods, metaheuristic methods, and data mining techniques. The effectiveness of the applied techniques can be evaluated with metrics such as Average Percentage of Fault Detection (APFD), Average Percentage of Block Coverage (APBC), and Average Percentage of Decision Coverage (APDC). In this paper, a detailed survey of the various techniques adopted for the prioritization of test cases is presented.

