RESTRICTED RANDOM TESTING: ADAPTIVE RANDOM TESTING BY EXCLUSION

Author(s):
Kwok Ping Chan
Tsong Yueh Chen
Dave Towey

Restricted Random Testing (RRT) is a new method of testing software that improves upon traditional Random Testing (RT) techniques. Research has indicated that failure patterns (portions of an input domain which, when executed, cause the program to fail or reveal an error) can influence the effectiveness of testing strategies. For certain types of failure patterns, it has been found that a widespread and even distribution of test cases in the input domain can be significantly more effective at detecting failure compared with ordinary RT. Testing methods based on RT, but which aim to achieve even and widespread distributions, have been called Adaptive Random Testing (ART) strategies. One implementation of ART is RRT. RRT uses exclusion zones around executed, but non-failure-causing, test cases to restrict the regions of the input domain from which subsequent test cases may be drawn. In this paper, we introduce the motivation behind RRT, explain the algorithm and detail some empirical analyses carried out to examine the effectiveness of the method. Two versions of RRT are presented: Ordinary RRT (ORRT) and Normalized RRT (NRRT). The two versions share the same fundamental algorithm, but differ in their treatment of non-homogeneous input domains. Investigations into the use of alternative exclusion shapes are outlined, and a simple technique for reducing the computational overheads of RRT, prompted by the alternative exclusion shape investigations, is also explained. The performance of RRT is compared with RT and another ART method based on maximized minimum test case separation (DART), showing excellent improvement over RT and a very favorable comparison with DART.
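To make the exclusion idea concrete, here is a minimal sketch of how an exclusion-based generator might look. It is not the authors' implementation: it assumes a two-dimensional unit-square input domain, circular exclusion zones sized so that they cover a fixed fraction of the domain, and a hypothetical `run_sut` predicate that returns True when the program fails on an input.

```python
import math
import random

def rrt_generate(executed, radius):
    """Keep drawing random points from the unit square until one lies
    outside the exclusion circles around all executed, non-failure-causing
    test cases."""
    while True:
        candidate = (random.random(), random.random())
        if all(math.dist(candidate, t) > radius for t in executed):
            return candidate

def rrt_test(run_sut, target_ratio=0.5, max_tests=1000):
    """Run RRT until a failure is found or the budget is exhausted.
    The exclusion radius shrinks as tests accumulate so that the total
    excluded area stays near target_ratio (an illustrative choice)."""
    executed = []
    for n in range(1, max_tests + 1):
        radius = math.sqrt(target_ratio / (math.pi * max(len(executed), 1)))
        tc = rrt_generate(executed, radius)
        if run_sut(tc):              # True means this input revealed a failure
            return tc, n             # failure-causing input and its F-measure
        executed.append(tc)
    return None, max_tests
```

For example, `rrt_test(lambda p: p[0] > 0.9 and p[1] > 0.9)` hunts for a block failure pattern in the top-right corner of the domain.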

2014
Vol 2014
pp. 1-16
Author(s):
Rubing Huang
Jinfu Chen
Yansheng Lu

Random testing (RT) is a fundamental testing technique for assessing software reliability, which simply selects test cases at random from the whole input domain. As an enhancement of RT, adaptive random testing (ART) has better failure-detection capability and has been widely applied in different scenarios, such as numerical programs, some object-oriented programs, and mobile applications. However, not much work has been done on the effectiveness of ART for programs with combinatorial input domains (i.e., inputs drawn from sets of categorical data). To extend ART to combinatorial input domains, we adopt several similarity measures that are widely used for categorical data in data mining, and we propose two further similarity measures based on interaction coverage. We then propose a new version, named ART-CID, as an extension of ART to combinatorial input domains: it selects as the next test case the element of the categorical data that has the lowest similarity to the already generated test cases. Experimental results show that ART-CID generally performs better than RT with respect to different evaluation metrics.
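The selection rule can be illustrated with a short sketch. The overlap similarity below is only a stand-in for the categorical and interaction-coverage-based measures the paper studies, and the candidate-set size and parameter names are illustrative assumptions.

```python
import random

def overlap_similarity(a, b):
    """Fraction of parameters on which two categorical test cases agree
    (a simple stand-in for the similarity measures discussed above)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def next_test_case(domain, executed, num_candidates=10):
    """Pick, from a random candidate set, the value combination whose
    highest similarity to any already-generated test case is lowest."""
    candidates = [tuple(random.choice(values) for values in domain)
                  for _ in range(num_candidates)]
    if not executed:
        return random.choice(candidates)
    return min(candidates,
               key=lambda c: max(overlap_similarity(c, t) for t in executed))

# Example: three categorical parameters
domain = [["red", "green", "blue"], ["small", "large"], ["on", "off"]]
print(next_test_case(domain, [("red", "small", "on")]))
```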


Author(s):
Michael Omari
Jinfu Chen
Robert French-Baidoo
Yunting Sun

Fixed Sized Candidate Set (FSCS) is the first of a series of methods, collectively referred to as Adaptive Random Testing (ART) methods, proposed to enhance the effectiveness of random testing (RT). Since its inception, test case generation overhead has been a major drawback to the success of ART. In FSCS, the bulk of this cost lies in the distance computations between a set of randomly generated candidate test cases and the previously executed but non-failure-causing test cases. Consequently, FSCS is caught in the logical trap of probing the distances between every candidate and all executed test cases before the best candidate can be determined. Using data mining, however, we discovered that about 50% of all valid test cases are encountered much earlier in the distance computation process; but without the benefit of hindsight, FSCS is unable to validate them, a wild goose chase. This paper uses this observation to propose a new strategy that predictively and proactively selects valid candidates at any point during the distance computation process, without vetting every candidate. Theoretical analysis, simulations and experimental studies all led to a similar conclusion: 25% of the distance computations are wasteful and can be discarded without any repercussion on effectiveness.
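For reference, the baseline FSCS loop whose distance computations the paper analyzes looks roughly as follows; this sketch assumes a numeric d-dimensional unit domain and the usual candidate-set size of 10. Each selection costs on the order of k·n distance computations, which is the overhead the proposed predictive strategy trims.

```python
import math
import random

def fscs_next(executed, dim=2, k=10):
    """Fixed Size Candidate Set ART: among k random candidates, select the
    one whose distance to its nearest executed test case is largest
    (the max-min rule)."""
    candidates = [tuple(random.random() for _ in range(dim)) for _ in range(k)]
    if not executed:
        return candidates[0]
    def nearest_distance(c):
        return min(math.dist(c, t) for t in executed)
    return max(candidates, key=nearest_distance)
```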


2019
Vol 2019
pp. 1-15
Author(s):
Zhibo Li
Qingbao Li
Lei Yu

Random testing (RT) is widely applied in software testing because of advantages such as simplicity, unbiasedness, and ease of implementation. Adaptive random testing (ART) enhances RT by distributing test cases as evenly as possible over the input domain, thereby improving its effectiveness. Fixed Size Candidate Set (FSCS) is one of the best-known ART algorithms, but its high failure-detection effectiveness shows only at low failure rates in low-dimensional spaces. To address this problem, the boundary effect in the test case distribution is analyzed, and an FSCS algorithm with a limited candidate set (LCS-FSCS) is proposed. Using information gathered from successful test cases (i.e., test inputs that did not cause failures), a tabu generation domain for candidate test cases is produced; this tabu domain is then excluded from the current candidate generation domain. Finally, the number of test cases at the boundary is reduced by constraining the candidate generation domain, so the boundary effect is effectively relieved and the distribution of test cases becomes more even. The results of simulation experiments show that the failure-detection effectiveness of LCS-FSCS is significantly improved in high-dimensional spaces. Its failure-detection effectiveness is also improved for high failure rates, and the gap in failure-detection effectiveness between different failure rates is narrowed. The results of an experiment conducted on some real-life programs show that LCS-FSCS is less effective than FSCS only when the failure distribution is concentrated on the boundary. In general, the effectiveness of LCS-FSCS is higher than that of FSCS.
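The general idea of limiting the candidate generation domain can be sketched as below. The fixed tabu margin along the boundary is purely illustrative; in LCS-FSCS the tabu region is derived from the successful test cases rather than being a constant.

```python
import math
import random

def lcs_fscs_next(executed, dim=3, k=10, margin=0.1):
    """Sketch of a limited candidate set: candidates are drawn only from an
    inner sub-domain (a tabu band of width `margin` along each boundary is
    excluded), after which the usual FSCS max-min rule is applied."""
    candidates = [tuple(random.uniform(margin, 1.0 - margin) for _ in range(dim))
                  for _ in range(k)]
    if not executed:
        return candidates[0]
    return max(candidates,
               key=lambda c: min(math.dist(c, t) for t in executed))
```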


2008
Vol 81 (12)
pp. 2146-2162
Author(s):
Tsong Yueh Chen
Fei-Ching Kuo
Huai Liu

Author(s):
Everton Note Narciso
Márcio Eduardo Delamaro
Fátima De Lourdes Dos Santos Nunes

Time and resource constraints must be taken into account in software testing activities, and thus optimizing the test suite is fundamental in the development process. In this context, test case selection aims to eliminate redundant or unnecessary test data, which is crucial for the definition of test strategies. This paper presents a systematic review of test case selection, conducted over a selection of 449 articles published in leading journals and conferences in Computer Science. We address the state of the art by collecting and comparing existing evidence on the methods used in different software domains and on the methods used to evaluate test case selection. Our study identified 32 papers that met the research objectives; they featured 18 different selection methods and were evaluated through 71 case studies. The most commonly reported methods are adaptive random testing, genetic algorithms and greedy algorithms. Most approaches rely on heuristics such as diversity of test cases and code or model coverage. This paper also discusses the key concepts and approaches, areas of application, and evaluation metrics inherent to the test case selection methods available in the literature.


Software testing is considered one of the most powerful and important phases of development: an effective testing process leads to more accurate and reliable results and to higher-quality software products. Random testing (RT) is a major software testing strategy, and its simplicity makes it among the most efficient strategies with respect to the time required for test case selection; its significant drawback is low defect-detection efficacy. This drawback can be overcome by Adaptive Testing (AT), but AT involves considerable computational complexity. One important method for improving RT is adaptive random testing (ART). Another class of testing strategies is partition testing, a standard software testing technique in which the input domain is divided into a fixed number of disjoint partitions and test cases are selected from within each partition. The hybrid approach combining AT and random partition testing (RPT) is the existing ARPT strategy. In ARPT, random partitioning is improved by introducing different clustering algorithms that address the parameter space of the problem between the target method and the objective function of the test data; in this way random partitioning is improved to reduce the time consumption and complexity of ARPT testing strategies. The parameters of the enhanced ARPT testing approaches are optimized using different optimization algorithms. The computational complexity of Optimized Improved ARPT (OIARPT) testing strategies is further reduced by selecting the best test cases using a Support Vector Machine (SVM). In this paper, the Optimized Improved ARPT testing strategies with SVM are unified and named Unified ARPT (UARPT), which enhances testing performance and reduces the time complexity of testing software.
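As a point of reference for the partitioning step described above, here is a minimal sketch of random partition testing over a one-dimensional numeric input domain; the equal-width split and the function name are illustrative assumptions, not the ARPT clustering scheme itself.

```python
import random

def random_partition_testing(lo, hi, num_partitions, per_partition=1):
    """Split the input domain [lo, hi) into disjoint equal-width partitions
    and draw test cases uniformly from within each partition."""
    width = (hi - lo) / num_partitions
    tests = []
    for i in range(num_partitions):
        start = lo + i * width
        tests.extend(random.uniform(start, start + width)
                     for _ in range(per_partition))
    return tests

# Example: one test case from each of 5 partitions over [0, 100)
print(random_partition_testing(0.0, 100.0, 5))
```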


2015
Vol 22 (2)
pp. 316
Author(s):
Maximiliano Cristiá
Joaquín Cuenca
Claudia Frydman

Model-based testing (MBT) studies how test cases are generated from a model of the system under test (SUT). Many MBT methods rely on building an automaton from the model and then generating test cases by covering the automaton with different path coverage criteria. However, if the model of the SUT is a logical formula over some complex mathematical theories (such as set theory), it may be more natural or intuitive to apply coverage criteria directly over the formula. On the other hand, domain partition, i.e. the partition of the input domain of model operations, is one of the main techniques in MBT. Partitioning is conducted by applying different rules or heuristics, and engineers may find it difficult to decide what, where and how these rules should be applied. In this paper we propose a set of coverage criteria based on domain partition for set-based specifications. We call them testing strategies. Testing strategies play a role similar to that of path- or data-based coverage criteria in structural testing. Furthermore, we show a partial order of testing strategies, as is done in structural testing. We also describe an implementation of testing strategies for the Test Template Framework, an MBT method for the Z notation, and a scripting language that allows users to implement testing strategies.


Author(s):
Arbab Alamgir
Abu Khari A’ain
Norlina Paraman
Usman Ullah Sheikh

Testing and verification of digital circuits is of vital importance in the electronics industry. Moreover, key designs require preservation of their intellectual property, which may restrict access to the internal structure of the circuit under test. Random testing is a classical solution for black-box testing, as it generates test patterns without using the structural implementation of the circuit under test. However, random testing ignores previously applied test patterns when generating subsequent ones. An improvement to random testing is Antirandom, which diversifies every subsequent test pattern in the test sequence; however, its computationally intensive distance calculation restricts its scalability to large circuits under test. Fixed size candidate set adaptive random testing uses a predetermined number of patterns for distance calculations to avoid this computational complexity: for each candidate test pattern, a max-min distance against the previously executed patterns is computed. However, the reduction in computational complexity reduces the effectiveness of the test set in terms of fault coverage. This paper uses a total Cartesian distance based approach on a fixed size candidate set to enhance diversity in the test sequence. The proposed approach has a two-way effect on test pattern generation, as it lowers the computational intensity while enhancing fault coverage. Fault simulation results on ISCAS’85 and ISCAS’89 benchmark circuits show that the fault coverage of the proposed method increases by up to 20.22% compared to the previous method.
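A rough sketch of total-distance selection over a fixed size candidate set is given below. Treating the Cartesian distance between two binary patterns as the Euclidean distance over their bits is an assumption made for illustration; the candidate-set size and pattern width are likewise arbitrary.

```python
import math
import random

def cartesian_distance(a, b):
    """Euclidean distance between two equal-length binary patterns
    (for 0/1 bits this equals the square root of the Hamming distance)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def next_pattern(executed, width, k=10):
    """From a fixed size set of k random candidate patterns, pick the one
    with the largest total distance to all previously applied patterns,
    maximising diversity in the test sequence."""
    candidates = [tuple(random.randint(0, 1) for _ in range(width)) for _ in range(k)]
    if not executed:
        return candidates[0]
    return max(candidates,
               key=lambda c: sum(cartesian_distance(c, p) for p in executed))

# Example: build a short diverse sequence of 8-bit test patterns
sequence = []
for _ in range(4):
    sequence.append(next_pattern(sequence, width=8))
print(sequence)
```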

