Speeding-up mutation testing via data compression and state infection

2016 ◽
Author(s):  
Qianqian Zhu ◽  
Annibale Panichella ◽  
Andy Zaidman

Mutation testing is widely considered a high-end test criterion due to the vast number of mutants it generates. Although many efforts have been made to reduce the computational cost of mutation testing, its scalability issue remains in practice. In this paper, we introduce a novel method to speed up mutation testing based on state infection information. In addition to filtering out uninfected test executions, we further select a subset of mutants and a subset of test cases to run, leveraging data-compression techniques. In particular, we adopt Formal Concept Analysis (FCA) to group similar mutants together and then select test cases to cover these mutants. To evaluate our method, we conducted an experimental study on six open-source Java projects. We used EvoSuite to automatically generate test cases and to collect mutation data. The initial results show that our method can reduce the execution time by 83.93% with only 0.257% loss in precision.
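
The selection idea can be pictured with a small sketch. The Python fragment below is not the authors' implementation: it assumes each mutant is represented by the set of tests whose execution state it infects, groups mutants with identical infection sets (a simplification of grouping similar mutants via formal concepts), and greedily picks test cases that cover every group. All names and data are hypothetical.

```python
from collections import defaultdict

def group_and_select(infection):
    """Group mutants with identical infection sets, then greedily pick test
    cases covering every group (a simplification of the FCA-based selection).

    infection: mutant id -> frozenset of test ids whose program state the
    mutant infects (hypothetical data layout).
    """
    groups = defaultdict(list)
    for mutant, tests in infection.items():
        if tests:                        # uninfected executions are filtered out
            groups[tests].append(mutant)

    # One representative mutant per group is enough to run.
    selected_mutants = [members[0] for members in groups.values()]

    # Greedy set cover: pick tests until every group has at least one test.
    uncovered = set(groups)
    all_tests = set().union(*uncovered) if uncovered else set()
    selected_tests = []
    while uncovered:
        best = max(all_tests, key=lambda t: sum(t in g for g in uncovered))
        selected_tests.append(best)
        uncovered = {g for g in uncovered if best not in g}
    return selected_mutants, selected_tests

# Hypothetical mutants m1..m4 and the tests whose state they infect.
example = {
    "m1": frozenset({"t1", "t2"}),
    "m2": frozenset({"t1", "t2"}),   # same infection set as m1 -> same group
    "m3": frozenset({"t3"}),
    "m4": frozenset(),               # never infects any state -> discarded
}
print(group_and_select(example))
```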


2020 ◽  
Vol 20 (5) ◽  
pp. 799-814
Author(s):  
RICHARD TAUPE ◽  
ANTONIUS WEINZIERL ◽  
GERHARD FRIEDRICH

Generalising and re-using knowledge learned while solving one problem instance has been neglected by state-of-the-art answer set solvers. We suggest a new approach that generalises learned nogoods for re-use to speed up the solving of future problem instances. Our solution combines well-known ASP solving techniques with deductive logic-based machine learning. Solving performance can be improved by adding learned non-ground constraints to the original program. We demonstrate the effects of our method by means of realistic examples, showing that our approach requires low computational cost to learn constraints that yield significant performance benefits in our test cases. These benefits can be seen with ground-and-solve systems as well as lazy-grounding systems, although in some cases ground-and-solve systems suffer additional grounding overhead induced by the added constraints. By means of conflict minimization, non-minimal learned constraints can be reduced, which can result in significant reductions of grounding and solving effort, as our experiments show.
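
A rough illustration of how a learned ground nogood might be generalised into a non-ground constraint is sketched below in Python. It simply replaces every constant with a fresh variable and emits an ASP constraint string; the literal representation, and the assumption that such naive lifting suffices, are illustrative only (the paper additionally applies conflict minimisation to the learned constraints).

```python
import re

def lift_constraint(ground_literals):
    """Generalise a learned ground constraint into a non-ground ASP constraint
    by replacing each distinct constant with a fresh variable (naive lifting;
    the representation of literals here is illustrative only).
    """
    var_of = {}                                   # constant -> variable name

    def to_var(match):
        const = match.group(0)
        if const not in var_of:
            var_of[const] = f"X{len(var_of) + 1}"
        return var_of[const]

    # Constants are lowercase terms directly followed by ',' or ')'.
    lifted = [re.sub(r"\b[a-z]\w*(?=[,)])", to_var, lit) for lit in ground_literals]
    return ":- " + ", ".join(lifted) + "."

# Hypothetical learned nogood: these two ground atoms must not hold together.
print(lift_constraint(["assigned(task1,machine2)", "assigned(task3,machine2)"]))
# -> :- assigned(X1,X2), assigned(X3,X2).
```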


2021 ◽  
Vol 179 (3) ◽  
pp. 295-319
Author(s):  
Longchun Wang ◽  
Lankun Guo ◽  
Qingguo Li

Formal Concept Analysis (FCA) has been proven to be an effective method of restructuring complete lattices and various algebraic domains. In this paper, the notion of contractive mappings over formal contexts is proposed, which can be viewed as a generalization of interior operators on sets into the framework of FCA. Then, by considering subset-selections consistent with contractive mappings, the notions of attribute continuous formal contexts and continuous concepts are introduced. It is shown that the set of continuous concepts of an attribute continuous formal context forms a continuous domain, and that every continuous domain can be restructured in this way. Moreover, the notion of F-morphism is introduced, yielding a category equivalent to that of continuous domains with Scott continuous functions. The paper also investigates the representations of various subclasses of continuous domains, including algebraic domains and stably continuous semilattices.
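
As a concrete illustration of the FCA machinery involved, the Python sketch below computes the formal concepts of a toy context and checks interior-operator-style conditions (f(X) ⊆ X and monotonicity) for a mapping on object sets. The paper's exact definition of contractive mappings over formal contexts may impose further requirements; the context, mapping, and checks here are purely illustrative.

```python
from itertools import chain, combinations

# Toy formal context: objects -> attributes they have (illustrative only).
CONTEXT = {"g1": {"a", "b"}, "g2": {"b", "c"}, "g3": {"a", "b", "c"}}
ATTRIBUTES = {"a", "b", "c"}

def common_attrs(objs):
    """Derivation operator: attributes shared by every object in objs."""
    return set(ATTRIBUTES) if not objs else set.intersection(*(CONTEXT[o] for o in objs))

def common_objs(attrs):
    """Derivation operator: objects possessing every attribute in attrs."""
    return {o for o, has in CONTEXT.items() if attrs <= has}

def formal_concepts():
    """All (extent, intent) pairs of the context."""
    objs = list(CONTEXT)
    candidates = chain.from_iterable(combinations(objs, r) for r in range(len(objs) + 1))
    concepts = set()
    for s in candidates:
        intent = common_attrs(set(s))
        extent = common_objs(intent)
        concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

def is_contractive(f, universe):
    """Check f(X) <= X and monotonicity on all subsets of `universe` -- the
    interior-operator-like conditions that the paper's notion generalises."""
    subsets = [frozenset(c) for r in range(len(universe) + 1)
               for c in combinations(sorted(universe), r)]
    return all(f(x) <= x for x in subsets) and \
           all(f(x) <= f(y) for x in subsets for y in subsets if x <= y)

print(sorted((sorted(e), sorted(i)) for e, i in formal_concepts()))
# Example mapping: keep only the objects that carry attribute "a".
keep_a = lambda x: {o for o in x if "a" in CONTEXT[o]}
print(is_contractive(keep_a, set(CONTEXT)))   # True
```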


Author(s):  
Jimmy Ming-Tai Wu ◽  
Qian Teng ◽  
Shahab Tayeb ◽  
Jerry Chun-Wei Lin

High average-utility itemset mining (HAUIM) was established to provide a fairer measure than generic high-utility itemset mining (HUIM) for revealing interesting patterns. In practical applications, the database changes dynamically as insertion/deletion operations are performed. Several works were designed to handle the insertion process, but fewer studies have focused on the deletion process for knowledge maintenance. In this paper, we develop a PRE-HAUI-DEL algorithm that utilizes the pre-large concept on HAUIM for handling transaction deletion in dynamic databases. The pre-large concept serves as a buffer on HAUIM that reduces the number of database scans when the database is updated, particularly under transaction deletion. Two upper-bound values are also established to prune unpromising candidates early, which speeds up the computation. The experimental results show that the designed PRE-HAUI-DEL algorithm performs well compared to the Apriori-like model in terms of runtime, memory, and scalability on dynamic databases.
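
The pre-large idea behind the buffer can be sketched as follows. This Python fragment uses the classic two-threshold scheme (upper threshold Su, lower threshold Sl): itemsets whose ratio lies between the thresholds are kept as pre-large, and a full database rescan is only needed once the number of deleted transactions exceeds a safety bound. The bound shown is the standard support-style bound for deletion; the paper's bounds are defined on average utility and may differ, so treat the formulas and thresholds as assumptions.

```python
def deletion_safety_bound(su, sl, db_size):
    """Maximum number of deleted transactions that cannot turn a 'small'
    itemset into a 'large' one between rescans.
    Worst case: ratio sl*d / (d - q) stays below su  iff  q < (su - sl)*d/su.
    (Support-style derivation; the paper's average-utility bounds may differ.)
    """
    return (su - sl) * db_size / su

def classify(ratio, su, sl):
    """Large / pre-large / small status of an itemset's utility ratio."""
    if ratio >= su:
        return "large"
    if ratio >= sl:
        return "pre-large"        # kept in the buffer to avoid rescans
    return "small"

def needs_rescan(deleted_so_far, su, sl, db_size):
    """Rescan the database only once the safety bound is exceeded."""
    return deleted_so_far > deletion_safety_bound(su, sl, db_size)

# Hypothetical thresholds: upper 0.3, lower 0.2, 10,000 original transactions.
print(deletion_safety_bound(0.3, 0.2, 10_000))   # ~3333 deletions are safe
print(classify(0.25, 0.3, 0.2))                  # 'pre-large'
print(needs_rescan(4_000, 0.3, 0.2, 10_000))     # True -> rescan required
```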


2013 ◽  
Vol 760-762 ◽  
pp. 1708-1712
Author(s):  
Ying Fang Li ◽  
Ying Jiang Li ◽  
Yan Li ◽  
Yang Bo

At present, as the number of web service resources on the network increases drastically, quickly and efficiently finding the needed services among published services has become a problem to resolve. Aiming at the low efficiency of traditional web service discovery, formal concept analysis (FCA) is introduced into semantic web service matching, and a semantic web service matching algorithm is proposed. This method introduces the concept of limited inheritance into the concept-lattice-based semantic similarity calculation. Adjusting the calculation in this way significantly enhances service function matching in practical applications.
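
As an illustration of how limited inheritance might enter a lattice-based similarity, the Python sketch below scores two concepts by the share of ancestors they have in common, counting only ancestors within a fixed number of levels. The hierarchy, the level limit, and the similarity formula are hypothetical stand-ins for the paper's concept-lattice calculation.

```python
# Toy concept hierarchy (child -> parent); illustrative, not the paper's lattice.
PARENT = {
    "CarRental": "VehicleService",
    "TruckRental": "VehicleService",
    "VehicleService": "Service",
    "HotelBooking": "Service",
    "Service": None,
}

def ancestors(concept, limit):
    """Ancestors of a concept, restricted to `limit` levels
    (a stand-in for the 'limited inheritance' restriction)."""
    chain, node = [], PARENT.get(concept)
    while node is not None and len(chain) < limit:
        chain.append(node)
        node = PARENT.get(node)
    return chain

def similarity(c1, c2, limit=2):
    """Share of (limited) ancestors two concepts have in common --
    an illustrative lattice-style similarity, not the paper's formula."""
    a1 = set(ancestors(c1, limit)) | {c1}
    a2 = set(ancestors(c2, limit)) | {c2}
    return len(a1 & a2) / len(a1 | a2)

print(similarity("CarRental", "TruckRental"))   # 0.5: same nearby ancestors
print(similarity("CarRental", "HotelBooking"))  # 0.25: only 'Service' shared
```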


2007 ◽  
Vol 158 (23) ◽  
pp. 2627-2640 ◽  
Author(s):  
Ming-Wen Shao ◽  
Min Liu ◽  
Wen-Xiu Zhang
