Evolutionary Cost-cognizant Test Case Selection and Prioritization for Object-Oriented Programs

Test case prioritization (TCP) is a software testing technique that finds an ideal ordering of test cases for regression testing, so that testers can obtain the maximum benefit from their test suite even if the testing process is stopped at some arbitrary point. The recent trend in software development is to use the object-oriented (OO) paradigm. This paper proposes a cost-cognizant TCP approach for object-oriented software that uses path-based integration testing. Path-based integration testing identifies the possible execution paths and extracts them from the Java System Dependence Graph (JSDG) model of the source code using a forward slicing technique. Afterward, an evolutionary algorithm (EA) is employed to prioritize test cases based on the severity detected per unit cost for each of the dependent faults. The proposed technique, known as Evolutionary Cost-Cognizant Regression Test Case Prioritization (ECRTP), is implemented as a regression testing approach for the experiment.
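
As a rough illustration of this kind of evolutionary prioritization, the sketch below evolves a test-case ordering whose fitness rewards exposing high fault severity per unit cost early. The per-test severity and cost figures, the fitness formula, and the GA parameters are hypothetical stand-ins, not the ECRTP formulation itself.

```python
import random

# Hypothetical inputs: for each test case, the total severity of the
# dependent faults it detects and its execution cost.
severity = {"t1": 8.0, "t2": 3.0, "t3": 9.0, "t4": 1.0}
cost     = {"t1": 4.0, "t2": 1.0, "t3": 6.0, "t4": 2.0}

def fitness(order):
    """Reward orderings that expose high severity per unit cost early."""
    score, spent = 0.0, 0.0
    for t in order:
        spent += cost[t]
        score += severity[t] / spent          # earlier detection weighs more
    return score

def evolve(tests, generations=200, pop_size=20, mutation_rate=0.2):
    """Simple permutation GA: tournament selection plus swap mutation."""
    pop = [random.sample(tests, len(tests)) for _ in range(pop_size)]
    for _ in range(generations):
        parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        children = []
        for p in parents:
            child = p[:]
            if random.random() < mutation_rate:
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

print(evolve(list(severity)))
```

A permutation encoding with swap mutation keeps every individual a valid ordering of the suite.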

2019 ◽  
Vol 8 (4) ◽  
pp. 8457-8462

Regression testing is a necessary maintenance activity in the software industry whereby modified software programs are revalidated to make sure that changes do not adversely affect their behavior. Test case prioritization (TCP) is one of the most effective methods in regression testing, whereby test cases are rescheduled in an appropriate order for execution to increase test effectiveness in meeting performance goals such as increasing the rate of fault detection. This paper explores efforts that have been carried out in relation to TCP frameworks. Through a review of the related literature, ten existing frameworks were identified, classified and reviewed, of which two are Bayesian network-based, five are multi-objective, and the rest are varied in terms of aspects and purposes. Accordingly, this study analyzes those frameworks based on their proposed year, TCP factors, number of test cases used, evaluation metrics and criteria, as well as experimental subjects. The results show that the stated frameworks do not integrate nature-inspired algorithms as enhancing optimization techniques, while several others were insufficiently evaluated against the stated evaluation criteria and metrics for an effective and practical testing process. There is also a scarcity of frameworks that focus on regression test efficiency. This study indicates the need for further research to enhance TCP frameworks along several directions of practical importance in this field, such as evaluation issues, specific knowledge dependency, and objective deviation. At the end of this study, several future directions, such as assistance from nature-inspired algorithms, are proposed, and a number of limitations are identified and highlighted.


Author(s):  
Dharmveer Kumar Yadav ◽  
Sandip Kumar Dutta

In software maintenance, regression testing is performed to validate modified source code. Regression testing ensures that the modified code does not affect the earlier tested program. Because of resource and time constraints, regression testing is a time-consuming and very expensive activity. During regression testing, a set of test cases from the existing test suite is reused. To minimize the cost of regression testing, the researchers proposed a test case prioritization approach based on clustering techniques. In recent years, research on regression testing has made significant progress for object-oriented software. The empirical results show the importance of the K-means clustering algorithm in achieving an effective result. The experimental results show that the proposed approach achieves a higher fault-detection value than other approaches.
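
A minimal sketch of clustering-based prioritization in this spirit is shown below. It assumes per-test statement-coverage vectors (made up here) and uses scikit-learn's KMeans; the cluster count and the round-robin pick order are illustrative choices rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical statement-coverage vectors: rows are test cases, columns are
# code elements (1 = covered). Real data would come from coverage tooling.
coverage = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
])
test_ids = ["t1", "t2", "t3", "t4", "t5"]

# Group similar test cases, then interleave picks across clusters so that
# dissimilar tests (which tend to reveal different faults) run early.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(coverage)
clusters = {}
for tid, label in zip(test_ids, kmeans.labels_):
    clusters.setdefault(label, []).append(tid)

prioritized = []
while any(clusters.values()):
    for label in list(clusters):
        if clusters[label]:
            prioritized.append(clusters[label].pop(0))

print(prioritized)   # one test drawn from each cluster in turn
```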


2016 ◽  
Vol 2016 ◽  
pp. 1-20 ◽  
Author(s):  
S. Panda ◽  
D. Munjal ◽  
D. P. Mohapatra

Test case prioritization focuses on finding a suitable order of execution of the test cases in a test suite to meet some performance goals like detecting faults early. It is likely that some test cases execute the program parts that are more prone to errors and will detect more errors if executed early during the testing process. Finding an optimal order of execution for the selected regression test cases saves time and cost of retesting. This paper presents a static approach to prioritizing the test cases by computing the affected component coupling (ACC) of the affected parts of object-oriented programs. We construct a graph named the affected slice graph (ASG) to represent these affected program parts. We determine the fault-proneness of the nodes of the ASG by computing their respective ACC values. We assign higher priority to those test cases that cover the nodes with higher ACC values. Our analysis with mutation faults shows that the test cases executing the fault-prone program parts have a higher chance of revealing faults earlier than other test cases in the test suite. The results obtained from seven case studies justify that our approach is feasible and gives acceptable performance in comparison to some existing techniques.
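
The priority computation can be pictured with the small sketch below. It assumes the ACC value of each ASG node and the nodes covered by each test case have already been obtained from static analysis; the numbers and the simple sum-of-ACC score are illustrative, not the paper's exact definition.

```python
# Hypothetical ACC values for nodes of the affected slice graph (ASG) and
# the ASG nodes each test case covers; both would come from static analysis.
acc = {"n1": 0.9, "n2": 0.4, "n3": 0.7, "n4": 0.2}
covers = {
    "t1": {"n1", "n2"},
    "t2": {"n3"},
    "t3": {"n2", "n4"},
    "t4": {"n1", "n3", "n4"},
}

def priority(test):
    """Illustrative score: sum of ACC values of the affected nodes covered."""
    return sum(acc[n] for n in covers[test])

ordered = sorted(covers, key=priority, reverse=True)
print(ordered)   # tests covering high-ACC (more fault-prone) nodes come first
```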


2018 ◽  
Vol 7 (4) ◽  
pp. 2184 ◽  
Author(s):  
Omdev Dahiya ◽  
Kamna Solanki

Regression testing is about running the entire test ensemble again to ensure that amendments do not negatively affect the system. A popular approach in regression testing is test case prioritization, which reorders test cases in such a way that those with higher priorities are run earlier than those with lower priorities based on some criterion. Numerous researchers have worked on different aspects of prioritization approaches. This paper presents the results of a study conducted on different prioritization approaches, showing the areas most stressed by researchers and the areas where there is future scope. To this end, studies related to test case prioritization in regression testing from 2004 to 2018 are analyzed by dividing this period into three slots of five years each. 36 studies were selected from 948 studies to answer the research questions framed for this study. The trends followed in TCP, along with the approaches that have evolved, are documented to identify current trends and future scope for researchers to work on.


2018 ◽  
Vol 7 (4.1) ◽  
pp. 28
Author(s):  
Abdulkarim Bello ◽  
Abubakar Md Sultan ◽  
Abdul Azim Abdul Ghani ◽  
Hazura Zulzalil

Regression testing, performed to provide confidence in a newly developed or updated software system, is a resource-consuming process. To ease this process, various techniques have been developed. One such technique, test case prioritization, orders test cases with respect to given goals such that the test cases most important in achieving those goals are scheduled earlier in the testing session. Among such performance goals, the rate of fault detection measures how quickly faults are detected throughout the regression testing process. Improved dependency detection among faults provides faster feedback to developers, which gives them the chance to debug leading faults earlier. Another goal, the rate of fault severity detection, measures how quickly severe faults can be detected in the testing process. Although previous works address these issues, they assume that the costs of executing test cases and the severities of detected faults are uniform. However, test costs and fault severities vary. Furthermore, they do not consider incorporating an evolutionary process, such as a genetic algorithm, into their techniques. In this work, we propose an evolutionary cost-cognizant regression testing approach that prioritizes test cases according to their rate of severity detection for dependent faults using a genetic algorithm. The aim is to reveal more severe leading faults earlier at the least cost of executing the test suite and to measure the efficacy of the technique using the cost-cognizant Average Percentage of Faults Detected (APFDc) metric.
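
For reference, the commonly used cost-cognizant APFD (APFDc) weights each fault by its severity and each test by its execution cost. The sketch below follows the standard formulation; the costs, severities, and fault-detection data are made up for illustration.

```python
def apfd_c(order, costs, severities, detects):
    """Cost-cognizant APFD (standard formulation; the data here is hypothetical).

    order      : list of test ids in execution order
    costs      : test id -> execution cost
    severities : fault id -> severity
    detects    : test id -> set of fault ids it detects
    """
    total_cost = sum(costs[t] for t in order)
    total_sev = sum(severities.values())
    score = 0.0
    for fault, sev in severities.items():
        # index of the first test in the order that reveals this fault
        tf = next(i for i, t in enumerate(order) if fault in detects[t])
        remaining = sum(costs[t] for t in order[tf:]) - 0.5 * costs[order[tf]]
        score += sev * remaining
    return score / (total_cost * total_sev)

# Hypothetical example: every fault is detected by at least one test.
costs = {"t1": 2.0, "t2": 1.0, "t3": 3.0}
severities = {"f1": 5.0, "f2": 2.0}
detects = {"t1": {"f1"}, "t2": {"f2"}, "t3": {"f1", "f2"}}
print(apfd_c(["t2", "t1", "t3"], costs, severities, detects))
```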


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Ali M. Alakeel

Program assertions have been recognized as a supporting tool during software development, testing, and maintenance. Therefore, software developers place assertions within their code at positions that are considered to be error prone or that have the potential to lead to a software crash or failure. Like any other software, programs with assertions must be maintained. Depending on the type of modification applied to the program, assertions might also have to undergo some modifications. New assertions may be introduced in the new version of the program, while some assertions can be kept the same. This paper presents a novel approach, based on fuzzy logic, for test case prioritization during regression testing of programs that contain assertions. The main objective of this approach is to prioritize the test cases according to their estimated potential to violate a given program assertion. To develop the proposed approach, we utilize fuzzy logic techniques to estimate the effectiveness of a given test case in violating an assertion based on the history of the test cases in previous testing operations. We have conducted a case study in which the proposed approach is applied to various programs, and the results are promising compared to untreated and randomly ordered test cases.
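
A toy sketch of this idea is given below: each test's historical closeness to violating an assertion (numbers invented here) is passed through triangular low/medium/high membership functions and defuzzified into a violation-potential score used for ordering. The membership shapes and centroids are illustrative assumptions, not those used in the paper.

```python
# Hypothetical history: fraction of past runs in which each test case
# brought execution close to violating a given assertion.
history = {"t1": 0.85, "t2": 0.30, "t3": 0.55, "t4": 0.10}

def memberships(x):
    """Triangular membership in three fuzzy sets: low, medium, high."""
    low = max(0.0, min(1.0, (0.5 - x) / 0.5))
    medium = max(0.0, 1.0 - abs(x - 0.5) / 0.5)
    high = max(0.0, min(1.0, (x - 0.5) / 0.5))
    return low, medium, high

def violation_potential(x):
    """Weighted-average defuzzification with illustrative set centroids."""
    low, medium, high = memberships(x)
    return (low * 0.2 + medium * 0.5 + high * 0.9) / (low + medium + high)

ordered = sorted(history, key=lambda t: violation_potential(history[t]), reverse=True)
print(ordered)   # tests judged most likely to violate the assertion come first
```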


Regression testing is performed to confirm that changes to a software program do not disturb the existing characteristics of the software. As the software evolves, the test suite tends to grow in size, which makes it very costly to execute, so the test cases need to be prioritized to select the effective test cases for software testing. In this paper, a test case prioritization technique for regression testing is proposed using a novel optimization algorithm known as the Taylor series-based Jaya Optimization Algorithm (Taylor-JOA), which integrates the Taylor series into the Jaya Optimization Algorithm (JOA). The optimal test cases are selected based on a fitness function modelled on two criteria, namely fault detection and branch coverage. The experimentation of the proposed Taylor-JOA is performed with respect to the evaluation metrics Average Percentage of Faults Detected (APFD) and Average Percentage of Branch Coverage (APBC). The APFD and APBC of the proposed Taylor-JOA are 0.995 and 0.9917, respectively, which are higher than those of the existing methods, showing the effectiveness of the proposed Taylor-JOA in the task of test case prioritization.
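
Both evaluation metrics share the same form: for each fault (APFD) or branch (APBC), find the position of the first test that detects or covers it, then compute 1 - sum(positions)/(n*m) + 1/(2n). The sketch below computes both from hypothetical detection and coverage data.

```python
def average_percentage(order, reveals, items):
    """Generic APFD/APBC: 1 - sum(TF_i)/(n*m) + 1/(2n).

    order   : test ids in execution order
    reveals : test id -> set of items (faults or branches) it detects/covers
    items   : the full set of faults (APFD) or branches (APBC)
    """
    n, m = len(order), len(items)
    first_positions = []
    for item in items:
        # 1-based position of the first test detecting/covering the item
        pos = next(i + 1 for i, t in enumerate(order) if item in reveals[t])
        first_positions.append(pos)
    return 1.0 - sum(first_positions) / (n * m) + 1.0 / (2 * n)

# Hypothetical data: which faults each test detects and which branches it covers.
detects = {"t1": {"f1"}, "t2": {"f1", "f2"}, "t3": {"f3"}}
covers  = {"t1": {"b1", "b2"}, "t2": {"b3"}, "t3": {"b1", "b4"}}
order = ["t2", "t3", "t1"]
print(average_percentage(order, detects, {"f1", "f2", "f3"}))       # APFD
print(average_percentage(order, covers, {"b1", "b2", "b3", "b4"}))  # APBC
```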


2013 ◽  
Vol 10 (1) ◽  
pp. 73-102 ◽  
Author(s):  
Lijun Mei ◽  
Yan Cai ◽  
Changjiang Jia ◽  
Bo Jiang ◽  
W.K. Chan

Many web services not only communicate through XML-based messages but may also dynamically modify their behaviors by applying different interpretations to XML messages through updates to the associated XML Schemas or XML-based interface specifications. Such artifacts are usually complex, making XML-based messages that conform to these specifications structurally complex as well. Testing should cost-effectively cover all scenarios. Test case prioritization is a dimension of regression testing that guards a program against unintended modifications by reordering the test cases within a test suite. However, many existing test case prioritization techniques for regression testing treat test cases of different complexity generically. In this paper, the authors exploit insights into the structural similarity of XML-based artifacts between test cases in both static and dynamic dimensions, and propose a family of test case prioritization techniques that select pairs of test cases without replacement in turn. To the best of their knowledge, it is the first test case prioritization proposal that selects test case pairs for prioritization. The authors validate their techniques on a suite of benchmarks. The empirical results show that, when incorporating all dimensions, some members of the technique family can be more effective than conventional coverage-based techniques.
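
The pair-selection idea can be sketched as below, with an invented structural profile per test (a multiset of XML element tags) and a simple tag-count distance; the actual similarity measures in the paper operate on richer static and dynamic XML information.

```python
from itertools import combinations

# Hypothetical structural profiles: the multiset of XML element tags each
# test case's messages exercise (real profiles would be derived from the
# XML Schemas and observed messages).
profiles = {
    "t1": {"order": 2, "item": 3, "price": 1},
    "t2": {"order": 1, "item": 1},
    "t3": {"invoice": 2, "item": 2, "tax": 1},
    "t4": {"order": 2, "invoice": 1},
}

def distance(a, b):
    """Simple structural dissimilarity between two tag multisets."""
    tags = set(a) | set(b)
    return sum(abs(a.get(t, 0) - b.get(t, 0)) for t in tags)

def prioritize_in_pairs(profiles):
    """Pick the most dissimilar remaining pair, without replacement, in turn."""
    remaining = set(profiles)
    order = []
    while len(remaining) >= 2:
        pair = max(combinations(remaining, 2),
                   key=lambda p: distance(profiles[p[0]], profiles[p[1]]))
        order.extend(pair)
        remaining -= set(pair)
    order.extend(remaining)            # odd test case left over, if any
    return order

print(prioritize_in_pairs(profiles))
```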


2016 ◽  
Vol 2016 ◽  
pp. 1-19 ◽  
Author(s):  
Rongcun Wang ◽  
Shujuan Jiang ◽  
Deng Chen ◽  
Yanmei Zhang

Similarity-based test case prioritization algorithms have been applied to regression testing. The common characteristic of these algorithms is that they reschedule the execution order of test cases according to the distances between pairs of test cases. The distance information can be calculated by different similarity measures. Since the topologies vary with the similarity measures, the distances between pairwise test cases calculated by different similarity measures differ. Similarity measures can therefore significantly influence the effectiveness of test case prioritization. We empirically evaluate the effects of six similarity measures on two similarity-based test case prioritization algorithms. The obtained results are statistically analyzed to recommend the best combination of similarity-based prioritization algorithm and similarity measure. The experimental results, confirmed by a statistical analysis, indicate that Euclidean distance is more efficient in finding defects than the other similarity measures. The combination of the global similarity-based prioritization algorithm and Euclidean distance could be a better choice: it yields not only higher fault detection effectiveness but also a smaller standard deviation. The goal of this study is to provide practical guidance for picking an appropriate combination of similarity-based test case prioritization technique and similarity measure.
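
As an illustration of a global similarity-based ordering with Euclidean distance, the sketch below repeatedly picks the test whose minimum distance to the already-selected tests is largest (a farthest-first strategy). The coverage profiles are invented, and the exact algorithm evaluated in the paper may differ from this variant.

```python
import numpy as np

# Hypothetical coverage profiles for four test cases; in practice these
# vectors would be built from execution traces.
profiles = {
    "t1": np.array([1, 0, 1, 0, 1], dtype=float),
    "t2": np.array([1, 1, 1, 0, 0], dtype=float),
    "t3": np.array([0, 0, 0, 1, 1], dtype=float),
    "t4": np.array([1, 1, 0, 0, 0], dtype=float),
}

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def global_similarity_prioritize(profiles):
    """Farthest-first ordering: repeatedly pick the test whose minimum
    Euclidean distance to the already-selected tests is largest."""
    remaining = dict(profiles)
    # Start from the test farthest from the centroid of all profiles.
    centroid = np.mean(list(profiles.values()), axis=0)
    first = max(remaining, key=lambda t: euclidean(remaining[t], centroid))
    order = [first]
    selected = [remaining.pop(first)]
    while remaining:
        pick = max(remaining,
                   key=lambda t: min(euclidean(remaining[t], s) for s in selected))
        order.append(pick)
        selected.append(remaining.pop(pick))
    return order

print(global_similarity_prioritize(profiles))
```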


Regression testing is an important, but expensive, process that has a powerful impact on software quality. Unfortunately, not all the test cases, existing and newly added, can be re-executed, due to insufficient resources. In this scenario, test case prioritization helps improve the efficacy of regression testing by arranging the test cases in such a way that the most beneficial ones (those with the potential to detect the greatest number of faults) are executed first. Previous work and existing prioritization techniques do detect faults, but improved techniques are needed to enhance the regression testing process by improving the fault detection rate. The new technique proposed in this paper gives better results than the existing ones. The effectiveness of the proposed approach is compared with other prioritized and non-prioritized orderings. The proposed approach shows higher Average Percentage of Faults Detected (APFD) values. The performance evaluation also shows that the proposed method outperforms other algorithms by enhancing the fault detection rate.

