Regression Test Case Prioritization Frameworks: Challenges and Future Directions

2019 · Vol 8 (4) · pp. 8457-8462

Regression testing is a necessary maintenance activity in the software industry in which modified software programs are revalidated to make sure that changes do not adversely affect their behavior. Test case prioritization (TCP) is one of the most effective methods in regression testing, whereby test cases are rescheduled into an appropriate order for execution to increase test effectiveness against performance goals such as the rate of fault detection. This paper explores efforts that have been carried out in relation to TCP frameworks. Through a review of the related literature, ten existing frameworks were identified, classified, and reviewed: two are Bayesian network-based, five are multi-objective, and the rest vary in their aspects and purposes. Accordingly, this study analyzes those frameworks based on their proposed year, TCP factors, number of test cases used, evaluation metrics and criteria, and experimental subjects. The results show that the stated frameworks are not integrated with nature-inspired algorithms as enhancing optimization techniques, and that several were insufficiently evaluated against the stated evaluation criteria and metrics required for an effective and practical testing process. There is also a scarcity of frameworks that focus on regression test efficiency. This study indicates the need for further research to enhance TCP frameworks along several directions of practical concern, such as evaluation issues, specific knowledge dependency, and objective deviation. Finally, several future directions, such as assistance from nature-inspired algorithms, are proposed, and a number of limitations are identified and highlighted.

Test case prioritization (TCP) is a software testing technique that finds an ideal ordering of test cases for regression testing, so that testers can obtain the maximum benefit from their test suite even if the testing process is stopped at some arbitrary point. The recent trend in software development uses the object-oriented (OO) paradigm. This paper proposes a cost-cognizant TCP approach for object-oriented software that uses path-based integration testing. Path-based integration testing identifies the possible execution paths and extracts them from the Java System Dependence Graph (JSDG) model of the source code using a forward slicing technique. Afterward, an evolutionary algorithm (EA) is employed to prioritize test cases based on the severity detected per unit cost for each of the dependent faults. The proposed technique, known as Evolutionary Cost-Cognizant Regression Test Case Prioritization (ECRTP), is implemented as a regression testing approach for the experiments.
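As a rough illustration of the cost-cognizant criterion described above (fault severity detected per unit cost), the following minimal Python sketch simply orders test cases by that ratio. It is not the ECRTP evolutionary algorithm itself, and the test suite, costs, and severity values are hypothetical.

```python
# Minimal sketch of a cost-cognizant prioritization criterion: order test
# cases by estimated fault severity detected per unit execution cost.
# The data structures and values here are hypothetical illustrations.

def severity_per_cost(test_case):
    """Total severity of faults the test is expected to expose, per unit cost."""
    total_severity = sum(test_case["fault_severities"].values())
    return total_severity / test_case["cost"]

def prioritize(test_cases):
    """Return test cases ordered by descending severity-per-cost."""
    return sorted(test_cases, key=severity_per_cost, reverse=True)

if __name__ == "__main__":
    suite = [
        {"id": "t1", "cost": 4.0, "fault_severities": {"f1": 8, "f2": 3}},
        {"id": "t2", "cost": 1.0, "fault_severities": {"f3": 2}},
        {"id": "t3", "cost": 2.5, "fault_severities": {"f1": 8}},
    ]
    print([t["id"] for t in prioritize(suite)])  # ['t3', 't1', 't2']
```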


2018 · Vol 7 (4) · pp. 2184 · Author(s): Omdev Dahiya, Kamna Solanki

Regression testing is about running the entire test suite again to ensure that amendments do not negatively affect the system. A popular approach in regression testing is test case prioritization, which reorders test cases so that those with higher priorities are run earlier than those with lower priorities, based on some criterion. Numerous researchers have worked on different aspects of prioritization approaches. This paper presents the results of a study of different prioritization approaches, showing the areas most stressed by researchers and the areas offering future scope. To this end, studies related to test case prioritization in regression testing from 2004 to 2018 are analyzed by dividing this period into three slots of five years each. 36 studies were selected from 948 to answer the research questions framed for this study. The trends followed in TCP, along with the approaches that have evolved, are documented to identify current trends and the future scope for researchers to work on.


2014 · Vol 2014 · pp. 1-9 · Author(s): Ali M. Alakeel

Program assertions have been recognized as a supporting tool during software development, testing, and maintenance. Therefore, software developers place assertions within their code at positions that are considered error prone or that have the potential to lead to a software crash or failure. Like any other software, programs with assertions must be maintained. Depending on the type of modification applied to the program, assertions might also have to be modified. New assertions may be introduced in the new version of the program, while some assertions can be kept the same. This paper presents a novel approach, based on fuzzy logic, for test case prioritization during regression testing of programs that contain assertions. The main objective of this approach is to prioritize the test cases according to their estimated potential to violate a given program assertion. To develop the proposed approach, we utilize fuzzy logic techniques to estimate the effectiveness of a given test case in violating an assertion based on that test case's history in previous testing operations. We have conducted a case study in which the proposed approach is applied to various programs, and the results are promising compared to untreated and randomly ordered test cases.
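The sketch below illustrates the general idea in minimal form, assuming a single historical violation rate per test case and simple triangular membership functions; the actual rule base and defuzzification used by the cited approach may differ.

```python
# Hedged sketch: estimate a test case's potential to violate an assertion
# from its historical violation rate using triangular fuzzy memberships.
# The membership shapes and rule weights are illustrative assumptions,
# not the rule base of the cited approach.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def violation_potential(history_rate):
    """Map a historical violation rate in [0, 1] to a fuzzy priority score."""
    low = tri(history_rate, -0.5, 0.0, 0.5)
    medium = tri(history_rate, 0.0, 0.5, 1.0)
    high = tri(history_rate, 0.5, 1.0, 1.5)
    # Weighted-average (centroid-like) defuzzification over the rule outputs.
    weights = {0.2: low, 0.6: medium, 0.9: high}
    total = sum(weights.values())
    return sum(score * w for score, w in weights.items()) / total if total else 0.0

# Prioritize tests by descending estimated potential to violate the assertion.
history = {"t1": 0.1, "t2": 0.7, "t3": 0.95}
order = sorted(history, key=lambda t: violation_potential(history[t]), reverse=True)
print(order)  # ['t3', 't2', 't1']
```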


Regression testing is performed to confirm that changes to a software program do not disturb its existing characteristics. As the software evolves, the test suite tends to grow in size, which makes it very costly to execute, so the test cases need to be prioritized to select the most effective ones for software testing. In this paper, a test case prioritization technique for regression testing is proposed using a novel optimization algorithm known as the Taylor series-based Jaya Optimization Algorithm (Taylor-JOA), which integrates the Taylor series into the Jaya Optimization Algorithm (JOA). The optimal test cases are selected based on a fitness function modelled on two criteria, namely fault detection and branch coverage. The experimentation on the proposed Taylor-JOA is performed with the evaluation metrics Average Percentage of Faults Detected (APFD) and Average Percentage of Branch Coverage (APBC). The APFD and APBC of the proposed Taylor-JOA are 0.995 and 0.9917, respectively, which are higher than those of the existing methods, showing the effectiveness of the proposed Taylor-JOA for the task of test case prioritization.
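For reference, APFD is the standard prioritization metric used here; the sketch below computes it from a prioritized ordering and a fault-detection matrix (the matrix values are hypothetical). APBC is defined analogously, with covered branches taking the place of detected faults.

```python
# The standard APFD computation (Average Percentage of Faults Detected):
#   APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2 * n)
# where n is the number of test cases, m the number of faults, and TF_i the
# 1-based position in the prioritized order of the first test exposing fault i.
# The fault-detection matrix below is a hypothetical illustration.

def apfd(ordering, detects):
    """ordering: list of test IDs; detects: fault ID -> set of test IDs exposing it."""
    n, m = len(ordering), len(detects)
    position = {test: i + 1 for i, test in enumerate(ordering)}
    tf_sum = sum(min(position[t] for t in tests if t in position)
                 for tests in detects.values())
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

faults = {"f1": {"t2"}, "f2": {"t1", "t3"}, "f3": {"t3"}}
print(apfd(["t3", "t2", "t1"], faults))  # ~0.72 for this ordering
print(apfd(["t1", "t2", "t3"], faults))  # 0.50 for the reverse ordering
```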


2013 · Vol 10 (1) · pp. 73-102 · Author(s): Lijun Mei, Yan Cai, Changjiang Jia, Bo Jiang, W.K. Chan

Many web services not only communicate through XML-based messages but may also dynamically modify their behaviors by applying different interpretations to XML messages through updates to the associated XML Schemas or XML-based interface specifications. Such artifacts are usually complex, which makes the XML-based messages conforming to these specifications structurally complex as well. Testing should cover all scenarios cost-effectively. Test case prioritization is a dimension of regression testing that safeguards a program against unintended modifications by reordering the test cases within a test suite. However, many existing test case prioritization techniques for regression testing treat test cases of different complexity generically. In this paper, the authors exploit insights on the structural similarity of XML-based artifacts between test cases in both static and dynamic dimensions, and propose a family of test case prioritization techniques that selects pairs of test cases, in turn and without replacement. To the best of their knowledge, it is the first test case prioritization proposal that selects test case pairs for prioritization. The authors validate their techniques on a suite of benchmarks. The empirical results show that, when incorporating all dimensions, some members of their technique family can be more effective than conventional coverage-based techniques.
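A minimal sketch of the pair-selection idea follows, with a toy Jaccard distance over sets of XML element tags standing in for the authors' structural similarity measures; the data and distance function are illustrative assumptions.

```python
# Hedged sketch of pair-based prioritization: repeatedly pick the most
# dissimilar remaining pair of test cases (without replacement). A toy
# Jaccard distance over XML element-tag sets stands in for the paper's
# structural similarity over XML-based artifacts.
from itertools import combinations

def distance(tags_a, tags_b):
    """Jaccard distance between two sets of XML element tags (illustrative)."""
    union = tags_a | tags_b
    return 1 - len(tags_a & tags_b) / len(union) if union else 0.0

def prioritize_in_pairs(tests):
    """tests: test ID -> set of tags; returns an ordering built from dissimilar pairs."""
    remaining, order = dict(tests), []
    while len(remaining) > 1:
        a, b = max(combinations(remaining, 2),
                   key=lambda p: distance(remaining[p[0]], remaining[p[1]]))
        order.extend([a, b])
        del remaining[a], remaining[b]
    order.extend(remaining)          # leftover test, if any
    return order

tests = {"t1": {"order", "item"}, "t2": {"order", "payment"}, "t3": {"ship"}}
print(prioritize_in_pairs(tests))    # ['t1', 't3', 't2']
```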


2016 · Vol 2016 · pp. 1-19 · Author(s): Rongcun Wang, Shujuan Jiang, Deng Chen, Yanmei Zhang

Similarity-based test case prioritization algorithms have been applied to regression testing. The common characteristic of these algorithms is that they reschedule the execution order of test cases according to the distances between pairs of test cases. The distance information can be calculated with different similarity measures, and since the resulting topologies vary with the measure used, the pairwise distances they produce differ as well. Similarity measures could therefore significantly influence the effectiveness of test case prioritization. We empirically evaluate the effects of six similarity measures on two similarity-based test case prioritization algorithms. The obtained results are statistically analyzed to recommend the best combination of similarity-based prioritization algorithm and similarity measure. The experimental results, confirmed by statistical analysis, indicate that Euclidean distance is more efficient at finding defects than the other similarity measures. The combination of the global similarity-based prioritization algorithm and Euclidean distance could be the better choice: it generates not only higher fault detection effectiveness but also a smaller standard deviation. The goal of this study is to provide practical guidance for picking an appropriate combination of similarity-based test case prioritization technique and similarity measure.
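The sketch below illustrates one plausible global similarity-based ordering using Euclidean distance over binary coverage vectors (a farthest-first style selection); the algorithms actually evaluated in the study may differ in their details, and the coverage vectors are illustrative.

```python
# Hedged sketch of a global similarity-based ordering over binary coverage
# vectors using Euclidean distance: repeatedly pick the test farthest (on
# average) from everything already selected. This illustrates the idea of
# the recommended combination, not the study's exact algorithm.
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def global_similarity_order(coverage):
    """coverage: test ID -> binary coverage vector (list of 0/1)."""
    remaining = dict(coverage)
    # Start with the test covering the most elements.
    current = max(remaining, key=lambda t: sum(remaining[t]))
    order = [current]
    del remaining[current]
    while remaining:
        # Pick the remaining test with the largest mean distance to the selected ones.
        nxt = max(remaining, key=lambda t: sum(
            euclidean(coverage[t], coverage[s]) for s in order) / len(order))
        order.append(nxt)
        del remaining[nxt]
    return order

cov = {"t1": [1, 1, 0, 0], "t2": [1, 1, 1, 0], "t3": [0, 0, 0, 1]}
print(global_similarity_order(cov))  # ['t2', 't3', 't1']
```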


Regression testing is an important but expensive process that has a powerful impact on software quality. Unfortunately, all the test cases, existing and newly added, cannot be re-executed when resources are insufficient. In this scenario, prioritization of test cases helps improve the efficacy of regression testing by arranging the test cases so that the most beneficial ones (those with the potential to detect the greatest number of faults) are executed first. Although previous work and existing prioritization techniques detect faults, improved techniques are needed to enhance regression testing by raising the fault detection rate. The new technique proposed in this paper gives better results than the existing ones. The effectiveness of the proposed approach is compared with other prioritization and non-prioritization orderings. The proposed approach shows higher Average Percentage of Faults Detected (APFD) values. Its performance is also evaluated, and the proposed method is observed to outperform other algorithms by enhancing the fault detection rate.


2021 · Vol 27 (2) · pp. 170-189 · Author(s): P. K. Gupta

Software is an integration of numerous programming modules (e.g., functions, procedures, legacy systems, reusable components) that are tested and combined to build the entire system. However, undesired faults may surface during validation and verification due to a change in modules. Retesting the entire software is a costly affair in terms of money and time; therefore, to avoid retesting everything, regression testing is performed. In regression testing, an earlier created test suite is used to retest the modified module of the software system. Regression testing works in three manners: minimizing test cases, selecting test cases, and prioritizing test cases. In this paper, a two-phase algorithm has been proposed that combines a test case selection and a test case prioritization technique for performing regression testing on several modules of procedural code, ranging from small to very large numbers of lines. A textual differencing algorithm has been implemented for test case selection: program statements modified between two versions of a module are identified by textual differencing and used to determine which test cases exercise the modified statements. In the next step, test case prioritization is implemented by applying a genetic algorithm over code/condition coverage. The genetic operators, crossover and mutation, are applied to the initial population (i.e., the test cases), taking code/condition coverage as the fitness criterion, to produce a prioritized test suite. The prioritization algorithm can be applied to either the original or the reduced test suite, depending on the suite's size or the need for accuracy. In the obtained results, the efficiency of the prioritization algorithm has been analyzed via the Average Percentage of Code Coverage (APCC) and the Average Percentage of Code Coverage with cost (APCCc). A comparison of the proposed approach with previously proposed methods shows that the APCC and APCCc values reach higher percentages faster for the prioritized test suite than for the non-prioritized test suite.
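A minimal sketch of the prioritization phase might look like the following permutation genetic algorithm with an APCC-style coverage fitness; the operators, parameters, and coverage matrix are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of GA-based prioritization: a permutation GA over test
# orderings with an APCC-like fitness (how quickly the ordering accumulates
# code coverage). Operators, parameters, and the coverage matrix are
# illustrative assumptions.
import random

COVERAGE = {                      # test ID -> set of covered statements (toy data)
    "t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5, 6, 7, 8}, "t4": {1, 8},
}
ALL = set().union(*COVERAGE.values())

def fitness(order):
    """Reward orderings that reach full coverage early (area-under-curve style)."""
    covered, score = set(), 0.0
    for test in order:
        covered |= COVERAGE[test]
        score += len(covered) / len(ALL)   # partial sums weight early coverage
    return score

def order_crossover(p1, p2):
    """Keep a slice of p1, fill the rest in p2's order (OX crossover)."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    middle = p1[i:j + 1]
    rest = [t for t in p2 if t not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(order, rate=0.2):
    order = list(order)
    if random.random() < rate:
        a, b = random.sample(range(len(order)), 2)
        order[a], order[b] = order[b], order[a]
    return order

def evolve(generations=50, pop_size=20):
    pop = [random.sample(list(COVERAGE), len(COVERAGE)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                      # truncation selection
        children = [mutate(order_crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # e.g. ['t3', 't1', 't2', 't4'] (high-coverage tests first)
```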


Author(s): Akihiro Hori, Shingo Takada, Toshiyuki Kurabayashi, Haruto Tanno

Much work has been done on automating regression testing for applications, but most of it focuses on test execution. Little work has been done on automatically determining whether a test case passes or fails. This decision is often made by comparing the results of executing the test cases on a base version of the application and on the post-modification version: if the two results match, the test case passes; otherwise it fails. However, to the best of our knowledge, there is no regression testing method for automatically deciding pass/fail for dynamic Web applications that use JavaScript or CSS. We propose a method that automatically decides whether a dynamic Web application passes a regression test case. The basic idea is to obtain a screenshot each time the GUI of the Web application (i.e., the Web page) changes its state, and then compare each pair of corresponding screenshots to see whether they match. The evaluation results show that the accuracy rate of our approach is high and that the approach is fast enough for practical use.
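A minimal sketch of the pass/fail decision is given below, assuming screenshots have already been captured for each corresponding GUI state (e.g., with a browser driver) and using the Pillow library for the image comparison; the file paths are hypothetical.

```python
# Hedged sketch of the pass/fail decision: compare screenshots of the same
# GUI state taken on the base and modified versions of a Web application.
# File paths are hypothetical; capturing the screenshots on each state
# change is outside this snippet.
from PIL import Image, ImageChops

def screenshots_match(base_path, new_path):
    """Return True if the two screenshots are pixel-identical."""
    base = Image.open(base_path).convert("RGB")
    new = Image.open(new_path).convert("RGB")
    if base.size != new.size:
        return False
    diff = ImageChops.difference(base, new)
    return diff.getbbox() is None          # None means no differing region

def test_passes(state_pairs):
    """A test case passes only if every corresponding state pair matches."""
    return all(screenshots_match(b, n) for b, n in state_pairs)

pairs = [("base/state1.png", "new/state1.png"), ("base/state2.png", "new/state2.png")]
print("PASS" if test_passes(pairs) else "FAIL")
```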


2010 · Vol 2010 · pp. 1-18 · Author(s): Camila Loiola Brito Maia, Rafael Augusto Ferreira do Carmo, Fabrício Gomes de Freitas, Gustavo Augusto Lima de Campos, Jerffeson Teixeira de Souza

Modifications to software can affect functionality that had been working until that point. To detect such a problem, the ideal solution would be to test the whole system once again, but there may be insufficient time or resources for this approach. An alternative solution is to order the test cases so that the most beneficial tests are executed first; in this way, only a subset of the test cases need be executed with little loss of effectiveness. Such a technique is known as regression test case prioritization. In this paper, we propose the use of the Reactive GRASP metaheuristic to prioritize test cases. We also compare this metaheuristic with other search-based algorithms previously described in the literature. Five programs were used in the experiments. The experimental results demonstrated good coverage performance, with some time overhead, for the proposed technique, as well as high stability in the results it generates.
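A minimal sketch of a GRASP-style greedy randomized construction for test ordering is shown below, assuming additional coverage as the greedy criterion; the reactive alpha tuning and local-search phase of full Reactive GRASP are omitted, and the coverage data are illustrative.

```python
# Hedged sketch of a GRASP-style greedy randomized construction for test
# ordering: at each step choose randomly from a restricted candidate list
# (RCL) of tests with near-best additional coverage. The reactive alpha
# tuning and local-search phase of full Reactive GRASP are omitted.
import random

def construct_order(coverage, alpha=0.3):
    """coverage: test ID -> set of covered elements; alpha in [0, 1] sets RCL width."""
    remaining, covered, order = dict(coverage), set(), []
    while remaining:
        gains = {t: len(elems - covered) for t, elems in remaining.items()}
        best, worst = max(gains.values()), min(gains.values())
        threshold = best - alpha * (best - worst)
        rcl = [t for t, g in gains.items() if g >= threshold]
        chosen = random.choice(rcl)            # randomized greedy choice
        order.append(chosen)
        covered |= remaining.pop(chosen)
    return order

cov = {"t1": {1, 2}, "t2": {2, 3, 4}, "t3": {5}, "t4": {1, 5}}
print(construct_order(cov))   # e.g. ['t2', 't4', 't1', 't3']
```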

