A Microservice Regression Testing Selection Approach Based on Belief Propagation

Author(s):  
Lizhe Chen ◽  
Ji Wu ◽  
Haiyan Yang ◽  
Kui Zhang

Abstract Regression testing is required in each iteration of microservice systems. Regression test selection, which reduces testing costs by selecting a subset of the original test cases, is one of the main techniques for optimizing regression testing. Existing techniques mainly rely on information retrieved from artifacts such as code files and system models. For microservice systems, with their service autonomy, diverse development methods, and large number of services, such artifacts are too difficult to obtain and too costly to process for those approaches to apply. This paper presents a regression test selection approach called MRTS-BP, which takes API gateway layer logs as input instead of code files and system models. By parsing the API gateway layer logs, our approach establishes a service dependency matrix, which is further transformed into a directed graph with services as nodes. Then, to determine which test cases are affected by service changes, an algorithm based on belief propagation is presented to compute quantitative service-change propagation results from the directed graph. Finally, the relationships between the original test cases and the service-change propagation results are established, and test cases are selected using three strategies. To evaluate the efficiency of MRTS-BP, an empirical study on four microservice systems is presented. A typical technique, RTS-CFG, is compared with MRTS-BP, and four experiments are set up to investigate four research questions. The results show that MRTS-BP not only halves the number of test cases compared with the retest-all strategy while remaining safe, but also saves at least 20% more testing time than RTS-CFG. MRTS-BP is more practical than artifact-based techniques when the latter cannot be applied because the artifacts are difficult to obtain and process.
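To make the propagation step concrete, here is a minimal Python sketch of damped impact propagation over a service dependency graph. It is a simplification, not the paper's actual belief propagation algorithm, and the dependency data that MRTS-BP would parse from API gateway layer logs is hard-coded here as a hypothetical `deps` dictionary.

```python
from collections import defaultdict

def propagate_change(deps, changed, damping=0.5, iterations=10):
    """Damped impact propagation over a service dependency graph.

    deps: service -> list of services that call it (edges point from a
          callee to its callers; MRTS-BP would derive these from logs).
    changed: set of directly modified services (impact 1.0).
    """
    impact = defaultdict(float)
    for s in changed:
        impact[s] = 1.0
    for _ in range(iterations):
        updated = dict(impact)
        for callee, callers in deps.items():
            for caller in callers:
                # a caller absorbs a damped share of its callee's impact
                updated[caller] = max(updated.get(caller, 0.0),
                                      damping * impact[callee])
        impact = defaultdict(float, updated)
    return {s: v for s, v in impact.items() if v > 0}

# hypothetical graph: order and payment call inventory; gateway calls order
deps = {"inventory": ["order", "payment"], "order": ["gateway"]}
print(propagate_change(deps, changed={"inventory"}))
# {'inventory': 1.0, 'order': 0.5, 'payment': 0.5, 'gateway': 0.25}
```

Test cases covering any service with a nonzero score would then be candidates for selection under the three strategies.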

Author(s):  
Elinda Kajo Mece ◽  
Kleona Binjaku ◽  
Hakik Paci

Regression testing is an important but costly and time-consuming activity that assures developers that changes to an application will not introduce new errors. Retest-all, test case selection, and test case prioritization (TCP) approaches are used to enhance the efficiency and effectiveness of regression testing. While test case selection techniques decrease testing time and cost, they can exclude critical test cases capable of detecting faults. Test case prioritization, on the other hand, considers all test cases and executes them until resources are exhausted or all test cases have run, while always focusing on the most important ones first. Over the years, machine learning has found wide use in solving problems in software engineering: software development and maintenance problems can be framed as learning problems, and machine learning techniques have proven very effective at solving them. Machine learning has also been applied to the test case prioritization problem. In this paper, we investigate the application of machine learning techniques to test case prioritization. We survey some of the most recent studies in this field and report the machine learning techniques used in the TCP process, the metrics used to measure the effectiveness of the proposed methods, the data used to define test case priority, and the advantages and limitations of applying machine learning to TCP.
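As an illustration of the kind of learned prioritizer the surveyed studies describe, the sketch below trains a logistic regression model on hypothetical historical test features and ranks the current test cases by predicted failure probability. The feature set and data are invented for the example, not taken from any surveyed paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# hypothetical per-test features: [recent failure rate, touches churned
# code (0/1), last execution time]; labels: failed in the past cycle?
X_hist = np.array([[0.8, 1.0, 12.0], [0.1, 0.0, 3.0],
                   [0.5, 1.0, 8.0], [0.0, 0.0, 1.0]])
y_hist = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_hist, y_hist)

# current cycle: rank test cases by predicted failure probability
tests = ["t1", "t2", "t3"]
X_now = np.array([[0.6, 1.0, 10.0], [0.0, 0.0, 2.0], [0.3, 1.0, 6.0]])
ranked = sorted(zip(tests, model.predict_proba(X_now)[:, 1]),
                key=lambda p: p[1], reverse=True)
print([t for t, _ in ranked])  # most failure-prone tests first
```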


Author(s):  
Varun Gupta ◽  
Durg Singh Chauhan ◽  
Kamlesh Dutta

Regression testing has been studied by various researchers as a means of ensuring the quality of evolving software. It aims at re-executing evolved software code to ensure that no new errors have been introduced during modification. Since re-executing all test cases is not feasible, selecting a manageable number of test cases that exercise the modified code with a good fault detection rate is a problem. In the past few years, various hybrid regression testing approaches have been proposed and successfully employed for software testing, aiming at a reduced number of test cases and higher fault detection capability. These techniques are based on a sequence of test case selection, prioritization, and test suite minimization. However, they suffer from major drawbacks such as improper handling of control dependencies and neglect of unaffected fragments of code for testing purposes. Further, they have been evaluated only on hypothetical or simple programs with small test suites. The present paper proposes hybrid regression testing, a combination of test case selection, test case prioritization, and test suite minimization. The technique works at the statement level and is based on finding the paths containing statements that affect, or are affected by, the addition, deletion, or modification (through both control and data dependencies) of variables in statements. A modification to the code may cause a ripple effect, resulting in faulty execution. The hybrid regression testing approach aims to detect such faults with fewer test cases; the reduction is possible because fewer paths need to be tested. A web-based framework to automate and parallelize this testing technique to the maximum extent, making it well suited for globally distributed environments, is also proposed. When implemented as a tool, the framework can handle a large pool of test cases and make use of parallel MIMD architectures such as multicore systems. The technique is applied to a prototype live system, and the results are compared with a recently proposed hybrid regression testing approach against the parameters of interest. The optimized results obtained indicate the effectiveness of the approach in reducing effort, cost, and testing time in general, and increment delivery time in particular.
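A minimal sketch of the select-prioritize-minimize pipeline described above, assuming statement-level coverage data and a precomputed control/data dependence relation are available; the data structures are hypothetical stand-ins for what the proposed framework would extract.

```python
def affected_statements(changed, dependents):
    """Transitive closure of statements affected by the changed ones.
    dependents: stmt -> statements control/data dependent on it."""
    affected, stack = set(changed), list(changed)
    while stack:
        for nxt in dependents.get(stack.pop(), []):
            if nxt not in affected:
                affected.add(nxt)
                stack.append(nxt)
    return affected

def select_prioritize_minimize(coverage, affected):
    """Hybrid pass: keep tests touching affected code, order them by how
    much affected code they cover, then greedily drop redundant ones."""
    selected = {t: c & affected for t, c in coverage.items() if c & affected}
    ordered = sorted(selected, key=lambda t: len(selected[t]), reverse=True)
    suite, covered = [], set()
    for t in ordered:                      # greedy minimization
        if not selected[t] <= covered:
            suite.append(t)
            covered |= selected[t]
    return suite

dependents = {"s1": ["s3"], "s3": ["s5"]}
coverage = {"t1": {"s1", "s2"}, "t2": {"s3", "s5"}, "t3": {"s2", "s4"}}
aff = affected_statements({"s1"}, dependents)      # {s1, s3, s5}
print(select_prioritize_minimize(coverage, aff))   # ['t2', 't1']
```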


2020 ◽  
Author(s):  
Andreea Vescan ◽  
Camelia-M Pintea ◽  
Petrică C Pop

Abstract Regression testing is applied whenever code changes, to ensure that the modifications fixed the fault and that no other faults were introduced. Because of the large number of test cases to be run, test case prioritization is one of the strategies that allows the test cases with the highest fault-detection rate to run first. The aim of this paper is to present an optimized test case prioritization method inspired by ant colony optimization, test case prioritization–ANT. The criteria used by the optimization algorithm are the number of faults not yet covered by the selected test cases and the sum of the severities of those faults. The cost, i.e. execution time, of test cases is factored into the pheromone deposited on the graph's edges. The average percentage of faults detected (APFD) metric, as the best-selection criterion, is used to uncover the maximum number of faults with the highest severity while reducing regression testing time. Several experiments are described and discussed, comparing various alternatives for the algorithm's parameters. A benchmark project is also used to validate the proposed approach. The results obtained are encouraging and form a cornerstone for new perspectives.
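The APFD metric named above has a standard definition: for n test cases and m faults, APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where TF_i is the position of the first test that reveals fault i. A small sketch computing it follows; the fault matrix is hypothetical and assumes every fault is eventually detected.

```python
def apfd(order, fault_matrix):
    """Average Percentage of Faults Detected for a test ordering.
    order: list of test ids in execution order.
    fault_matrix: test id -> set of faults that test detects."""
    faults = set().union(*fault_matrix.values())
    n, m = len(order), len(faults)
    first_pos = {}
    for pos, t in enumerate(order, start=1):
        for f in fault_matrix.get(t, set()):
            first_pos.setdefault(f, pos)
    tf_sum = sum(first_pos[f] for f in faults)  # all faults detected
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

fault_matrix = {"t1": {"f1"}, "t2": {"f1", "f2"}, "t3": {"f3"}}
print(apfd(["t2", "t3", "t1"], fault_matrix))   # 0.722: faults found early
print(apfd(["t1", "t3", "t2"], fault_matrix))   # 0.5: a worse ordering
```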


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Lizhe Chen ◽  
Xiang Yu ◽  
Ji Wu ◽  
Haiyan Yang

Regression testing is required in each iteration of microservice systems, and test case prioritization is one of the main methods for optimizing it. Existing techniques rely on the acquisition, analysis, and maintenance of artifacts, inputs that are difficult to obtain for microservice systems, making those techniques costly and impractical. This paper presents a test case prioritization technique for microservice systems, referred to as CIPC. It derives the dependencies between services from API gateway logs and proposes a novel algorithm based on belief propagation to compute the impact of service changes. Test cases directly affected by service changes receive a higher execution order, so CIPC prioritizes based on change impact. A multiobjective prioritization algorithm based on heuristic searching then sequences the test cases by coverage. To evaluate the effectiveness of CIPC, an empirical study on five microservice systems compares it with four other techniques. The results show that CIPC improves the fault detection rate at acceptable time cost, and that it is more practical than typical artifact-based techniques as the system scale increases.
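Building on a per-service change-impact score (like the one a propagation step produces), the prioritization itself can be sketched as sorting test cases by the total impact of the services they cover. This is a single-objective simplification of CIPC's multiobjective heuristic search, with hypothetical impact scores and coverage data.

```python
def prioritize_by_impact(test_coverage, impact):
    """Order test cases by the total change-impact score of the services
    each exercises (ties broken toward tests covering fewer services)."""
    def score(test):
        services = test_coverage[test]
        return (sum(impact.get(s, 0.0) for s in services), -len(services))
    return sorted(test_coverage, key=score, reverse=True)

# impact scores as produced by a change-propagation step (hypothetical)
impact = {"inventory": 1.0, "order": 0.5, "payment": 0.5, "gateway": 0.25}
test_coverage = {"tc1": {"gateway", "order"},
                 "tc2": {"payment"},
                 "tc3": {"inventory", "order", "gateway"}}
print(prioritize_by_impact(test_coverage, impact))  # ['tc3', 'tc1', 'tc2']
```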


Author(s):  
Varun Gupta ◽  
D. S. Chauhan ◽  
Kamlesh Dutta

The mobile software application development process must be mature enough to handle the challenges (especially market-related ones) associated with developing high-quality mobile software. The ever-increasing number of both mobile users and mobile applications has presented software engineers with the challenge of satisfying billions of users with high-quality software applications delivered within deadline and budget. There has always been great pressure to develop complex software, characterized by thousands of requirements, in resource-constrained environments. Requirement prioritization is one of the activities undertaken by software engineers to deliver a partial software product to customers such that the most important requirements are implemented in the earliest releases. In subsequent releases, changed and pending requirements are implemented, an activity that generates ripple effects. Such ripple effects need to be tested by executing the modified source code against test cases from previous releases (regression testing). Regression testing is a very effortful activity that requires a software tester to select test cases with high fault detection capability, execute the modified code against the selected test cases, and perform debugging. This regression testing effort can be greatly reduced by considering dependencies between requirements at the time of requirement prioritization. Requirement prioritization is then carried out not only against aspects such as cost, time, risk, and business value, but also against dependencies. The aim is to implement almost all dependent highest-priority requirements in the current release so that the implementation of new requirements is unlikely to cause ripple effects. Changes in requirements might not relate to variable usage and definition and might not involve a change in functionality; in such cases there is no need to re-select already executed test cases from previous versions. Module dependencies can, however, force the selection of previous versions' test cases if requirement changes cause ripple effects. This paper aims to implement the highest-priority requirements such that regression testing is minimized, thereby improving the mobile application development process. The proposed technique has been successfully evaluated on an Android-based notification application that meets the specifications of Aakash tablets.
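As a rough sketch of dependency-aware prioritization, the snippet below orders requirements by priority while pulling each requirement's dependencies ahead of it, so that dependent high-priority requirements land in the same release. It assumes acyclic dependencies, and the requirement names and weights are invented for illustration.

```python
def dependency_aware_order(priority, depends_on):
    """Order requirements so high-priority ones come first but each
    requirement is preceded by everything it depends on (acyclic)."""
    ordered, placed = [], set()

    def place(r):
        if r in placed:
            return
        for d in depends_on.get(r, []):   # dependencies first
            place(d)
        placed.add(r)
        ordered.append(r)

    for r in sorted(priority, key=priority.get, reverse=True):
        place(r)
    return ordered

priority = {"r1": 9, "r2": 7, "r3": 4}
depends_on = {"r1": ["r3"]}   # implementing r1 ripples into r3
print(dependency_aware_order(priority, depends_on))  # ['r3', 'r1', 'r2']
```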


2022 ◽  
Vol 31 (1) ◽  
pp. 1-50
Author(s):  
Jianyi Zhou ◽  
Junjie Chen ◽  
Dan Hao

Although regression testing is important for guaranteeing software quality during software evolution, it suffers from a widely known cost problem. To address it, researchers have made dedicated efforts on test prioritization, which optimizes the execution order of tests to detect faults earlier, while practitioners in industry have leveraged more computing resources to save regression testing time. Combining these two orthogonal solutions, this article defines the problem of parallel test prioritization: conducting test prioritization in the scenario of parallel test execution to reduce the cost of regression testing. Unlike traditional sequential test prioritization, parallel test prioritization aims at generating a set of test sequences, each allocated to an individual computing resource and executed in parallel. In particular, we propose eight parallel test prioritization techniques by adapting four existing sequential test prioritization techniques, both including and excluding testing time in prioritization. To investigate the performance of the eight techniques, we conducted an extensive study on 54 open-source projects and a case study on 16 commercial projects from Baidu, a search service provider with 600M monthly active users. According to the two studies, parallel test prioritization does improve the efficiency of regression testing, and the cost-aware additional parallel test prioritization technique significantly outperforms the others, indicating that it is a good choice for practical parallel testing. We also investigated the influence of two external factors, the number of computing resources and the time allowed for parallel testing, and found that more computing resources indeed improve the performance of parallel test prioritization. In addition, we investigated two further factors, test granularity and coverage criterion, and found that parallel test prioritization still accelerates regression testing in parallel scenarios. Moreover, we investigated the benefit of parallel test prioritization in the regression testing process of continuous integration, considering both the cumulative acceleration and the overhead of the prioritization techniques, and the results demonstrate the superiority of parallel test prioritization.
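A minimal sketch of the parallel side of the problem, assuming a sequentially prioritized list and per-test costs are already available: each next test goes to the currently least-loaded worker. This is a simple list-scheduling heuristic for illustration, not any of the paper's eight techniques.

```python
import heapq

def parallelize(prioritized, cost, workers):
    """Split a prioritized test sequence into per-worker sequences,
    feeding the next test to the least-loaded worker so the makespan
    (wall-clock testing time) stays small."""
    heap = [(0.0, w) for w in range(workers)]   # (accumulated time, worker)
    heapq.heapify(heap)
    sequences = [[] for _ in range(workers)]
    for t in prioritized:
        load, w = heapq.heappop(heap)
        sequences[w].append(t)
        heapq.heappush(heap, (load + cost[t], w))
    return sequences

cost = {"t1": 5.0, "t2": 3.0, "t3": 4.0, "t4": 1.0}
# tests already ordered by a sequential prioritization technique
print(parallelize(["t1", "t2", "t3", "t4"], cost, workers=2))
# [['t1', 't4'], ['t2', 't3']]
```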


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Ali M. Alakeel

Program assertions have been recognized as a supporting tool during software development, testing, and maintenance. Accordingly, software developers place assertions within their code at positions considered error-prone or with the potential to lead to a software crash or failure. Like any other software, programs with assertions must be maintained. Depending on the type of modification applied to the program, assertions may also have to be modified; new assertions may be introduced in the new version, while others remain unchanged. This paper presents a novel fuzzy-logic-based approach for test case prioritization during regression testing of programs containing assertions. The main objective is to prioritize test cases according to their estimated potential for violating a given program assertion. The approach uses fuzzy logic to estimate the effectiveness of a given test case in violating an assertion, based on the history of the test cases in previous testing operations. We conducted a case study applying the proposed approach to various programs, and the results are promising compared with untreated and randomly ordered test cases.
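To illustrate the flavor of such an approach, here is a small sketch that fuzzifies a test case's historical assertion-violation rate into low/medium/high memberships and defuzzifies a priority score. The membership functions and rule weights are invented for the example, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def violation_priority(history_rate):
    """Fuzzy estimate of a test's potential to violate an assertion,
    from its historical violation rate in [0, 1]."""
    low = tri(history_rate, -0.5, 0.0, 0.5)
    med = tri(history_rate, 0.0, 0.5, 1.0)
    high = tri(history_rate, 0.5, 1.0, 1.5)
    # rule base: low -> 0.2, medium -> 0.5, high -> 0.9;
    # defuzzify with a weighted average of the rule outputs
    den = low + med + high
    return (0.2 * low + 0.5 * med + 0.9 * high) / den if den else 0.0

tests = {"t1": 0.8, "t2": 0.1, "t3": 0.5}   # past violation rates
print(sorted(tests, key=lambda t: violation_priority(tests[t]),
             reverse=True))                  # ['t1', 't3', 't2']
```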


2021 ◽  
Vol 50 (3) ◽  
pp. 443-457
Author(s):  
Thamer Alrawashdeh ◽  
Fuad ElQirem ◽  
Ahmad Althunibat ◽  
Roba Alsoub

Regression testing is a software testing approach executed to verify that changes made to the software do not affect the existing functionality of the product. Given the constraints of time and cost, it is impractical to re-execute all test cases whenever a change occurs. To overcome this problem in the selection of regression test cases, a prioritization technique should be employed. On the basis of some predefined criterion, prioritization techniques create an execution schedule for the test cases, so that higher-priority test cases are executed before lower-priority ones, improving the efficiency of software testing. Many prioritization criteria for regression test cases have been proposed in the software testing literature; however, most such techniques are code-based. In view of this, this work proposes a prioritization approach for regression test cases generated from software specifications, based on the criterion of Average Percentage Transition Coverage (APTC) and using a revised genetic algorithm. This criterion evaluates the rate of transition coverage by incorporating knowledge about the significance of transitions between activities in the form of weights. APTC is used as the fitness evaluation function in a genetic algorithm to measure the effectiveness of a test case sequence. Moreover, to improve the coverage percentage, the proposed approach revises the genetic algorithm to avoid getting trapped in local optima. The experimental results show that the proposed approach achieves good coverage with less execution time than the standard genetic algorithm and some other prioritization techniques.
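A sketch of the overall loop, assuming an APFD-style weighted form of APTC (the paper's exact formula may differ) and replacing the revised genetic algorithm with a simple mutation-and-accept search for brevity; the coverage sets and transition weights are hypothetical.

```python
import random

def aptc(order, covers, weight):
    """Weighted transition-coverage rate for a test ordering: an
    APFD-style measure where each transition counts by its weight.
    covers: test -> set of state-machine transitions it exercises."""
    n = len(order)
    first = {}
    for pos, t in enumerate(order, start=1):
        for tr in covers[t]:
            first.setdefault(tr, pos)
    total_w = sum(weight.values())
    weighted = sum(weight[tr] * first[tr] for tr in weight)
    return 1 - weighted / (n * total_w) + 1 / (2 * n)

def search_order(covers, weight, generations=200, seed=0):
    """Mutation-only stand-in for the revised GA: keep swapping pairs
    and accept the order whenever the APTC fitness improves."""
    rng = random.Random(seed)
    best = list(covers)
    best_fit = aptc(best, covers, weight)
    for _ in range(generations):
        cand = best[:]
        i, j = rng.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
        fit = aptc(cand, covers, weight)
        if fit > best_fit:
            best, best_fit = cand, fit
    return best, best_fit

covers = {"t1": {"a->b"}, "t2": {"b->c", "c->d"}, "t3": {"a->b", "c->d"}}
weight = {"a->b": 1.0, "b->c": 3.0, "c->d": 2.0}   # hypothetical weights
print(search_order(covers, weight))
```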

