test execution
Recently Published Documents


TOTAL DOCUMENTS: 182 (FIVE YEARS: 62)
H-INDEX: 10 (FIVE YEARS: 2)

2022 · Vol 31 (1) · pp. 1-50
Author(s): Jianyi Zhou, Junjie Chen, Dan Hao

Although regression testing is important to guarantee software quality during software evolution, it suffers from a widely known cost problem. To address this problem, researchers have made dedicated efforts on test prioritization, which optimizes the execution order of tests to detect faults earlier, while practitioners in industry have leveraged more computing resources to save regression testing time. Combining these two orthogonal solutions, in this article we define the problem of parallel test prioritization, which is to conduct test prioritization in the scenario of parallel test execution to reduce the cost of regression testing. Different from traditional sequential test prioritization, parallel test prioritization aims at generating a set of test sequences, each of which is allocated to an individual computing resource and executed in parallel. In particular, we propose eight parallel test prioritization techniques by adapting four existing sequential test prioritization techniques, both including and excluding testing time in prioritization. To investigate the performance of the eight parallel test prioritization techniques, we conducted an extensive study on 54 open-source projects and a case study on 16 commercial projects from Baidu, a well-known search service provider with 600M monthly active users. According to the two studies, parallel test prioritization does improve the efficiency of regression testing, and the cost-aware additional parallel test prioritization technique significantly outperforms the other techniques, indicating that it is a good choice for practical parallel testing. We also investigated the influence of two external factors, the number of computing resources and the time allowed for parallel testing, and found that more computing resources indeed improve the performance of parallel test prioritization. In addition, we investigated the influence of two more factors, test granularity and coverage criterion, and found that parallel test prioritization can still accelerate regression testing in the parallel scenario. Moreover, we investigated the benefit of parallel test prioritization in the regression testing process of continuous integration, considering both the cumulative acceleration performance and the overhead of the prioritization techniques, and the results demonstrate the superiority of parallel test prioritization.
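For illustration, the cost-aware additional strategy the study favours can be sketched as a greedy loop: repeatedly pick the test with the highest not-yet-covered coverage per unit of execution time, and assign it to the least-loaded computing resource. The sketch below is a minimal Python reading of that idea under stated assumptions, not the authors' implementation; the `coverage` and `cost` mappings are assumed inputs.

```python
# Hedged sketch (not the authors' implementation): cost-aware "additional"
# test prioritization adapted to parallel execution. Each greedily chosen
# test (maximizing not-yet-covered units per unit of execution time) is
# assigned to the computing resource with the smallest accumulated load.
import heapq

def parallel_additional_prioritization(tests, coverage, cost, n_workers):
    """tests: list of test ids; coverage: id -> set of covered units;
    cost: id -> execution time; returns one test sequence per worker."""
    remaining = set(tests)
    uncovered = set().union(*(coverage[t] for t in tests))
    # (accumulated load, worker index) min-heap picks the idlest resource
    workers = [(0.0, i) for i in range(n_workers)]
    heapq.heapify(workers)
    sequences = [[] for _ in range(n_workers)]
    while remaining:
        # cost-aware "additional" score: new coverage per unit time
        best = max(remaining,
                   key=lambda t: len(coverage[t] & uncovered) / max(cost[t], 1e-9))
        remaining.remove(best)
        uncovered -= coverage[best]
        if not uncovered and remaining:
            # "additional" strategy resets once everything is covered
            uncovered = set().union(*(coverage[t] for t in remaining))
        load, idx = heapq.heappop(workers)
        sequences[idx].append(best)
        heapq.heappush(workers, (load + cost[best], idx))
    return sequences
```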


Author(s): Svitlana Zubchenko, Iryna Kril, Marta Lomikovska, Anna Havrylyuk, Kristina Lischuk-Yakymovych, ...

Current developments in the pharmaceutical industry, including the synthesis of new chemical compounds, standardized treatment protocols, and disease prevention, can lead to a progressive increase in drug hypersensitivity reactions, which often have serious consequences for human health. There is increasing evidence that infections, including Herpesviridae viruses, are involved in the development of drug hypersensitivity reactions. Flow cytometry, in particular the basophil activation test, can be used to diagnose drug hypersensitivity reactions. The anamnestic, clinical, and laboratory data of 368 people were analyzed to select patients at risk of drug hypersensitivity for execution of the basophil degranulation test. Hypersensitivity reactions were most often detected to antibiotics (50.0%), radiopaque substances (27.7%), and perioperative drugs and local anesthetics (13.6% each). Clinical manifestations of these reactions were urticaria with angioneurotic edema (40.6%), urticaria (28.1%), anaphylaxis (21.9%), and obstructive bronchitis changes (9.37%). According to anamnestic and clinical-laboratory data, patients at high risk of drug hypersensitivity reactions showed frequent manifestations of herpesvirus infection: HSV1 (34.4%) and active chronic persistence of EBV (59.4%), accompanied by manifestations of EBV-associated secondary immune disorders and a prevalence of chronic EBV infection in all patients.


Author(s): Ammar Kareem Alazzawi, Helmi Md Rais, Shuib Basri, Yazan A. Alsariera, Luiz Fernando Capretz, ...

Search-based software engineering, which deploys meta-heuristics in applicable software processes, has been gaining wide attention. Recently, researchers have been advocating the adoption of meta-heuristic algorithms for t-way testing strategies (where t denotes the interaction strength among parameters). Although helpful, no single meta-heuristic based t-way strategy can claim dominance over its counterparts. For this reason, hybridizing meta-heuristic algorithms can strengthen the search capabilities of each by compensating for the limitations of one algorithm with the strengths of another. Consequently, this paper proposes a new meta-heuristic based t-way strategy called the Hybrid Artificial Bee Colony (HABCSm) strategy, which merges the advantages of the Artificial Bee Colony (ABC) algorithm with those of the Particle Swarm Optimization (PSO) algorithm. HABCSm is the first t-way strategy to adopt a Hybrid Artificial Bee Colony (HABC) algorithm with Hamming distance as its core method for generating a final test set, and the first to adopt the Hamming distance as the final selection criterion for enhancing the exploration of new solutions. The experimental results demonstrate that HABCSm provides superior performance compared with its counterparts. This finding therefore contributes to the field of software testing by minimizing the number of test cases required for test execution.
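As a rough illustration of the Hamming-distance selection criterion the strategy introduces (the ABC/PSO hybrid itself is omitted), the sketch below prefers, among candidate test cases, the one farthest in Hamming distance from the tests already selected, pushing the search toward unexplored parameter combinations. The function names and the tie-breaking setup are assumptions of this sketch, not taken from the paper.

```python
# Hedged sketch of the Hamming-distance selection idea: among candidate
# test cases (e.g., ones that tie on newly covered t-way interactions),
# prefer the one farthest from the tests already in the final set, to
# push exploration toward unvisited regions of the parameter space.

def hamming(a, b):
    """Number of parameter positions where two test cases differ."""
    return sum(x != y for x, y in zip(a, b))

def select_by_hamming(candidates, selected):
    """Pick the candidate maximizing its total Hamming distance
    to the already-selected test set."""
    if not selected:
        return candidates[0]
    return max(candidates,
               key=lambda c: sum(hamming(c, s) for s in selected))

# Example: 3 parameters, two candidates tying on new 2-way coverage
selected = [(0, 1, 0)]
candidates = [(0, 1, 1), (1, 0, 1)]
print(select_by_hamming(candidates, selected))  # -> (1, 0, 1)
```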


2021
Author(s): Carlo Busollo, Stefano Mauro, Andrea Nesci, Leonardo Sabatino Scimmi, Emanuele Baronio

Abstract

Objective: Digitalization offers several opportunities to improve the performance and reliability of Underground Gas Storage (UGS) infrastructures, especially at sites where ageing would require increased investment in maintenance and monitoring. In this context, well integrity management can benefit from the implementation of a well digital twin integrated with real-time monitoring. This work proposes a digital model of the well that provides a valuable tool to analyse its non-stationary states in order to evaluate the integrity of the barriers and the health state of the well.

Methods, Procedures, Process: The key points of well integrity management are barrier testing/qualification and annular pressure monitoring; in UGS operations, the timing of barrier assessment and of diagnostic test execution is crucial to evaluating the results correctly. The digital model provides a tool that helps the well engineer understand the health state of the well and plan maintenance activities. It considers a physical model of the well composed of gas- and liquid-filled chambers in the annuluses and in the tubing, together with all the potential leak paths that could connect the annuluses, the tubing, and the reservoir to the external environment. Each chamber is modelled through its mass and energy balance, while fluid resistances describe fluid leakage across the barriers. Appropriate models, selected according to the geometry and type of each well barrier, describe each fluid resistance. The input parameters are the well architecture, flowing tubing temperature and pressure, and gas flow rate. The model provides pressure and temperature trends and estimates of leak-rate trends or annular liquid-level movements during the observation time window. The fine tuning of the model of each well is carried out by seeking, with a genetic algorithm, the parameter values that best describe each single leak path, such as the size and position of the leaking point.

Results, Observations, Conclusions: The model has been customised and validated on several wells, some with perfect integrity status and others with integrity issues. Results showed a very good fit with field data, as well as high precision in identifying leak position and size. The tool can also be applied to forecast well behaviour after a mitigating action or to simulate the evolution of a leak. Example applications are evaluating the correct time to top up a casing with liquid or nitrogen, or the effect on annular pressure of limiting the withdrawal or injection flow rate.
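A minimal sketch of the fine-tuning step described above, assuming a black-box pressure simulator in place of the paper's physical well model: a small genetic algorithm searches for the leak parameters (for example size and depth) whose simulated annulus-pressure trend best matches field data. `simulate_pressure`, the parameter bounds, and the GA settings are all assumptions of this sketch.

```python
# Hedged, minimal sketch of genetic-algorithm tuning of leak parameters.
# `simulate_pressure` stands in for the paper's physical well model.
import random

def fitness(params, measured, simulate_pressure):
    sim = simulate_pressure(*params)  # simulated annulus-pressure trend
    return -sum((s - m) ** 2 for s, m in zip(sim, measured))  # neg. SSE

def tune_leak(measured, simulate_pressure, bounds, pop=30, gens=50):
    """bounds: [(lo, hi)] per parameter, e.g. leak size and leak depth."""
    population = [[random.uniform(lo, hi) for lo, hi in bounds]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population,
                        key=lambda p: fitness(p, measured, simulate_pressure),
                        reverse=True)
        parents = scored[:pop // 2]  # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # arithmetic crossover
            child = [min(hi, max(lo, g + random.gauss(0, (hi - lo) * 0.05)))
                     for g, (lo, hi) in zip(child, bounds)]  # bounded mutation
            children.append(child)
        population = parents + children
    return max(population, key=lambda p: fitness(p, measured, simulate_pressure))
```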


2021
Author(s): Shouvick Mondal, Denini Silva, Marcelo d'Amorim
Keyword(s):  

IET Software · 2021
Author(s): Foivos Tsimpourlas, Gwenyth Rooijackers, Ajitha Rajan, Miltiadis Allamanis

2021
Author(s): Emmanuel Udofia

Abstract Well testing can be described as the process of measuring the volumes of oil, water, and gas produced from a well in order to identify the current state of the well. Amongst other things, well testing aims to provide information for effective Well, Reservoir and Facility Management. Normally, as a means of well performance health-check, a reconciliation factor (RF) is generated by comparing the fiscal production volume against the theoretical well test volume. Experiences from the Coronavirus pandemic have brought about a new normal in well test execution. In a deepwater environment, the process of well testing is more challenging, and this paper aims to address these challenges and propose an optimum well test frequency for deepwater operations. It is usually required that a routine well test be conducted once every month on all flowing strings, for statutory compliance and well health-check purposes. However, in a deepwater environment it is difficult to comply with this periodic well test requirement, mainly due to production flowline slugging and plant process upsets and/or trips, which result in production deferment and operational risk exposure. Furthermore, carrying out a well test in deepwater operations requires a production cutback for flow assurance purposes, which usually results in large production deferment. In the field of interest, this challenge has been managed by deploying a data-driven application that monitors production on individual flowing strings in real time, thereby optimizing the frequency of well tests on every flowing well. Varying-rate well test data are captured and used to calibrate this application for subsequent real-time production monitoring. This initiative ensures that all the challenges mentioned earlier are managed while the frequency of testing the wells is optimized, with the intelligent application serving as a 'virtual meter' that tests all producing wells in real time. Well testing in most deepwater assets remains a big challenge, but this project-based field experience has ensured effective well testing operations, reducing production deferment and safety exposure during plant trips whilst optimizing the frequency of testing the wells. Following the achievement of an optimized quarterly well test frequency in this Nigerian deepwater field, the recommendations from this paper will assist other deepwater field operators in managing routine well testing operations optimally.
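To illustrate the 'virtual meter' concept in the abstract (the field application itself is not described in code in the source), the sketch below calibrates a deliberately simple per-well rate model from varying-rate well-test points and then estimates rates from continuously measured data between physical tests. The linear-in-drawdown model, the units, and the variable names are all assumptions of this sketch, not the actual application.

```python
# Hedged illustration of a "virtual meter": fit a simple rate model from
# periodic well-test data, then estimate rates in real time from live
# wellhead measurements. Model form and numbers are illustrative only.
import numpy as np

def calibrate(test_drawdown_bar, test_rate_stbd):
    """Least-squares fit of rate = a * drawdown + b from well-test points."""
    A = np.vstack([test_drawdown_bar, np.ones_like(test_drawdown_bar)]).T
    coeffs, *_ = np.linalg.lstsq(A, test_rate_stbd, rcond=None)
    return coeffs  # (a, b)

def estimate_rate(coeffs, live_drawdown_bar):
    """Real-time rate estimate between physical well tests."""
    a, b = coeffs
    return a * live_drawdown_bar + b

# Quarterly well tests calibrate the model; live sensor data then drive it.
coeffs = calibrate(np.array([20.0, 35.0, 50.0]),      # drawdown, bar
                   np.array([4100., 7000., 9900.]))   # tested rate, stb/d
print(estimate_rate(coeffs, 42.0))  # estimated current rate
```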


Vehicles · 2021 · Vol 3 (3) · pp. 426-447
Author(s): Alexander Strassheim

As long as road accidents happen, passive safety systems like the airbag control unit are an essential part of the overall automotive safety system. Within the airbag control unit, the event data recorder (EDR) is an integrated function. Recent developments in legislation show that an increasing number of EDR-related regulations are being introduced; they mainly focus on data recording and crash-data retrieval, and some of them define testing aspects. In system testing of an airbag control unit with a focus on the event data recorder, the question arises of how to deal with the fact that real-world crash events are not "straightforward" but arbitrary, following no rules or restrictions. The purpose of this work is to develop a test approach that is robust to these conditions, giving a tester the possibility to extend the test depth while applying the common test design techniques and testing principles. The applied methodology is the use of optimization algorithms in an automated test environment, with which the tester can steer the test execution in a predefined way with minimal interaction. Applying the developed test method automatically creates a set of test data that fulfills the conditions predefined by the user. The generated results show that a large number of test data points are created at and close to the target condition. Consequently, this test approach extends the common test design techniques with regard to how test input data can be created, and especially how automated test data creation and test execution can be realized.
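A minimal sketch of the optimization-driven test-data generation idea: a simple (1+1) evolutionary loop mutates crash-pulse parameters so that an observed trigger metric lands at and near a predefined target condition, archiving every evaluated input as test data. `run_edr_test` stands in for the automated test environment and, like the stand-in model in the usage example, is an assumption of this sketch rather than the paper's tooling.

```python
# Hedged sketch: optimization-driven generation of EDR test data. A (1+1)
# evolutionary loop mutates crash-pulse parameters toward a target value
# of the observed trigger metric, keeping every evaluated input.
import random

def generate_test_data(run_edr_test, target, init, sigma=0.1, budget=500):
    """run_edr_test: params -> observed trigger metric (e.g., delta-v);
    returns all evaluated (params, error) pairs, clustered near `target`."""
    best = list(init)
    best_err = abs(run_edr_test(best) - target)
    archive = [(list(best), best_err)]
    for _ in range(budget):
        cand = [p + random.gauss(0, sigma) for p in best]  # mutate pulse params
        err = abs(run_edr_test(cand) - target)
        archive.append((cand, err))
        if err < best_err:  # accept improving candidates
            best, best_err = cand, err
    return archive  # test inputs at and close to the target condition

# Usage with a stand-in system model (assumed, for illustration only):
model = lambda p: 2.0 * p[0] + 0.5 * p[1]
data = generate_test_data(model, target=8.0, init=[1.0, 1.0])
print(min(data, key=lambda d: d[1]))  # parameters closest to the target
```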

