Inherently Conservative Nonpolynomial-Based Remapping Schemes: Application to Semi-Lagrangian Transport

2008 ◽  
Vol 136 (12) ◽  
pp. 5044-5061 ◽  
Author(s):  
Matthew R. Norman ◽  
Ramachandran D. Nair

Abstract A group of new conservative remapping schemes based on nonpolynomial approximations is proposed. The remapping schemes rely on the conservative cascade scheme (CCS), which employs an efficient sequence of 1D remapping operations to solve a multidimensional problem. The present study adapts three new nonpolynomial-based reconstructions of subgrid variation to the CCS: the Piecewise Hyperbolic Method (PHM), the Piecewise Double Hyperbolic Method (PDHM), and the Piecewise Rational Method (PRM), for comparison with the baseline method, the Piecewise Parabolic Method (PPM). Additionally, an adaptive hybrid approximation scheme, PPM-Hybrid (PPM-H), is constructed using monotonic PPM for smooth data and local extrema and using PHM for steep jumps, where PPM typically suffers large accuracy degradation because of its original monotonic filter. Smooth and nonsmooth data profiles are transported in 1D, 2D Cartesian, and 2D spherical frameworks under uniform advection, solid-body rotation, and deformational flow. Accuracy is compared via the L1 global error norm. In general, PPM outperformed PHM, but when the majority of the error came from PPM degradation at sharp derivative changes (e.g., in the vicinity of sine-wave extrema), PHM was more accurate. PRM performed very similarly to PPM for nonsmooth functions, but its order of convergence was worse than that of PPM for smoother data. PDHM performed the worst of all of the nonpolynomial methods for nearly every test case. PPM-H outperformed PPM and all of the nonpolynomial methods for all test cases in all geometries, offering a robust advantage within the CCS with a negligible increase in computational time.
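
Conceptually, each 1D sweep of the CCS reduces to a conservative remap of cell averages between two grids. The following is a minimal sketch (not the authors' code) of such a remap using the cumulative-mass formulation with linear interpolation, i.e., a low-order stand-in for the PPM/PHM/PRM reconstructions compared in the paper; the grid sizes and test profile are arbitrary assumptions.

```python
import numpy as np

def conservative_remap_1d(src_edges, src_avgs, dst_edges):
    """Conservatively remap cell averages from one 1D grid to another.

    Uses the cumulative-mass formulation: the running integral of the field
    is interpolated to the destination cell edges and then differenced.
    Linear interpolation of the cumulative mass corresponds to a
    piecewise-constant subgrid reconstruction; higher-order schemes
    (PPM, PHM, PRM) replace this step with parabolic, hyperbolic, or
    rational reconstructions of the subgrid variation.
    """
    widths = np.diff(src_edges)
    # Cumulative mass at the source cell edges (zero at the left boundary).
    cum_mass = np.concatenate(([0.0], np.cumsum(src_avgs * widths)))
    # Interpolate the cumulative mass to the destination edges.
    cum_dst = np.interp(dst_edges, src_edges, cum_mass)
    # New cell averages = mass in each destination cell / its width.
    return np.diff(cum_dst) / np.diff(dst_edges)

# Example: remap a sine profile from a 50-cell grid to a 40-cell grid.
src_edges = np.linspace(0.0, 1.0, 51)
src_avgs = np.sin(2 * np.pi * 0.5 * (src_edges[:-1] + src_edges[1:]))
dst_edges = np.linspace(0.0, 1.0, 41)
dst_avgs = conservative_remap_1d(src_edges, src_avgs, dst_edges)
# Total mass is preserved up to round-off:
assert np.isclose(np.sum(src_avgs * np.diff(src_edges)),
                  np.sum(dst_avgs * np.diff(dst_edges)))
```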

2022 ◽  
Vol 13 (1) ◽  
pp. 0-0

Software Product Lines (SPLs) cover a mixture of features for testing a Software Application Program (SPA). Reducing testing cost is a major metric of software testing. In combinatorial testing (CT), maximizing fault-type coverage and reducing the test suite play a key role in lowering the testing cost of the SPA. The metaheuristic Genetic Algorithm (GA) does not offer the best outcome for the test suite optimization problem because of its mutation operation and its higher computational time. Therefore, a Fault-Type Coverage Based Ant Colony Optimization (FTCBACO) algorithm is proposed for test suite reduction in CT. The FTCBACO algorithm starts with the test cases in the test suite and assigns a separate ant to each test case. Ants select the best test cases by updating pheromone trails and choosing trails with higher probability. The best test-case path of the ant with the least time is taken as the optimal solution for performing CT. Hence, the FTCBACO technique improves the test suite reduction rate and efficiently minimizes the computational time of reducing test cases for CT.
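
Purely as an illustration of the approach's shape (not the authors' FTCBACO implementation), the Python sketch below has ants greedily assemble fault-covering subsets guided by pheromone, with pheromone reinforced along the smallest suite found so far; the coverage model, parameters, and data are assumptions.

```python
import random

def aco_test_suite_reduction(coverage, n_ants=10, n_iters=50,
                             evaporation=0.1, seed=0):
    """Ant-colony sketch for test suite reduction.

    coverage: dict mapping test-case id -> set of fault types it covers.
    Each ant builds a candidate reduced suite by repeatedly picking a test
    case with probability proportional to pheromone * newly covered faults,
    until all fault types are covered. Pheromone evaporates each iteration
    and is reinforced along the smallest suite found.
    """
    rng = random.Random(seed)
    all_faults = set().union(*coverage.values())
    pheromone = {t: 1.0 for t in coverage}
    best = list(coverage)  # worst case: keep every test case

    for _ in range(n_iters):
        for _ in range(n_ants):
            covered, suite = set(), []
            while covered < all_faults:
                candidates = [t for t in coverage
                              if t not in suite and coverage[t] - covered]
                weights = [pheromone[t] * len(coverage[t] - covered)
                           for t in candidates]
                choice = rng.choices(candidates, weights=weights, k=1)[0]
                suite.append(choice)
                covered |= coverage[choice]
            if len(suite) < len(best):
                best = suite
        # Evaporate, then deposit pheromone on the best suite so far.
        for t in pheromone:
            pheromone[t] *= (1.0 - evaporation)
        for t in best:
            pheromone[t] += 1.0 / len(best)
    return best

# Example: 5 test cases covering 4 fault types.
cov = {"t1": {"A", "B"}, "t2": {"B"}, "t3": {"C", "D"},
       "t4": {"A"}, "t5": {"D"}}
print(aco_test_suite_reduction(cov))   # a minimal covering suite, e.g. ['t1', 't3']
```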


2021 ◽  
Vol 26 (4) ◽  
Author(s):  
Man Zhang ◽  
Bogdan Marculescu ◽  
Andrea Arcuri

Abstract Nowadays, RESTful web services are widely used for building enterprise applications. REST is not a protocol, but rather it defines a set of guidelines on how to design APIs to access and manipulate resources using HTTP over a network. In this paper, we propose an enhanced search-based method for automated system test generation for RESTful web services, by exploiting domain knowledge on the handling of HTTP resources. The proposed techniques use domain knowledge specific to RESTful web services and a set of effective templates to structure test actions (i.e., ordered sequences of HTTP calls) within an individual in the evolutionary search. The action templates are developed based on the semantics of HTTP methods and are used to manipulate the web services’ resources. In addition, we propose five novel sampling strategies with four sampling methods (i.e., resource-based sampling) for the test cases that can use one or more of these templates. The strategies are further supported with a set of new, specialized mutation operators (i.e., resource-based mutation) in the evolutionary search that take into account the use of these resources in the generated test cases. Moreover, we propose a novel dependency handling to detect possible dependencies among the resources in the tested applications. The resource-based sampling and mutations are then enhanced by exploiting the information of these detected dependencies. To evaluate our approach, we implemented it as an extension to the EvoMaster tool, and conducted an empirical study with two selected baselines on 7 open-source and 12 synthetic RESTful web services. Results show that our novel resource-based approach with dependency handling obtains a significant improvement in performance over the baselines, e.g., up to +130.7% relative improvement (growing from +27.9% to +64.3%) on line coverage.
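
To make the notion of resource-based action templates concrete, here is a small Python sketch using the requests library: each template is an ordered sequence of HTTP verbs applied to one resource, with the id returned by a POST reused in the follow-up calls. The endpoint, payload, and template names are hypothetical, and EvoMaster itself is a Java/Kotlin tool, so this illustrates the idea rather than its API.

```python
import requests

# Action templates inspired by structuring HTTP calls around a resource's
# lifecycle: create the resource first, then exercise it with follow-up verbs.
# These template names are illustrative, not EvoMaster's internal ones.
TEMPLATES = {
    "POST-GET":    ["POST", "GET"],
    "POST-PUT":    ["POST", "PUT"],
    "POST-DELETE": ["POST", "DELETE"],
    "GET":         ["GET"],
}

def run_template(base_url, resource, payload, template):
    """Execute one ordered sequence of HTTP calls against a single resource."""
    responses = []
    created_id = None
    for verb in TEMPLATES[template]:
        if verb == "POST":
            r = requests.post(f"{base_url}/{resource}", json=payload)
            # Assumes the (hypothetical) service returns the new id as JSON.
            created_id = r.json().get("id") if r.ok else None
        elif created_id is not None:
            url = f"{base_url}/{resource}/{created_id}"
            r = requests.request(verb, url, json=payload if verb == "PUT" else None)
        else:
            # No created resource to act on: fall back to reading the collection.
            r = requests.get(f"{base_url}/{resource}")
        responses.append((verb, r.status_code))
    return responses

# Hypothetical usage against a locally running service under test:
# print(run_template("http://localhost:8080/api", "orders",
#                    {"item": "book", "qty": 1}, "POST-GET"))
```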


Mathematics ◽  
2021 ◽  
Vol 9 (15) ◽  
pp. 1779
Author(s):  
Wanida Khamprapai ◽  
Cheng-Fa Tsai ◽  
Paohsi Wang ◽  
Chi-En Tsai

Test case generation is an important process in software testing. However, manual generation of test cases is a time-consuming process. Automation can considerably reduce the time required to create adequate test cases for software testing. Genetic algorithms (GAs) are considered to be effective in this regard. The multiple-searching genetic algorithm (MSGA) uses a modified version of the GA to solve the multicast routing problem in network systems. MSGA can be improved to make it suitable for generating test cases. In this paper, a new algorithm called the enhanced multiple-searching genetic algorithm (EMSGA), which involves a few additional processes for selecting the best chromosomes in the GA process, is proposed. The performance of EMSGA was evaluated through comparison with seven different search-based techniques, including random search. All algorithms were implemented in EvoSuite, which is a tool for automatic generation of test cases. The experimental results showed that EMSGA increased the efficiency of testing when compared with conventional algorithms and could detect more faults. Because of its superior performance compared with that of existing algorithms, EMSGA can enable seamless automation of software testing, thereby facilitating the development of different software packages.
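
For readers unfamiliar with the underlying machinery, the following is a minimal generational GA in Python (tournament selection, one-point crossover, per-gene mutation), standing in for the kind of search loop that EvoSuite-style generators build on; it is not EMSGA itself, and the toy fitness function and parameters are assumptions.

```python
import random

def genetic_search(fitness, gene_pool, genome_len=8, pop_size=30,
                   generations=100, p_mut=0.1, seed=0):
    """Minimal generational GA: tournament selection, one-point crossover,
    per-gene mutation. `fitness` maps a genome (list of genes) to a number
    to maximize, e.g. the coverage achieved by the encoded test case."""
    rng = random.Random(seed)
    pop = [[rng.choice(gene_pool) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            return max(rng.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, genome_len)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [rng.choice(gene_pool) if rng.random() < p_mut else g
                     for g in child]                    # per-gene mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy example: evolve an integer vector whose sum approaches a target,
# standing in for a coverage-style fitness on encoded test inputs.
best = genetic_search(lambda g: -abs(sum(g) - 42), gene_pool=range(10))
print(best, sum(best))
```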


2016 ◽  
Vol 2016 ◽  
pp. 1-15 ◽  
Author(s):  
Kevin M. Betts ◽  
Mikel D. Petty

Autonomous systems must successfully operate in complex time-varying spatial environments even when dealing with system faults that may occur during a mission. Consequently, evaluating the robustness, or ability to operate correctly under unexpected conditions, of autonomous vehicle control software is an increasingly important issue in software testing. New methods to automatically generate test cases for robustness testing of autonomous vehicle control software in closed-loop simulation are needed. Search-based testing techniques were used to automatically generate test cases, consisting of initial conditions and fault sequences, intended to challenge the control software more than test cases generated using current methods. Two different search-based testing methods, genetic algorithms and surrogate-based optimization, were used to generate test cases for a simulated unmanned aerial vehicle attempting to fly through an entryway. The effectiveness of the search-based methods in generating challenging test cases was compared to both a truth reference (full combinatorial testing) and the method most commonly used today (Monte Carlo testing). The search-based testing techniques demonstrated better performance than Monte Carlo testing for both of the test case generation performance metrics: (1) finding the single most challenging test case and (2) finding the set of fifty test cases with the highest mean degree of challenge.
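
The surrogate-based side of that comparison can be sketched as follows: evaluate a small seed set of test cases in the expensive closed-loop simulation, fit a cheap predictive model, and let the model pick which candidate to simulate next. The Python sketch below follows that pattern with a k-nearest-neighbour surrogate from scikit-learn and a stand-in "challenge" function in place of the UAV simulation; everything named here is an assumption for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def expensive_challenge(x):
    """Stand-in for the closed-loop UAV simulation: returns a scalar
    'degree of challenge' for a test case x = (initial condition, fault time)."""
    return np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.1 * x[0]

def surrogate_search(n_seed=20, n_iters=30, n_candidates=500, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, np.pi, size=(n_seed, 2))          # seed test cases
    y = np.array([expensive_challenge(x) for x in X])    # true evaluations
    for _ in range(n_iters):
        model = KNeighborsRegressor(n_neighbors=3).fit(X, y)
        cand = rng.uniform(0, np.pi, size=(n_candidates, 2))
        pick = cand[np.argmax(model.predict(cand))]      # most promising candidate
        X = np.vstack([X, pick])
        y = np.append(y, expensive_challenge(pick))      # only this one is simulated
    return X[np.argmax(y)], y.max()

best_case, best_score = surrogate_search()
print("most challenging test case found:", best_case, best_score)
```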


Author(s):  
RUBING HUANG ◽  
XIAODONG XIE ◽  
DAVE TOWEY ◽  
TSONG YUEH CHEN ◽  
YANSHENG LU ◽  
...  

Combinatorial interaction testing is a well-recognized testing method, and has been widely applied in practice, often with the assumption that all test cases in a combinatorial test suite have the same fault detection capability. However, when testing resources are limited, an alternative assumption may be that some test cases are more likely to reveal failure, thus making the order of executing the test cases critical. To improve testing cost-effectiveness, prioritization of combinatorial test cases is employed. The most popular approach is based on interaction coverage, which prioritizes combinatorial test cases by repeatedly choosing an unexecuted test case that covers the largest number of uncovered parameter value combinations of a given strength (level of interaction among parameters). However, this approach suffers from some drawbacks. Based on previous observations that the majority of faults in practical systems can usually be triggered with parameter interactions of small strengths, we propose a new strategy of prioritizing combinatorial test cases by incrementally adjusting the strength values. Experimental results show that our method performs better than the random prioritization technique and the technique of prioritizing combinatorial test suites according to test case generation order, and has better performance than the interaction-coverage-based test prioritization technique in most cases.
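
The sketch below illustrates the bookkeeping in Python: tests are greedily ordered by how many uncovered t-way value combinations they add, and once strength t is exhausted the strength is incremented. It only loosely mirrors the proposed incremental-strength strategy, and the parameter model and suite are invented for illustration.

```python
from itertools import combinations

def t_way_combos(test, t):
    """All parameter-value combinations of strength t in one test case."""
    return {frozenset(pairs) for pairs in combinations(sorted(test.items()), t)}

def incremental_strength_prioritize(tests, max_strength):
    remaining = list(range(len(tests)))
    order = []
    for t in range(1, max_strength + 1):         # start with small strengths
        covered = set()
        for i in order:                          # credit already-ordered tests
            covered |= t_way_combos(tests[i], t)
        while remaining:
            gains = [(len(t_way_combos(tests[i], t) - covered), i)
                     for i in remaining]
            best_gain, best_i = max(gains)
            if best_gain == 0:                   # strength t exhausted: raise t
                break
            order.append(best_i)
            covered |= t_way_combos(tests[best_i], t)
            remaining.remove(best_i)
    order.extend(remaining)                      # leftovers keep original order
    return order

# Example: 4 test cases over three parameters, prioritized up to strength 2.
suite = [{"os": "linux", "db": "mysql", "ui": "web"},
         {"os": "mac",   "db": "mysql", "ui": "cli"},
         {"os": "linux", "db": "pgsql", "ui": "cli"},
         {"os": "mac",   "db": "pgsql", "ui": "web"}]
print(incremental_strength_prioritize(suite, max_strength=2))
```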


Author(s):  
Arpita Dutta ◽  
Amit Jha ◽  
Rajib Mall

Fault localization techniques aim to localize faulty statements using the information gathered from both passed and failed test cases. We present a mutation-based fault localization technique called MuSim. MuSim identifies the faulty statement based on its computed proximity to different mutants. We study the performance of MuSim by using four different similarity metrics. To satisfactorily measure the effectiveness of our proposed approach, we present a new evaluation metric called Mut_Score. Based on this metric, on average, MuSim is 33.21% more effective than existing fault localization techniques such as DStar, Tarantula, Crosstab, and Ochiai.
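
A minimal sketch of the general idea (not the authors' MuSim implementation or their Mut_Score metric): each mutant has a kill vector over the test suite, that vector is compared to the program's pass/fail vector with a similarity metric such as Ochiai, and a statement inherits the best similarity of the mutants placed on it. All names and data below are illustrative.

```python
from math import sqrt

def ochiai(u, v):
    """Ochiai similarity of two binary vectors (1 = test failed / mutant killed)."""
    both = sum(a and b for a, b in zip(u, v))
    return both / sqrt(sum(u) * sum(v)) if sum(u) and sum(v) else 0.0

def rank_statements(fail_vector, mutants):
    """mutants: list of (statement_id, kill_vector). A statement's suspiciousness
    is the best similarity between the failure pattern and any mutant on it."""
    scores = {}
    for stmt, kills in mutants:
        score = ochiai(fail_vector, kills)
        scores[stmt] = max(score, scores.get(stmt, 0.0))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: 4 tests (1 = failed), 3 mutants placed on 2 statements.
failures = [1, 0, 1, 0]
mutants = [("s10", [1, 0, 1, 0]),   # mutant killed exactly by the failing tests
           ("s10", [1, 1, 0, 0]),
           ("s42", [0, 1, 0, 1])]
print(rank_statements(failures, mutants))   # s10 ranked above s42
```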


Author(s):  
Jason D. Miller ◽  
David J. Buckmaster ◽  
Katherine Hart ◽  
Timothy J. Held ◽  
David Thimsen ◽  
...  

Increasing the efficiency of coal-fired power plants is vital to reducing electricity costs and emissions. Power cycles employing supercritical carbon dioxide (sCO2) as the working fluid have the potential to increase power cycle efficiency by 3–5 percentage points over state-of-the-art oxy-combustion steam-Rankine cycles operating under comparable conditions. To date, the majority of studies have focused on the integration and optimization of sCO2 power cycles in waste heat, solar, or nuclear applications. The goal of this study is to demonstrate the potential of sCO2 power cycles and quantify the power cycle efficiency gains that can be achieved versus the state-of-the-art steam-Rankine cycles employed in oxy-fired coal power plants. Turbine inlet conditions were varied among the sCO2 test cases and compared with existing Department of Energy (DOE)/National Energy Technology Laboratory (NETL) steam base cases. Two separate sCO2 test cases were considered and the associated flowsheets developed. The turbine inlet conditions for this study were chosen to match conditions in a coal-fired ultra-supercritical steam plant (T_inlet = 593°C, P_inlet = 24.1 MPa) and an advanced ultra-supercritical steam plant (T_inlet = 730°C, P_inlet = 27.6 MPa). A plant size of 550 MWe was selected to match available information on existing DOE/NETL base cases. The effects of cycle architecture, combustion-air preheater temperature, and cooling source type were considered subject to comparable heat source and reference conditions taken from the steam-Rankine reference cases. Combinations and variants of sCO2 power cycles, including cascade and recompression cycles and variants with multiple reheat and compression steps, were considered with varying heat-rejection subsystems: air-cooled, direct cooling tower, and indirect-loop cooling tower. Where appropriate, the combustion-air preheater inlet temperature was also varied. Through use of a multivariate nonlinear optimization design process that considers both performance and economic impacts, curves of minimum cost versus efficiency were generated for each sCO2 test case and combination of architecture and operational choices. These curves indicate peak theoretical efficiency and suggest practical limits based on incremental cost versus performance. For a given test case, results for individual architectural and operational options give insight into cost and performance improvements from step-changes in system complexity and design, allowing down-selection of candidate architectures. Optimized designs for each test case were then selected based on practical efficiency limits within the remaining candidate architectures and compared to the relevant baseline steam plant. sCO2 cycle flowsheets are presented for each optimized design.
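
To give a feel for where such efficiency figures come from, the sketch below estimates the thermal efficiency of a simple single-recuperator sCO2 Brayton cycle from state-point enthalpies using the CoolProp property library. Only the 593°C / 24.1 MPa turbine inlet is taken from the first test case above; the remaining state points, component efficiencies, and recuperator effectiveness are illustrative assumptions, and the study itself optimizes far more elaborate cascade and recompression flowsheets.

```python
from CoolProp.CoolProp import PropsSI

def recuperated_sco2_efficiency(T_turb_in=593 + 273.15, P_hi=24.1e6,
                                P_lo=8.0e6, T_comp_in=35 + 273.15,
                                eta_c=0.85, eta_t=0.90, eps=0.90,
                                fluid="CO2"):
    """Thermal efficiency of a simple recuperated sCO2 Brayton cycle
    (assumed state points; per-unit-mass basis, pressure drops neglected)."""
    # Compressor: isentropic enthalpy rise corrected by isentropic efficiency.
    h1 = PropsSI("H", "T", T_comp_in, "P", P_lo, fluid)
    s1 = PropsSI("S", "T", T_comp_in, "P", P_lo, fluid)
    h2 = h1 + (PropsSI("H", "S", s1, "P", P_hi, fluid) - h1) / eta_c
    # Turbine: isentropic enthalpy drop corrected by isentropic efficiency.
    h5 = PropsSI("H", "T", T_turb_in, "P", P_hi, fluid)
    s5 = PropsSI("S", "T", T_turb_in, "P", P_hi, fluid)
    h6 = h5 - eta_t * (h5 - PropsSI("H", "S", s5, "P", P_lo, fluid))
    # Recuperator, effectiveness on the limiting low-pressure hot side:
    # at most it can be cooled to the cold-stream inlet temperature.
    T2 = PropsSI("T", "H", h2, "P", P_hi, fluid)
    q_recup = eps * (h6 - PropsSI("H", "T", T2, "P", P_lo, fluid))
    h3 = h2 + q_recup                      # preheated CO2 entering the heater
    q_in = h5 - h3                         # heat added by the coal-fired heater
    w_net = (h5 - h6) - (h2 - h1)          # turbine work minus compressor work
    return w_net / q_in

print(f"estimated cycle thermal efficiency: {recuperated_sco2_efficiency():.1%}")
```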


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Ali M. Alakeel

Program assertions have been recognized as a supporting tool during software development, testing, and maintenance. Therefore, software developers place assertions within their code in positions that are considered to be error prone or that have the potential to lead to a software crash or failure. Similar to any other software, programs with assertions must be maintained. Depending on the type of modification applied to the program, its assertions might also have to be modified. New assertions may also be introduced in the new version of the program, while some assertions can be kept the same. This paper presents a novel approach for test case prioritization during regression testing of programs that have assertions using fuzzy logic. The main objective of this approach is to prioritize the test cases according to their estimated potential in violating a given program assertion. To develop the proposed approach, we utilize fuzzy logic techniques to estimate the effectiveness of a given test case in violating an assertion based on the history of the test cases in previous testing operations. We have conducted a case study in which the proposed approach is applied to various programs, and the results are promising compared to untreated and randomly ordered test cases.
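
As a toy illustration of the fuzzy-scoring idea (not the paper's actual membership functions or rule base), the sketch below maps a test case's historical assertion-violation rate and the recency of its violations to a crisp priority through triangular membership functions, a four-rule base, and weighted-average defuzzification; all shapes, rules, and data are assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_priority(violation_rate, recency):
    """violation_rate, recency in [0, 1]; returns a crisp priority in [0, 1]."""
    low_v, high_v = tri(violation_rate, -0.5, 0.0, 0.6), tri(violation_rate, 0.4, 1.0, 1.5)
    old, recent = tri(recency, -0.5, 0.0, 0.6), tri(recency, 0.4, 1.0, 1.5)
    # Rule base: high violation history AND recent violations -> high priority, etc.
    rules = [
        (min(high_v, recent), 0.9),   # -> high priority
        (min(high_v, old), 0.6),      # -> medium priority
        (min(low_v, recent), 0.5),    # -> medium priority
        (min(low_v, old), 0.1),       # -> low priority
    ]
    num = sum(w * c for w, c in rules)
    den = sum(w for w, c in rules)
    return num / den if den else 0.0

# Order test cases by their fuzzy priority (highest first).
history = {"tc1": (0.8, 0.9), "tc2": (0.2, 0.1), "tc3": (0.7, 0.2)}
ranked = sorted(history, key=lambda t: fuzzy_priority(*history[t]), reverse=True)
print(ranked)   # tc1 comes first
```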


2018 ◽  
Vol 90 (8) ◽  
pp. 1221-1226
Author(s):  
Sreedhar Karunakaran

Purpose The purpose of this paper is to explore various in-flight crew escape options of a prototype transport aircraft and finalize the option offering the safest crew egress for different combinations of contingencies and flight conditions. Design/methodology/approach Various egress options were explored through simulation in computational fluid dynamics (CFD) software using an aircraft 3D CAD model and scalable digital mannequins. For this, certain important contingencies that best describe the extreme aircraft behaviour were identified. Crew escape options with the least external interference in the expected egress trajectory were selected. Test simulations representing each feasible combination of contingency, escape option and flight condition were carried out. The option that offers safe crew escape in every test case is deemed to be the safest egress option for the test aircraft. Findings Among the five options explored, crew escape through the forward ventral hatch provided the safest crew escape for all test cases. The selected option was validated for robustness with additional test cases modelling different anthropometric characteristics of 5th and 50th percentile pilot populations with different postures. Originality/value In-flight validation of a safe crew escape option is infeasible by actual trial. Exploration of safe crew escape options for the required number of test cases by any analytical method or by wind tunnel tests is tedious, time-consuming and extremely expensive. On the other hand, exploration of the safest crew escape option by CFD, besides being the first of its kind, provides a convenient option to configure, test and validate different test cases with unmatched benefits in time, cost and simplicity.


Test case prioritization (TCP) is a software testing technique that finds an ideal ordering of test cases for regression testing, so that testers can obtain the maximum benefit from their test suite even if the testing process is stopped at some arbitrary point. The recent trend in software development uses the object-oriented (OO) paradigm. This paper proposes a cost-cognizant TCP approach for object-oriented software that uses path-based integration testing. Path-based integration testing identifies the possible execution paths and extracts them from the Java System Dependence Graph (JSDG) model of the source code using a forward slicing technique. Afterward, an evolutionary algorithm (EA) is employed to prioritize test cases based on the severity detected per unit cost for each of the dependent faults. The proposed technique, known as Evolutionary Cost-Cognizant Regression Test Case Prioritization (ECRTP), is implemented as a regression testing approach for the experiments.
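
For intuition, a greedy simplification of cost-cognizant prioritization is sketched below: repeatedly pick the test case with the highest additional fault severity detected per unit execution cost. This is only a stand-in for the proposed EA-based ECRTP, and the fault, severity, and cost data are invented for illustration.

```python
def cost_cognizant_order(tests):
    """tests: dict test_id -> (cost, {fault_id: severity}).
    Greedily pick the test with the highest additional severity per unit cost."""
    remaining = dict(tests)
    detected, order = set(), []
    while remaining:
        def gain(item):
            _, (cost, faults) = item
            extra = sum(sev for f, sev in faults.items() if f not in detected)
            return extra / cost
        tid, (cost, faults) = max(remaining.items(), key=gain)
        order.append(tid)
        detected |= set(faults)
        del remaining[tid]
    return order

# Hypothetical severity-per-cost data (severity on a 1-10 scale, cost in seconds).
tests = {"t1": (2.0, {"f1": 8}),
         "t2": (5.0, {"f1": 8, "f2": 3}),
         "t3": (1.0, {"f3": 5}),
         "t4": (4.0, {})}
print(cost_cognizant_order(tests))   # ['t3', 't1', 't2', 't4']
```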

