THE COMPLEXITY OF COVERAGE

2013 ◽  
Vol 24 (02) ◽  
pp. 165-185 ◽  
Author(s):  
KRISHNENDU CHATTERJEE ◽  
LUCA DE ALFARO ◽  
RUPAK MAJUMDAR

We study the problem of generating a test sequence that achieves maximal coverage for a reactive system under test. We formulate the problem as a repeated game between the tester and the system, where the system state space is partitioned according to some coverage criterion and the objective of the tester is to maximize the set of partitions (or coverage goals) visited during the game. We show that the maximal coverage problem is PSPACE-complete for non-deterministic systems, but NP-complete for deterministic systems. For the special case of non-deterministic systems with a re-initializing “reset” action, which represents running a new test input on a re-initialized system, we show that the problem is coNP-complete. Our proof technique for reset games uses randomized testing strategies that circumvent the exponentially large memory requirement of deterministic testing strategies. We also discuss the memory requirements of deterministic strategies and extensions of our results to other models, such as pushdown systems and timed systems.

1999 ◽  
Vol 45 (8) ◽  
pp. 1323-1330 ◽  
Author(s):  
George G Klee

Abstract The efficacy of endocrine tests depends on the choice of tests, the preparation of the patients, the integrity of the specimens, the quality of the measurements, and the validity of the reference data. Close dialogue among the clinicians, the laboratory, and the patients is a key factor for optimal patient care. The characteristics of urine and plasma samples and the advantages and limitations of paired test measurements are presented. The importance of test sequence strategies, provocative or inhibitory procedures, and elimination of drug interferences is illustrated with four cases involving Cushing syndrome, pheochromocytoma, primary aldosteronism, and hypercalcemia. For each of these scenarios, key clinical issues are highlighted, along with discussions of the best test strategies, including which medications are likely to interfere. The importance of targeting laboratory tests to answer well-focused clinical decisions is emphasized. The roles of some time-honored provocative procedures are questioned in light of more sensitive and specific analytic methods. The importance of decision-focused analytical tolerance limits is emphasized by demonstrating the impact of analytic bias on downstream medical resource utilization. User-friendly support systems to facilitate the implementation of test strategies and postanalytic tracking of patient outcomes are presented as essential requirements for quality medical practice.


Author(s):  
Rui Yang ◽  
Zhenyu Chen ◽  
Zhiyi Zhang ◽  
Baowen Xu

Model-based testing has been intensively and extensively studied in the past decades. The Extended Finite State Machine (EFSM) is a widely used model for software testing in both academia and industry. This paper provides a survey of EFSM-based test case generation techniques from the last two decades. Techniques for EFSM-based test case generation are classified into three categories: test sequence generation, test data generation, and test oracle construction. Key challenges in EFSM-based test case generation, such as coverage criteria and feasibility analysis, are discussed. Finally, we summarize the research work and outline several possible directions for future research.
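To make the EFSM notion concrete, the sketch below shows a minimal toy EFSM with guarded transitions over context variables and a feasibility check on a test sequence. The coin-box example, the class names, and the `run` method are illustrative assumptions, not taken from the surveyed paper:

```python
class Transition:
    """One EFSM transition: source/target state, a guard over the
    context variables, an update action, and an input label."""
    def __init__(self, src, dst, guard, update, label):
        self.src, self.dst = src, dst
        self.guard, self.update, self.label = guard, update, label

class EFSM:
    def __init__(self, initial_state, initial_ctx, transitions):
        self.initial_state = initial_state
        self.initial_ctx = dict(initial_ctx)
        self.transitions = transitions

    def run(self, labels):
        """Execute a test sequence; return the fired transitions, or
        None if the sequence is infeasible (no enabled transition)."""
        state, ctx = self.initial_state, dict(self.initial_ctx)
        fired = []
        for lab in labels:
            for t in self.transitions:
                if t.src == state and t.label == lab and t.guard(ctx):
                    t.update(ctx)
                    state = t.dst
                    fired.append(t)
                    break
            else:
                return None  # infeasible at this point
        return fired

# Toy coin-box: insert coins; "dispense" is guarded by coins >= 2.
ts = [
    Transition("idle", "idle", lambda c: c["coins"] < 2,
               lambda c: c.__setitem__("coins", c["coins"] + 1), "coin"),
    Transition("idle", "done", lambda c: c["coins"] >= 2,
               lambda c: None, "dispense"),
]
machine = EFSM("idle", {"coins": 0}, ts)

print(machine.run(["coin", "coin", "dispense"]) is not None)  # True: feasible
print(machine.run(["dispense"]) is None)                      # True: guard fails
```

The second call illustrates the feasibility problem the survey highlights: a path through the plain FSM exists, but the guard on the context variable makes the sequence infeasible.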


2020 ◽  
Vol 49 (4) ◽  
pp. 1129-1142
Author(s):  
Ghislain-Herman Demeze-Jouatsa

Abstract This paper analyzes the set of pure strategy subgame perfect Nash equilibria of any finitely repeated game with complete information and perfect monitoring. The main result is a complete characterization of the limit set, as the time horizon increases, of the set of pure strategy subgame perfect Nash equilibrium payoff vectors of the finitely repeated game. This model includes the special case of observable mixed strategies.


Author(s):  
W. ERIC WONG ◽  
YU LEI

One common approach to test sequence generation for structurally testing concurrent programs involves constructing a reachability graph (RG) and selecting a set of paths from the graph to satisfy some coverage criterion. It is often suggested that test sequence generation methods for testing sequential programs based on a control flow graph (CFG) can also be used to select paths from an RG for testing concurrent programs. However, there is a major difference between these two, as the former suffers from a feasibility problem (i.e., some paths in a CFG may not be feasible at run-time) and the latter does not. As a result, even though test sequence generation methods for sequential programs can be applied to concurrent programs, they may not be efficient. We propose four methods — two based on hot spot prioritization and two based on topological sort — to effectively generate a small set of test sequences that covers all the nodes in an RG. The same methods are also applied to the corresponding dual graph for generating test sequences to cover all the edges. A case study was conducted to demonstrate the use of our methods.
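As an illustrative sketch of topological-sort-based node coverage (the graph, function names, and greedy strategy below are assumptions for illustration, not the paper's actual methods), one can repeatedly route a path from the initial state to the topologically earliest uncovered node and extend it to a sink, assuming an acyclic RG with every node reachable from the root:

```python
from collections import deque

def topo_order(nodes, edges):
    """Kahn's algorithm; assumes the graph is acyclic."""
    indeg = {n: 0 for n in nodes}
    for u in edges:
        for v in edges[u]:
            indeg[v] += 1
    q = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in edges.get(u, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return order

def path_to(root, target, edges):
    """BFS path from root to target (assumed reachable)."""
    prev = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        if u == target:
            break
        for v in edges.get(u, []):
            if v not in prev:
                prev[v] = u
                q.append(v)
    path = [target]
    while prev[path[-1]] is not None:
        path.append(prev[path[-1]])
    return path[::-1]

def cover_all_nodes(root, nodes, edges):
    """Greedily build root-to-sink paths until every node is covered."""
    pos = {n: i for i, n in enumerate(topo_order(nodes, edges))}
    covered, paths = set(), []
    while covered != set(nodes):
        target = min((n for n in nodes if n not in covered), key=pos.get)
        path = path_to(root, target, edges)
        while edges.get(path[-1]):          # extend to a sink,
            succ = edges[path[-1]]          # preferring uncovered successors
            path.append(next((v for v in succ if v not in covered), succ[0]))
        covered.update(path)
        paths.append(path)
    return paths

# Small diamond-shaped RG: two paths cover all five nodes.
nodes = ["s0", "s1", "s2", "s3", "s4"]
edges = {"s0": ["s1", "s2"], "s1": ["s3"], "s2": ["s3"], "s3": ["s4"]}
print(cover_all_nodes("s0", nodes, edges))
```

Because every path in an RG is feasible by construction, each returned path can be executed directly as a test sequence, unlike CFG paths, which may require a feasibility check.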


2021 ◽  
Vol 2021 (12) ◽  
pp. 123203
Author(s):  
Gaia Pozzoli ◽  
Benjamin De Bruyne

Abstract We consider one-dimensional discrete-time random walks (RWs) in the presence of finite size traps of length ℓ over which the RWs can jump. We study the survival probability of such RWs when the traps are periodically distributed and separated by a distance L. We obtain exact results for the mean first-passage time and the survival probability in the special case of a double-sided exponential jump distribution. While such RWs typically survive longer than if they could not leap over traps, their survival probability still decreases exponentially with the number of steps. The decay rate of the survival probability depends in a non-trivial way on the trap length ℓ and exhibits an interesting regime when ℓ → 0 as it tends to the ratio ℓ/L, which is reminiscent of strongly chaotic deterministic systems. We generalize our model to continuous-time RWs, where we introduce a power-law distributed waiting time before each jump. In this case, we find that the survival probability decays algebraically with an exponent that is independent of the trap length. Finally, we derive the diffusive limit of our model and show that, depending on the chosen scaling, we obtain either diffusion with uniform absorption, or diffusion with periodically distributed point absorbers.
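A minimal Monte Carlo sketch of the discrete-time setting can illustrate the exponential decay of the survival probability. The trap layout (intervals [kL, kL + ℓ)), the starting point, and all parameter values below are assumptions for illustration, not the paper's exact conventions, and the simulation only estimates the curve rather than reproducing the exact results:

```python
import random

def in_trap(x, L, ell):
    """Traps occupy [k*L, k*L + ell) for every integer k (assumed layout)."""
    return (x % L) < ell

def survival_curve(n_walkers, n_steps, L, ell, scale, seed=0):
    """Monte Carlo estimate of the survival probability after each step
    for a walk with double-sided exponential (Laplace) jumps that can
    leap over the traps."""
    rng = random.Random(seed)
    survivors = [0] * (n_steps + 1)
    for _ in range(n_walkers):
        x = ell + (L - ell) / 2.0  # start midway between two traps
        steps_survived = n_steps
        for step in range(1, n_steps + 1):
            x += rng.choice((-1.0, 1.0)) * rng.expovariate(1.0 / scale)
            if in_trap(x, L, ell):
                steps_survived = step - 1
                break
        for t in range(steps_survived + 1):
            survivors[t] += 1
    return [s / n_walkers for s in survivors]

curve = survival_curve(n_walkers=2000, n_steps=50, L=10.0, ell=1.0, scale=2.0)
print(curve[0], curve[-1])  # starts at 1.0 and decays monotonically
```

Plotting the logarithm of such a curve against the step number gives a rough numerical handle on the decay rate whose ℓ → 0 behavior the paper analyzes exactly.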


2018 ◽  
Vol 41 ◽  
Author(s):  
Daniel Crimston ◽  
Matthew J. Hornsey

Abstract As a general theory of extreme self-sacrifice, Whitehouse's article misses one relevant dimension: people's willingness to fight and die in support of entities not bound by biological markers or ancestral kinship (allyship). We discuss research on moral expansiveness, which highlights individuals' capacity to self-sacrifice for targets that lie outside traditional in-group markers, including racial out-groups, animals, and the natural environment.


Author(s):  
Dr. G. Kaemof

A mixture of polycarbonate (PC) and styrene-acrylonitrile copolymer (SAN) is a very good example of the efficiency of electron microscopic investigations in determining optimum production procedures for high-grade product properties. The following parameters were varied: composition of the charge (PC : SAN ratios of 50 : 50, 60 : 40, and 70 : 30), type of compounding machine (single-screw extruder, twin-screw extruder, discontinuous kneader), and mass temperature (lowest and highest possible temperature). The transmission electron microscopic (TEM) investigations were carried out on ultra-thin sections whose PC phase was selectively etched with triethylamine. The phase transition (matrix to disperse phase) does not occur, as might be expected, at a PC-to-SAN ratio of 50 : 50, but at a ratio of 65 : 35. Our results show that the matrix is preferentially formed by the component with the lower melt viscosity (in this special case SAN), even at concentrations of less than 50%.


2016 ◽  
Vol 32 (3) ◽  
pp. 204-214 ◽  
Author(s):  
Emilie Lacot ◽  
Mohammad H. Afzali ◽  
Stéphane Vautier

Abstract. Test validation based on the usual statistical analyses is paradoxical: from a falsificationist perspective, they do not test that test data are ordinal measurements, and from an ethical perspective, they do not justify the use of test scores. Starting from the examples of memory accuracy and suicidality as scored by two widely used clinical tests/questionnaires, this paper (i) proposes some basic definitions, in which measurement is a special case of scientific explanation. It then shows (ii) how to elicit the logic of the observable test events underlying the test scores, and (iii) how the measurability of the target theoretical quantities, memory accuracy and suicidality, can and should be tested at the scale of the individual respondent as opposed to the scale of aggregates of respondents. (iv) Criterion-related validity is revisited to stress that invoking the explanatory power of test data should draw attention to counterexamples rather than to statistical summarization. (v) Finally, it is argued that justifying the use of test scores in specific settings should be part of the test validation task, because, as test specialists, psychologists are responsible for proposing their tests for social uses.

