A New Approach in Model-Based Testing: Designing Test Models in TTCN-3

Author(s):  
Antal Wu-Hen-Chang ◽  
Gusztáv Adamis ◽  
Levente Erős ◽  
Gábor Kovács ◽  
Tibor Csöndes
2019 ◽  
pp. 408-421
Author(s):  
Evelin Halling ◽  
Jüri Vain ◽  
Artem Boyarchuk ◽  
Oleg Illiashenko

In mission-critical systems a single failure can have catastrophic consequences, which places high demands on the timely detection of design faults and runtime failures. With traditional software testing methods, detecting deeply nested faults that occur only sporadically is almost impossible. The discovery of such bugs can be facilitated by generating well-targeted test cases in which the test scenario is explicitly specified. On the other hand, the excess of implementation detail in manually crafted test scripts makes the test results hard to understand and interpret. This paper defines TDLTP, a high-level specification language for the complex test scenarios that are relevant to model-based testing of mission-critical systems. The syntax and semantics of the TDLTP operators are defined, and the transformation rules that map its declarative expressions to executable Uppaal Timed Automata test models are specified. The scalability of the method is demonstrated on the TUT100 satellite software integration testing case study.


Author(s):  
Qaisar A. Malik ◽  
Antti Jääskeläinen ◽  
Heikki Virtanen ◽  
Mika Katara ◽  
Fredrik Abbors ◽  
...  

2021 ◽  
Author(s):  
Orlando Schwery ◽  
Brian C. O’Meara

To investigate how biodiversity arose, the field of macroevolution largely relies on model-based approaches to estimate rates of diversification and the factors that influence them. The number of available models is rising steadily, facilitating the modeling of an ever wider range of possible diversification dynamics and of multiple hypotheses about what fueled or stifled lineage accumulation within groups of organisms. However, growing concerns about unchecked biases and limitations in the employed models suggest the need for rigorous validation of the methods used for inference. Here, we address two points: the practical use of model adequacy testing, and what model adequacy can tell us about the overall state of diversification models. Using a large set of empirical phylogenies, and a new approach to testing models using aspects of tree shape, we test how a set of staple models performs with regard to adequacy. Patterns of adequacy are described across trees and models, and causes of inadequacy are explored, particularly for cases in which all models are inadequate. The findings make clear that, overall, only a few empirical phylogenies cannot be described by at least one model. However, the finding that the best-fitting model of a set is not necessarily adequate makes clear that adequacy testing should become a standard step in diversification studies.
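The adequacy test described in the abstract can be sketched generically: compute a tree-shape statistic on trees simulated under the fitted model, then ask where the empirical value falls in that simulated distribution. The following is a minimal illustration of that logic, not the authors' implementation; the function name, the normal stand-in for the simulated statistics, and the 5% threshold are all assumptions for the sketch.

```python
import numpy as np

def adequacy_pvalue(empirical_stat, simulated_stats):
    """Two-tailed quantile p-value: where the empirical tree-shape
    statistic falls within the distribution of the same statistic
    computed on trees simulated under the fitted model."""
    sims = np.asarray(simulated_stats)
    lower = np.mean(sims <= empirical_stat)   # fraction at or below
    upper = np.mean(sims >= empirical_stat)   # fraction at or above
    return min(1.0, 2.0 * min(lower, upper))

# Toy usage: stand-in simulated statistics from a hypothetical fitted model.
rng = np.random.default_rng(0)
sims = rng.normal(loc=0.0, scale=1.0, size=1000)
p = adequacy_pvalue(3.5, sims)  # empirical value deep in the tail
print(p < 0.05)                 # small p: model judged inadequate
```

A small p-value means the fitted model rarely produces trees shaped like the empirical one, i.e. the model is inadequate for that phylogeny even if it was the best-fitting of the candidate set.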


2011 ◽  
Vol 34 (6) ◽  
pp. 1012-1028 ◽  
Author(s):  
Huai-Kou MIAO ◽  
Sheng-Bo CHEN ◽  
Hong-Wei ZENG

Author(s):  
Marlon Vieira ◽  
Xiping Song ◽  
Gilberto Matos ◽  
Stephan Storck ◽  
Rajanikanth Tanikella ◽  
...  
