Generating efficient test sets with a model checker

Author(s):  
G. Hamon ◽  
L. de Moura ◽  
J. Rushby


Author(s):  
PAUL E. AMMANN ◽  
PAUL E. BLACK

Software developers use a variety of formal and informal methods, including testing, to argue that their systems are suitable for building high assurance applications. In this paper, we develop another connection between formal methods and testing by defining a specification-based coverage metric to evaluate test sets. Formal methods in the form of a model checker supply the necessary automation to make the metric practical. The metric gives the software developer assurance that a given test set is sufficiently sensitive to the structure of an application's specification. We also develop the necessary foundation for the metric and then illustrate the metric on an example.
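The idea of measuring a test set's sensitivity to specification structure can be illustrated with a hypothetical mutation-style sketch (not the paper's actual metric or tool): small mutants of a toy specification count as covered when some test in the set distinguishes them from the original.

```python
# Hypothetical sketch of a specification-sensitivity metric.
# A "specification" is a predicate over (state, next_state); mutants perturb it.
# A test (a state pair) "kills" a mutant if the mutant disagrees with the
# original on that pair; coverage is the fraction of mutants killed.

def spec(x, x_next):
    # Toy specification: a counter that increments modulo 4.
    return x_next == (x + 1) % 4

def make_mutants():
    return [
        lambda x, xn: xn == (x + 2) % 4,   # wrong increment
        lambda x, xn: xn == x,             # stuck-at mutant
        lambda x, xn: xn == (x - 1) % 4,   # wrong direction
    ]

def coverage(tests, spec, mutants):
    killed = sum(
        any(spec(x, xn) != m(x, xn) for (x, xn) in tests)
        for m in mutants
    )
    return killed / len(mutants)

tests = [(0, 1), (1, 2)]
print(coverage(tests, spec, make_mutants()))
```

A model checker automates the expensive part this sketch brute-forces: deciding whether any behaviour distinguishes a mutant from the original specification.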


VLSI Design ◽  
2001 ◽  
Vol 12 (4) ◽  
pp. 475-486
Author(s):  
Anshuman Chandra ◽  
Krishnendu Chakrabarty ◽  
Mark C. Hansen

We present novel test set encoding and pattern decompression methods for core-based systems. These are based on the use of twisted-ring counters and offer a number of important advantages: significant test data compression (over 10X in many cases), lower tester memory requirements and reduced testing time, the ability to use a slow tester without compromising test quality or testing time, and no performance degradation for the core under test. Surprisingly, the encoded test sets obtained from partially-specified test sets (test cubes) are often smaller than the compacted test sets generated by automatic test pattern generation programs. Moreover, a large number of patterns are applied test-per-clock to the cores, thereby increasing the likelihood of detecting non-modeled faults. Experimental results for the ISCAS benchmark circuits demonstrate that the proposed test architecture offers an attractive solution to the problem of achieving high test quality and low testing time with relatively slower, less expensive testers.
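The twisted-ring (Johnson) counter underlying the scheme can be illustrated with a short sketch. This is a generic software model of such a counter, not the paper's decompression architecture:

```python
def twisted_ring_states(n):
    """Enumerate the 2n states of an n-bit twisted-ring (Johnson) counter.

    Each clock shifts the register right and feeds the INVERTED last bit
    back into the first position, so n flip-flops yield 2n distinct states
    (a plain ring counter of the same width yields only n).
    """
    state = (0,) * n
    states = []
    for _ in range(2 * n):
        states.append(state)
        state = (1 - state[-1],) + state[:-1]
    return states

for s in twisted_ring_states(4):
    print("".join(map(str, s)))
```

The long runs of identical bits in consecutive states are what make these counters attractive for encoding test patterns compactly.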


Diagnostica ◽  
2019 ◽  
Vol 65 (4) ◽  
pp. 193-204
Author(s):  
Johannes Baltasar Hessler ◽  
David Brieber ◽  
Johanna Egle ◽  
Georg Mandler ◽  
Thomas Jahn

Abstract. The Auditory Word List Learning Test (AWLT; Auditiver Wortlisten Lerntest) is part of the test set Cognitive Functions Dementia (CFD) within the Vienna Test System (WTS). The AWLT was developed along neurolinguistic criteria to reduce interactions between test-takers' cognitive status and the linguistic properties of the word list to be learned. Using a sample of healthy participants (N = 44) and patients with Alzheimer's dementia (N = 44), matched for age, education, and sex, repeated-measures ANOVAs were used to examine the extent to which this design goal was achieved. Furthermore, the ability of the AWLT's main variables to discriminate between these groups was examined. Interactions of small effect size occurred between linguistic properties and diagnosis. The main variables separated patients from healthy controls with large effect sizes. The AWLT appears to be linguistically fairer than similar instruments while showing comparable differential validity.


2018 ◽  
Vol 21 (5) ◽  
pp. 381-387 ◽  
Author(s):  
Hossein Atabati ◽  
Kobra Zarei ◽  
Hamid Reza Zare-Mehrjardi

Aim and Objective: Human dihydroorotate dehydrogenase (DHODH) catalyzes the fourth stage of the biosynthesis of pyrimidines in cells. Hence it is important to identify suitable inhibitors of DHODH to prevent virus replication. In this study, a quantitative structure-activity relationship (QSAR) analysis was performed to predict the activity of a group of newly synthesized halogenated pyrimidine derivatives as inhibitors of DHODH. Materials and Methods: Molecular structures of the halogenated pyrimidine derivatives were drawn in HyperChem, and molecular descriptors were then calculated with the DRAGON software. Finally, the most effective descriptors for the 32 halogenated pyrimidine derivatives were selected using the bee algorithm. Results: The descriptors selected by the bee algorithm were used for modeling. The mean relative error and correlation coefficient were 2.86% and 0.9627, respectively, while the corresponding values for the leave-one-out cross-validation method were 4.18% and 0.9297. External validation was also conducted using separate training and test sets, yielding correlation coefficients of 0.9596 and 0.9185, respectively. Conclusion: The modeling results of the present work showed that the bee algorithm performs well for variable selection in QSAR studies, and its results were better than those of the model constructed with descriptors selected by the genetic algorithm.
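The leave-one-out statistics reported above can be computed as in the following sketch. The descriptor matrix and activities here are synthetic stand-ins for the DRAGON descriptors and measured activities, and ordinary least squares stands in for the study's model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in data: 32 compounds x 3 descriptors, plus noisy activities.
X = rng.normal(size=(32, 3))
y = X @ np.array([1.5, -0.7, 0.3]) + rng.normal(scale=0.1, size=32)

def loo_predictions(X, y):
    """Leave-one-out predictions from an ordinary least-squares model."""
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        # Fit on all compounds except i (with an intercept column).
        Xtr = np.column_stack([np.ones(mask.sum()), X[mask]])
        coef, *_ = np.linalg.lstsq(Xtr, y[mask], rcond=None)
        preds[i] = np.concatenate(([1.0], X[i])) @ coef
    return preds

preds = loo_predictions(X, y)
r = np.corrcoef(y, preds)[0, 1]                   # LOO correlation coefficient
mre = np.mean(np.abs((preds - y) / y)) * 100      # mean relative error, %
print(round(r, 4))
```

Refitting the model once per held-out compound is what distinguishes the LOO figures from the training-set figures, which is why the LOO correlation (0.9297 in the study) is lower than the fitted one (0.9627).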


Author(s):  
Natasha Alechina ◽  
Hans van Ditmarsch ◽  
Rustam Galimullin ◽  
Tuo Wang

Abstract. Coalition announcement logic (CAL) is one of the family of logics of quantified announcements. It allows us to reason about what a coalition of agents can achieve by making announcements in a setting where the anti-coalition may have an announcement of its own to preclude the coalition from reaching its epistemic goals. In this paper, we describe a PSPACE-complete model checking algorithm for CAL that produces winning strategies for coalitions. The algorithm is implemented in a proof-of-concept model checker.
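As a toy illustration of the question such a model checker answers (not the paper's PSPACE algorithm), here is a brute-force sketch over a small epistemic model. All names and the example model are invented for illustration:

```python
from itertools import chain, combinations, product

# States are ints; each agent's knowledge is a partition of the state set.
# A truthful announcement by an agent is a union of its cells that contains
# the actual state. The coalition "wins" if it has a joint announcement such
# that, combined with ANY joint announcement by the anti-coalition, the goal
# predicate holds on the surviving states.

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def truthful(partition, actual):
    """All truthful announcements: unions of cells including the actual cell."""
    home = next(c for c in partition if actual in c)
    others = [c for c in partition if actual not in c]
    return [set(home).union(*extra) for extra in powerset(others)]

def joint(states, partitions, agents, actual):
    """Intersections of one truthful announcement per agent in the group."""
    if not agents:
        return [set(states)]
    opts = [truthful(partitions[a], actual) for a in agents]
    return [set(states).intersection(*combo) for combo in product(*opts)]

def coalition_wins(states, partitions, coalition, anti, actual, goal):
    return any(all(goal(a & b)
                   for b in joint(states, partitions, anti, actual))
               for a in joint(states, partitions, coalition, actual))

# Example: p holds in states 1 and 2; agent 'a' knows whether p, agent 'b'
# is the anti-coalition. Goal: only p-states survive, including the actual one.
states = {1, 2, 3, 4}
partitions = {"a": [{1, 2}, {3, 4}], "b": [{1}, {2, 3}, {4}]}
goal = lambda surviving: 1 in surviving and surviving <= {1, 2}
print(coalition_wins(states, partitions, ["a"], ["b"], 1, goal))
```

A witnessing coalition announcement found by the outer `any` is exactly the kind of winning strategy the paper's algorithm produces.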


2021 ◽  
Vol 10 (3) ◽  
pp. 42
Author(s):  
Mohammed Al-Nuaimi ◽  
Sapto Wibowo ◽  
Hongyang Qu ◽  
Jonathan Aitken ◽  
Sandor Veres

The evolution of driving technology has recently progressed from active safety features and ADAS systems to fully sensor-guided autonomous driving. Bringing such a vehicle to market requires not only simulation and testing but formal verification to account for all possible traffic scenarios. A new verification approach, which combines the use of two well-known model checkers, the model checker for multi-agent systems (MCMAS) and the probabilistic model checker PRISM, is presented for this purpose. The overall structure of our autonomous vehicle (AV) system consists of: (1) a perception system of sensors that feeds data into (2) a rational agent (RA) based on a belief–desire–intention (BDI) architecture, which uses a model of the environment and to which the model checkers are connected for verification of decision-making, and (3) a feedback control system for following a self-planned path. MCMAS is used to check the consistency and stability of the BDI agent logic at design time. PRISM is used to provide the RA with the probability of success when it decides to take an action during run-time operation. This allows the RA to select the movement with the highest probability of success from several generated alternatives. This framework has been tested on a new AV software platform built using the robot operating system (ROS) and the virtual reality (VR) Gazebo simulator, including a parking lot scenario to test the feasibility of the approach in a realistic environment. A practical implementation of the AV system was also carried out on an experimental testbed.
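The run-time selection step can be sketched as follows. Here `probability_of_success` is a hypothetical stub standing in for a PRISM query, and the maneuver names and threshold are invented for illustration:

```python
# Hypothetical sketch of the run-time selection step: the rational agent asks a
# probabilistic checker for each candidate maneuver's probability of success
# and commits to the best one above a safety threshold.

def probability_of_success(maneuver):
    # Stub: in the real system this would be a PRISM model-checking query.
    table = {"overtake": 0.62, "follow": 0.97, "park_left": 0.88}
    return table[maneuver]

def select_maneuver(candidates, threshold=0.5):
    scored = [(probability_of_success(m), m) for m in candidates]
    best_p, best = max(scored)
    if best_p < threshold:
        return None  # no acceptable action; fall back to safe behaviour
    return best

print(select_maneuver(["overtake", "follow", "park_left"]))  # follow
```

Keeping the probabilistic query separate from the selection logic mirrors the paper's split between the model checker (PRISM) and the BDI agent that consumes its answers.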

