Invariants and Robustness of BIP Models

10.29007/prxp ◽  
2018 ◽  
Author(s):  
Jan Olaf Blech ◽  
Thanh-Hung Nguyen ◽  
Michael Perin

In this paper we present on-going work addressing the problem of automatically generating realistic and guaranteed correct invariants. Since invariant generation mechanisms are error-prone, after the computation of invariants by a verification tool we formally prove that the generated invariants are indeed invariants of the considered systems, using a higher-order theorem prover and automated techniques. We consider invariants for BIP models. BIP (behavior, interaction, priority) is a language for specifying asynchronous component-based systems. Proving that an invariant holds often requires an induction on possible system execution traces. For this reason, apart from generating invariants that precisely capture a system’s behavior, inductiveness of invariants is an important goal. We establish a notion of robust BIP models. These can be constructed automatically from our original non-robust BIP models and over-approximate their behavior. We argue that invariants of robust BIP models capture the behavior of systems in a more natural way than invariants of the corresponding non-robust BIP models. Robust BIP models take into account imprecision due to values delivered by sensors. Invariants of robust BIP models tend to be inductive and are also invariants of the original non-robust BIP model. Therefore they may be used by our verification tools, and it is easy to show their correctness in a higher-order theorem prover. The presented work is developed to verify, after the fact, the results of a deadlock-checking tool for embedded systems. This gives us confidence in the provided analysis results.
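The inductiveness requirement can be phrased concretely: an invariant must hold in all initial states and be preserved by every transition. The following minimal Python sketch illustrates that proof obligation on a small, made-up two-component transition system; it is only an illustration, not the authors' BIP tooling or their theorem prover formalization.

from itertools import product

# Hypothetical system: states are (component1_location, component2_location, counter).
def initial_states():
    return [("idle", "idle", 0)]

def successors(state):
    l1, l2, c = state
    nxt = []
    if l1 == "idle":                 # component 1 starts working
        nxt.append(("busy", l2, c + 1))
    if l1 == "busy":                 # component 1 finishes
        nxt.append(("idle", l2, c - 1))
    if l2 == "idle" and c == 0:      # component 2 may only start when the counter is 0
        nxt.append((l1, "busy", c))
    if l2 == "busy":
        nxt.append((l1, "idle", c))
    return nxt

def candidate_invariant(state):
    l1, l2, c = state
    return 0 <= c <= 1 and (c == 1) == (l1 == "busy")

def is_inductive(init, succ, inv, states):
    base = all(inv(s) for s in init())                             # holds initially
    step = all(inv(t) for s in states if inv(s) for t in succ(s))  # preserved by every step
    return base and step

all_states = list(product(["idle", "busy"], ["idle", "busy"], range(-1, 3)))
print(is_inductive(initial_states, successors, candidate_invariant, all_states))  # True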

Author(s):  
Jan Haltermann ◽  
Heike Wehrheim

Abstract Software verification has recently made enormous progress due to the development of novel verification methods and the speed-up of supporting technologies like SMT solving. To keep software verification tools up to date with these advances, tool developers keep on integrating newly designed methods into their tools, almost exclusively by re-implementing the method within their own framework. While this allows for a conceptual re-use of methods, it nevertheless requires a new implementation for every new technique. In this paper, we employ cooperative verification in order to avoid re-implementation and enable the usage of novel tools as black-box components in verification. Specifically, cooperation is employed for a core ingredient of software verification: invariant generation. Finding an adequate loop invariant is key to the success of a verification run. Our framework named CoVEGI allows a master verification tool to delegate the task of invariant generation to one or several specialized helper invariant generators. Their results are then utilized within the verification run of the master verifier, allowing in particular for cross-checking the validity of the invariant. We experimentally evaluate our framework on an instance with two masters and three different invariant generators using a number of benchmarks from SV-COMP 2020. The experiments show that the use of CoVEGI can increase the number of correctly verified tasks without increasing the resources used.
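As a rough illustration of the cooperation scheme, the Python sketch below shows a master verifier delegating invariant generation to helper tools and cross-checking each candidate before using it. All names and interfaces here are hypothetical placeholders, not the actual CoVEGI API.

from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class Invariant:
    loop_id: str
    formula: str          # e.g. a C expression such as "0 <= i && i <= n"

def run_cooperatively(
    task: str,
    master_verify: Callable[[str, list], str],
    helpers: Iterable[Callable[[str], Optional[Invariant]]],
    crosscheck: Callable[[str, Invariant], bool],
) -> str:
    validated = []
    for generate in helpers:                  # delegate to each helper invariant generator
        candidate = generate(task)
        if candidate and crosscheck(task, candidate):
            validated.append(candidate)       # keep only invariants that pass the cross-check
    # The master runs once with whatever validated invariants it received;
    # an empty list degenerates to an ordinary, non-cooperative verification run.
    return master_verify(task, validated)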


2019 ◽  
Vol 61 (4) ◽  
pp. 187-191
Author(s):  
Alexander Steen

Abstract Automated theorem proving systems validate or refute whether a conjecture is a logical consequence of a given set of assumptions. Higher-order provers have been successfully applied in academic and industrial applications, such as planning, software and hardware verification, or knowledge-based systems. Recent studies moreover suggest that automation of higher-order logic, in particular, yields effective means for reasoning within expressive non-classical logics, enabling a whole new range of applications, including computer-assisted formal analysis of arguments in metaphysics. My work focuses on the theoretical foundations, effective implementation and practical application of higher-order theorem proving systems. This article briefly introduces higher-order reasoning in general and presents an overview of the design and implementation of the higher-order theorem prover Leo-III. In the second part, some example applications of Leo-III are discussed.


1985 ◽  
Vol 10 (4) ◽  
pp. 72-74
Author(s):  
Bob Boyer ◽  
Matt Kaufmann

10.29007/x9c9 ◽  
2018 ◽  
Author(s):  
Nik Sultana ◽  
Christoph Benzmüller

The LEO and LEO-II provers have pioneered the integration of higher-order and first-order automated theorem proving. To date, the LEO-II system is, to our knowledge, the only automated higher-order theorem prover that is capable of generating joint higher-order–first-order proof objects in TPTP format. This paper discusses LEO-II’s proof objects. The target audience is practitioners with an interest in using LEO-II proofs within other systems.


Author(s):  
Vladimir Anatolyevich Gratinskiy ◽  
Evgeny Mikhailovich Novikov ◽  
Ilja Sergeevich Zakharov

Verification tools can produce various kinds of results while checking programs against requirement specifications. Experts, who search for errors and estimate the completeness of verification, mostly value verdicts, violation witnesses, and code coverage reports. They need convenient tools for automating the assessment of verification results in order to apply verification tools in practice, when many program configurations and versions are checked against various requirements. In this paper, we propose new methods for expert evaluation of verification results, covering those problems that are most significant according to our experience in verifying large programs for compliance with a large number of requirement specifications. Some ideas are borrowed from the areas of testing and static analysis. However, the specific methods and technical solutions are unique, since the verification results provided by verification tools are either not found in other areas or have special semantics. The paper presents our approaches and their implementation in the Klever software verification framework.


2021 ◽  
Vol 20 (5s) ◽  
pp. 1-25
Author(s):  
Timothy Bourke ◽  
Paul Jeanmaire ◽  
Basile Pesin ◽  
Marc Pouzet

Dataflow languages allow the specification of reactive systems by mutually recursive stream equations, functions, and boolean activation conditions called clocks. Lustre and Scade are dataflow languages for programming embedded systems. Dataflow programs are compiled by a succession of passes. This article focuses on the normalization pass which rewrites programs into the simpler form required for code generation. Vélus is a compiler from a normalized form of Lustre to CompCert’s Clight language. Its specification in the Coq interactive theorem prover includes an end-to-end correctness proof that the values prescribed by the dataflow semantics of source programs are produced by executions of generated assembly code. We describe how to extend Vélus with a normalization pass and to allow subsampled node inputs and outputs. We propose semantic definitions for the unrestricted language, divide normalization into three steps to facilitate proofs, adapt the clock type system to handle richer node definitions, and extend the end-to-end correctness theorem to incorporate the new features. The proofs require reasoning about the relation between static clock annotations and the presence and absence of values in the dynamic semantics. The generalization of node inputs requires adding a compiler pass to ensure the initialization of variables passed in function calls.
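To give an intuition for subsampled streams and the presence or absence of values under clocks, here is a small plain-Python sketch (not Lustre or Vélus code): streams are modeled as generators, and None marks an absent value at instants where a slow clock is false.

def counter():
    """Stream equation  n = 0 -> pre n + 1  (yields 0, 1, 2, ...)."""
    n = 0
    while True:
        yield n
        n += 1

def when(xs, clock):
    """Subsample xs on a boolean clock: the value is absent (None) when the clock is False."""
    for x, c in zip(xs, clock):
        yield x if c else None

def every_other():
    b = True
    while True:
        yield b
        b = not b

sampled = when(counter(), every_other())
print([next(sampled) for _ in range(6)])   # [0, None, 2, None, 4, None]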


10.29007/tcvj ◽  
2018 ◽  
Author(s):  
Laura Kovács ◽  
Simon Robillard

In 2009, the symbol elimination method for loop invariant generation was introduced, which used saturation theorem proving in first-order logic to generate quantified invariants of programs with arrays. Symbol elimination is fully automatic, requires no user guidance, and it is the first ever approach able to generate invariants with alternations of quantifiers. In this paper we describe a number of improvements and extensions to symbol elimination and invariant generation using first-order theorem proving, in particular the Vampire theorem prover. Rather than being limited to a specific programming language, our approach to reasoning about loops in Vampire relies on a simple guarded command language for its input, which can be used as an interface for more complex and realistic imperative languages. We propose new ways for extending quantified loop properties describing valid loop properties, by simplifying the properties over array updates and next state relations. We also extend symbol elimination with pre- and post-conditions of loops. We use the loop specification to generate only invariants that are relevant, that is, invariants that are needed for proving partial correctness of loops. Further, we turn symbol elimination into an automatic approach proving program correctness, providing an alternative method to Hoare-rule based loop verification or other deductive systems. We present our newly redesigned implementation of loop reasoning in Vampire and also report on experimental results.
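For intuition, the following hypothetical example (plain Python rather than Vampire's guarded command input) shows an array-processing loop together with the kind of quantified invariant that symbol elimination aims to produce; the candidate invariant is simply checked dynamically at each iteration here, whereas the method derives and proves such properties automatically.

def copy_positive(a):
    """Copy the positive elements of a into a fresh array b."""
    b = [0] * len(a)
    i = j = 0
    while i < len(a):
        # Candidate quantified loop invariant (illustrative):
        #   forall k. 0 <= k < j  ->  b[k] > 0,   together with   j <= i <= len(a)
        assert all(b[k] > 0 for k in range(j)) and j <= i <= len(a)
        if a[i] > 0:
            b[j] = a[i]
            j += 1
        i += 1
    return b[:j]

print(copy_positive([3, -1, 5, 0, 2]))   # [3, 5, 2]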


Author(s):  
Petar Vukmirović ◽  
Alexander Bentkamp ◽  
Jasmin Blanchette ◽  
Simon Cruanes ◽  
Visa Nummelin ◽  
...  

Abstract Superposition is among the most successful calculi for first-order logic. Its extension to higher-order logic introduces new challenges such as infinitely branching inference rules, new possibilities such as reasoning about formulas, and the need to curb the explosion of specific higher-order rules. We describe techniques that address these issues and extensively evaluate their implementation in the Zipperposition theorem prover. Largely thanks to their use, Zipperposition won the higher-order division of the CASC-J10 competition.


2021 ◽  
Author(s):  
Claas Lorenz ◽  
Vera Clemens ◽  
Max Schrötter ◽  
Bettina Schnor

Continuous verification of network security compliance is an accepted need. In particular, the analysis of stateful packet filters plays a central role for network security in practice. But the few existing tools that support the analysis of stateful packet filters are based on generally applicable formal methods like Satisfiability Modulo Theories (SMT) or theorem provers and show runtimes on the order of minutes to hours, making them unsuitable for continuous compliance verification.
In this work, we address these challenges and present the concept of state shell interweaving to transform a stateful firewall rule set into a stateless rule set. This allows us to reuse any fast domain-specific engine from the field of data plane verification, leveraging smart, very fast, and domain-specialized data structures and algorithms, including Header Space Analysis (HSA). First, we introduce the formal language FPL that enables a high-level, human-understandable specification of the desired state of network security. Second, we demonstrate the instantiation of a compliance process using a verification framework that analyzes the configuration of complex networks and devices, including stateful firewalls, for compliance with FPL policies. Our evaluation results show the scalability of the presented approach for the well-known Internet2 and Stanford benchmarks as well as for large firewall rule sets, where it outscales state-of-the-art tools by a factor of over 41.
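As a rough intuition for why replacing stateful matching with stateless rules over-approximates behavior, the Python sketch below turns one stateful allow rule into a forward rule plus an unconditional reverse rule for reply traffic. This is only a simplified illustration under assumed rule fields, not the state shell interweaving algorithm from the paper.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src: str
    dst: str
    dport: str
    action: str = "allow"

def statelessify(stateful_rule: Rule) -> list:
    """Replace one stateful allow rule by forward and reverse stateless rules."""
    forward = stateful_rule
    # Reply packets travel from the original destination back to the client;
    # without connection tracking we must admit them unconditionally, which
    # over-approximates the stateful behavior.
    reverse = Rule(src=stateful_rule.dst, dst=stateful_rule.src,
                   dport="any", action=stateful_rule.action)
    return [forward, reverse]

web_out = Rule(src="10.0.0.0/8", dst="0.0.0.0/0", dport="443")
for r in statelessify(web_out):
    print(r)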

