inductive invariants
Recently Published Documents


TOTAL DOCUMENTS: 46 (last five years: 15)
H-INDEX: 7 (last five years: 1)

2022 · Vol 6 (POPL) · pp. 1-29
Author(s): Hari Govind V K, Sharon Shoham, Arie Gurfinkel

This work addresses the problem of verifying imperative programs that manipulate data structures, e.g., Rust programs. Data structures are usually modeled by Algebraic Data Types (ADTs) in verification conditions. Inductive invariants of such programs often require recursively defined functions (RDFs) to represent abstractions of data structures. From the logic perspective, this reduces to solving Constrained Horn Clauses (CHCs) modulo both ADT and RDF. The underlying logic with RDFs is undecidable; thus, even verifying a candidate inductive invariant is undecidable. Similarly, IC3-based algorithms for solving CHCs lose their progress guarantee: they may not find counterexamples when the program is unsafe. We propose a novel IC3-inspired algorithm, Racer, for solving CHCs modulo ADT and RDF (i.e., automatically synthesizing inductive invariants, as opposed to only verifying them, as is done in deductive verification). Racer ensures progress despite the undecidability of the underlying theory and is guaranteed to terminate with a counterexample for unsafe programs. It works with a general class of RDFs over ADTs called catamorphisms. The key idea is to represent catamorphisms both as CHCs, via relationification, and as RDFs, using novel abstractions. Encoding catamorphisms as CHCs allows learning inductive properties of catamorphisms, as well as preserving unsatisfiability of the original CHCs despite the use of RDF abstractions, whereas encoding catamorphisms as RDFs allows unfolding the recursive definition and relying on it in solutions. The abstractions ensure that the underlying theory remains decidable. We implement our approach in Z3 and show that it works well in practice.
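For intuition, the two encodings that Racer combines can be contrasted on the simplest catamorphism, list length: as a recursively defined function it is unfolded, while its relationified form is a set of Horn clauses whose least model can be computed by fixpoint iteration. The sketch below is a minimal Python illustration with hypothetical names, not the paper's implementation.

```python
# Minimal sketch (hypothetical names, not Racer's code): the catamorphism
# "list length" encoded both as an RDF and, via relationification, as CHCs.

from dataclasses import dataclass
from typing import Optional

# ADT: List = nil | cons(head, tail); None plays the role of nil
@dataclass(frozen=True)
class Cons:
    head: int
    tail: "Optional[Cons]"

# 1) The catamorphism as a recursively defined function (RDF):
#    len(nil) = 0,  len(cons(h, t)) = 1 + len(t)
def rdf_len(xs: Optional[Cons]) -> int:
    return 0 if xs is None else 1 + rdf_len(xs.tail)

# 2) The same catamorphism relationified as Horn clauses over Len(xs, n):
#    Len(nil, 0)   and   Len(t, n) => Len(cons(h, t), n + 1).
#    We compute a bounded part of the least model by fixpoint iteration.
def chc_len(root: Optional[Cons], rounds: int) -> set:
    model = {(None, 0)}                      # base clause: Len(nil, 0)
    for _ in range(rounds):
        new = set(model)
        for (t, n) in model:
            node = root                      # fire the inductive clause for
            while node is not None:          # every cons cell whose tail is t
                if node.tail is t:
                    new.add((node, n + 1))
                node = node.tail
        model = new
    return model

xs = Cons(7, Cons(8, Cons(9, None)))
assert rdf_len(xs) == 3
assert (xs, 3) in chc_len(xs, rounds=3)      # the two encodings agree
```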


2021 · Vol 5 (POPL) · pp. 1-30
Author(s): Yotam M. Y. Feldman, Mooly Sagiv, Sharon Shoham, James R. Wilcox

Author(s): Alessandro Cimatti, Alberto Griggio, Gianluca Redondi

Abstract The problem of invariant checking in parametric systems, which are required to operate correctly regardless of the number and connections of their components, is gaining increasing importance in various sectors, such as communication protocols and control software. Such systems are typically modeled using quantified formulae, describing the behaviour of an unbounded number of (identical) components, and their automatic verification often relies on decidable fragments of first-order logic in order to deal effectively with the challenges of quantified reasoning. In this paper, we propose a fully automatic technique for invariant checking of parametric systems which does not rely on quantified reasoning. Parametric systems are modeled with array-based transition systems, and our method iteratively constructs a quantifier-free abstraction by analyzing, with SMT-based invariant-checking algorithms for non-parametric systems, increasingly larger finite instances of the parametric system. Depending on the verification result in the concrete instance, the abstraction is automatically refined by leveraging candidate lemmas from inductive invariants, or by discarding previously computed lemmas. We implemented the method using a quantifier-free SMT-based IC3 as the underlying verification engine. Our experimental evaluation demonstrates that the approach is competitive with the state of the art, solving several benchmarks that are out of reach for other tools.
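The iteration described in the abstract can be pictured as a small driver loop. The sketch below is a hypothetical Python rendering, not the authors' implementation; check_instance stands in for the SMT-based IC3 engine run on a finite instance under the current lemmas, and generalize stands in for the extraction of candidate lemmas from an instance's inductive invariant.

```python
# Hypothetical sketch of the abstraction/refinement loop from the abstract;
# the callbacks stand in for a quantifier-free SMT-based IC3 engine.

def verify_parametric(check_instance, generalize, max_size=10):
    """check_instance(n, lemmas) returns one of
         ('safe', inductive_invariant)   -- instance of size n is safe
         ('unsafe', counterexample)      -- genuine violation found
         ('spurious', bad_lemmas)        -- abstraction was too coarse
    generalize(invariant, n) mines candidate lemmas for the unbounded system."""
    lemmas = set()
    n = 2
    while n <= max_size:
        verdict, artifact = check_instance(n, lemmas)
        if verdict == 'unsafe':
            return 'unsafe', artifact        # counterexample in a finite instance
        if verdict == 'spurious':
            lemmas -= artifact               # discard previously computed lemmas
            continue                         # and re-check the same size
        lemmas |= generalize(artifact, n)    # refine with candidate lemmas
        n += 1                               # move to a larger finite instance
    return 'unknown', lemmas
```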


Author(s): Paolo Baldan, Richard Eggert, Barbara König, Tommaso Padoan

Abstract Knaster-Tarski's theorem, characterising the greatest fixpoint of a monotone function over a complete lattice as the largest post-fixpoint, naturally leads to the so-called coinduction proof principle for showing that some element is below the greatest fixpoint (e.g., for providing bisimilarity witnesses). The dual principle, used for showing that an element is above the least fixpoint, is related to inductive invariants. In this paper we provide proof rules which are similar in spirit but for showing that an element is above the greatest fixpoint or, dually, below the least fixpoint. The theory is developed for non-expansive monotone functions on suitable lattices of the form $$\mathbb{M}^Y$$, where Y is a finite set and $$\mathbb{M}$$ is an MV-algebra, and it is based on the construction of (finitary) approximations of the original functions. We show that our theory applies to a wide range of examples, including termination probabilities, behavioural distances for probabilistic automata, and bisimilarity. Moreover, it allows us to derive original algorithms for solving simple stochastic games.
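For a concrete instance of the lattices $$\mathbb{M}^Y$$ involved (here $$\mathbb{M}$$ is the unit interval and Y a three-element state set), the sketch below computes termination probabilities of a tiny Markov chain as the least fixpoint of a monotone map, and checks the classical inductive rule mentioned in the abstract. It is a minimal Python illustration of the objects the proof rules reason about, not the paper's approximation construction.

```python
# Termination probabilities as the least fixpoint of a monotone map on
# [0,1]^Y with Y = {0, 1, 2}: a tiny concrete instance of the lattices M^Y.

succ = {0: {0: 0.5, 1: 0.5}, 1: {2: 1.0}}    # Markov chain; state 2 terminal
terminal = {2}

def step(t):
    """Monotone F: [0,1]^Y -> [0,1]^Y from the termination-probability equations."""
    return {y: 1.0 if y in terminal
               else sum(p * t[z] for z, p in succ[y].items())
            for y in (0, 1, 2)}

t = {y: 0.0 for y in (0, 1, 2)}              # bottom element of the lattice
for _ in range(200):                         # Kleene iteration F^n(bot) -> lfp F
    t = step(t)
print(t)                                     # ~{0: 1.0, 1: 1.0, 2: 1.0}

# Classical rule from the abstract: any pre-fixpoint u with F(u) <= u
# pointwise is above the least fixpoint (an inductive-invariant witness).
u = {y: 1.0 for y in (0, 1, 2)}
su = step(u)
assert all(su[y] <= u[y] for y in (0, 1, 2))
```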


2020 · Vol 56 (1-3) · pp. 154-177
Author(s): Grigory Fedyukovich, Samuel J. Kaufman, Rastislav Bodík

2020 · Vol 17 (6) · pp. 847-856
Author(s): Shengbing Ren, Xiang Zhang

The problem of synthesizing adequate inductive invariants lies at the heart of automated software verification. State-of-the-art machine learning algorithms for synthesizing invariants have gradually shown excellent performance; however, synthesizing disjunctive invariants remains a difficult task. In this paper, we propose k++SVM, a method integrating k-means++ and Support Vector Machines (SVM) to synthesize conjunctive and disjunctive invariants. First, given a program, we execute it to collect program states. Next, k++SVM adopts k-means++ to cluster the positive samples and then applies SVM to distinguish each positive-sample cluster from all negative samples, synthesizing the candidate invariants. Finally, a set of checks founded on Hoare logic is adopted to determine whether the candidate invariants are true invariants. If the candidate invariants fail the check, we sample more states and repeat the algorithm. The experimental results show that k++SVM is compatible with the algorithms for Intersection Of Half-space (IOH) and more efficient than the Interproc tool. Furthermore, our method can synthesize conjunctive and disjunctive invariants automatically.
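The clustering-then-separation step can be sketched with off-the-shelf components. The Python sketch below uses scikit-learn purely as an illustration of the idea, not the paper's implementation: k-means++ clusters the positive (reachable) states, one linear SVM separates each cluster from all negative states, and the disjunction of the resulting half-space predicates is the candidate invariant.

```python
# Illustrative sketch of the k-means++ / SVM combination, using scikit-learn
# (not the paper's tool). Positive samples are reachable program states;
# negative samples are states violating the assertion.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# two well-separated clusters of good states call for a disjunctive invariant
pos = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
                 rng.normal([10, 10], 0.5, (50, 2))])
neg = rng.normal([5, 5], 0.5, (50, 2))           # bad states in between

k = 2
clusters = KMeans(n_clusters=k, init='k-means++', n_init=10,
                  random_state=0).fit_predict(pos)

candidate = []                                   # one half-space per cluster
for c in range(k):
    X = np.vstack([pos[clusters == c], neg])
    y = np.hstack([np.ones((clusters == c).sum()), np.zeros(len(neg))])
    svm = LinearSVC(C=10.0).fit(X, y)
    candidate.append((svm.coef_[0], svm.intercept_[0]))

def invariant(x):
    """Disjunction of the per-cluster linear predicates w . x + b > 0."""
    return any(w @ x + b > 0 for w, b in candidate)

assert all(invariant(x) for x in pos)            # admits all reachable states
assert not any(invariant(x) for x in neg)        # excludes all bad states
```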


2020 · Vol 64 (7) · pp. 1523-1552
Author(s): Daniel Neider, P. Madhusudan, Shambwaditya Saha, Pranav Garg, Daejun Park

Abstract We propose a framework for synthesizing inductive invariants for incomplete verification engines, which soundly reduce logical problems in undecidable theories to decidable theories. Our framework is based on the counterexample-guided inductive synthesis (CEGIS) principle and allows verification engines to communicate non-provability information to guide invariant synthesis. We show precisely how the verification engine can compute such non-provability information and how to build effective learning algorithms when invariants are expressed as Boolean combinations of a fixed set of predicates. Moreover, we evaluate our framework in two verification settings, one in which verification engines need to handle quantified formulas and one in which verification engines have to reason about heap properties expressed in an expressive but undecidable separation logic. Our experiments show that our invariant synthesis framework based on non-provability information can both effectively synthesize inductive invariants and adequately strengthen contracts across a large suite of programs. This work is an extended version of a conference paper titled “Invariant Synthesis for Incomplete Verification Engines”.
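When invariants are Boolean combinations of a fixed predicate set, the simplest instance of such a learner is a Houdini-style loop for conjunctions. The sketch below is a simplified Python stand-in for the CEGIS interaction (hypothetical interface; the paper's learner additionally consumes the non-provability information computed by the engine).

```python
# Simplified Houdini-style learner for conjunctive invariants over a fixed
# predicate set; a stand-in for the CEGIS loop described in the abstract.

def learn_conjunction(predicates, verifier, max_rounds=100):
    """predicates: functions state -> bool.
    verifier(candidate) -> None if the conjunction is adequate and inductive,
    otherwise a reachable state the candidate invariant wrongly excludes."""
    keep = set(range(len(predicates)))       # start from the strongest conjunction
    for _ in range(max_rounds):
        candidate = [predicates[i] for i in keep]
        cex = verifier(candidate)
        if cex is None:
            return candidate                 # invariant found
        keep = {i for i in keep if predicates[i](cex)}   # weaken on the cex
    return None

# Toy usage: states of the loop 'x := 0; while *: x := x + 2'
preds = [lambda x: x >= 0, lambda x: x % 2 == 0, lambda x: x < 10]
def toy_verifier(cand):
    for x in (0, 2, 4, 10, 12):              # sampled reachable states
        if not all(p(x) for p in cand):
            return x
    return None

inv = learn_conjunction(preds, toy_verifier)  # keeps x >= 0 and x % 2 == 0
```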


Author(s): Herbert Rocha, Rafael Menezes, Lucas C. Cordeiro, Raimundo Barreto

Abstract Map2Check is a software verification tool that combines fuzzing, symbolic execution, and inductive invariants. It automatically checks safety properties in C programs by using source-code instrumentation to monitor data (e.g., memory pointers) from the program's executions via the LLVM compiler infrastructure. For SV-COMP 2020, we extended Map2Check to exploit an iterative-deepening approach using LibFuzzer and Klee to check for safety properties. We also use Crab-LLVM to infer program invariants based on reachability analysis. Experimental results show that Map2Check can handle a wide variety of safety properties in several intricate verification tasks from SV-COMP 2020.


Author(s): Dirk Beyer, Sudeep Kanav

Abstract Program verification is the problem, for a given program $$P$$ and a specification $$\phi $$, of constructing a proof of correctness for the statement “program $$P$$ satisfies specification $$\phi $$” ($$P \models \phi $$) or a proof of violation ($$P \not\models \phi $$). Usually, a correctness proof is based on inductive invariants, and a violation proof on a violating program trace. Verification engineers typically expect that a verification tool exports these proof artifacts. We propose to view the task of program verification as constructing a behavioral interface (represented, e.g., by an automaton). We start with the interface $$I_{P}$$ of the program itself, which represents all traces of program executions. To prove correctness, we try to construct a more abstract interface $$I_{C}$$ of the program (overapproximation) that satisfies the specification. This interface, if found, represents more traces than $$I_{P}$$, all of which are correct (satisfying the specification). Ultimately, we want a compact representation of the program behavior as a correctness interface $$I_{C}$$ in terms of inductive invariants. We can then extract a correctness witness, in standard exchange format, from such a correctness interface. Symmetrically, to prove violation, we try to construct a more concrete interface $$I_{V}$$ of the program (underapproximation) that violates the specification. This interface, if found, represents fewer traces than $$I_{P}$$, all of which are feasible (can be executed). Ultimately, we want a compact representation of the program behavior as a violation interface $$I_{V}$$ in terms of a violating program trace. We can then extract a violation witness, in standard exchange format, from such a violation interface. This viewpoint exposes the duality of the two tasks: proving correctness and proving violation. It enables the decomposition of the verification process, and its tools, into (at least!) three components: interface synthesizers, refinement checkers, and specification checkers. We hope the reader finds this viewpoint useful, although the underlying ideas are not novel. We see it as a framework towards modular program verification.
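As a toy rendering of this duality, with finite sets of traces standing in for the automata-based interfaces described in the abstract (all names hypothetical):

```python
# Toy rendering of the interface view: interfaces are finite sets of traces
# here (the paper works with automata and inductive invariants).

I_P  = {"init;work;done", "init;retry;work;done"}        # program behavior
SPEC = {"init;done", "init;work;done", "init;retry;work;done"}

# Correctness: an overapproximating interface I_C with I_P <= I_C <= SPEC
# witnesses P |= phi; its compact form would be an inductive invariant.
I_C = SPEC
assert I_P <= I_C <= SPEC

# Violation (the dual): for a buggy program, an underapproximating interface
# I_V with I_V <= I_P' and I_V disjoint from SPEC witnesses the violation;
# its compact form is a violating program trace.
I_P_buggy = I_P | {"init;crash"}
I_V = {"init;crash"}
assert I_V <= I_P_buggy and not (I_V & SPEC)
```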

