Labelled cyclic proofs for separation logic

Author(s):  
Didier Galmiche
Daniel Méry

Abstract Separation logic (SL) is a logical formalism for reasoning about programs that use pointers to mutate data structures. It has proved successful in program verification as an assertion language for stating properties of memory heaps within Hoare triples. Most proof systems and verification tools for SL focus on the decidable but rather restricted symbolic-heaps fragment. Moreover, recent proof systems that go beyond symbolic heaps are purely syntactic or labelled systems dedicated to particular fragments of SL, and they mainly allow either the full set of connectives or the definition of arbitrary inductive predicates, but not both. In this work, we present a labelled proof system, called GSL, that allows both cyclic proofs with arbitrary inductive predicates and the full set of SL connectives. We prove its soundness, show that the built-in rules for data structures of another, non-cyclic labelled proof system can be derived in GSL, and show that GSL is strictly more powerful than that system.
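As a point of reference (the textbook example, not a definition taken from the paper), the kind of inductive predicate that GSL admits is illustrated by the standard list-segment predicate, defined by unfolding:

\[
  \mathit{ls}(x,y) \;::=\; (x = y \wedge \mathrm{emp}) \;\vee\; \exists z.\; x \mapsto z \,*\, \mathit{ls}(z,y)
\]

In a cyclic proof, such a definition is unfolded on demand and the resulting repetition is closed off by a back-edge to an earlier proof node, subject to a global soundness condition on the cycles.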

2021
Volume 17, Issue 3
Author(s):  
Stéphane Demri
Étienne Lozes
Alessio Mansutti

We present the first complete axiomatisation for quantifier-free separation logic. The logic is equipped with the standard concrete heaplet semantics, and the proof system has no external features such as nominals or labels. It is not possible to rely completely on proof systems for Boolean BI, as the concrete semantics needs to be taken into account. Therefore, we present the first internal Hilbert-style axiomatisation for quantifier-free separation logic. The calculus is divided into three parts: the axiomatisation of core formulae, where Boolean combinations of core formulae capture the expressivity of the whole logic; axioms and inference rules to simulate a bottom-up elimination of separating connectives; and finally structural axioms and inference rules from propositional calculus and Boolean BI with the magic wand.
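For orientation, the standard concrete heaplet semantics referred to above interprets the separating connectives as follows (a textbook reminder, not part of the paper's contribution):

\[
  s,h \models \varphi_1 * \varphi_2 \;\iff\; \exists h_1,h_2.\; h_1 \perp h_2,\ h = h_1 \uplus h_2,\ s,h_1 \models \varphi_1 \text{ and } s,h_2 \models \varphi_2
\]
\[
  s,h \models \varphi_1 \mathbin{-\!\!*} \varphi_2 \;\iff\; \forall h'.\; h' \perp h \text{ and } s,h' \models \varphi_1 \text{ implies } s,\, h \uplus h' \models \varphi_2
\]

It is this dependence on concrete disjointness of heaplets that prevents a purely Boolean BI axiomatisation from sufficing.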


2015
Vol 8 (2)
pp. 296-305
Author(s):  
NISSIM FRANCEZ

Abstract The paper proposes an extension of the definition of a canonical proof, central to proof-theoretic semantics, to a definition of a canonical derivation from open assumptions. The impact of the extension on the definition of the (reified) proof-theoretic meaning of logical constants is discussed. The extended definition also sheds light on a puzzle regarding the definition of local completeness of a natural-deduction proof system, underlying its harmony.
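The usual illustration of local completeness (a textbook example, not one drawn from the paper) is the local expansion for conjunction, in which a derivation of A ∧ B is expanded so that the conclusion is re-obtained from its own components via the elimination and introduction rules:

\[
  \dfrac{\mathcal{D}}{A \wedge B}
  \;\rightsquigarrow\;
  \dfrac{\dfrac{\dfrac{\mathcal{D}}{A \wedge B}}{A}\,{\scriptstyle \wedge E_1}
  \qquad
  \dfrac{\dfrac{\mathcal{D}}{A \wedge B}}{B}\,{\scriptstyle \wedge E_2}}
  {A \wedge B}\,{\scriptstyle \wedge I}
\]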


10.29007/dkxs
2018
Author(s):  
Emanuele De Angelis
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti

The transformation of constraint logic programs (CLP programs) has been shown to be an effective methodology for verifying properties of imperative programs. Following this methodology, we encode the negation of a partial correctness property of an imperative program prog as a predicate incorrect defined by a CLP program P, and we show that prog is correct by transforming P into the empty program through the application of semantics-preserving transformation rules. Some of these rules perform replacements of constraints that encode properties of the data structures manipulated by the program prog. In this paper we show that Constraint Handling Rules (CHR) are a suitable formalism for representing and applying constraint replacements during the transformation of CLP programs. In particular, we consider programs that manipulate integer arrays and we present a CHR encoding of a constraint replacement strategy based on the theory of arrays. We also propose a novel generalization strategy for constraints on integer arrays that combines the CHR constraint replacement strategy with various generalization operators for linear constraints, such as widening and convex hull. Generalization is controlled by additional constraints that relate the variable identifiers in the imperative program and the CLP representation of their values. The method presented in this paper has been implemented and we have demonstrated its effectiveness on a set of benchmark programs taken from the literature.
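For context, a constraint replacement strategy based on the theory of arrays typically rests on McCarthy's read-over-write axioms; as a hedged reminder (the concrete CHR rules used in the paper may differ), these read:

\[
  i = j \;\rightarrow\; \mathit{read}(\mathit{write}(a,i,v), j) = v
  \qquad\qquad
  i \neq j \;\rightarrow\; \mathit{read}(\mathit{write}(a,i,v), j) = \mathit{read}(a,j)
\]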


2021
Vol 43 (4)
pp. 1-134
Author(s):  
Emanuele D’Osualdo
Julian Sutherland
Azadeh Farzan
Philippa Gardner

We present TaDA Live, a concurrent separation logic for reasoning compositionally about the termination of blocking fine-grained concurrent programs. The crucial challenge is how to deal with abstract atomic blocking: that is, abstract atomic operations that have blocking behaviour arising from busy-waiting patterns as found in, for example, fine-grained spin locks. Our fundamental innovation is the design of abstract specifications that capture this blocking behaviour as liveness assumptions on the environment. We design a logic that can reason about the termination of clients that use such operations without breaking their abstraction boundaries, and about the correctness of the implementations of the operations with respect to their abstract specifications. We introduce a novel semantic model using layered subjective obligations to express liveness invariants, and a proof system that is sound with respect to the model. The subtlety of our specifications and reasoning is illustrated using several case studies.
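To make the busy-waiting pattern concrete, the following is a minimal C11 sketch of a test-and-set spin lock (purely illustrative; it is not taken from the paper's case studies, and all names are ours):

#include <stdatomic.h>

/* Minimal test-and-set spin lock.  spin_acquire busy-waits until the flag
   is observed clear; its termination depends on some other thread eventually
   calling spin_release, i.e. on a liveness assumption about the environment
   of exactly the kind described in the abstract above. */
typedef struct {
    atomic_flag locked;
} spinlock_t;

static void spin_init(spinlock_t *l)    { atomic_flag_clear(&l->locked); }

static void spin_acquire(spinlock_t *l) {
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire)) {
        /* busy-wait: the blocking behaviour arises here */
    }
}

static void spin_release(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

A client of spin_acquire can only be proved to terminate under the assumption that the lock is eventually released; TaDA Live's abstract specifications expose precisely such liveness assumptions at the abstraction boundary.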


2007
Vol 17 (3)
pp. 439-484
Author(s):  
CLEMENS GRABMAYER

This paper presents a proof-theoretic observation about two kinds of proof systems for bisimilarity between cyclic term graphs. First, we consider proof systems for demonstrating that μ-term specifications of cyclic term graphs have the same tree unwinding. We establish a close connection between adaptations, for μ-terms over a general first-order signature, of the coinductive axiomatisation of recursive type equivalence by Brandt and Henglein (Brandt and Henglein 1998) and of a proof system by Ariola and Klop (Ariola and Klop 1995) for consistency checking. We show that there exists a simple duality by mirroring between derivations in the former system and formalised consistency checks, called 'consistency unfoldings', in the latter. This result sheds additional light on the axiomatisation of Brandt and Henglein: it provides an alternative soundness proof for the adaptation considered here. We then outline an analogous duality result that holds for a pair of similar proof systems for proving that equational specifications of cyclic term graphs are bisimilar.
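A minimal instance of the problem addressed (not an example from the paper): the μ-terms μx.f(x) and μx.f(f(x)) specify different cyclic term graphs, yet both unwind to the same infinite tree, so a proof system of the first kind must be able to derive their equality:

\[
  \mu x.\, f(x) \;=\; \mu x.\, f(f(x)) \qquad\text{since both unwind to}\qquad f(f(f(\cdots)))
\]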


Author(s):  
Joseph Y. Halpern

Causality plays a central role in the way people structure the world; we constantly seek causal explanations for our observations. But what does it even mean that an event C “actually caused” event E? The problem of defining actual causation goes beyond mere philosophical speculation. For example, in many legal arguments, it is precisely what needs to be established in order to determine responsibility. The philosophy literature has been struggling with the problem of defining causality since Hume. In this book, Joseph Halpern explores actual causality, and such related notions as degree of responsibility, degree of blame, and causal explanation. The goal is to arrive at a definition of causality that matches our natural language usage and is helpful, for example, to a jury deciding a legal case, a programmer looking for the line of code that caused some software to fail, or an economist trying to determine whether austerity caused a subsequent depression. Halpern applies and expands an approach to causality that he and Judea Pearl developed, based on structural equations. He carefully formulates a definition of causality, and building on this, defines degree of responsibility, degree of blame, and causal explanation. He concludes by discussing how these ideas can be applied to such practical problems as accountability and program verification.
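A minimal illustration of the structural-equations setting (the stock forest-fire example from the Halpern–Pearl literature, not a new result): with binary variables L (lightning) and MD (match dropped) set by the context, the forest fire FF is governed by the single equation

\[
  \mathit{FF} \;=\; L \vee \mathit{MD}
\]

Actual causation is then assessed by intervening on such equations, for instance by asking whether FF would still have occurred had L been fixed to 0, possibly under suitable contingencies.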


2001
Vol 8 (37)
Author(s):  
Ronald Cramer
Victor Shoup

We present several new and fairly practical public-key encryption schemes and prove them secure against adaptive chosen ciphertext attack. One scheme is based on Paillier's Decision Composite Residuosity (DCR) assumption, while another is based on the classical Quadratic Residuosity (QR) assumption. The analysis is in the standard cryptographic model, i.e., the security of our schemes does not rely on the Random Oracle model.

We also introduce the notion of a universal hash proof system. Essentially, this is a special kind of non-interactive zero-knowledge proof system for an NP language. We do not show that universal hash proof systems exist for all NP languages, but we do show how to construct very efficient universal hash proof systems for a general class of group-theoretic language membership problems.

Given an efficient universal hash proof system for a language with certain natural cryptographic indistinguishability properties, we show how to construct an efficient public-key encryption scheme secure against adaptive chosen ciphertext attack in the standard model. Our construction only uses the universal hash proof system as a primitive: no other primitives are required, although even more efficient encryption schemes can be obtained by using hash functions with appropriate collision-resistance properties. We show how to construct efficient universal hash proof systems for languages related to the DCR and QR assumptions. From these we get corresponding public-key encryption schemes that are secure under these assumptions. We also show that the Cramer-Shoup encryption scheme (which up until now was the only practical encryption scheme that could be proved secure against adaptive chosen ciphertext attack under a reasonable assumption, namely, the Decision Diffie-Hellman assumption) is also a special case of our general theory.
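Informally, and as a hedged summary of the standard notions (the paper's formal definitions are more refined), a projective hash family for a language L ⊆ X consists of hash functions H_k : X → Π indexed by a secret key k, together with a public projection key α(k), satisfying:

\[
  x \in L \;\Rightarrow\; H_k(x) \text{ is determined by } \alpha(k) \text{ and computable from } (\alpha(k), x, w) \text{ for a witness } w
\]
\[
  x \in X \setminus L \;\Rightarrow\; H_k(x) \text{ is (statistically close to) uniform over } \Pi \text{ even given } \alpha(k)
\]

Roughly, the encryption schemes exploit the gap between these two cases to achieve security against chosen-ciphertext attacks.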


2016
Vol 23 (3)
pp. 145-149
Author(s):  
Marek Żukowicz
Michał Markiewicz

Abstract The aim of the article is to present a mathematical definition of the object model known in computer science as TreeList, and to show how this model can be applied to design an evolutionary algorithm whose purpose is to generate structures based on this object. The first chapter introduces the reader to the problem of presenting data using the TreeList object. The second chapter describes the problem of testing data structures based on TreeList. The third chapter presents a mathematical model of the TreeList object and the parameters used in determining the utility of the structures created through this model, as well as in the evolutionary strategy that generates these structures for testing purposes. The last chapter provides a brief summary and plans for future research related to the algorithm presented in the article.
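The article's formal model is not reproduced here; as a purely hypothetical sketch of a TreeList-style structure (a tree in which every node carries an ordered list of children), one might write:

#include <stdlib.h>

/* Hypothetical sketch of a TreeList-style node: each node holds a label and
   an ordered, growable list of child nodes.  Field and function names are
   illustrative and not taken from the article. */
typedef struct TreeListNode {
    int                   label;      /* payload stored at this node       */
    struct TreeListNode **children;   /* ordered list of child nodes       */
    size_t                count;      /* number of children currently held */
    size_t                capacity;   /* allocated slots in the child list */
} TreeListNode;

static TreeListNode *tl_new(int label) {
    TreeListNode *n = calloc(1, sizeof *n);
    if (n) n->label = label;
    return n;
}

static int tl_add_child(TreeListNode *parent, TreeListNode *child) {
    if (parent->count == parent->capacity) {
        size_t cap = parent->capacity ? 2 * parent->capacity : 4;
        TreeListNode **p = realloc(parent->children, cap * sizeof *p);
        if (!p) return -1;
        parent->children = p;
        parent->capacity = cap;
    }
    parent->children[parent->count++] = child;
    return 0;
}

An evolutionary strategy of the kind described would then mutate and recombine such trees, scoring candidates by utility parameters defined over the structure.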

