Mechanizing proofs with logical relations – Kripke-style

2018 ◽  
Vol 28 (9) ◽  
pp. 1606-1638 ◽  
Author(s):  
ANDREW CAVE ◽  
BRIGITTE PIENTKA

Proofs with logical relations play a key role in establishing rich properties such as normalization or contextual equivalence. They are also challenging to mechanize. In this paper, we describe two case studies using the proof environment Beluga: first, we explain the mechanization of the weak normalization proof for the simply typed lambda-calculus; second, we outline how to mechanize the completeness proof of algorithmic equality for simply typed lambda-terms, where we reason about logically equivalent terms. The development of these proofs in Beluga relies on three key ingredients: (1) we encode lambda-terms together with their typing rules, operational semantics, and algorithmic and declarative equality using higher-order abstract syntax (HOAS), thereby avoiding the need to deal with binders, renaming and substitutions; (2) we take advantage of Beluga's support for representing derivations that depend on assumptions and first-class contexts to directly state inductive properties such as logical relations and inductive proofs; (3) we exploit Beluga's rich equational theory for simultaneous substitutions; as a consequence, users do not need to establish and subsequently use substitution properties, and proofs are not cluttered with references to them. We believe these examples demonstrate that Beluga provides the right level of abstractions and primitives to mechanize challenging proofs using HOAS encodings. They may also serve as a valuable benchmark for other proof environments.
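The HOAS idea in ingredient (1) can be illustrated outside Beluga as well. Below is a minimal Haskell sketch (not Beluga code, and without Beluga's guarantees) in which object-level binders are represented by meta-level functions, so substitution is simply function application; the names Tm and whnf are illustrative only.

```haskell
-- Minimal HOAS sketch: the binder in Lam is a Haskell function, so
-- beta-reduction needs no explicit substitution, renaming or binders.
data Tm
  = App Tm Tm
  | Lam (Tm -> Tm)

-- Weak-head evaluation: substitution happens via Haskell application.
whnf :: Tm -> Tm
whnf (App t u) =
  case whnf t of
    Lam f -> whnf (f u)   -- substitute u for the bound variable
    t'    -> App t' u
whnf t = t

-- Example: the identity applied to the identity reduces to the identity.
example :: Tm
example = whnf (App (Lam (\x -> x)) (Lam (\y -> y)))
```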

2008 ◽  
Vol 18 (3) ◽  
pp. 501-553 ◽  
Author(s):  
DAVID SABEL ◽  
MANFRED SCHMIDT-SCHAUSS

We present a higher-order call-by-need lambda calculus enriched with constructors, case expressions, recursive letrec expressions, a seq operator for sequential evaluation and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in the form of a single-step rewriting system that defines a (non-deterministic) normal-order reduction. This strategy can be made fair by adding resources for book-keeping. As equational theory, we use contextual equivalence (that is, terms are equal if, when plugged into any program context, their termination behaviour is the same), in which we use a combination of may- and must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations with respect to normal-order reduction are the same as for fair normal-order reduction. We develop a number of proof tools for proving correctness of program transformations. In particular, we prove a context lemma for both may- and must-convergence that restricts the number of contexts that need to be examined for proving contextual equivalence. Combining this with so-called complete sets of commuting and forking diagrams, we show that all the deterministic reduction rules and some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal-order reduction. The structure of the ordering ≤c is also analysed, and we show that Ω is not a least element and ≤c already implies contextual equivalence with respect to may-convergence.


2000 ◽  
Vol 7 (34) ◽  
Author(s):  
Olivier Danvy ◽  
Morten Rhiger

We present a simple way to program typed abstract syntax in a language following a Hindley-Milner typing discipline, such as Haskell and ML, and we apply it to automate two proofs about normalization functions as embodied in type-directed partial evaluation for the simply typed lambda calculus: normalization functions (1) preserve types and (2) yield long beta-eta normal forms.

Keywords: Type-directed partial evaluation, normalization functions, simply-typed lambda-calculus, higher-order abstract syntax, Haskell.
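The flavour of such a normalization function can be sketched in plain Haskell. The following is a minimal, illustrative take on type-directed partial evaluation (the Term type, the base/arrow combinators, and the level-based variable names are assumptions of this sketch, not the paper's code): reify and reflect are defined by induction on the type, and reifying a Haskell value produces its long beta-eta normal form.

```haskell
-- Residual syntax produced by normalization.
data Term = Var String | Lam String Term | App Term Term
  deriving Show

-- A reify/reflect pair for a semantic type a. Base-type values are
-- residual terms parameterised by the current binding depth, which is
-- used to generate fresh variable names.
data Rr a = Rr { reify   :: a -> Int -> Term
               , reflect :: (Int -> Term) -> a }

base :: Rr (Int -> Term)
base = Rr { reify = id, reflect = id }

arrow :: Rr a -> Rr b -> Rr (a -> b)
arrow ta tb = Rr
  { reify   = \f k ->
      let x = "x" ++ show k
      in Lam x (reify tb (f (reflect ta (const (Var x)))) (k + 1))
  , reflect = \e v ->
      reflect tb (\k -> App (e k) (reify ta v k))
  }

-- Normalizing the Haskell value \f x -> f x at type (b -> b) -> b -> b
-- yields its long beta-eta normal form  \x0 -> \x1 -> x0 x1.
example :: Term
example = reify (arrow (arrow base base) (arrow base base)) (\f x -> f x) 0
```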


2018 ◽  
Vol 29 (8) ◽  
pp. 1309-1343 ◽  
Author(s):  
ALBERTO MOMIGLIANO ◽  
BRIGITTE PIENTKA ◽  
DAVID THIBODEAU

Bisimulation proofs play a central role in programming languages in establishing rich properties such as contextual equivalence. They are also challenging to mechanize, since they require a combination of inductive and coinductive reasoning on open terms. In this paper, we describe mechanizing the property that similarity in the call-by-name lambda calculus is a pre-congruence using Howe’s method in the Beluga formal reasoning system. The development relies on three key ingredients: (1) we give a higher-order abstract syntax (HOAS) encoding of lambda terms together with their operational semantics as intrinsically typed terms, thereby not only avoiding the need to deal with binders, renaming and substitutions, but also keeping all typing invariants implicit; (2) we take advantage of Beluga’s support for representing open terms using built-in contexts and simultaneous substitutions: this allows us to directly state central definitions such as open simulation without resorting to the usual inductive closure operation and to encode very elegantly notoriously painful proofs such as the substitutivity of the Howe relation; (3) we exploit the possibility of reasoning by coinduction in Beluga’s reasoning logic. The end result is succinct and elegant, thanks to the high-level abstractions and primitives Beluga provides. We believe that this mechanization is a significant example that illustrates Beluga’s strength at mechanizing challenging (co)inductive proofs using HOAS encodings.
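Ingredient (1) can be approximated in a general-purpose functional language, although without Beluga's built-in support. The sketch below (hypothetical, using Haskell GADTs and parametric HOAS rather than Beluga's LF-style encoding) shows lambda terms that are intrinsically typed: only well-typed terms can be constructed, so typing invariants never need to be stated or proved separately.

```haskell
{-# LANGUAGE GADTs, RankNTypes #-}

-- Intrinsically typed terms in parametric HOAS: the index t is the
-- object-level type, and binders are meta-level functions, so there is
-- no separate treatment of variables, renaming or substitution.
newtype Id a = Id { runId :: a }

data Tm v t where
  Var :: v t -> Tm v t
  Lam :: (v s -> Tm v t) -> Tm v (s -> t)
  App :: Tm v (s -> t) -> Tm v s -> Tm v t

-- A closed term is polymorphic in the variable representation.
type Closed t = forall v. Tm v t

-- Evaluation into Haskell values needs no error cases: ill-typed or
-- ill-scoped terms are simply not representable.
eval :: Tm Id t -> t
eval (Var v)   = runId v
eval (Lam f)   = \x -> eval (f (Id x))
eval (App f a) = eval f (eval a)

-- Example: the application combinator \f x -> f x.
apply :: Closed ((a -> b) -> a -> b)
apply = Lam (\f -> Lam (\x -> App (Var f) (Var x)))
```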


10.29007/3n54 ◽  
2018 ◽  
Author(s):  
Thomas Icard ◽  
Lawrence Moss

This paper adds monotonicity and antitonicity information to the typed lambda calculus, thereby providing a foundation for the Monotonicity Calculus first developed by van Benthem and others. We establish properties of the type system, propose a syntax, semantics, and proof calculus, and prove completeness for the calculus with respect to hierarchies of monotone and antitone functions over base preorders.
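To give a sense of the markings involved (the notation here is approximate and not necessarily the paper's own), a determiner such as "every" can be typed so that its arrows record antitonicity in the first argument and monotonicity in the second:

```latex
% Approximate illustration of monotonicity-marked types: arrows carry
% markers (- for antitone, + for monotone).
\[
  \mathit{every} \;:\; (e \to t) \xrightarrow{-} \bigl((e \to t) \xrightarrow{+} t\bigr)
\]
```

Read this way, "every dog barks" entails "every puppy barks" (strengthening the first argument downward) and "every dog makes noise" (weakening the second argument upward).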


2002 ◽  
Vol 9 (49) ◽  
Author(s):  
Mikkel Nygaard ◽  
Glynn Winskel

A small but powerful language for higher-order nondeterministic processes is introduced. Its roots in a linear domain theory for concurrency are sketched, though for the most part it lends itself to a more operational account. The language can be viewed as an extension of the lambda calculus with a "prefixed sum", in which types express the form of computation path of which a process is capable. Its operational semantics, bisimulation, congruence properties and expressive power are explored; in particular, it is shown how it can directly encode process languages such as CCS, CCS with process passing, and mobile ambients with public names.


2007 ◽  
Vol 14 (4) ◽  
Author(s):  
Kristian Støvring ◽  
Søren B. Lassen

We present a new co-inductive syntactic theory, eager normal form bisimilarity, for the untyped call-by-value lambda calculus extended with continuations and mutable references.

We demonstrate that the associated bisimulation proof principle is easy to use and that it is a powerful tool for proving equivalences between recursive imperative higher-order programs.

The theory is modular in the sense that eager normal form bisimilarity for each of the calculi extended with continuations and/or mutable references is a fully abstract extension of eager normal form bisimilarity for its sub-calculi. For each calculus, we prove that eager normal form bisimilarity is a congruence and is sound with respect to contextual equivalence. Furthermore, for the calculus with both continuations and mutable references, we show that eager normal form bisimilarity is complete: it coincides with contextual equivalence.


2003 ◽  
Vol 10 (43) ◽  
Author(s):  
Mikkel Nygaard ◽  
Glynn Winskel

A simple domain theory for concurrency is presented. Based on a categorical model of linear logic and associated comonads, it highlights the role of linearity in concurrent computation. Two choices of comonad yield two expressive metalanguages for higher-order processes, both arising from canonical constructions in the model. Their denotational semantics are fully abstract with respect to contextual equivalence. One language derives from an exponential of linear logic; it supports a straightforward operational semantics with simple proofs of soundness and adequacy. The other choice of comonad yields a model of affine-linear logic, and a process language with a tensor operation to be understood as a parallel composition of independent processes. The domain theory can be generalised to presheaf models, providing a more refined treatment of nondeterministic branching. The article concludes with a discussion of a broader programme of research, towards a fully fledged domain theory for concurrency.


2013 ◽  
Vol 23 (6) ◽  
pp. 658-700
Author(s):  
MATTHEW R. LAKIN ◽  
ANDREW M. PITTS

Correct handling of names and binders is an important issue in meta-programming. This paper presents an embedding of constraint logic programming into the αML functional programming language, which provides a provably correct means of implementing proof search computations over inductive definitions involving names and binders modulo α-equivalence. We show that the execution of proof search in the αML operational semantics is sound and complete with regard to the model-theoretic semantics of formulae, and develop a theory of contextual equivalence for the subclass of αML expressions which correspond to inductive definitions and formulae. In particular, we prove that αML expressions which denote inductive definitions are contextually equivalent precisely when those inductive definitions have the same model-theoretic semantics. This paper is a revised and extended version of the conference paper (Lakin, M. R. & Pitts, A. M. (2009) Resolving inductive definitions with binders in higher-order typed functional programming. In Proceedings of the 18th European Symposium on Programming (ESOP 2009), Castagna, G. (ed), Lecture Notes in Computer Science, vol. 5502. Berlin, Germany: Springer-Verlag, pp. 47–61) and draws on material from the first author's PhD thesis (Lakin, M. R. (2010) An Executable Meta-Language for Inductive Definitions with Binders. University of Cambridge, UK).


1996 ◽  
Vol 3 (31) ◽  
Author(s):  
Ian Stark

The nu-calculus of Pitts and Stark is a typed lambda-calculus, extended with state in the form of dynamically-generated names. These names can be created locally, passed around, and compared with one another. Through the interaction between names and functions, the language can capture notions of scope, visibility and sharing. Originally motivated by the study of references in Standard ML, the nu-calculus has connections to other kinds of local declaration, and to the mobile processes of the pi-calculus.

This paper introduces a logic of equations and relations which allows one to reason about expressions of the nu-calculus: this uses a simple representation of the private and public scope of names, and allows straightforward proofs of contextual equivalence (also known as observational, or observable, equivalence). The logic is based on earlier operational techniques, providing the same power but in a much more accessible form. In particular it allows intuitive and direct proofs of all contextual equivalences between first-order functions with local names.

See the revised version BRICS-RS-97-39.
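A classic equivalence of the kind such proofs target can be rendered in an ML-like language with references standing in for names (the Haskell code below is a hypothetical analogue, not the paper's nu-calculus syntax): a freshly generated private name can never be supplied by the context, so testing an argument against it is contextually equivalent to returning false.

```haskell
import Data.IORef

-- nu n. lambda x. (x = n): create a private name, then test arguments
-- against it. Since n never escapes, no caller can ever supply it.
secretTest :: IO (IORef () -> IO Bool)
secretTest = do
  n <- newIORef ()
  return (\x -> return (x == n))

-- lambda x. false: the observably equivalent function.
alwaysFalse :: IO (IORef () -> IO Bool)
alwaysFalse = return (\_ -> return False)
```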


Author(s):  
Marcelo Fiore ◽  
Philip Saville

The glueing construction, defined as a certain comma category, is an important tool for reasoning about type theories, logics, and programming languages. Here we extend the construction to accommodate ‘2-dimensional theories’ of types, terms between types, and rewrites between terms. Taking bicategories as the semantic framework for such systems, we define the glueing bicategory and establish a bicategorical version of the well-known construction of cartesian closed structure on a glueing category. As an application, we show that free finite-product bicategories are fully complete relative to free cartesian closed bicategories, thereby establishing that the higher-order equational theory of rewriting in the simply-typed lambda calculus is a conservative extension of the algebraic equational theory of rewriting in the fragment with finite products only.

