A UNIFICATION ALGORITHM FOR THE λΠ-CALCULUS

1992 ◽  
Vol 03 (03) ◽  
pp. 333-378 ◽  
Author(s):  
DAVID PYM

We present a unification algorithm for the λΠ-calculus, the lambda calculus with first-order dependent function types. Our algorithm is an extension of Huet’s algorithm for the simply-typed lambda calculus to first-order dependent function types. In the simply-typed lambda calculus one attempts to unify a pair of terms of the same type; in the λΠ-calculus types are dependent on terms so one must unify not only terms, but their types as well. Accordingly, we identify a suitable notion of similarity of types, and only attempt to unify a pair of terms of similar type: if the types are not similar then they are not unifiable. Since Huet’s algorithm does not enumerate all of the unifiers of a given pair of terms, the strategy of first unifying pairs of types — by identifying suitable pairs of subterms for unification — is not complete. Accordingly, we must unify terms and their types simultaneously, taking care to maintain all of the conditions that are necessary to ensure the well-formedness of the resulting calculated substitution. Our notion of substitution is an algebraic one: substitutions are morphisms of λΠ-contexts. However, in order to define our algorithm we must work with psubstitutions and pcontexts — substitutions and contexts, respectively, in which variables are replaced by terms of similar, not β(η)-equal, type.
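Pym's algorithm is higher-order and works over λΠ-contexts, which is beyond a short sketch. As a loose, hypothetical first-order illustration of the central idea — that a term equation is solved together with its type equation against one shared substitution, and that dissimilar types rule out unification immediately — consider the following (the names `unify`/`unify_typed`, the `?`-prefix convention for variables, and the tuple encoding are ours; the occurs check is omitted for brevity):

```python
def walk(t, subst):
    """Chase substitution bindings for '?'-prefixed variables."""
    while isinstance(t, str) and t.startswith("?") and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Syntactic first-order unification (occurs check omitted)."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and len(a) == len(b) and a[0] == b[0]:
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

def unify_typed(term_eq, type_eq):
    """Solve the type equation first, then the term equation, against
    one shared substitution; unlike types mean no unifier at all."""
    subst = unify(type_eq[0], type_eq[1], {})
    if subst is None:
        return None
    return unify(term_eq[0], term_eq[1], subst)

# Unifying cons ?x : Vec ?n against cons 0 : Vec 3 forces ?n := 3, ?x := 0.
s = unify_typed((("cons", "?x"), ("cons", "0")),
                (("Vec", "?n"), ("Vec", "3")))
```

In the dependently typed setting the two equations genuinely interact — solving the type equation instantiates variables that occur in the terms — which is why the paper must unify terms and types simultaneously rather than in separate passes.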

2005 ◽  
Vol 12 (12) ◽  
Author(s):  
Malgorzata Biernacka ◽  
Olivier Danvy ◽  
Kristian Støvring

We formalize two proofs of weak head normalization for the simply typed lambda-calculus in first-order minimal logic: one for normal-order reduction, and one for applicative-order reduction in the object language. Subsequently we use Kreisel's modified realizability to extract evaluation algorithms from the proofs, following Berger; the proofs are based on Tait-style reducibility predicates, and hence the extracted algorithms are instances of (weak head) normalization by evaluation, as already identified by Coquand and Dybjer.
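The extracted evaluators are instances of normalization by evaluation: terms are evaluated into semantic values, and normal forms are read back from the values. As a loose illustration only — a hypothetical Python encoding with de Bruijn indices that computes full β-normal forms, not the weak head normalization treated in the paper — the eval/reify structure looks like this:

```python
def evaluate(term, env):
    """Evaluate a de Bruijn term ("var", i) / ("lam", body) / ("app", f, a)
    to a semantic value."""
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        return ("clo", term[1], env)           # closure over the environment
    f = evaluate(term[1], env)
    a = evaluate(term[2], env)
    if f[0] == "clo":
        return evaluate(f[1], [a] + f[2])      # beta step in the metalanguage
    return ("napp", f, a)                      # stuck (neutral) application

def reify(value, depth):
    """Read a semantic value back into a normal-form term."""
    tag = value[0]
    if tag == "clo":
        var = ("nvar", depth)                  # fresh neutral variable (a level)
        return ("lam", reify(evaluate(value[1], [var] + value[2]), depth + 1))
    if tag == "napp":
        return ("app", reify(value[1], depth), reify(value[2], depth))
    return ("var", depth - 1 - value[1])       # convert level back to index

def normalize(term):
    return reify(evaluate(term, []), 0)

# (\x. x) (\y. y) normalizes to the identity.
identity = ("lam", ("var", 0))
```

On simply typed terms such an evaluator terminates, which is exactly what the formalized reducibility proofs guarantee.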


1996 ◽  
Vol 3 (31) ◽  
Author(s):  
Ian Stark

The nu-calculus of Pitts and Stark is a typed lambda-calculus, extended with state in the form of dynamically-generated names. These names can be created locally, passed around, and compared with one another. Through the interaction between names and functions, the language can capture notions of scope, visibility and sharing. Originally motivated by the study of references in Standard ML, the nu-calculus has connections to other kinds of local declaration, and to the mobile processes of the pi-calculus.

This paper introduces a logic of equations and relations which allows one to reason about expressions of the nu-calculus: it uses a simple representation of the private and public scope of names, and allows straightforward proofs of contextual equivalence (also known as observational, or observable, equivalence). The logic is based on earlier operational techniques, providing the same power but in a much more accessible form. In particular, it allows intuitive and direct proofs of all contextual equivalences between first-order functions with local names.

See the revised version BRICS-RS-97-39.
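As a toy model of the three primitives the abstract mentions — names are created fresh, passed around, and compared for equality — one can sketch the behaviour in ordinary code (a hypothetical Python analogy, not the paper's logic; `new_name` and `is_n` are ours):

```python
import itertools

_gensym = itertools.count()

def new_name():
    """nu n. ...: allocate a globally fresh, dynamically generated name."""
    return ("name", next(_gensym))

# A name stays private if only an equality test on it is exported;
# the context can probe it but never forge it:
n = new_name()
is_n = lambda m: m == n

assert is_n(n)
assert not is_n(new_name())   # a fresh name differs from every existing one
```

The interplay of such private names with functions is what makes contextual equivalence subtle, and is what the paper's equational logic is designed to handle.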


1992 ◽  
Vol 2 (2) ◽  
pp. 213-226 ◽  
Author(s):  
Harry G. Mairson

AbstractWe present a simple and easy-to-understand explanation of ML type inference and parametric polymorphism within the framework of type monomorphism, as in the first order typed lambda calculus. We prove the equivalence of this system with the standard interpretation using type polymorphism, and extend the equivalence to include polymorphic fixpoints. The monomorphic interpretation gives a purely combinatorial understanding of the type inference problem, and is a classic instance of quantifier elimination, as well as an example of Gentzen-style cut elimination in the framework of the Curry-Howard propositions-as-types analogy.
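The monomorphic reading of let-polymorphism can be illustrated on a toy syntax (a hypothetical encoding; `expand_let` and `substitute` are ours, and substitution is naive, with no capture avoidance, which suffices for closed examples): a let-bound definition is copied into each use site, so that every copy may be given a different monomorphic type.

```python
def expand_let(term):
    """Terms: ("var", n), ("lam", n, body), ("app", f, a),
    ("let", n, defn, body). Expansion inlines each definition."""
    tag = term[0]
    if tag == "var":
        return term
    if tag == "lam":
        return ("lam", term[1], expand_let(term[2]))
    if tag == "app":
        return ("app", expand_let(term[1]), expand_let(term[2]))
    name, defn, body = term[1], term[2], term[3]
    return substitute(expand_let(body), name, expand_let(defn))

def substitute(term, name, replacement):
    """Naive substitution (no capture avoidance)."""
    tag = term[0]
    if tag == "var":
        return replacement if term[1] == name else term
    if tag == "lam":
        if term[1] == name:            # bound occurrence shadows the name
            return term
        return ("lam", term[1], substitute(term[2], name, replacement))
    return ("app",
            substitute(term[1], name, replacement),
            substitute(term[2], name, replacement))

# let id = \x. x in id id  ==>  (\x. x) (\x. x): each copy of id can now
# receive its own monomorphic type.
prog = ("let", "id", ("lam", "x", ("var", "x")),
        ("app", ("var", "id"), ("var", "id")))
```

The duplication is what "quantifier elimination" amounts to operationally: the polymorphic ∀ is traded for several monomorphic instances.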


10.29007/xtb8 ◽  
2018 ◽  
Author(s):  
Thierry Boy de La Tour

Two non-deterministic algorithms for generalizing a solution of a constraint expressed in the second-order typed lambda-calculus are presented. One algorithm derives from the proof of completeness of the higher-order unification rules by D. C. Jensen and T. Pietrzykowski; the other is abstracted from an algorithm by N. Peltier and the author for generalizing proofs. A framework is developed in which such constrained generalization algorithms can be designed, allowing a uniform presentation of the two algorithms. Their relative strength at generalization is then analyzed through some properties of interest: their behaviour on valid and first-order constraints, and whether they may be iterated or composed.


2020 ◽  
Vol 4 (POPL) ◽  
pp. 1-27 ◽  
Author(s):  
Aloïs Brunel ◽  
Damiano Mazza ◽  
Michele Pagani


Author(s):  
Ernesto Copello ◽  
Nora Szasz ◽  
Álvaro Tasistro

We formalize in Constructive Type Theory the Lambda Calculus in its classical first-order syntax, employing only one sort of names for both bound and free variables, and with α-conversion based upon name swapping. As a fundamental part of the formalization, we introduce principles of induction and recursion on terms which provide a framework for reproducing the use of the Barendregt Variable Convention as in pen-and-paper proofs within the rigorous formal setting of a proof assistant. The principles in question are all formally derivable from the simple principle of structural induction/recursion on concrete terms. We work out applications to some fundamental meta-theoretical results, such as the Church–Rosser Theorem and Weak Normalization for the Simply Typed Lambda Calculus. The whole development has been machine checked using the system Agda.
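The swapping-based treatment of α-conversion can be made concrete in a few lines. The following is a hypothetical Python encoding, not the Agda development: `swap` transposes two names everywhere in a term, binders included, and α-equivalence of two abstractions is decided by swapping both binders to a common fresh name.

```python
def swap(a, b, term):
    """Transpose names a and b everywhere in term, binders included."""
    tr = lambda n: b if n == a else a if n == b else n
    tag = term[0]
    if tag == "var":
        return ("var", tr(term[1]))
    if tag == "lam":
        return ("lam", tr(term[1]), swap(a, b, term[2]))
    return ("app", swap(a, b, term[1]), swap(a, b, term[2]))

def names(term):
    """All names occurring in term, free or bound."""
    tag = term[0]
    if tag == "var":
        return {term[1]}
    if tag == "lam":
        return {term[1]} | names(term[2])
    return names(term[1]) | names(term[2])

def alpha_eq(s, t):
    """Alpha-equivalence via swapping both binders to one fresh name."""
    if s[0] != t[0]:
        return False
    if s[0] == "var":
        return s[1] == t[1]
    if s[0] == "app":
        return alpha_eq(s[1], t[1]) and alpha_eq(s[2], t[2])
    avoid = names(s) | names(t)
    z, i = "v0", 0
    while z in avoid:
        i += 1
        z = "v%d" % i
    return alpha_eq(swap(s[1], z, s[2]), swap(t[1], z, t[2]))

# \x. x and \y. y are alpha-equivalent; \x. y and \y. x are not.
```

Unlike substitution-based renaming, swapping is total and involutive on raw terms, which is what makes the induction and recursion principles of the paper pleasant to state and derive.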


2013 ◽  
pp. 5-54 ◽  
Author(s):  
Henk Barendregt ◽  
Wil Dekkers ◽  
Richard Statman
