Quantifier elimination and parametric polymorphism in programming languages

1992 ◽  
Vol 2 (2) ◽  
pp. 213-226 ◽  
Author(s):  
Harry G. Mairson

We present a simple and easy-to-understand explanation of ML type inference and parametric polymorphism within the framework of type monomorphism, as in the first-order typed lambda calculus. We prove the equivalence of this system with the standard interpretation using type polymorphism, and extend the equivalence to include polymorphic fixpoints. The monomorphic interpretation gives a purely combinatorial understanding of the type inference problem, and is a classic instance of quantifier elimination, as well as an example of Gentzen-style cut elimination in the framework of the Curry-Howard propositions-as-types analogy.
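
As a hedged illustration of the monomorphic reading (an editorial sketch in OCaml, not an example from the paper): a let-bound polymorphic definition can be expanded at each of its use sites, after which no type quantifiers are needed and every occurrence carries a single monomorphic type.

    (* The let-bound identity is used at two different types; ML assigns
       id the polymorphic type 'a -> 'a and instantiates it at int and bool. *)
    let poly =
      let id = fun x -> x in
      (id 1, id true)

    (* Quantifier-free reading: inline the definition at each use site.
       Each copy of (fun x -> x) now receives one monomorphic type. *)
    let mono =
      ((fun x -> x) 1, (fun x -> x) true)

Typability coincides for the two forms, which matches the flavour of the equivalence established above.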

1994 ◽  
Vol 4 (4) ◽  
pp. 435-477 ◽  
Author(s):  
Fritz Henglein ◽  
Harry G. Mairson

We analyse the computational complexity of type inference for untyped λ-terms in the second-order polymorphic typed λ-calculus (F2) invented by Girard and Reynolds, as well as higher-order extensions F3, F4, …, Fω proposed by Girard. We prove that recognising the F2-typable terms requires exponential time, and for Fω the problem is non-elementary. We show as well a sequence of lower bounds on recognising the Fk-typable terms, where the bound for Fk+1 is exponentially larger than that for Fk.

The lower bounds are based on generic simulation of Turing Machines, where computation is simulated at the expression and type level simultaneously. Non-accepting computations are mapped to non-normalising reduction sequences, and hence non-typable terms. The accepting computations are mapped to typable terms, where higher-order types encode reduction sequences, and first-order types encode the entire computation as a circuit, based on a unification simulation of Boolean logic. A primary technical tool in this reduction is the composition of polymorphic functions having different domains and ranges.

These results are the first nontrivial lower bounds on type inference for the Girard/Reynolds system as well as its higher-order extensions. We hope that the analysis provides important combinatorial insights which will prove useful in the ultimate resolution of the complexity of the type inference problem.
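
As a hedged side illustration (much weaker than the constructions in the paper, and in ML rather than F2), composing a polymorphic function with itself already shows how type structure can blow up relative to term size:

    (* Each definition composes the previous one with itself. *)
    let d0 x = (x, x)        (* 'a -> 'a * 'a *)
    let d1 x = d0 (d0 x)     (* 'a -> ('a * 'a) * ('a * 'a) *)
    let d2 x = d1 (d1 x)     (* the principal type has 16 occurrences of 'a *)

    (* Each level squares the number of type-variable occurrences, so dk has
       a principal type with 2^(2^k) leaves while the program itself grows
       only linearly in k. *)

The lower bounds above rest on far more delicate machinery, in which types encode entire computations, but they likewise exploit the composition of polymorphic functions to force large type-level structure.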


2005 ◽  
Vol 12 (12) ◽  
Author(s):  
Malgorzata Biernacka ◽  
Olivier Danvy ◽  
Kristian Støvring

We formalize two proofs of weak head normalization for the simply typed lambda-calculus in first-order minimal logic: one for normal-order reduction, and one for applicative-order reduction in the object language. Subsequently we use Kreisel's modified realizability to extract evaluation algorithms from the proofs, following Berger; the proofs are based on Tait-style reducibility predicates, and hence the extracted algorithms are instances of (weak head) normalization by evaluation, as already identified by Coquand and Dybjer.
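
For orientation, here is a hedged OCaml sketch of the two reduction strategies involved (an editorial illustration, not the algorithms extracted in the paper): two small weak-head evaluators for pure lambda terms, one normal-order and one applicative-order. Capture-avoiding renaming is omitted, so bound variable names are assumed to be distinct.

    type term = Var of string | Lam of string * term | App of term * term

    (* Naive substitution of s for x; assumes no variable capture can occur. *)
    let rec subst x s = function
      | Var y -> if y = x then s else Var y
      | Lam (y, b) -> if y = x then Lam (y, b) else Lam (y, subst x s b)
      | App (f, a) -> App (subst x s f, subst x s a)

    (* Normal order: reduce the head first; arguments are substituted unevaluated. *)
    let rec whnf_normal t =
      match t with
      | App (f, a) ->
          (match whnf_normal f with
           | Lam (x, b) -> whnf_normal (subst x a b)
           | f' -> App (f', a))
      | _ -> t

    (* Applicative order: bring the argument to weak head normal form first. *)
    let rec whnf_applicative t =
      match t with
      | App (f, a) ->
          let a' = whnf_applicative a in
          (match whnf_applicative f with
           | Lam (x, b) -> whnf_applicative (subst x a' b)
           | f' -> App (f', a'))
      | _ -> t

Both evaluators stop as soon as the outermost constructor is an abstraction, a variable, or a variable applied to arguments, which is what "weak head" means here.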


1998 ◽  
Vol 8 (5) ◽  
pp. 527-536 ◽  
Author(s):  
PATRIK JANSSON ◽  
JOHAN JEURING

Unification, or two-way pattern matching, is the process of solving an equation involving two first-order terms with variables. Unification is used in type inference in many programming languages and in the execution of logic programs. This means that unification algorithms have to be written over and over again for different term types. Many other functions also make sense for a large class of datatypes; examples are pretty printers, equality checks, maps etc. They can be defined by induction on the structure of user-defined datatypes. Implementations of these functions for different datatypes are closely related to the structure of the datatypes. We call such functions polytypic. This paper describes a unification algorithm parametrised on the type of the terms, and shows how to use polytypism to obtain a unification algorithm that works for all regular term types.
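
A hedged OCaml instance of such an algorithm for one fixed term type (an editorial sketch; the paper's point is that this pattern can be written once, polytypically, and then reused for every regular term type):

    type term = Var of string | Fun of string * term list

    (* A substitution is a list of bindings from variable names to terms. *)
    type subst = (string * term) list

    (* Follow variable bindings at the top of a term. *)
    let rec walk (s : subst) (t : term) : term =
      match t with
      | Var x -> (match List.assoc_opt x s with Some t' -> walk s t' | None -> t)
      | Fun _ -> t

    (* Occurs check: does variable x appear in t under substitution s? *)
    let rec occurs s x t =
      match walk s t with
      | Var y -> x = y
      | Fun (_, ts) -> List.exists (occurs s x) ts

    exception Unify_failure

    (* Extend s so that t1 and t2 become equal, or raise Unify_failure. *)
    let rec unify (s : subst) (t1 : term) (t2 : term) : subst =
      match walk s t1, walk s t2 with
      | Var x, Var y when x = y -> s
      | Var x, t | t, Var x ->
          if occurs s x t then raise Unify_failure else (x, t) :: s
      | Fun (f, ts1), Fun (g, ts2)
        when f = g && List.length ts1 = List.length ts2 ->
          List.fold_left2 unify s ts1 ts2
      | _ -> raise Unify_failure

For example, unifying f(X, a) with f(b, Y) yields the substitution {X ↦ b, Y ↦ a}, while unifying X with f(X) fails the occurs check.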


1996 ◽  
Vol 3 (31) ◽  
Author(s):  
Ian Stark

The nu-calculus of Pitts and Stark is a typed lambda-calculus, extended with state in the form of dynamically-generated names. These names can be created locally, passed around, and compared with one another. Through the interaction between names and functions, the language can capture notions of scope, visibility and sharing. Originally motivated by the study of references in Standard ML, the nu-calculus has connections to other kinds of local declaration, and to the mobile processes of the pi-calculus.

This paper introduces a logic of equations and relations which allows one to reason about expressions of the nu-calculus: this uses a simple representation of the private and public scope of names, and allows straightforward proofs of contextual equivalence (also known as observational, or observable, equivalence). The logic is based on earlier operational techniques, providing the same power but in a much more accessible form. In particular it allows intuitive and direct proofs of all contextual equivalences between first-order functions with local names.

See the revised version BRICS-RS-97-39.
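
A hedged OCaml analogue of the kind of equivalence the logic is designed to prove (names are modelled here as ref cells compared by pointer equality; this is an editorial illustration, not the nu-calculus itself): a function that tests its argument against a freshly created private name can never answer true in any context, so it is contextually equivalent to the constant-false function.

    type name = unit ref

    (* Each call creates a fresh name, distinct from all others. *)
    let new_name () : name = ref ()

    (* The name n is private to the closure and never escapes, so no caller
       can ever supply it as the argument x. *)
    let private_test : name -> bool =
      let n = new_name () in
      fun x -> x == n

    (* Hence private_test is observationally equivalent to: *)
    let always_false : name -> bool = fun _ -> false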


10.29007/22x6 ◽  
2018 ◽  
Author(s):  
Sylvia Grewe ◽  
Sebastian Erdweg ◽  
Mira Mezini

Type systems for programming languages are meant to detect type errors in programs before runtime. To ensure that a type system meets this requirement, its soundness must be formally verified. We aim at automating soundness proofs of type systems to facilitate the development of sound type systems for domain-specific languages.

Soundness proofs for type systems typically require induction. However, many of the proofs of individual induction cases only require first-order reasoning. For the development of our workbench Veritas, we build on this observation by combining automated first-order theorem provers such as Vampire with automated proof strategies specific to type systems. In this paper, we describe how we encode type soundness proofs in first-order logic using TPTP. We show how we use Vampire to prove the soundness of type systems for the simply-typed lambda calculus and for parts of a typed SQL. We report on which parts of the proofs are handled well by Vampire, and which parts work less well with our current approach.
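
A hedged sketch of what such a first-order encoding can look like (the predicate names are illustrative, not the paper's actual TPTP encoding): soundness splits into progress and preservation, each of which is a first-order statement over encodings of terms, types, values, and the reduction relation, for instance

    ∀e T.    typable(e, T) → (isvalue(e) ∨ ∃e'. step(e, e'))        (progress)
    ∀e e' T. typable(e, T) ∧ step(e, e') → typable(e', T)           (preservation)

The induction over typing derivations is handled outside the first-order prover; each induction case then becomes a first-order proof obligation of roughly this shape, which is what Vampire is asked to discharge.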


2009 ◽  
Vol 19 (4) ◽  
pp. 639-686 ◽  
Author(s):  
RASMUS EJLERS MØGELBERG

This paper shows how PILL_Y (Polymorphic Intuitionistic/Linear Lambda calculus with a fixed point combinator Y) with parametric polymorphism can be used as a metalanguage for domain theory, as originally suggested by Plotkin more than a decade ago. Using Plotkin's encodings of recursive types in PILL_Y, we show how parametric models of PILL_Y give rise to models of FPC, which is a simply typed lambda calculus with recursive types and an operational call-by-value semantics, reflecting a classical result from domain theory. Essentially, this interpretation is an interpretation of intuitionistic logic into linear logic first discovered by Girard, which in this paper is extended to deal with recursive types. Of particular interest is a model based on ‘admissible’ pers over a reflexive domain, the theory of which can be seen as a domain theory for (impredicative) polymorphism. We show how this model gives rise to a parametric and computationally adequate model of PolyFPC, an extension of FPC with impredicative polymorphism. This is to the author's knowledge the first denotational model of a non-linear language with parametric polymorphism and recursive types.
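
For reference, the key clause of the Girard translation mentioned above maps intuitionistic implication to linear implication guarded by the exponential modality: a function type A → B is interpreted as !A ⊸ B, so that a translated function may use its argument any number of times even though the ambient calculus is linear. It is this interpretation that the paper extends to recursive types.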


1991 ◽  
Vol 1 (2) ◽  
pp. 125-154 ◽  
Author(s):  
Henk Barendregt

Programming languages often come with type systems. Some of these are simple, others are sophisticated. As a stylistic representation of types in programming languages, several versions of typed lambda calculus are studied. During the last 20 years many of these systems have appeared, so there is some need of classification. Working towards a taxonomy, Barendregt (1991) gives a fine-structure of the theory of constructions (Coquand and Huet 1988) in the form of a canonical cube of eight type systems ordered by inclusion. Berardi (1988) and Terlouw (1988) have independently generalized the method of constructing systems in the λ-cube. Moreover, Berardi (1988, 1990) showed that the generalized type systems are flexible enough to describe many logical systems. In that way the well-known propositions-as-types interpretation obtains a nice canonical form.
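
Concretely, the three axes of the cube correspond to the three ways in which the simply typed λ-calculus at its base corner can be strengthened: terms may depend on types (polymorphism), types may depend on types (type operators), and types may depend on terms (dependent types); the corner combining all three dependencies is the calculus of constructions.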


1992 ◽  
Vol 03 (03) ◽  
pp. 333-378 ◽  
Author(s):  
DAVID PYM

We present a unification algorithm for the λΠ-calculus, the lambda calculus with first-order dependent function types. Our algorithm is an extension of Huet’s algorithm for the simply-typed lambda calculus to first-order dependent function types. In the simply-typed lambda calculus one attempts to unify a pair of terms of the same type; in the λΠ-calculus types are dependent on terms so one must unify not only terms, but their types as well. Accordingly, we identify a suitable notion of similarity of types, and only attempt to unify a pair of terms of similar type: if the types are not similar then they are not unifiable. Since Huet’s algorithm does not enumerate all of the unifiers of a given pair of terms, the strategy of first unifying pairs of types — by identifying suitable pairs of subterms for unification — is not complete. Accordingly, we must unify terms and their types simultaneously, taking care to maintain all of the conditions that are necessary to ensure the well-formedness of the resulting calculated substitution. Our notion of substitution is an algebraic one: substitutions are morphisms of λΠ-contexts. However, in order to define our algorithm we must work with p-substitutions and p-contexts — substitutions and contexts, respectively, in which variables are replaced by terms of similar, not β(η)-equal, type.
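
A small hedged illustration of why terms and their types must be unified together in the dependent setting (the signature is hypothetical, not drawn from the paper): given cons : Πn:nat. A → vec n → vec (s n), unifying the terms cons n x xs and cons m y ys also forces the term-level indices n and m to be unified, because the types vec (s n) and vec (s m) of the two terms depend on those indices; if the indices cannot be unified, neither can the terms.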


10.29007/xtb8 ◽  
2018 ◽  
Author(s):  
Thierry Boy de La Tour

Two non-deterministic algorithms for generalizing a solution of a constraint expressed in second-order typed lambda-calculus are presented. One algorithm derives from the proof of completeness of the higher-order unification rules by D. C. Jensen and T. Pietrzykowski; the other is abstracted from an algorithm by N. Peltier and the author for generalizing proofs. A framework is developed in which such constrained generalization algorithms can be designed, allowing a uniform presentation for the two algorithms. Their relative strength at generalization is then analyzed through some properties of interest: their behaviour on valid and first-order constraints, and whether they may be iterated or composed.
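
For orientation, the familiar first-order counterpart of generalization replaces subterms by variables in a consistent way: the terms f(a, g(a)) and f(b, g(b)) have least general generalization f(X, g(X)), which instantiates to both originals. The algorithms above address a considerably harder, higher-order version of this task, for constraints over second-order typed λ-terms.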

