A realizability interpretation of Martin-Löf’s type theory

Author(s):  
Catarina Coquand

The notion of computability predicate developed by Tait gives a powerful method for proving normalization for various λ-calculi. There are actually two versions of this method: a typed and an untyped approach. In the former we define a computability predicate over well-typed terms. This technique was developed by Tait (1967) in order to prove normalization for Gödel’s system T. His method was extended by Girard (1971) to his system F and by Martin-Löf (1971) to his type theory. The second approach is similar to Kleene’s realizability interpretation (Kleene 1945), but here formulas are realized by (not necessarily well-typed) λ-terms rather than by Gödel numbers. This technique was developed by Tait (1975), who used it to obtain a simplified proof of normalization for Girard’s system F. The method was later rediscovered independently by several others, for instance Mitchell (1986).

There are two ways of defining typed λ-calculus: with equality as conversion (on raw terms) or with equality as judgments. In the theory with equality as judgments it is more difficult to show that every well-typed term has a normal form of the same type. Different approaches to this problem can be found in Altenkirch (1993), Coquand (1991), Goguen (1994) and Martin-Löf (1975). Several papers address normalization for calculi with equality as untyped conversion; two relevant ones are Altenkirch’s (1994) proof for system F using the untyped method and Werner’s (1994) proof for the calculus of constructions with inductive types using the typed method. A further distinction is between predicative and impredicative formulations of λ-calculi, where it is more intricate to prove normalization for the impredicative ones.

The aim of this paper is to give a simple normalization argument for a theory with “real” dependent types, i.e. a theory in which we can define types by recursion on the natural numbers (like Nⁿ).
For this we choose to study the simplest theory that has such types, namely the N, Π, U-fragment of Martin-Löf’s polymorphic type theory (Martin-Löf 1972) which contains the natural numbers, dependent functions and one universe. This theory has equality as conversion and is predicative.
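Both styles of computability argument reason about reduction on raw terms. As a concrete reference point, here is a minimal normal-order normalizer for untyped λ-terms (an illustrative sketch of ours, not the paper's construction; the term representation, `church`, and `ADD` are our own choices):

```python
# Untyped lambda-terms as tuples:
#   ("var", x) | ("lam", x, body) | ("app", fun, arg)
# "Equality as conversion on raw terms": two terms are convertible
# iff they reduce to the same beta-normal form.

def subst(t, x, s):
    """Substitute s for the free variable x in t.

    Naive (not capture-avoiding) substitution -- adequate here because we
    only substitute closed terms or terms whose free variables are never
    rebound, as in the Church-numeral example below."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, s))
    return ("app", subst(t[1], x, s), subst(t[2], x, s))

def whnf(t):
    """Weak-head normal form: contract head redexes only."""
    while t[0] == "app":
        f = whnf(t[1])
        if f[0] == "lam":
            t = subst(f[2], f[1], t[2])   # beta step at the head
        else:
            return ("app", f, t[2])
    return t

def normalize(t):
    """Normal-order reduction to beta-normal form
    (terminates exactly when a normal form exists)."""
    t = whnf(t)
    if t[0] == "lam":
        return ("lam", t[1], normalize(t[2]))
    if t[0] == "app":
        return ("app", normalize(t[1]), normalize(t[2]))
    return t

def church(n):
    """The Church numeral n = lam f. lam x. f (f ... (f x))."""
    body = ("var", "x")
    for _ in range(n):
        body = ("app", ("var", "f"), body)
    return ("lam", "f", ("lam", "x", body))

def church_to_int(t):
    """Read a beta-normal Church numeral back as a Python int."""
    body = t[2][2]          # peel lam f. lam x.
    n = 0
    while body[0] == "app":
        body = body[2]
        n += 1
    return n

# add = lam m. lam n. lam f. lam x. m f (n f x)
ADD = ("lam", "m", ("lam", "n", ("lam", "f", ("lam", "x",
      ("app", ("app", ("var", "m"), ("var", "f")),
              ("app", ("app", ("var", "n"), ("var", "f")), ("var", "x")))))))
```

A computability-predicate proof shows that `normalize` terminates on every well-typed term; the realizability variant defines the predicate over raw terms like these, whether or not they are typable.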

2014 ◽  
Vol 24 (2-3) ◽  
pp. 316-383 ◽  
Author(s):  
PIERRE-ÉVARISTE DAGAND ◽  
CONOR McBRIDE

Abstract Programming with dependent types is a blessing and a curse. It is a blessing to be able to bake invariants into the definition of datatypes: We can finally write correct-by-construction software. However, this extreme accuracy is also a curse: A datatype is the combination of a structuring medium together with a special purpose logic. These domain-specific logics hamper any attempt to reuse code across similarly structured data. In this paper, we capitalise on the structural invariants of datatypes. To do so, we first adapt the notion of ornament to our universe of inductive families. We then show how code reuse can be achieved by ornamenting functions. Using these functional ornaments, we capture the relationship between functions such as the addition of natural numbers and the concatenation of lists. With this knowledge, we demonstrate how the implementation of the former informs the implementation of the latter: The users can ask the definition of addition to be lifted to lists and they will only be asked the details necessary to carry on adding lists rather than numbers. Our presentation is formalised in a type theory with a universe of datatypes and all our constructions have been implemented as generic programs, requiring no extension to the type theory.
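The addition/concatenation relationship can be illustrated outside the paper's type theory. In this sketch (ours, in plain Python rather than the authors' universe construction), a list is a natural number whose successor nodes are "ornamented" with an element, so one recursion scheme yields both operations:

```python
# Peano naturals:  Z = None,  S(n) = ("S", n)
# Lists:           Nil = None, Cons(a, xs) = (a, xs)
# A list is a natural number carrying an element at every successor node.

def fold(step, base, t):
    """Shared recursion scheme: collapse the spine of t onto base."""
    if t is None:
        return base
    head, tail = t
    return step(head, fold(step, base, tail))

def add(m, n):
    # Addition: successor nodes carry no data; rebuild m's S's onto n.
    return fold(lambda _head, rest: ("S", rest), n, m)

def append(xs, ys):
    # Concatenation: the same recursion, but each node carries an element.
    return fold(lambda head, rest: (head, rest), ys, xs)
```

The two definitions differ only in what each spine node carries, which is the observation that functional ornaments make precise and mechanise.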


Author(s):  
ANDREAS ROSSBERG

Abstract ML is two languages in one: there is the core, with types and expressions, and there are modules, with signatures, structures, and functors. Modules form a separate, higher-order functional language on top of the core. There are both practical and technical reasons for this stratification; yet, it creates substantial duplication in syntax and semantics, and it imposes seemingly unnecessary limits on expressiveness because it makes modules second-class citizens of the language. For example, selecting one among several possible modules implementing a given interface cannot be made a dynamic decision. Language extensions allowing modules to be packaged up as first-class values have been proposed and implemented in different variations. However, they remedy expressiveness only to some extent and tend to be even more syntactically heavyweight than using second-class modules alone. We propose a redesign of ML in which modules are truly first-class values, and core and module layers are unified into one language. In this “1ML”, functions, functors, and even type constructors are one and the same construct; likewise, no distinction is needed between structures, records, or tuples. Or viewed the other way round, everything is just (“a mode of use of”) modules. Yet, 1ML does not require dependent types: its type structure is expressible in terms of plain System Fω, with a minor variation of our F-ing modules approach. We introduce both an explicitly typed version of 1ML and an extension with Damas–Milner-style implicit quantification. Type inference for this language is not complete, but, we argue, not substantially worse than for Standard ML.
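The "modules as first-class values" idea can be transposed into any language with records and functions. This Python sketch (hypothetical names of ours; it does not model 1ML's syntax or type system) shows a module as a record of operations, a functor as an ordinary function on such records, and the dynamic implementation choice that second-class modules disallow:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Ord:
    """The signature ORD = { leq : t -> t -> bool }, as a record."""
    leq: Callable[[object, object], bool]

# Two "structures" implementing ORD for integers.
IntAsc = Ord(leq=lambda a, b: a <= b)
IntDesc = Ord(leq=lambda a, b: b <= a)

def isort(M: Ord, xs):
    """A 'functor': a plain function taking the module M as an argument.
    (Insertion sort, kept tiny for illustration.)"""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and M.leq(out[i], x):
            i += 1
        out.insert(i, x)
    return out

def pick_order(descending: bool) -> Ord:
    """Selecting a module at run time -- an ordinary value-level decision."""
    return IntDesc if descending else IntAsc
```

In ML this program requires either a module-to-value packaging extension or a functor applied at compile time; in 1ML the record and the module are the same construct.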


2009 ◽  
Vol 19 (5) ◽  
pp. 545-579 ◽  
Author(s):  
SHIN-CHENG MU ◽  
HSIANG-SHANG KO ◽  
PATRIK JANSSON

Abstract Relational program derivation is the technique of stepwise refining a relational specification to a program by algebraic rules. The program thus obtained is correct by construction. Meanwhile, dependent type theory is rich enough to express various correctness properties to be verified by the type checker. We have developed a library, AoPA (Algebra of Programming in Agda), to encode relational derivations in the dependently typed programming language Agda. A program is coupled with an algebraic derivation whose correctness is guaranteed by the type system. Two non-trivial examples are presented: an optimisation problem and a derivation of quicksort in which well-founded recursion is used to model terminating hylomorphisms in a language with inductive types.


Author(s):  
AARON STUMP ◽  
PENG FU

Abstract This paper proposes a new typed lambda-encoding for inductive types which, for Peano numerals, has the expected time complexities for basic operations like addition and multiplication, has a constant-time predecessor function, and requires only quadratic space to encode a numeral. This improves on the exponential space required by the Parigot encoding. Like the Parigot encoding, the new encoding is typable in System F-omega plus positive-recursive type definitions, a total type theory. The new encoding is compared with previous ones through a significant case study: mergesort using Braun trees. The practical runtime efficiencies of the new, Church, and Parigot encodings are compared by two translations, one to Racket and one to Haskell, on a small suite of benchmarks.
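The predecessor problem that motivates such encodings is easy to exhibit. In this sketch (ours, not the paper's encoding), Church numerals are ordinary Python functions; `pred` can only be computed by refolding the entire numeral through pairs, so it costs time linear in n, whereas the paper's encoding makes predecessor constant-time:

```python
def church(n):
    """church(n) = \\f. \\x. f^n x, as a Python closure."""
    def numeral(f):
        def iterate(x):
            for _ in range(n):
                x = f(x)
            return x
        return iterate
    return numeral

def to_int(c):
    """Observe a Church numeral by iterating integer successor from 0."""
    return c(lambda k: k + 1)(0)

def succ(c):
    """Successor: one extra application of f."""
    return lambda f: lambda x: f(c(f)(x))

def pred(c):
    """Kleene's pair trick: iterate (a, b) -> (b, succ b) n times from
    (0, 0); the first component lags one step behind, ending at n - 1
    (and at 0 for n = 0). Note the whole numeral is traversed: O(n)."""
    zero = church(0)
    step = lambda p: (p[1], succ(p[1]))
    return c(step)((zero, zero))[0]
```

The Parigot encoding buys a constant-time predecessor at exponential space cost per numeral; the encoding proposed here keeps predecessor constant-time in quadratic space.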


1995 ◽  
Vol 60 (1) ◽  
pp. 178-190 ◽  
Author(s):  
M. Randall Holmes

Abstract An ω-model (a model in which all natural numbers are standard) of the predicative fragment of Quine's set theory “New Foundations” (NF) is constructed. Marcel Crabbé has shown that a theory NFI extending predicative NF is consistent, and the model constructed is actually a model of NFI as well. The construction follows the construction of ω-models of NFU (NF with urelements) by R. B. Jensen, and, like the construction of Jensen for NFU, it can be used to construct α-models for any ordinal α. The construction proceeds via a model of a type theory of a peculiar kind; we first discuss such “tangled type theories” in general, exhibiting a “tangled type theory” (and also an extension of Zermelo set theory with Δ0 comprehension) which is equiconsistent with NF (for which the consistency problem seems no easier than the corresponding problem for NF (still open)), and pointing out that “tangled type theory with urelements” has a quite natural interpretation, which seems to provide an explanation for the more natural behaviour of NFU relative to the other set theories of this kind, and can be seen anachronistically as underlying Jensen's consistency proof for NFU.


2021 ◽  
Vol 31 ◽  
Author(s):  
ANDREA VEZZOSI ◽  
ANDERS MÖRTBERG ◽  
ANDREAS ABEL

Abstract Proof assistants based on dependent type theory provide expressive languages for both programming and proving within the same system. However, all of the major implementations lack powerful extensionality principles for reasoning about equality, such as function and propositional extensionality. These principles are typically added axiomatically which disrupts the constructive properties of these systems. Cubical type theory provides a solution by giving computational meaning to Homotopy Type Theory and Univalent Foundations, in particular to the univalence axiom and higher inductive types (HITs). This paper describes an extension of the dependently typed functional programming language Agda with cubical primitives, making it into a full-blown proof assistant with native support for univalence and a general schema of HITs. These new primitives allow the direct definition of function and propositional extensionality as well as quotient types, all with computational content. Additionally, thanks also to copatterns, bisimilarity is equivalent to equality for coinductive types. The adoption of cubical type theory extends Agda with support for a wide range of extensionality principles, without sacrificing type checking and constructivity.


2007 ◽  
Vol 16 (07n08) ◽  
pp. 1982-1987
Author(s):  
N. N. AJITANAND

Recent experimental investigations have focussed on the abnormal spatial distribution of away-side jet fragments as a signal of significant medium-induced effects. A variety of theoretical models, including recent string-theory-based efforts, have supported the notion of Mach-cone-like effects in the low-viscosity QGP fluid. However, the presence of significant flow fields may deflect the fragmentation direction, producing a jet topology that differs markedly from that of a Mach cone. Three-particle correlation functions constitute a powerful method for differentiating which of the two mechanisms predominates. In this work the use of such an approach is demonstrated via simulations, and the results of its application to RHIC data are presented.


2019 ◽  
Vol 29 (4) ◽  
pp. 419-468
Author(s):  
Henning Basold ◽  
Helle Hvid Hansen

Abstract We define notions of well-definedness and observational equivalence for programs of mixed inductive and coinductive types. These notions are defined by means of tests: formulas which combine structural congruence for inductive types and modal logic for coinductive types. Tests also correspond to certain evaluation contexts. We define a program to be well-defined if it is strongly normalizing under all tests, and two programs are observationally equivalent if they satisfy the same tests. We show that observational equivalence is sufficiently coarse to ensure that least and greatest fixed point types are initial algebras and final coalgebras, respectively. This yields inductive and coinductive proof principles for reasoning about program behaviour. On the other hand, we argue that observational equivalence does not identify too many terms, by showing that tests induce a topology that, on streams, coincides with the usual topology induced by the prefix metric. As one would expect, observational equivalence is, in general, undecidable, but in order to develop some practically useful heuristics we provide coinductive techniques for establishing observational normalization and observational equivalence, along with up-to techniques for enhancing these methods.
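The prefix metric mentioned above admits a small executable reading (ours, not the paper's formalism): a finite test on a stream inspects a length-k prefix, and the distance between two streams is 2⁻ᵏ for the first depth k at which such a test tells them apart:

```python
from itertools import count, islice

def prefix(stream, n):
    """Run the finite test 'observe the first n elements' on a stream,
    given as a thunk returning a fresh iterator."""
    return list(islice(stream(), n))

def prefix_distance(s, t, depth=64):
    """Prefix metric, approximated to `depth`: 2^(-k) for the first index
    k where s and t differ, and 0.0 if no test up to `depth` distinguishes
    them (true equivalence would require agreement at every depth)."""
    a, b = prefix(s, depth), prefix(t, depth)
    for k in range(depth):
        if a[k] != b[k]:
            return 2.0 ** -k
    return 0.0

nats = lambda: count(0)                     # 0, 1, 2, 3, ...
evens = lambda: (2 * i for i in count(0))   # 0, 2, 4, 6, ...
```

Observational equivalence identifies exactly the streams at distance 0; the undecidability result corresponds to the fact that no finite bound on `depth` suffices in general.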


2021 ◽  
Vol 21 ◽  
pp. 273-294
Author(s):  
Gabriele Baratelli

The paper is divided into two parts. In the first, I set forth a hypothesis to explain the failure of the project Husserl presented in the Philosophie der Arithmetik, which was grounded in the principle that the entire mathematical science rests on the concept of cardinal number. It is argued that Husserl’s analysis of the nature of the symbols used in the decadal system forces the rejection of this principle. In the second part, I take into account Husserl’s explanation of why the system, albeit independent of the natural numbers, is nonetheless correct. It is shown that its justification involves, on the one hand, a new conception of symbols and symbolic thinking, and on the other, the recognition of the question of “the formal” and formalization as pivotal to understanding “the mathematical” overall.


1982 ◽  
Vol 25 (4) ◽  
pp. 487-490
Author(s):  
Gerd Rodé

AbstractThis paper gives a new characterization of the dimension of a normal Hausdorff space, which joins together the Eilenberg-Otto characterization and the characterization by finite coverings. The link is furnished by the notion of a system of faces of a certain type (N1,..., NK), where N1,..., NK, K are natural numbers. It is shown that a space X contains a system of faces of type (N1,..., NK) if and only if dim(X) ≥ N1 + … + NK. The two limit cases of the theorem, namely Nk = 1 for 1 ≤ k ≤ K on the one hand, and K = 1 on the other hand, give the two known results mentioned above.

