Twenty Five Years of Constructive Type Theory

Published by Oxford University Press
ISBN: 9780198501275, 9780191916540

Author(s): Petri Mäenpää

This work proposes a new method of deriving programs from their specifications in constructive type theory: the method of analysis-synthesis. It is new as a mathematical method only in the area of programming methodology, as it is modelled upon the most successful and widespread method in the history of exact sciences. The method of analysis-synthesis, also known as the method of analysis, was devised by Ancient Greek mathematicians for solving geometric construction problems with ruler and compass. Its most important subsequent elaboration is Descartes’s algebraic method of analysis, which pervades all exact sciences today. The present work expands this method further into one that aims at systematizing program derivation in a heuristically useful way, analogously to the way Descartes’s method systematized the solution of geometric and arithmetical problems. To illustrate the method, we derive the Boyer-Moore algorithm for finding an element that has a majority of occurrences in a given list. It turns out that solving programming problems need not be too different from solving mathematical problems in general. This point of view has been emphasized in particular by Martin-Löf (1982) and Dijkstra (1986). The idea of a logic of problem solving originates in Kolmogorov (1932). We aim to refine the analogy between programming and mathematical problem solving by investigating the mathematical method of analysis in the context of programming. The central idea of the analytic method, in modern terms, is to analyze the functional dependencies between the constituents of a geometric configuration. The aim is to determine how the sought constituents depend on the given ones. A Greek analysis starts by drawing a diagram with the sought constructions drawn on the given ones, in the relation required by the problem specification. Then the sought constituents of the configuration are determined in terms of the given ones. Analysis was the Greeks’ method of discovering solutions to problems. Their method of justification was synthesis, which cast analysis into standard deductive form. First it constructed the sought objects from the given ones, and then demonstrated that they relate as required to the given ones. In his Geometry, Descartes developed Greek geometric analysis-synthesis into the modern algebraic method of analysis.
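The chapter's worked example is the Boyer-Moore majority algorithm. For orientation only, here is a plain Haskell sketch of the algorithm being derived (our code and names, not the chapter's type-theoretic derivation): one pass maintains a candidate and a counter, and a second pass checks that the candidate really has a majority.

import Data.List (foldl')

-- Boyer-Moore majority vote: returns Just c if c occurs in more than
-- half of the positions of the list, and Nothing otherwise.
majority :: Eq a => [a] -> Maybe a
majority xs =
  case foldl' step Nothing xs of
    Nothing     -> Nothing
    Just (c, _) ->
      -- second pass: verify that the surviving candidate is a majority
      if 2 * length (filter (== c) xs) > length xs then Just c else Nothing
  where
    -- step maintains (candidate, count); a matching element increments,
    -- a differing one decrements, and a count of zero installs a new candidate.
    step Nothing y = Just (y, 1 :: Int)
    step (Just (c, k)) y
      | y == c    = Just (c, k + 1)
      | k == 1    = Nothing
      | otherwise = Just (c, k - 1)

-- For example, majority "abacada" == Just 'a', while majority "abc" == Nothing.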


Author(s): Nicolaas Govert de Bruijn

After millennia of mathematics we have reached a level of understanding that can be represented physically. Humankind has managed to disentangle the intricate mixture of language, metalanguage and interpretation, isolating a body of formal, abstract mathematics that can be completely verified by machines. Systems for computer-aided verification have philosophical aspects. The design and usage of such systems are influenced by the way we think about mathematics, but it also works the other way. A number of aspects of this mutual influence will be discussed in this paper. In particular, attention will be given to philosophical aspects of type-theoretical systems. These definitely call for new attitudes: throughout the twentieth century most mathematicians had been trained to think in terms of untyped sets. The word “philosophy” will be used lightheartedly. It does not refer to serious professional philosophy, but just to meditation about the way one does one’s job. What used to be called philosophy of mathematics in the past was for a large part subject-oriented. Most people characterized mathematics by its subject matter, classifying it as the science of space and number. From the verification system’s point of view, however, subject matter is irrelevant. Verification is involved with the rules of mathematical reasoning, not with the subject. The picture may be a bit confused, however, by the fact that so many people consider set theory, in particular untyped set theory, as part of the language and foundation of mathematics, rather than as a particular subject treated by mathematics. The views expressed in this paper are quite personal, and can mainly be carried back to the author’s design of the Automath system in the late 1960s, where the way to look upon the meaning (philosophy) of mathematics is inspired by the usage of the verification system and vice versa. See de Bruijn 1994b for various philosophical items concerning Automath, and Nederpelt et al. 1994, de Bruijn 1980, de Bruijn 1991a for general information about the Automath project. Some of the points of view given in this paper are matters of taste, but most of them were imposed by the task of letting a machine follow what we say, a machine without any knowledge of our mathematical culture and without any knowledge of physical laws.


Author(s): Silvio Valentini

The aim of this paper is to give a simple but instructive example of the forget-restore principle, conceived by Giovanni Sambin as a discipline for a constructive development of mathematics and which first appeared in print in the introduction of Sambin and Valentini 1998. The best way to explain such a philosophical position is to quote from that paper: “To build up an abstract concept from a raw flow of data, one must disregard inessential details ... this is obtained by forgetting some information. To forget information is the same as to destroy something, in particular if there is no possibility of restoring that information ... our principle is that an abstraction is constructive ... when information ... is forgotten in such a way that it can be restored at will in any moment.” The example we want to show here refers to Martin-Löf’s intuitionistic type theory (just type theory from now on). We assume knowledge of the main peculiarities of type theory, as formulated in Martin-Löf 1984 or Nordström et al. 1990. Type theory is a logical calculus which adopts those notions and rules which keep total control of the amount of information contained in the different forms of judgement. However, type theory offers a way of “forgetting” information: that is, supposing A set, the form of judgement A true. The meaning of A true is that there exists an element a such that a ∈ A but it does not matter which particular element a is (see also the notion of proof irrelevance in de Bruijn 1980). Thus to pass from the judgement a ∈ A to the judgement A true is a clear example of the forgetting process. We will show that it is a constructive way to forget since, provided that there is a proof of the judgement A true, an element a such that a ∈ A can be reconstructed.
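As a loose programming analogy (ours, not the chapter's): an abstract type whose constructor is hidden lets client code "forget" which witness was used, while the witness is still present in the underlying term, so it can be restored at will. A minimal Haskell sketch, with hypothetical names:

-- Clients of this module see only the abstract fact "Truth a" (compare: A true),
-- not which witness was used; yet the witness is still stored in the term,
-- so it can be recovered, mirroring the forget-restore principle informally.
module ForgetRestore (Truth, forget, restore) where

newtype Truth a = Truth a   -- constructor not exported

forget :: a -> Truth a      -- pass from a ∈ A to A true, hiding which a it was
forget = Truth

restore :: Truth a -> a     -- the forgotten witness can be reconstructed
restore (Truth a) = a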


Author(s): Per Martin-Löf

The theory of types with which we shall be concerned is intended to be a full-scale system for formalizing intuitionistic mathematics as developed, for example, in the book by Bishop 1967. The language of the theory is richer than the language of first-order predicate logic. This makes it possible to strengthen the axioms for existence and disjunction. In the case of existence, the possibility of strengthening the usual elimination rule seems first to have been indicated by Howard 1969, whose proposed axioms are special cases of the existential elimination rule of the present theory. Furthermore, there is a reflection principle which links the generation of objects and types and plays somewhat the same role for the present theory as does the replacement axiom for Zermelo-Fraenkel set theory. An earlier, not yet conclusive, attempt at formulating a theory of this kind was made by Scott 1970. Also related, although less closely, are the type- and logic-free theories of constructions of Kreisel 1962 and 1965 and Goodman 1970. In its first version, the present theory was based on the strongly impredicative axiom that there is a type of all types whatsoever, which is at the same time a type and an object of that type. This axiom had to be abandoned, however, after it was shown to lead to a contradiction by Jean-Yves Girard. I am very grateful to him for showing me his paradox. The change that it necessitated is so drastic that my theory no longer contains intuitionistic simple type theory as it originally did. Instead, its proof-theoretic strength should be close to that of predicative analysis. Mathematical objects and their types. We shall think of mathematical objects or constructions. Every mathematical object is of a certain kind or type. Better, a mathematical object is always given together with its type, that is, it is not just an object, it is an object of a certain type.
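In later presentations of the theory, the strengthened existential elimination mentioned here is the Σ-elimination rule; as a point of reference (our LaTeX rendering, not the notation of this chapter), it reads:

\[
\frac{c \in (\Sigma x \in A)\,B(x)
      \qquad
      d(x,y) \in C((x,y)) \;\;[x \in A,\ y \in B(x)]}
     {\mathsf{E}(c,(x,y)\,d(x,y)) \in C(c)}
\]

Both projections become definable, p(c) = E(c,(x,y)x) and q(c) = E(c,(x,y)y), the second with the strong type q(c) ∈ B(p(c)), whereas the usual existential elimination rule only allows a conclusion C that does not mention the witness.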


Author(s): Stefano Baratella, Stefano Berardi

The aim of this paper is to provide a way of extracting the constructive content of a certain family of classical proofs directly from the proofs themselves. The paper itself is written in a purely constructive style. Our work is inspired by the game interpretations of classical logic due to Novikov (1943) and Coquand (1995). These interpretations date back to Gentzen (1969) and Bernays (1970) and were recently studied by Coquand (1995), who made use of technical tools developed by Novikov (1943). We will introduce an interpretation which is a short and compact description of the meaning assigned to classical formulas by Coquand’s interpretation. Contrary to Coquand, we will completely avoid any game terminology, by making use of the intuitionistic notion of continuous computation. A posteriori, our interpretation turns out to be related to Kreisel’s no-counterexample interpretation (Kreisel 1957) but, compared with it, ours provides simpler constructive proofs. The reader is referred to Baratella and Berardi (1997) for a number of examples of constructive proofs provided by our interpretation that can be used for a comparison. Indeed, our interpretation is a fragment of Coquand’s that can be easily expanded to a variant of his. However, we claim that our interpretation suffices as long as we are only interested in the constructive meaning of classical formulas (whilst we need Coquand’s if we are interested in the computations lying behind the constructive meaning). We will support this claim by proving, as the main result, that our interpretation is intuitionistically complete, in the same way as Coquand’s (Herbelin 1995). That is, we will intuitionistically prove that a formula is derivable in infinitary classical logic if and only if its interpretation holds. Since infinitary classical logic is classically complete, loosely speaking we can restate our result as follows: the classical truth of a classical formula is intuitionistically equivalent to the intuitionistic truth of the constructive interpretation of the formula. We also recall that Gödel’s Dialectica interpretation is not intuitionistically complete (see section 7). In this regard, see also Berardi (1997). In addition, we point out that, contrary to the game interpretations, our interpretation is not a sort of reformulation of what is going on in the sequent calculus.


Author(s): Anton Setzer

The proof-theoretic strength α of a theory is the supremum of all ordinals up to which we can prove transfinite induction in that theory. Whereas for classical theories the main problem is to show that α is an upper bound for the strength—this usually means to reduce the theory to a weak theory like primitive recursive arithmetic or Heyting arithmetic extended by transfinite induction up to α, which can be considered to be more constructive than the classical theory itself—for constructive theories this is in most cases not difficult, since we can easily build a term model in a classical theory of known strength. For constructive theories in general the main problem is to show that α is a lower bound: that despite the restricted principles available one has a proof-theoretically strong theory. In this article we will concentrate on the direct method for showing that α is a lower bound, namely well-ordering proofs: to carry out in the theory a sequence of proofs of the well-foundedness of linear orderings of order type αₙ, such that sup{αₙ : n ∈ ω} = α. Such proofs can be considered to be the logically most complex proofs which one can carry out in the theory; in most cases, in addition to transfinite induction up to αₙ for each n, only primitive recursive arithmetic is needed in order to analyze the theory proof-theoretically and in order to prove the same Π⁰₂-sentences. Griffor and Rathjen (1994) have used the more indirect method of interpreting theories of known strength in type theory for obtaining lower bounds for its strength. Apart from the fact that in the case of one universe and W-type Griffor and Rathjen’s approach did not yield sharp bounds, we believe that the direct method has the advantage of giving a deeper insight into the theory, since one examines the principles of the theory directly without referring to the analysis of another theory, and that the programs obtained by it are of independent interest. In Setzer (1995) and Setzer (1996) we have carried out well-ordering proofs for Martin-Löf’s type theory with W-type and one universe and for the Mahlo universe.
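One standard way to make the opening definition precise (our formalization, not a quotation from the chapter): writing TI(≺) for the scheme of transfinite induction along an ordering ≺,

\[
|T| \;=\; \sup\{\, \alpha \mid \prec \text{ is a primitive recursive well-ordering of order type } \alpha \text{ and } T \vdash \mathrm{TI}(\prec) \,\}.
\]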


Author(s): Giovanni Sambin, Silvio Valentini

Beginning in 1970, Per Martin-Löf has developed an intuitionistic type theory (henceforth type theory for short) as a constructive alternative to the usual foundation of mathematics based on classical set theory. We assume the reader is aware at least of the main peculiarities of type theory, as formulated in Martin-Löf 1984 or Nordström et al. 1990; here we recall some of them to be able to introduce our point of view. The form of type theory is that of a logical calculus, where inference rules to derive judgements are at the same time set-theoretic constructions, because of the “propositions-as-sets” interpretation. The spirit of type theory—expressing our interpretation in a single sentence—is to adopt those notions and rules which keep total control of the amount of information contained in the different forms of judgement. We now briefly justify this claim. First of all, the judgement asserting the truth of a proposition A, which from an intuitionistic point of view means the existence of a verification of A, in type theory is replaced by the judgement a ∈ A which explicitly exhibits a verification a of A. In fact, it would be unwise, for a constructivist, to throw away the specific verification of A which must be known to be able to assert the existence of a verification! The judgement that A is a set, which from an intuitionistic point of view means that there exists an inductive presentation of A, is treated in type theory in a quite similar way (even if in this case no notation analogous to a ∈ A is used) since the judgement A set in type theory becomes explicit knowledge of the specific inductive presentation of A. In fact, the rules for primitive types and for type constructors are so devised that whenever a judgement A set is proved, it means that one also has complete information on the rules which describe how canonical elements of A are formed. Such a property, which might look like a peculiarity of type theory, is as a matter of fact necessary in order to give a coherent constructive treatment of quantifiers.
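As an aside (our illustration, not the chapter's), the propositions-as-sets reading can be approximated in an ordinary functional language: a proposition is read as the type of its verifications, and a judgement a ∈ A is an explicit program inhabiting that type. A minimal Haskell sketch, with names of our choosing:

-- Haskell is only an approximation of type theory, but it conveys the reading:
type And a b = (a, b)     -- A ∧ B: a verification is a pair of verifications
type Imp a b = a -> b     -- A ⊃ B: a verification is a function

-- An explicit verification of (A ∧ B) ⊃ (B ∧ A); the judgement
-- "swapProof ∈ Imp (And a b) (And b a)" exhibits the verification itself.
swapProof :: Imp (And a b) (And b a)
swapProof (x, y) = (y, x)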


Author(s): Karim Nour

λ-calculus as such is not a computational model. A reduction strategy is needed. In this paper, we consider λ-calculus with the left reduction. This strategy has many advantages: it always terminates when applied to a normalizable λ-term and it seems more economical since we compute a λ-term only when we need it. But the major drawback of this strategy is that a function must compute its argument every time it uses it. This is the reason why this strategy is not really used. In 1990 Krivine (1990b) introduced the notion of storage operators in order to avoid this problem and to simulate call-by-value when necessary. The AF2 type system is a way of interpreting the proof rules for second-order intuitionistic logic plus equational reasoning as construction rules for terms. Krivine (1990b) has shown that, by using the Gödel translation from classical to intuitionistic logic (denoted by g), we can find in system AF2 a very simple type for storage operators. Historically the type was discovered before the notion of storage operator itself. Krivine (1990a) proved that as far as totality of functions is concerned second-order classical logic is conservative over second-order intuitionistic logic. To prove this, Krivine introduced the following notions: A[x] is an input (resp. output) data type if one can prove intuitionistically A[x] → Aᵍ[x] (resp. Aᵍ[x] → ¬¬A[x]). Then, if A[x] is an input data type and B[x] is an output data type and one can prove A[x] → B[x] classically, one can prove it intuitionistically. The notion of storage operator was discovered by investigating the property of all λ-terms of type Nᵍ[x] → ¬¬N[x] where N[x] is the type of integers. Parigot (1992) and Krivine (1994) have extended the AF2 system to classical logic. The method of Krivine is very simple: it consists of adding a new constant, denoted by C, with the declaration C : ∀X{¬¬X → X} which axiomatizes classical logic over intuitionistic logic. For the constant C, he adds a new reduction rule which is a particular case of a rule given by Felleisen (1987) for his control operator.
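As a loose illustration of the recomputation problem and of what a storage operator achieves (our Haskell analogy with explicit thunks, not Krivine's λ-calculus setting):

import Debug.Trace (trace)

-- Explicit thunks model call-by-name: an argument is re-evaluated at every use.
type Thunk a = () -> a

expensive :: Thunk Int
expensive = \() -> trace "computing..." (2 + 3)

-- Uses its argument twice, so under call-by-name it is computed twice.
double :: Thunk Int -> Int
double t = t () + t ()

-- A storage-operator-like combinator: evaluate the argument once,
-- then hand the continuation a constant thunk (simulated call-by-value).
store :: Thunk a -> (Thunk a -> b) -> b
store t k = let v = t () in v `seq` k (\_ -> v)

main :: IO ()
main = do
  print (double expensive)        -- "computing..." typically traced twice
  print (store expensive double)  -- "computing..." traced once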


Author(s): Catarina Coquand

The notion of computability predicate developed by Tait gives a powerful method for proving normalization for various λ-calculi. There are actually two versions of this method: a typed and an untyped approach. In the former approach we define a computability predicate over well-typed terms. This technique was developed by Tait (1967) in order to prove normalization for Gödel’s system T. His method was extended by Girard (1971) for his system F and by Martin-Löf (1971) for his type theory. The second approach is similar to Kleene’s realizability interpretation (Kleene 1945), but in this approach formulas are realized by (not necessarily well-typed) λ-terms rather than Gödel numbers. This technique was developed by Tait (1975), where he uses this method to obtain a simplified proof of normalization for Girard’s system F. The method was later rediscovered independently by several others, for instance Mitchell (1986). There are two ways of defining typed λ-calculus: we have either equality as conversion (on raw terms) or equality as judgements. It is more difficult to show in the theory with equality as judgements that every well-typed term has a normal form of the same type. We can find different approaches to how to solve this problem in Altenkirch (1993), Coquand (1991), Goguen (1994) and Martin-Löf (1975). There are several papers addressing normalization for calculi with equality as untyped conversion; two relevant papers showing normalization are Altenkirch’s (1994) proof for system F using the untyped method and Werner’s (1994) proof for the calculus of constructions with inductive types using the other method. Another difference is between predicative and impredicative formulations of λ-calculi, where it is more intricate to prove normalization for the impredicative ones. The aim of this paper is to give a simple normalization argument for a theory with “real” dependent types, i.e. a theory in which we can define types by recursion on the natural numbers (like Nₙ). For this we choose to study the simplest theory that has such types, namely the N, Π, U-fragment of Martin-Löf’s polymorphic type theory (Martin-Löf 1972), which contains the natural numbers, dependent functions and one universe. This theory has equality as conversion and is predicative.
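To indicate what "types defined by recursion on the natural numbers (like Nₙ)" means, here is a hedged sketch in Lean 4 syntax (the name Fin' is ours; the chapter works in the N, Π, U fragment itself, where the recursion goes through the universe U):

-- A family of finite types defined by recursion on the natural numbers,
-- in the spirit of Nₙ: Fin' n has exactly n elements.
def Fin' : Nat → Type
  | 0     => Empty
  | n + 1 => Option (Fin' n)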


Author(s): Jan Cederquist, Thierry Coquand

We present the basic concepts and definitions needed in a pointfree approach to functional analysis via formal topology. Our main results are the constructive proofs of localic formulations of the Alaoglu and Helly-Hahn-Banach theorems. Earlier pointfree formulations of the Hahn-Banach theorem, in a topos-theoretic setting, were presented by Mulvey and Pelletier (1987, 1991) and by Vermeulen (1986). A constructive proof based on points was given by Bishop (1967). In the formulation of his proof, the norm of the linear functional is preserved to an arbitrary degree by the extension, and a counterexample shows that the norm, in general, is not preserved exactly. As usual in pointfree topology, our guideline is to define the objects under analysis as formal points of a suitable formal space. After this has been accomplished for the reals, we consider the formal topology ℒ(A) obtained as follows. To the formal space of mappings from a normed vector space A to the reals, we add the linearity and norm conditions in the form of covering axioms. The linear functionals of norm ≤1 from A to the reals then correspond to the formal points of this formal topology. Given a subspace M of A, the classical Helly-Hahn-Banach theorem states that the restriction mapping from the linear functionals on A of norm ≤1 to those on M is surjective. In terms of covers, conceived as deductive systems, it becomes a conservativity statement (cf. Mulvey and Pelletier 1991): whenever a is an element and U is a subset of the base of the formal space ℒ(M) and we have a derivation in ℒ(A) of a ⊲ U, then we can find a derivation in ℒ(M) with the same conclusion. With this formulation it is quite natural to look for a proof by induction on covers. Moreover, as already pointed out by Mulvey and Pelletier (1991), it is possible to simplify the problem greatly, since it is enough to prove it for coherent spaces of which ℒ(A) and ℒ(M) are retracts. Then, in a derivation of a cover, we can assume that only finite subsets occur on the right-hand side of the cover relation.
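In symbols (our paraphrase of the statement just made), the conservativity formulation of the Helly-Hahn-Banach theorem reads: for every element a and subset U of the base of ℒ(M),

\[
a \vartriangleleft_{\mathcal{L}(A)} U \;\Longrightarrow\; a \vartriangleleft_{\mathcal{L}(M)} U .
\]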

