Proof theory

Author(s):  
Wilfried Sieg

Proof theory is a branch of mathematical logic founded by David Hilbert around 1920 to pursue Hilbert’s programme. The problems addressed by the programme had already been formulated, in some sense, at the turn of the century, for example, in Hilbert’s famous address to the Second International Congress of Mathematicians in Paris. They were closely connected to the set-theoretic foundations for analysis investigated by Cantor and Dedekind – in particular, to difficulties with the unrestricted notion of system or set; they were also related to the philosophical conflict with Kronecker on the very nature of mathematics. At that time, the central issue for Hilbert was the ‘consistency of sets’ in Cantor’s sense. Hilbert suggested that the existence of consistent sets, for example, the set of real numbers, could be secured by proving the consistency of a suitable, characterizing axiom system, but indicated only vaguely how to give such proofs model-theoretically. Four years later, Hilbert departed radically from these indications and proposed a novel way of attacking the consistency problem for theories. This approach required, first of all, a strict formalization of mathematics together with logic; then, the syntactic configurations of the joint formalism would be considered as mathematical objects; finally, mathematical arguments would be used to show that contradictory formulas cannot be derived by the logical rules. This two-pronged approach of developing substantial parts of mathematics in formal theories (set theory, second-order arithmetic, finite type theory and still others) and of proving their consistency (or the consistency of significant sub-theories) was sharpened in lectures beginning in 1917 and then pursued systematically in the 1920s by Hilbert and a group of collaborators including Paul Bernays, Wilhelm Ackermann and John von Neumann. 
In particular, the formalizability of analysis in a second-order theory was verified by Hilbert in those very early lectures. So it was possible to focus on the second prong, namely to establish the consistency of ‘arithmetic’ (second-order number theory and set theory) by elementary mathematical, ‘finitist’ means. This part of the task proved to be much more recalcitrant than expected, and only limited results were obtained. That the limitation was inevitable was explained in 1931 by Gödel’s theorems; indeed, they refuted the attempt to establish consistency on a finitist basis – as soon as it was realized that finitist considerations could be carried out in a small fragment of first-order arithmetic. This led to the formulation of a general reductive programme. Gentzen and Gödel made the first contributions to this programme by establishing the consistency of classical first-order arithmetic – Peano arithmetic (PA) – relative to intuitionistic arithmetic – Heyting arithmetic. In 1936 Gentzen proved the consistency of PA relative to a quantifier-free theory of arithmetic that included transfinite induction up to the first epsilon number, ε0; in his 1941 Yale lectures, Gödel proved the consistency of the same theory relative to a theory of computable functionals of finite type. These two fundamental theorems turned out to be most important for subsequent proof-theoretic work. Currently it is known how to analyse, in Gentzen’s style, strong subsystems of second-order arithmetic and set theory. The first prong of proof-theoretic investigations, the actual formal development of parts of mathematics, has also been pursued – with a surprising result: the bulk of classical analysis can be developed in theories that are conservative over (fragments of) first-order arithmetic.
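As a point of reference (standard notation, not part of the original abstract), the first epsilon number ε0, which bounds the transfinite induction used in Gentzen’s consistency proof, can be characterized as follows:

```latex
% \varepsilon_0 is the least ordinal closed under base-\omega exponentiation:
\varepsilon_0 \;=\; \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\},
\qquad\text{equivalently, the least ordinal } \alpha \text{ with } \omega^{\alpha}=\alpha.
```

Gentzen also showed that transfinite induction up to any ordinal strictly below ε0 is provable in PA, while induction up to ε0 itself is not; in this sense ε0 measures the proof-theoretic strength of PA.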

1983 ◽  
Vol 48 (4) ◽  
pp. 1013-1034
Author(s):  
Piergiorgio Odifreddi

We conclude here the treatment of forcing in recursion theory begun in Part I and continued in Part II of [31]. The numbering of sections is the continuation of the numbering of the first two parts. The bibliography is independent.

In Part I our language was a first-order language: the only set we considered was the (set constant for the) generic set. In Part II a second-order language was introduced, and we had to interpret the second-order variables in some way. What we did was to consider the ramified analytic hierarchy, defined by induction as:

A0 = {X ⊆ ω : X is arithmetic},
Aα+1 = {X ⊆ ω : X is definable (in second-order arithmetic) over Aα},
Aλ = ⋃α<λ Aα (λ a limit),
RA = ⋃α Aα.

We then used (a relativized version of) a fact due to Kleene [27]. The definition of RA is obviously modeled on the definition of the constructible hierarchy introduced by Gödel [14]. For the latter we no longer work in a language for second-order arithmetic, but in a language for (first-order) set theory with membership as the only nonlogical relation:

L0 = ∅,
Lα+1 = {X : X is (first-order) definable over Lα},
Lλ = ⋃α<λ Lα (λ a limit),
L = ⋃α Lα.


1988 ◽  
Vol 53 (2) ◽  
pp. 338-348 ◽  
Author(s):  
Wilfried Sieg

On June 4, 1925, Hilbert delivered an address to the Westphalian Mathematical Society in Münster; that was, as a quick calculation will convince you, almost exactly sixty years ago. The address was published in 1926 under the title Über das Unendliche and is perhaps Hilbert's most comprehensive presentation of his ideas concerning the finitist justification of classical mathematics and the role his proof theory was to play in it. But what has become of the ambitious program for securing all of mathematics, once and for all? What of proof theory, the very subject Hilbert invented to carry out his program? The Hilbertian ambition reached out too far: in its original form, the program was refuted by Gödel's Incompleteness Theorems. And even allowing more than finitist means in metamathematics, the Hilbertian expectations for proof theory have not been realized: a constructive consistency proof for second-order arithmetic is still out of reach. (And since that theory provides a formal framework for analysis, it was considered by Hilbert and Bernays as decisive for proof theory.) Nevertheless, remarkable progress has been made. Two separate, but complementary directions of research have led to surprising insights: classical analysis can be formally developed in conservative extensions of elementary number theory; relative consistency proofs can be given by constructive means for impredicative parts of second-order arithmetic. The mathematical and metamathematical developments have been accompanied by sustained philosophical reflections on the foundations of mathematics. This indicates briefly the main themes of the contributions to the symposium; in my introductory remarks I want to give a very schematic perspective, one that is partly historical and partly systematic.


1986 ◽  
Vol 51 (2) ◽  
pp. 377-386 ◽  
Author(s):  
C. Ward Henson ◽  
H. Jerome Keisler

It is often asserted in the literature that any theorem which can be proved using nonstandard analysis can also be proved without it. The purpose of this paper is to show that this assertion is wrong, and in fact there are theorems which can be proved with nonstandard analysis but cannot be proved without it. There is currently a great deal of confusion among mathematicians because the above assertion can be interpreted in two different ways. First, there is the following correct statement: any theorem which can be proved using nonstandard analysis can be proved in Zermelo-Fraenkel set theory with choice, ZFC, and thus is acceptable by contemporary standards as a theorem in mathematics. Second, there is the erroneous conclusion drawn by skeptics: any theorem which can be proved using nonstandard analysis can be proved without it, and thus there is no need for nonstandard analysis. The reason for this confusion is that the set of principles which are accepted by current mathematics, namely ZFC, is much stronger than the set of principles which are actually used in mathematical practice. It has been observed (see [F] and [S]) that almost all results in classical mathematics use methods available in second-order arithmetic with appropriate comprehension and choice axiom schemes.
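For readers unfamiliar with such subsystems, here is a sketch of the axiom scheme referred to (a standard formulation, not quoted from the paper): comprehension restricted to a class Γ of formulas asserts, for each formula φ in Γ in which X does not occur free,

```latex
% \Gamma-comprehension scheme of second-order arithmetic
\exists X\,\forall n\,\bigl(n \in X \leftrightarrow \varphi(n)\bigr),
\qquad \varphi \in \Gamma,\ X \text{ not free in } \varphi.
```

Taking Γ to be the arithmetical formulas yields the system ACA₀, which is conservative over first-order Peano arithmetic; this is one way of making precise the observation credited here to [F] and [S].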


2000 ◽  
Vol 65 (3) ◽  
pp. 1014-1030 ◽  
Author(s):  
Miklós Erdélyi-Szabó

We show that true first-order arithmetic is interpretable over the real-algebraic structure of models of intuitionistic analysis built upon a certain class of complete Heyting algebras. From this the undecidability of the structures follows. We also show that Scott's model is equivalent to true second-order arithmetic. In the appendix we argue that undecidability in the language of ordered rings follows from intuitionistically plausible properties of the real numbers.


1984 ◽  
Vol 49 (4) ◽  
pp. 1339-1349 ◽  
Author(s):  
D. Van Dalen

Among the more traditional semantics for intuitionistic logic, the Beth and the Kripke semantics seem well-suited for the direct manipulations required for the derivation of metamathematical results. In particular, Smoryński demonstrated the usefulness of Kripke models for the purpose of obtaining closure properties for first-order arithmetic, [S], and second-order arithmetic, [J-S]. Weinstein used similar techniques to handle intuitionistic analysis, [W]. Since, however, Beth models seem to lend themselves better to dealing with analysis, cf. [D], we have developed a somewhat more liberal semantics that shares the features of both Kripke and Beth semantics, in order to obtain analogues of Smoryński's collecting operations, which we will call Smoryński-glueing, in line with the categorical tradition.


Computability ◽  
2021 ◽  
pp. 1-31
Author(s):  
Sam Sanders

The program Reverse Mathematics (RM for short) seeks to identify the axioms necessary to prove theorems of ordinary mathematics, usually working in the language of second-order arithmetic L₂. A major theme in RM is therefore the study of structures that are countable or can be approximated by countable sets. Now, countable sets must be represented by sequences here, because the higher-order definition of ‘countable set’ involving injections/bijections to N cannot be directly expressed in L₂. Working in Kohlenbach’s higher-order RM, we investigate various central theorems, e.g. those due to König, Ramsey, Bolzano, Weierstrass, and Borel, in their (often original) formulation involving the definition of ‘countable set’ based on injections/bijections to N. This study turns out to be closely related to the logical properties of the uncountability of R, recently developed by the author and Dag Normann. Now, ‘being countable’ can be expressed by the existence of an injection to N (Kunen) or the existence of a bijection to N (Hrbacek–Jech). The former (and not the latter) choice yields ‘explosive’ theorems, i.e. relatively weak statements that become much stronger when combined with discontinuous functionals, even up to Π¹₂-CA₀. Nonetheless, replacing ‘sequence’ by ‘countable set’ seriously reduces the first-order strength of these theorems, whatever the notion of ‘set’ used. Finally, we obtain ‘splittings’ involving e.g. lemmas by König and theorems from the RM zoo, showing that the latter are ‘a lot more tame’ when formulated with countable sets.
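To fix notation (a sketch using assumed symbols, not taken verbatim from the paper), the two higher-order definitions of ‘A ⊆ R is countable’ contrasted above can be rendered as:

```latex
% Kunen-style: there is an injection from A into N
\exists Y\colon \mathbb{R}\to\mathbb{N}\ \ \forall x,y\in A\,\bigl(Y(x)=Y(y)\rightarrow x=y\bigr)

% Hrbacek--Jech-style: the injection is moreover onto N
\exists Y\colon \mathbb{R}\to\mathbb{N}\ \bigl[\,\forall x,y\in A\,\bigl(Y(x)=Y(y)\rightarrow x=y\bigr)
  \ \wedge\ \forall n\in\mathbb{N}\,\exists x\in A\ \bigl(Y(x)=n\bigr)\,\bigr]
```

Both definitions quantify over a third-order object Y, which is why neither can be expressed directly in L₂; the abstract's point is that the injection-based variant behaves very differently from the bijection-based one.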


2015 ◽  
Vol 80 (3) ◽  
pp. 1035-1065 ◽  
Author(s):  
FARIDA KACHAPOVA

In this paper we describe an intuitionistic theory SLP. It is a relatively strong theory containing intuitionistic principles for functionals of many types, in particular, the theory of the “creating subject”, axioms for lawless functionals and some versions of choice axioms. We construct a Beth model for the language of intuitionistic functionals of high types and use it to prove the consistency of SLP. We also prove that the intuitionistic theory SLP is equiconsistent with a classical theory TI. TI is a typed set theory, where the comprehension axiom for sets of type n is restricted to formulas with no parameters of types > n. We show that each fragment of SLP with types ≤ s is equiconsistent with the corresponding fragment of TI and that it is stronger than the previous fragment of SLP. Thus, both SLP and TI are much stronger than second-order arithmetic. By constructing the intuitionistic theory SLP and interpreting in it the classical set theory TI, we contribute to the program of justifying classical mathematics from the intuitionistic point of view.


2000 ◽  
Vol 65 (4) ◽  
pp. 1785-1812 ◽  
Author(s):  
Jeremy Avigad

A number of classical theories are interpreted in analogous theories that are based on intuitionistic logic. The classical theories considered include subsystems of first- and second-order arithmetic, bounded arithmetic, and admissible set theory.

