On expandability of models of arithmetic and set theory to models of weak second-order theories

1984 ◽  
Vol 122 (1) ◽  
pp. 57-60
Author(s):  
Matt Kaufmann


Author(s):
Tim Button ◽  
Sean Walsh

In this chapter, the focus shifts from numbers to sets. Again, no first-order set theory can hope to get anywhere near categoricity, but Zermelo famously proved the quasi-categoricity of second-order set theory. As in the previous chapter, we must ask who is entitled to invoke full second-order logic. That question is as subtle as before, and raises the same problem for moderate modelists. However, the quasi-categorical nature of Zermelo's Theorem gives rise to some specific questions concerning the aims of axiomatic set theories. Given the status of Zermelo's Theorem in the philosophy of set theory, we include a stand-alone proof of this theorem. We also prove a similar quasi-categoricity for Scott-Potter set theory, a theory which axiomatises the idea of an arbitrary stage of the iterative hierarchy.
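
For orientation, the quasi-categoricity result referred to here is usually stated along the following lines (a standard formulation added for the reader's convenience, not a quotation from the chapter). For any two full (standard-semantics) models \(\mathcal{M}, \mathcal{N}\) of second-order ZFC,

    \[
    \mathcal{M} \cong \mathcal{N}
    \quad\text{or}\quad
    \mathcal{M} \cong \langle V_\alpha, \in \rangle^{\mathcal{N}} \text{ for some ordinal } \alpha \text{ of } \mathcal{N}
    \quad\text{or}\quad
    \mathcal{N} \cong \langle V_\alpha, \in \rangle^{\mathcal{M}} \text{ for some ordinal } \alpha \text{ of } \mathcal{M}.
    \]

Equivalently, up to isomorphism the full models of second-order ZFC are exactly the ranks \(\langle V_\kappa, \in \rangle\) with \(\kappa\) inaccessible.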


Axiomathes ◽  
2021 ◽  
Author(s):  
Andrew Powell

Abstract: This article surveys key papers that characterise computable functions, and it also offers some novel insights, as follows. It is argued that algorithms are at least as powerful as the functions that can be proved totally computable in type-theoretic translations of subsystems of second-order Zermelo–Fraenkel set theory. Moreover, it is claimed that typed systems of the lambda calculus give rise naturally to a functional interpretation of rich systems of types, and to a hierarchy of ordinal recursive functionals of arbitrary type that can be reduced by substitution to natural-number functions.
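
The reduction of higher-type functionals to natural-number functions can be illustrated with a small self-contained sketch (added here for illustration; the function names and the Python rendering are mine, not the article's). A type-1 iterator in the style of Gödel's System T, together with a type-2 "jump" functional, yields an Ackermann-style fast-growing number function once everything is applied to concrete arguments:

    # Illustrative sketch only: System-T-style iteration rendered in Python.
    def iterate(f, n, x):
        """Apply f to x exactly n times (the type-1 iterator / primitive recursion)."""
        for _ in range(n):
            x = f(x)
        return x

    def successor(x):
        return x + 1

    def jump(f):
        """A type-2 functional: send a number function f to a faster-growing one."""
        return lambda n: iterate(f, n + 1, n)

    def fast(m, n):
        """Iterate the type-2 functional m times, then apply the result to n."""
        f = successor
        for _ in range(m):
            f = jump(f)
        return f(n)

    # Applied to concrete numerals, the higher-type machinery collapses to
    # ordinary number-theoretic values: 4, 7, 63 for m = 0, 1, 2.
    print([fast(m, 3) for m in range(3)])

The point is only that substitution and application at higher types ultimately yield ordinary numerical functions, which is the phenomenon the article examines at far greater proof-theoretic strength.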


2013 ◽  
Vol 23 (6) ◽  
pp. 1234-1256 ◽  
Author(s):  
THOMAS STREICHER

In a sequence of papers (Krivine 2001; Krivine 2003; Krivine 2009), J.-L. Krivine introduced his notion of classical realisability for classical second-order logic and Zermelo–Fraenkel set theory. Moreover, in more recent work (Krivine 2008), he has considered forcing constructions on top of it, with the ultimate aim of providing a realisability interpretation for the axiom of choice. The aim of the current paper is to show how Krivine's classical realisability can be understood as an instance of the categorical approach to realisability initiated by Martin Hyland (Hyland 1982) and described in detail in van Oosten (2008). Moreover, we will give an intuitive explanation of the iteration of realisability as described in Krivine (2008).


2010 ◽  
Vol 16 (1) ◽  
pp. 1-36 ◽  
Author(s):  
Peter Koellner

Abstract: In this paper we investigate strong logics of first and second order that have certain absoluteness properties. We begin with an investigation of first-order logic and the strong logics ω-logic and β-logic, isolating two facets of absoluteness, namely, generic invariance and faithfulness. It turns out that absoluteness is relative in the sense that stronger background assumptions secure greater degrees of absoluteness. Our aim is to investigate the hierarchies of strong logics of first and second order that are generically invariant and faithful against the backdrop of the strongest large cardinal hypotheses. We show that there is a close correspondence between the two hierarchies and we characterize the strongest logic in each hierarchy. On the first-order side, this leads to a new presentation of Woodin's Ω-logic. On the second-order side, we compare the strongest logic with full second-order logic and argue that the comparison lends support to Quine's claim that second-order logic is really set theory in sheep's clothing.
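
As a rough gloss (my paraphrase of standard usage, not a quotation from the paper): a logic is generically invariant when its validity relation cannot be changed by set forcing, and faithful when its validities are actually true. Symbolically, generic invariance asks that for every set forcing \(\mathbb{P}\) and every theory-sentence pair \((T, \varphi)\),

    \[
    V \models \text{``} T \models_{L} \varphi \text{''}
    \quad\Longleftrightarrow\quad
    V^{\mathbb{P}} \models \text{``} T \models_{L} \varphi \text{''}.
    \]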


1985 ◽  
Vol 50 (2) ◽  
pp. 375-379 ◽  
Author(s):  
Thomas J. Grilliot

One long-range objective of logic is to find models of arithmetic with noteworthy properties, perhaps properties that imply some long-standing number-theoretic conjectures. In areas of mathematics such as algebra or set theory, new models are often made by extending old models, that is, by adjoining new elements to already existing models. Usually the extension retains most of the characteristics of the old model, with at least one exception that makes the new model interesting. However, such a scheme is difficult in the area of arithmetic. Many interesting properties of the fine structure of arithmetic are diophantine and hence unchangeable in extensions. For instance, one cannot change a prime number into a composite one by adjoining new elements.

One could possibly get around this diophantine difficulty in one of two ways. One way is to change the usual language of addition and multiplication to an equivalent language that does not transmit so much information to extensions. For instance, multiplication is definable from the squaring function, as one sees from the identity 2xy = (x + y)² − x² − y², and the squaring function in turn is definable either from the unary square predicate (as one sees from the fact that n = m² if and only if n and n + 2m + 1 are successive squares) or from the divisor relation (as one sees from the fact that n = m² if and only if n is the smallest number such that m divides n and m + 1 divides n + m). Either of these two alternatives to multiplication might make for interesting extensions.
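
These definability facts are easy to spot-check mechanically. The following short script (an illustration added here, not part of the original abstract) verifies the three identities on small instances:

    import math

    def check(limit=50):
        # Multiplication is definable from squaring: 2xy = (x + y)^2 - x^2 - y^2.
        for x in range(limit):
            for y in range(limit):
                assert 2 * x * y == (x + y) ** 2 - x ** 2 - y ** 2
        for m in range(1, limit):
            n = m * m
            # n = m^2 exactly when n and n + 2m + 1 are successive squares.
            r = math.isqrt(n)
            assert r * r == n and (r + 1) * (r + 1) == n + 2 * m + 1
            # n = m^2 is the least n with m | n and (m + 1) | (n + m).
            least = next(k for k in range(1, n + 1)
                         if k % m == 0 and (k + m) % (m + 1) == 0)
            assert least == n

    check()
    print("identities verified on small instances")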


2019 ◽  
Vol 84 (02) ◽  
pp. 589-620
Author(s):  
KAMERYN J. WILLIAMS

Abstract: In this article I investigate the phenomenon of minimum and minimal models of second-order set theories, focusing on Kelley–Morse set theory KM, Gödel–Bernays set theory GB, and GB augmented with the principle of Elementary Transfinite Recursion. The main results are the following. (1) A countable model of ZFC has a minimum GBC-realization if and only if it admits a parametrically definable global well order. (2) Countable models of GBC admit minimal extensions with the same sets. (3) There is no minimum transitive model of KM. (4) There is a minimum β-model of GB+ETR. The main question left unanswered by this article is whether there is a minimum transitive model of GB+ETR.


Author(s):  
Wilfried Sieg

Proof theory is a branch of mathematical logic founded by David Hilbert around 1920 to pursue Hilbert’s programme. The problems addressed by the programme had already been formulated, in some sense, at the turn of the century, for example, in Hilbert’s famous address to the International Congress of Mathematicians in Paris. They were closely connected to the set-theoretic foundations for analysis investigated by Cantor and Dedekind – in particular, to difficulties with the unrestricted notion of system or set; they were also related to the philosophical conflict with Kronecker on the very nature of mathematics. At that time, the central issue for Hilbert was the ‘consistency of sets’ in Cantor’s sense. Hilbert suggested that the existence of consistent sets, for example, the set of real numbers, could be secured by proving the consistency of a suitable, characterizing axiom system, but indicated only vaguely how to give such proofs model-theoretically.

Four years later, Hilbert departed radically from these indications and proposed a novel way of attacking the consistency problem for theories. This approach required, first of all, a strict formalization of mathematics together with logic; then, the syntactic configurations of the joint formalism would be considered as mathematical objects; finally, mathematical arguments would be used to show that contradictory formulas cannot be derived by the logical rules. This two-pronged approach of developing substantial parts of mathematics in formal theories (set theory, second-order arithmetic, finite type theory and still others) and of proving their consistency (or the consistency of significant sub-theories) was sharpened in lectures beginning in 1917 and then pursued systematically in the 1920s by Hilbert and a group of collaborators including Paul Bernays, Wilhelm Ackermann and John von Neumann. In particular, the formalizability of analysis in a second-order theory was verified by Hilbert in those very early lectures.

So it was possible to focus on the second prong, namely to establish the consistency of ‘arithmetic’ (second-order number theory and set theory) by elementary mathematical, ‘finitist’ means. This part of the task proved to be much more recalcitrant than expected, and only limited results were obtained. That the limitation was inevitable was explained in 1931 by Gödel’s theorems; indeed, they refuted the attempt to establish consistency on a finitist basis – as soon as it was realized that finitist considerations could be carried out in a small fragment of first-order arithmetic. This led to the formulation of a general reductive programme. Gentzen and Gödel made the first contributions to this programme by establishing the consistency of classical first-order arithmetic – Peano arithmetic (PA) – relative to intuitionistic arithmetic – Heyting arithmetic. In 1936 Gentzen proved the consistency of PA relative to a quantifier-free theory of arithmetic that included transfinite recursion up to the first epsilon number, ε0; in his 1941 Yale lectures, Gödel proved the consistency of the same theory relative to a theory of computable functionals of finite type. These two fundamental theorems turned out to be most important for subsequent proof-theoretic work. Currently it is known how to analyse, in Gentzen’s style, strong subsystems of second-order arithmetic and set theory.
The first prong of proof-theoretic investigations, the actual formal development of parts of mathematics, has also been pursued – with a surprising result: the bulk of classical analysis can be developed in theories that are conservative over (fragments of) first-order arithmetic.
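
For orientation (a standard definition added here, not part of the entry): the ordinal \(\varepsilon_0\) mentioned above is the least fixed point of base-\(\omega\) ordinal exponentiation,

    \[
    \varepsilon_0 = \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\},
    \qquad
    \omega^{\varepsilon_0} = \varepsilon_0,
    \]

and transfinite induction up to \(\varepsilon_0\) is the additional principle Gentzen's proof uses beyond finitist means to establish the consistency of PA.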

