Logic in the early 20th century

Author(s):  
Gregory H. Moore

The creation of modern logic is one of the most stunning achievements of mathematics and philosophy in the twentieth century. Modern logic – sometimes called logistic, symbolic logic or mathematical logic – makes essential use of artificial symbolic languages. Since Aristotle, logic has been a part of philosophy. Around 1850 the mathematician Boole began the modern development of symbolic logic. During the twentieth century, logic continued in philosophy departments, but it began to be seriously investigated and taught in mathematics departments as well. The most important examples of the latter were, from 1905 on, Hilbert at Göttingen and then, during the 1920s, Church at Princeton. As the twentieth century began, there were several distinct logical traditions. Besides Aristotelian logic, there was an active tradition in algebraic logic initiated by Boole in the UK and continued by C.S. Peirce in the USA and Schröder in Germany. In Italy, Peano began in the Boolean tradition, but soon aimed higher: to express all major mathematical theorems in his symbolic logic. Finally, from 1879 to 1903, Frege consciously deviated from the Boolean tradition by creating a logic strong enough to construct the natural and real numbers. The Boole–Schröder tradition culminated in the work of Löwenheim (1915) and Skolem (1920) on the existence of a countable model for any first-order axiom system having a model. Meanwhile, in 1900, Russell was strongly influenced by Peano’s logical symbolism. Russell used this as the basis for his own logic of relations, which led to his logicism: pure mathematics is a part of logic. But his discovery of Russell’s paradox in 1901 required him to build a new basis for logic. This culminated in his masterwork, Principia Mathematica, written with Whitehead, which offered the theory of types as a solution. Hilbert came to logic from geometry, where models were used to prove consistency and independence results. 
He brought a strong concern with the axiomatic method and a rejection of the metaphysical goal of determining what numbers ‘really’ are. In his view, any objects that satisfied the axioms for numbers were numbers. He rejected the genetic method, favoured by Frege and Russell, which emphasized constructing numbers rather than giving axioms for them. In his 1917 lectures Hilbert was the first to introduce first-order logic as an explicit subsystem of all of logic (which, for him, was the theory of types) without the infinitely long formulas found in Löwenheim. In 1923 Skolem, directly influenced by Löwenheim, also abandoned those formulas, and argued that first-order logic is all of logic. Influenced by Hilbert and Ackermann (1928), Gödel proved the completeness theorem for first-order logic (1929) as well as incompleteness theorems for arithmetic in first-order and higher-order logics (1931). These results were the true beginning of modern logic.

2001, Vol. 7 (4), pp. 441–484
Author(s):  
José Ferreirós

Abstract: This paper aims to outline an analysis and interpretation of the process that led to First-Order Logic and its consolidation as a core system of modern logic. We begin with a historical overview of landmarks along the road to modern logic, and proceed to a philosophical discussion casting doubt on the possibility of a purely rational justification of the actual delimitation of First-Order Logic. On this basis, we advance the thesis that a certain historical tradition was essential to the emergence of modern logic; this traditional context is analyzed as consisting in some guiding principles and, particularly, a set of exemplars (i.e., paradigmatic instances). Then, we proceed to interpret the historical course of development reviewed in section 1, which can broadly be described as a two-phased movement of expansion and then restriction of the scope of logical theory. We shall try to pinpoint ambivalences in the process, and the main motives for subsequent changes. Among the latter, one may emphasize the spirit of modern axiomatics, the situation of foundational insecurity in the 1920s, the resulting desire to find systems well-behaved from a proof-theoretical point of view, and the metatheoretical results of the 1930s. Not surprisingly, the mathematical and, more specifically, the foundational context in which First-Order Logic matured will be seen to have played a primary role in its shaping.

Mathematical logic is what logic, through twenty-five centuries and a few transformations, has become today. (Jean van Heijenoort)


Author(s):  
Tarek Sayed Ahmed

Fix \(2 < n < \omega\). Let \(L_n\) denote first-order logic restricted to the first \(n\) variables. Using the machinery of algebraic logic, positive and negative results on omitting types are obtained for \(L_n\) and for infinitary variants and extensions of \(L_{\omega, \omega}\).


Author(s):  
John W. Dawson

The greatest logician of the twentieth century, Gödel is renowned for his advocacy of mathematical Platonism and for three fundamental theorems in logic: the completeness of first-order logic; the incompleteness of formalized arithmetic; and the consistency of the axiom of choice and the continuum hypothesis with the axioms of Zermelo–Fraenkel set theory.


Author(s):  
Timothy Smiley

The predicate calculus is the dominant system of modern logic, having displaced the traditional Aristotelian syllogistic logic that had been the previous paradigm. Like Aristotle’s, it is a logic of quantifiers – words like ‘every’, ‘some’ and ‘no’ that are used to express that a predicate applies universally or with some other distinctive kind of generality, for example ‘everyone is mortal’, ‘someone is mortal’, ‘no one is mortal’. The weakness of syllogistic logic was its inability to represent the structure of complex predicates. Thus it could not cope with argument patterns like ‘everything Fs and Gs, so everything Fs’. Nor could it cope with relations, because a logic of relations must be able to analyse cases where a quantifier is applied to a predicate that already contains one, as in ‘someone loves everyone’. Remedying the weakness required two major innovations. One was a logic of connectives – words like ‘and’, ‘or’ and ‘if’ that form complex sentences out of simpler ones. It is often studied as a distinct system: the propositional calculus. A proposition here is a true-or-false sentence and the guiding principle of propositional calculus is truth-functionality, meaning that the truth-value (truth or falsity) of a compound proposition is uniquely determined by the truth-values of its components. Its principal connectives are negation, conjunction, disjunction and a ‘material’ (that is, truth-functional) conditional. Truth-functionality makes it possible to compute the truth-values of propositions of arbitrary complexity in terms of their basic propositional constituents, and so develop the logic of tautology and tautological consequence (logical truth and consequence in virtue of the connectives). The other invention was the quantifier-variable notation. Variables are letters used to indicate things in an unspecific way; thus ‘x is mortal’ is read as predicating of an unspecified thing x what ‘Socrates is mortal’ predicates of Socrates. 
The connectives can now be used to form complex predicates as well as propositions, for example ‘x is human and x is mortal’; while different variables can be used in different places to express relational predicates, for example ‘x loves y’. The quantifier goes in front of the predicate it governs, with the relevant variable repeated beside it to indicate which positions are being generalized. These radical departures from the idiom of quantification in natural languages are needed to solve the further problem of ambiguity of scope. Compare, for example, the ambiguity of ‘someone loves everyone’ with the unambiguous alternative renderings ‘there is an x such that for every y, x loves y’ and ‘for every y, there is an x such that x loves y’. The result is a pattern of formal language based on a non-logical vocabulary of names of things and primitive predicates expressing properties and relations of things. The logical constants are the truth-functional connectives and the universal and existential quantifiers, plus a stock of variables construed as ranging over things. This is ‘the’ predicate calculus. A common option is to add the identity sign as a further logical constant, producing the predicate calculus with identity. The first modern logic of quantification, Frege’s of 1879, was designed to express generalizations not only about individual things but also about properties of individuals. It would nowadays be classified as a second-order logic, to distinguish it from the first-order logic described above. Second-order logic is much richer in expressive power than first-order logic, but at a price: first-order logic can be axiomatized, second-order logic cannot.
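Both ideas described in this abstract can be checked mechanically over a finite domain. The sketch below (in Python; the two-element domain and the ‘loves’ relation are invented purely for illustration) evaluates the material conditional truth-functionally and then shows the two readings of ‘someone loves everyone’ coming apart.

```python
from itertools import product

# Truth-functionality: the truth-value of a compound proposition is fixed
# by the truth-values of its parts. The material conditional 'if p then q'
# is defined truth-functionally as (not p) or q.
def material_conditional(p, q):
    return (not p) or q

# Build the full truth table; only the row p=True, q=False comes out False.
table = {(p, q): material_conditional(p, q)
         for p, q in product([True, False], repeat=2)}

# Quantifier scope: 'someone loves everyone' is ambiguous between
#   (there is an x)(for every y) Loves(x, y)
#   (for every y)(there is an x) Loves(x, y)
# Over a domain where each person loves only themself, the second reading
# is true while the first is false.
domain = ['a', 'b']
loves = {('a', 'a'), ('b', 'b')}   # invented relation for illustration

exists_forall = any(all((x, y) in loves for y in domain) for x in domain)
forall_exists = all(any((x, y) in loves for x in domain) for y in domain)

print(table[(True, False)])  # False
print(exists_forall)         # False: no single person loves everyone
print(forall_exists)         # True: everyone is loved by someone
```

The example also illustrates why the quantifier-variable notation matters: the two readings differ only in the order in which the quantifiers govern the variables, a distinction the natural-language sentence leaves unexpressed.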


2013, Vol. 19 (4), pp. 433–472
Author(s):  
Georg Schiemer
Erich H. Reck

Abstract: In historical discussions of twentieth-century logic, it is typically assumed that model theory emerged within the tradition that adopted first-order logic as the standard framework. Work within the type-theoretic tradition, in the style of Principia Mathematica, tends to be downplayed or ignored in this connection. Indeed, the shift from type theory to first-order logic is sometimes seen as involving a radical break that first made possible the rise of modern model theory. While comparing several early attempts to develop the semantics of axiomatic theories in the 1930s, by two proponents of the type-theoretic tradition (Carnap and Tarski) and two proponents of the first-order tradition (Gödel and Hilbert), we argue that, instead, the move from type theory to first-order logic is better understood as a gradual transformation, and further, that the contributions to semantics made in the type-theoretic tradition should be seen as central to the evolution of model theory.


2002, Vol. 8 (3), pp. 348–379
Author(s):  
Robin Hirsch
Ian Hodkinson
Roger D. Maddux

Abstract: For every finite n ≥ 4 there is a logically valid sentence φn with the following properties: φn contains only 3 variables (each of which occurs many times); φn contains exactly one nonlogical binary relation symbol (no function symbols, no constants, and no equality symbol); φn has a proof in first-order logic with equality that contains exactly n variables, but no proof containing only n − 1 variables. This result was first proved using the machinery of algebraic logic developed in several research monographs and papers. Here we replicate the result and its proof entirely within the realm of (elementary) first-order binary predicate logic with equality. We need the usual syntax, axioms, and rules of inference to show that φn has a proof with only n variables. To show that φn has no proof with only n − 1 variables we use alternative semantics in place of the usual, standard, set-theoretical semantics of first-order logic.


2008, Vol. 73 (1), pp. 65–89
Author(s):  
Hajnal Andréka
István Németi
Tarek Sayed Ahmed

Abstract: We give a novel application of algebraic logic to first-order logic. A new, flexible construction is presented for representable but not completely representable atomic relation and cylindric algebras of dimension n (for finite n > 2) with the additional property that they are one-generated and the set of all n by n atomic matrices forms a cylindric basis. We use this construction to show that the classical Henkin–Orey omitting types theorem fails for the finite-variable fragments of first-order logic as long as the number of variables available is > 2 and we have a binary relation symbol in our language. We also prove a stronger result to the effect that there is no finite upper bound for the extra variables needed in the witness formulas. This result further emphasizes the ongoing interplay between algebraic logic and first-order logic.


Author(s):  
Alex Orenstein

Quine is the foremost representative of naturalism in the second half of the twentieth century. His naturalism consists of an insistence upon a close connection or alliance between philosophical views and those of the natural sciences. Philosophy so construed is an activity within nature wherein nature examines itself. This contrasts with views which distinguish philosophy from science and place philosophy in a special transcendent position for gaining special knowledge. The methods of science are empirical; so Quine, who operates within a scientific perspective, is an empiricist, but with a difference. Traditional empiricism, as in Locke, Berkeley, Hume and some twentieth-century forms, takes impressions, ideas or sense-data as the basic units of thought. Quine’s empiricism, by contrast, takes account of the theoretical as well as the observational facets of science. The unit of empirical significance is not simple impressions (ideas) or even isolated individual observation sentences, but systems of beliefs. The broad theoretical constraints for choice between theories, such as explanatory power, parsimony, precision and so on, are foremost in this empiricism. He is a fallibilist, since he holds that each individual belief in a system is in principle revisable. Quine proposes a new conception of observation sentences, a naturalized account of our knowledge of the external world, including a rejection of a priori knowledge, and he extends the same empiricist and fallibilist account to our knowledge of logic and mathematics. Quine confines logic to first-order logic and clearly demarcates it from set theory and mathematics. These are all empirical subjects when empiricism is understood in its Quinean form. They are internal to our system of beliefs that make up the natural sciences. The language of first-order logic serves as a canonical notation in which to express our ontological commitments.
The slogan ‘To be is to be the value of a variable’ ([1953] 1961: 15) encapsulates this project. Deciding which ontology to accept is also carried out within the naturalistic constraints of empirical science – our ontological commitments should be to those objects to which the best scientific theories commit us. On this basis Quine’s own commitments are to physical objects and sets. Quine is a physicalist and a Platonist, since the best sciences require physical objects and the mathematics involved in the sciences requires abstract objects, namely, sets. The theory of reference (which includes notions such as reference, truth and logical truth) is sharply demarcated from the theory of meaning (which includes notions such as meaning, synonymy, the analytic–synthetic distinction and necessity). Quine is the leading critic of notions from the theory of meaning, arguing that attempts to draw the distinction between merely linguistic (analytic) truths and more substantive (synthetic) truths have failed. Such notions do not meet the standards of precision to which scientific and philosophical theories adhere, and which are met in the theory of reference. He explores the limits of an empirical theory of language and offers a thesis of the indeterminacy of translation as further criticism of the theory of meaning.

