Logic and Philosophical Methodology

Author(s): John P. Burgess

This article explores the role of logic in philosophical methodology and its application in philosophy. The discussion gives roughly equal coverage to the seven branches of logic: elementary logic, set theory, model theory, recursion theory, proof theory, extraclassical logics, and anticlassical logics. Mathematical logic comprises set theory, model theory, recursion theory, and proof theory. Philosophical logic in the relevant sense is divided into the study of extensions of classical logic, such as modal, temporal, deontic, or conditional logics, and the study of alternatives to classical logic, such as intuitionistic, quantum, partial, or paraconsistent logics. The nonclassical comprises the extraclassical and the anticlassical, although the distinction is not clear-cut.
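To make the extension/alternative contrast concrete, here is a standard illustration not drawn from the article itself: modal logic K extends classical logic by adding the box operator with the K axiom and the necessitation rule (every classical theorem is retained), whereas intuitionistic logic is an alternative that rejects classical schemas such as excluded middle and double negation elimination.

    % Illustrative sketch only (assumes amsmath, amssymb); not from the article.
    % Extraclassical: modal logic K keeps all classical theorems and adds new ones.
    \[
      \vdash_{\mathrm{K}} \Box(\varphi \to \psi) \to (\Box\varphi \to \Box\psi),
      \qquad
      \frac{\vdash_{\mathrm{K}} \varphi}{\vdash_{\mathrm{K}} \Box\varphi}\ (\text{necessitation})
    \]
    % Anticlassical: intuitionistic logic drops some classical theorems instead.
    \[
      \nvdash_{\mathrm{Int}} \varphi \lor \neg\varphi,
      \qquad
      \nvdash_{\mathrm{Int}} \neg\neg\varphi \to \varphi
    \]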

1975, Vol 40 (2), pp. 113-129
Author(s): Harvey Friedman

This expository paper contains a list of 102 problems which, at the time of publication, are unsolved. These problems are distributed among four subdivisions of logic: model theory, proof theory and intuitionism, recursion theory, and set theory. They are written in the form of statements which we believe to be at least as likely as their negations. These should not be viewed as conjectures since, in some cases, we had no opinion as to which way the problem would go.

In each case where we believe a problem did not originate with us, we made an effort to pinpoint a source. Often this was a difficult matter, based on subjective judgments. When we were unable to pinpoint a source, we left a question mark. No inference should be drawn concerning the beliefs of the originator of a problem as to which way it will go (lest the originator be us).

The choice of these problems was based on five criteria. Firstly, we are only including problems which call for the truth value of a particular mathematical statement. A second criterion is the extent to which the concepts involved in the statements are well known, well defined, and well understood, as well as having been extensively considered in the literature. A third criterion is the extent to which these problems have natural, simple, and attractive formulations. A fourth criterion is the extent to which there is evidence that a real difficulty exists in finding a solution. Lastly, and unavoidably, the extent to which these problems are connected with the author's research interests in mathematical logic.


2001, Vol 7 (2), pp. 169-196
Author(s): Samuel R. Buss, Alexander S. Kechris, Anand Pillay, Richard A. Shore

Abstract: The four authors present their speculations about the future developments of mathematical logic in the twenty-first century. The areas of recursion theory, proof theory and logic for computer science, model theory, and set theory are discussed independently.


Author(s): Brian Ball

Timothy Williamson is a British analytic philosopher who has made major contributions to philosophical logic, epistemology, metaphysics, the philosophy of language, and philosophical methodology. Williamson has defended classical logic in connection with the sorites (or heap) paradox by appeal to epistemicism, the view that vagueness is ignorance. His knowledge-first approach has reversed the traditional order of explanation in epistemology. In metaphysics, he has argued in favour of necessitism, the view that what there is (ontology) is metaphysically necessary, not contingent. In the philosophy of language, he has argued that one must (in a certain privileged sense, constitutive of assertion) assert only what one knows; and he has defended a principle of charity according to which the best interpretations of a language maximize the attribution of knowledge (rather than true belief) to its speakers. Methodologically, Williamson opposes naturalism and instead defends the use of 'armchair' methods to answer substantive questions; in practice, his work is often characterized by the application of formal techniques, both logical and mathematical, to traditional philosophical problems.


1972, Vol 37 (1), pp. 81-89
Author(s): Thomas J. Grilliot

Omitting-types theorems have been useful in model theory for constructing models with special characteristics. For instance, one method of proving the ω-completeness theorem of Henkin [10] and Orey [20] involves constructing a model that omits the type {x ≠ 0, x ≠ 1, x ≠ 2,···} (i.e., {x ≠ 0, x ≠ 1, x ≠ 2,···} is not satisfiable in the model). Our purpose in this paper is to illustrate uses of omitting-types theorems in recursion theory. The Gandy-Kreisel-Tait Theorem [7] is the best-known example. This theorem characterizes the class of hyperarithmetical sets as the intersection of all ω-models of analysis (the so-called hard core of analysis). The usual way to prove that a nonhyperarithmetical set does not belong to the hard core is to construct an ω-model of analysis that omits the type representing the set (Application 1). We will find basis results, for certain classes of sets, that are stronger than results previously known (Applications 2 and 3). The question of how far the natural hierarchy of hyperjumps extends was first settled by a forcing argument (Sacks) and subsequently by a compactness argument (Kripke, Richter). Another problem solved by a forcing argument (Sacks) and then by a compactness argument (Friedman-Jensen) was the characterization of the countable admissible ordinals as the relativized ω₁'s. Using the omitting-types technique, we will supply a third kind of proof of these results (Applications 4 and 5). S. Simpson made a significant contribution in simplifying the proof of the latter result, with the interesting side effect that Friedman's result on ordinals in models of set theory is immediate (Application 6). One approach to abstract recursiveness and hyperarithmeticity on a countable set is to tenuously identify the set with the natural numbers; this approach is equivalent to other approaches to abstract recursion (Application 7). This last result may also be proved by a forcing method.
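For background, the classical omitting types theorem for countable languages can be stated as follows; this is the standard model-theoretic formulation, not the sharper recursion-theoretic refinements proved in the paper.

    % Standard omitting types theorem (countable case); background statement only.
    % Assumes \usepackage{amsmath,amssymb}.
    \textbf{Theorem (omitting types).}
    Let $T$ be a consistent theory in a countable language $L$, and let $p(x)$ be a
    non-principal type of $T$: no $L$-formula $\varphi(x)$ consistent with $T$ satisfies
    $T \vdash \forall x\,(\varphi(x) \to \psi(x))$ for every $\psi \in p$.
    Then $T$ has a countable model in which no element realizes all of $p(x)$,
    i.e., a model that omits $p$.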


1982, Vol 47 (4), pp. 824-832
Author(s): Louise Hay, Douglas Miller

Ever since Craig-Beth and Addison-Kleene proved their versions of the Lusin-Suslin theorem, work in model theory and recursion theory has demonstrated the value of classical descriptive set theory as a source of ideas and inspiration. During the sixties in particular, J. W. Addison refined the technique of “conjecture by analogy” and used it to generate a substantial number of results in both model theory and recursion theory (see, e.g., Addison [1], [2], [3]). During the past 15 years, techniques and results from recursion theory and model theory have played an important role in the development of descriptive set theory. (Moschovakis's book [6] is an excellent reference, particularly for the use of recursion-theoretic tools.) The use of “conjecture by analogy” as a means of transferring ideas from model theory and recursion theory to descriptive set theory has developed more slowly. Some notable recent examples of this phenomenon are in Vaught [9], where some results in invariant descriptive set theory reflecting and extending model-theoretic results are obtained and others are left as conjectures (including a version of the well-known conjecture on the number of countable models), and in Hrbacek and Simpson [4], where a notion analogous to that of Turing reducibility is used to study Borel isomorphism types. Moschovakis [6] describes in detail an effective descriptive set theory based in large part on classical recursion theory.


2004, Vol 10 (3), pp. 305-333
Author(s): Jeremy Avigad

Abstract: Paul Cohen's method of forcing, together with Saul Kripke's related semantics for modal and intuitionistic logic, has had profound effects on a number of branches of mathematical logic, from set theory and model theory to constructive and categorical logic. Here, I argue that forcing also has a place in traditional Hilbert-style proof theory, where the goal is to formalize portions of ordinary mathematics in restricted axiomatic theories, and study those theories in constructive or syntactic terms. I will discuss the aspects of forcing that are useful in this respect, and some sample applications. The latter include ways of obtaining conservation results for classical and intuitionistic theories, interpreting classical theories in constructive ones, and constructivizing model-theoretic arguments.
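As an illustration of the Kripke-style semantics the abstract alludes to, the standard forcing clauses for intuitionistic propositional logic over a Kripke model (W, ≤, ⊩) are shown below; this is a textbook background sketch, not the proof-theoretic forcing developed in the paper.

    % Standard Kripke forcing clauses for intuitionistic propositional logic.
    % Background sketch only; assumes \usepackage{amsmath,amssymb}.
    \begin{align*}
      w \Vdash \varphi \wedge \psi &\iff w \Vdash \varphi \text{ and } w \Vdash \psi \\
      w \Vdash \varphi \vee \psi   &\iff w \Vdash \varphi \text{ or } w \Vdash \psi \\
      w \Vdash \varphi \to \psi    &\iff \text{for all } w' \ge w,\ w' \Vdash \varphi \text{ implies } w' \Vdash \psi \\
      w \Vdash \neg\varphi         &\iff \text{no } w' \ge w \text{ has } w' \Vdash \varphi
    \end{align*}
    % Monotonicity: if $w \Vdash \varphi$ and $w' \ge w$, then $w' \Vdash \varphi$.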


2014, Vol 7 (3), pp. 548-578
Author(s): Walter Carnielli, Marcelo E. Coniglio, Rodrigo Podiacki, Tarcísio Rodrigues

Abstract: This paper investigates the question of characterizing first-order LFIs (logics of formal inconsistency) by means of two-valued semantics. LFIs are powerful paraconsistent logics that encode classical logic and permit a finer distinction between contradictions and inconsistencies, with a deep involvement in philosophical and foundational questions. Although focused on just one particular case, namely the quantified logic QmbC, the method proposed here is completely general for this kind of logic, and can be easily extended to a large family of quantified paraconsistent logics, supplying a sound and complete semantical interpretation for such logics. However, certain subtleties involving term substitution and replacement, which are hidden in classical structures, have to be taken into account when one ventures into the realm of nonclassical reasoning. This paper shows how such difficulties can be overcome, and offers detailed proofs showing that a smooth treatment of semantical characterization can be given to all such logics. Although the paper is rich in technical details and results, it has a significant philosophical aside: it shows how slight extensions of classical methods can be used to construct the basic model theory of logics that are weaker than traditional logic due to the absence of certain rules present in classical logic. Several such logics, however, as in the case of the LFIs treated here, are notorious for their wealth of models precisely because they do not make indiscriminate use of certain rules; these models thus require new methods. In the case of this paper, by just appealing to a refined version of the Principle of Explosion, or Pseudo-Scotus, some new constructions and crafty solutions to certain nonobvious subtleties are proposed. The result is that a richer extension of model theory can be inaugurated, with interest not only for paraconsistency, but hopefully for other enlargements of traditional logic.
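For orientation, the refinement of the Principle of Explosion at work in LFIs can be sketched as follows; this is the standard "gentle explosion" schema of the propositional base mbC, stated from general knowledge of the LFI literature rather than quoted from the paper, with ∘ read as the consistency operator.

    % Classical explosion (Pseudo-Scotus), rejected as a general rule in LFIs:
    \[
      \alpha,\ \neg\alpha \vdash \beta \qquad \text{(fails in general in mbC/QmbC)}
    \]
    % "Gentle" explosion, retained: a contradiction explodes only in the presence
    % of the consistency assumption $\circ\alpha$.
    \[
      \circ\alpha,\ \alpha,\ \neg\alpha \vdash \beta
    \]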

