Shadows of Syntax

Author(s):  
Jared Warren

What is the source of logical and mathematical truth? This volume revitalizes conventionalism as an answer to this question. Conventionalism takes logical and mathematical truth to have its source in linguistic conventions. This was an extremely popular view in the early 20th century, but it was never worked out in detail and is now almost universally rejected in mainstream philosophical circles. In Shadows of Syntax, Jared Warren offers the first book-length treatment and defense of a combined conventionalist theory of logic and mathematics. He argues that our conventions, in the form of syntactic rules of language use, are perfectly suited to explain the truth, necessity, and apriority of logical and mathematical claims. In Part I, Warren explains exactly what conventionalism amounts to and what linguistic conventions are. Part II develops an unrestricted inferentialist theory of the meanings of logical constants that leads to logical conventionalism. This conventionalist theory is elaborated in discussions of logical pluralism, the epistemology of logic, and the influential objections that led to the historical demise of conventionalism. Part III aims to extend conventionalism from logic to mathematics. Unlike logic, mathematics involves both ontological commitments and a rich notion of truth that cannot be generated by any algorithmic process. To address these issues, Warren develops conventionalist-friendly but independently plausible theories of both metaontology and mathematical truth. Finally, Part IV steps back to address big-picture worries and meta-worries about conventionalism. This book develops and defends a unified theory of logic and mathematics according to which logical and mathematical truths are reflections of our linguistic rules, mere shadows of syntax.

Author(s):  
Rosanna Keefe ◽  
Jessica Leech

According to an increasingly popular view, the source of logical necessity is to be found in the essences of logical entities. One might be tempted to extend the view further by using it to tackle fundamental questions surrounding logical consequence. This chapter enquires: how does a view according to which the facts about logical consequence are determined by the essences of logical entities look in detail? Are there any more or less obvious problems arising for such a view? The chapter uncovers a prima facie result in favour of logical pluralism. However, it then goes on to raise some concerns for this result. It argues that, considered generally, it is difficult to see how essence could do all of the requisite work alone. The chapter also shows how considering things from the perspective of disputes between particular rival logics makes an interesting and important difference to the picture of things presented by the essentialist account.


1990 ◽  
Vol 83 (4) ◽  
pp. 258-262
Author(s):  
Maurice J. Burke

Informal methods for discovering and demonstrating geometric principles are commonly used in mathematics classrooms. This article demonstrates an informal method that I have used successfully in workshops and mathematics classes for the past five years. It helps to show that spatial visualization and analogy can be useful informal tools. The article also recommends a cluster approach when studying propositions of informal geometry.


1937 ◽  
Vol 2 (2) ◽  
pp. 78-81
Author(s):  
Arnold F. Emch

Inasmuch as my discussion of the formal properties of “System L,” presented elsewhere, is substantially the same as that presented by Lewis in a previous issue of this Journal, I shall confine my remarks in this brief rejoinder to the problem of deducibility with respect to necessary and impossible propositions—since it is specifically the solution of this problem which will determine the adequacy of the calculus of logical implication as a canon and critique of deductive inference.

The problem, as originally presented, can be stated succinctly in the following inconsistent triad: (1) All necessarily true principles are logically dependent on or deducible from one another, i.e., are equivalent; (2) All principles of logic and mathematics are necessarily true; and (3) Some principles of logic and mathematics are not logically dependent on or deducible from one another, i.e., are independent. These three propositions are logically inconsistent in the sense that the conjunction of any two of them will logically imply the falsity of the third. Now if Lewis should affirm the first and third of these propositions, then he should concede the falsity of the second. This concession, it was believed, he was not likely to make, inasmuch as it would seem to involve a repudiation of a doctrine (the tautological character of every logical and mathematical truth) with which he has particularly identified himself. If, in view of this fact, he should accept, instead, the second and third of these propositions, then he should concede the falsity of the first.
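
One way to make the inconsistency explicit is with a schematic formalization (this reading is mine, not Emch's): let $N(p)$ say that $p$ is necessarily true, let $D(p,q)$ say that $p$ and $q$ are deducible from one another, and let $M(p)$ say that $p$ is a principle of logic or mathematics. The triad then reads

$$(1)\ \forall p\,\forall q\,\big((N(p)\land N(q))\rightarrow D(p,q)\big),\qquad (2)\ \forall p\,\big(M(p)\rightarrow N(p)\big),\qquad (3)\ \exists p\,\exists q\,\big(M(p)\land M(q)\land \neg D(p,q)\big).$$

From (1) and (2) it follows that any two principles of logic and mathematics are interdeducible, contradicting (3); by symmetry, the conjunction of any two members of the triad entails the negation of the third.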


Author(s):  
Alex Orenstein

Quine is the foremost representative of naturalism in the second half of the twentieth century. His naturalism consists of an insistence upon a close connection or alliance between philosophical views and those of the natural sciences. Philosophy so construed is an activity within nature wherein nature examines itself. This contrasts with views which distinguish philosophy from science and place philosophy in a special transcendent position for gaining special knowledge. The methods of science are empirical; so Quine, who operates within a scientific perspective, is an empiricist, but with a difference. Traditional empiricism, as in Locke, Berkeley, Hume and some twentieth-century forms, takes impressions, ideas or sense-data as the basic units of thought. Quine’s empiricism, by contrast, takes account of the theoretical as well as the observational facets of science. The unit of empirical significance is not simple impressions (ideas) or even isolated individual observation sentences, but systems of beliefs. The broad theoretical constraints for choice between theories, such as explanatory power, parsimony, precision and so on, are foremost in this empiricism. He is a fallibilist, since he holds that each individual belief in a system is in principle revisable.

Quine proposes a new conception of observation sentences and a naturalized account of our knowledge of the external world, including a rejection of a priori knowledge, and he extends the same empiricist and fallibilist account to our knowledge of logic and mathematics. Quine confines logic to first-order logic and clearly demarcates it from set theory and mathematics. These are all empirical subjects when empiricism is understood in its Quinian form; they are internal to the system of beliefs that makes up the natural sciences. The language of first-order logic serves as a canonical notation in which to express our ontological commitments. The slogan ‘To be is to be the value of a variable’ ([1953] 1961: 15) encapsulates this project. Deciding which ontology to accept is also carried out within the naturalistic constraints of empirical science: our ontological commitments should be to those objects to which the best scientific theories commit us. On this basis Quine’s own commitments are to physical objects and sets. Quine is a physicalist and a Platonist, since the best sciences require physical objects and the mathematics involved in the sciences requires abstract objects, namely, sets.

The theory of reference (which includes notions such as reference, truth and logical truth) is sharply demarcated from the theory of meaning (which includes notions such as meaning, synonymy, the analytic–synthetic distinction and necessity). Quine is the leading critic of notions from the theory of meaning, arguing that attempts to draw a distinction between merely linguistic (analytic) truths and more substantive (synthetic) truths have failed: such notions do not meet the standards of precision to which scientific and philosophical theories adhere, and which are met in the theory of reference. He explores the limits of an empirical theory of language and offers the thesis of the indeterminacy of translation as a further criticism of the theory of meaning.
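
The slogan can be illustrated with a standard textbook example (not drawn from Orenstein's entry): regimenting "there are prime numbers" into canonical notation yields

$$\exists x\,\mathrm{Prime}(x),$$

and anyone who affirms the regimented sentence is thereby committed to numbers, since the sentence is true only if some number figures among the values of the bound variable $x$.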


2021 ◽  
pp. 1-19
Author(s):  
Benjamin Marschall

Rudolf Carnap’s principle of tolerance states that there is no need to justify the adoption of a logic by philosophical means. Carnap uses the freedom provided by this principle in his philosophy of mathematics: he wants to capture the idea that mathematical truth is a matter of linguistic rules by relying on a strong metalanguage with infinitary inference rules. In this paper, I give a new interpretation of an argument by E. W. Beth, which shows that the principle of tolerance does not suffice to remove all obstacles to the employment of infinitary rules.
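
The paradigm of such an infinitary rule, and the one standardly associated with Carnap's strong metalanguage, is the ω-rule, which draws a universal conclusion about the natural numbers from infinitely many premises:

$$\frac{\varphi(0),\ \varphi(1),\ \varphi(2),\ \ldots}{\forall n\,\varphi(n)}$$

Because the rule has infinitely many premises, no finite derivation can apply it, which is the feature that makes its employment philosophically contentious.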


2020 ◽  
pp. 197-208
Author(s):  
Jared Warren

Part II (chapters 3-7) of the book developed and defended an inferentialist/conventionalist theory of logic. In this, the opening chapter of part III, it is explained why extending part II’s approach from logic to mathematics faces significant philosophical challenges. The first major challenge concerns the ontological commitments of mathematics. It is received wisdom in philosophy that existence claims cannot be analytic or trivially true, making it difficult to see how a conventionalist account of mathematics could possibly be viable. The second major challenge concerns mathematical truth. Unlike (first-order) logical truth, mathematical truth, even in basic arithmetic, is computationally rich. In light of this feature, there are serious challenges for conventionalists in trying to capture our intuition that mathematical truth is fully determinate.
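
The contrast can be put in standard recursion-theoretic terms (a gloss, not the chapter's own formulation): by Gödel's completeness theorem, the set of first-order logical truths is recursively enumerable, whereas by Gödel's incompleteness theorems the set of arithmetic truths is not, and by Tarski's theorem it is not even arithmetically definable:

$$\mathrm{Val}_{\mathrm{FOL}}\in\Sigma^0_1,\qquad \mathrm{Th}(\mathbb{N};0,S,+,\times)\notin\Sigma^0_1.$$

So while first-order logical truth can in principle be generated by an algorithmic proof procedure, arithmetic truth cannot.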


2020 ◽  
pp. 3-20
Author(s):  
Jared Warren

This chapter begins by briefly discussing the historical fortunes of conventionalism in logic and mathematics. After this, it clarifies the nature of conventionalism, critically discussing various ways to understand the view before arguing that conventionalism is best understood as an explanatory claim: logical and mathematical facts in any language are fully explained by the linguistic conventions of that language. The chapter then turns to the notion of “explanation” itself, arguing that conventionalists don’t need to make any controversial assumptions about the nature of explanation. Finally, the chapter discusses what makes a putative explanation acceptable to naturalists, singling out two key constraints on naturalistic explanations (one metaphysical and the other cognitive).


2008 ◽  
Vol 17 (3) ◽  
pp. 87-92
Author(s):  
Leonard L. LaPointe

Loss of implicit linguistic competence assumes a loss of linguistic rules, necessary linguistic computations, or representations. In aphasia, the inherent neurological damage is frequently assumed to be a loss of implicit linguistic competence that has damaged or wiped out the neural centers or pathways necessary for maintaining the language rules and representations needed to communicate. Not everyone agrees with this view of language use in aphasia. The measurement of implicit language competence, although apparently necessary and satisfying for theoretical linguistics, is complexly interwoven with performance factors. Transience, stimulability, and variability in aphasic language use provide evidence for an access-deficit model that supports performance loss. Advances in understanding linguistic competence and performance may be informed by careful study of bilingual language acquisition and loss, the language of savants, the language of feral children, and advances in neuroimaging. Social models of aphasia treatment, coupled with an access-deficit view of aphasia, can salve our restless minds and allow pursuit of maximum interactive communication goals even without a comfortable explanation of implicit linguistic competence in aphasia.


JAMA ◽  
1965 ◽  
Vol 194 (3) ◽  
pp. 269-272
Author(s):  
J. T. Apter