Introducing Identity

Author(s):  
Owen Griffiths, Arif Ahmed

Abstract: The best-known syntactic account of the logical constants is inferentialism. Following Wittgenstein's thought that meaning is use, inferentialists argue that meanings of expressions are given by introduction and elimination rules. This is especially plausible for the logical constants, where standard presentations divide inference rules in just this way. But not just any rules will do, as we've learnt from Prior's famous example of tonk, and the usual extra constraint is harmony. Where does this leave identity? It's usually taken as a logical constant but it doesn't seem harmonious: standardly, the introduction rule (reflexivity) only concerns a subset of the formulas canvassed by the elimination rule (Leibniz's law). In response, Read [5, 8] and Klev [3] amend the standard approach. We argue that both attempts fail, in part because of a misconception regarding inferentialism and identity that we aim to identify and clear up.
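For orientation, here is a minimal sketch of the standard natural-deduction rules for identity, which makes the alleged disharmony visible: the introduction rule yields only self-identities, while the elimination rule licenses substitution of identicals in arbitrary formulas.

```latex
% Standard rules for identity (natural-deduction style).
% =I (reflexivity): any self-identity may be asserted outright.
% =E (Leibniz's law): identicals may be substituted in any context phi.
\[
  \frac{}{\;t = t\;}\,{=}\mathrm{I}
  \qquad\qquad
  \frac{t = u \qquad \varphi(t)}{\varphi(u)}\,{=}\mathrm{E}
\]
% The mismatch: =I only ever delivers formulas of the form t = t, yet =E
% draws consequences from identities t = u between arbitrary terms, so the
% elimination rule appears to outrun what the introduction rule warrants.
```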

2014, Vol 7 (3), pp. 499–510
Author(s):
Owen Griffiths

Abstract: Logical inferentialists claim that the meanings of the logical constants are given by their inference rules. To rule out tonk-like expressions, it is often demanded that pairs of inference rules must be harmonious. The usual inference rules for the identity predicate are not harmonious, but most inferentialists want identity to be logical. Stephen Read has tried to formulate alternative, harmonious inference rules for identity. It will be proved, however, that his rules are precisely as strong as the old rules and that, because the old rules are not harmonious (as Read argues), nor are his. Further, it will be shown that no sound rules will be any improvement. Identity remains in need of satisfactory inferentialist treatment.
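Read's replacement introduction rule is, roughly, a schematic congruence rule. The sketch below is only our reconstruction of its shape (the schematic predicate letter F and the side condition are as we understand Read's proposal, not a quotation of it), paired with the familiar elimination rule.

```latex
% Rough reconstruction of a Read-style introduction rule for identity:
% infer a = b from a derivation of Fb from the assumption Fa, where F is a
% schematic predicate letter occurring in no other undischarged assumption.
\[
  \frac{\begin{array}{c} [Fa] \\ \vdots \\ Fb \end{array}}{a = b}\;{=}\mathrm{I}\ (F\ \text{schematic})
  \qquad\qquad
  \frac{a = b \qquad Fa}{Fb}\;{=}\mathrm{E}
\]
% The paper's point: these rules prove exactly the same sequents as
% reflexivity plus Leibniz's law, so if the old rules are disharmonious,
% the new ones fare no better.
```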


Author(s):  
Denis Bonnay, Benjamin Simmenauer

What is a logical constant? In which terms should we characterize the meaning of logical words like “and”, “or”, “implies”? An attractive answer is: in terms of their inferential roles, i.e. in terms of the role they play in building inferences. More precisely, we favor an approach, going back to Došen and Sambin, in which the inferential role of a logical constant is captured by a double line rule which introduces it as reflecting structural links (for example, multiplicative conjunction reflects the comma on the right of the turnstile). Rule-based characterizations of logical constants are subject to the well-known objection of Prior’s fake connective, tonk. We show that some double line rules also give rise to such pseudo logical constants. But then, we are able to identify a property of double line rules which guarantees that a rule defines a genuine logical constant. Thus we provide an alternative to Belnap’s requirement of conservativity in terms of a local requirement on double line rules.
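A standard illustration of the double-line format, going back to Došen: read downward, the rule introduces the constant; read upward, it recovers the structural link that the constant reflects. The instance below, for implication reflecting the comma on the left of the turnstile, is only an illustrative example and not the authors' own analysis.

```latex
% Double-line rule for implication (illustrative instance): "->" reflects
% the structural comma on the left of the turnstile. The double line is
% rendered here as a two-way passage: downward it introduces ->, upward
% it eliminates it.
\[
  \Gamma, A \vdash B
  \;\;\Longleftrightarrow\;\;
  \Gamma \vdash A \rightarrow B
\]
```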


Author(s):  
Erik C.W. Krabbe

Dialogical logic characterizes logical constants (such as ‘and’, ‘or’, ‘for all’) by their use in a critical dialogue between two parties: a proponent who has asserted a thesis and an opponent who challenges it. For each logical constant, a rule specifies how to challenge a statement that displays the corresponding logical form, and how to respond to such a challenge. These rules are incorporated into systems of regimented dialogue that are games in the game-theoretical sense. Dialogical concepts of logical consequence can then be based upon the concept of a winning strategy in a (formal) dialogue game: B is a logical consequence of A if and only if there is a winning strategy for the proponent of B against any opponent who is willing to concede A. But it should be stressed that there are several plausible (and non-equivalent) ways to draw up the rules.
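As an illustration of how such rules typically look, here is a textbook-style sketch of particle rules in the Lorenzen tradition (not Krabbe's exact formulation): for each logical form, the table gives the permitted challenge and the permitted defence.

```latex
% Sketch of standard dialogical particle rules (textbook form):
% how a statement of each form may be challenged and then defended.
\begin{tabular}{lll}
  Assertion & Challenge & Defence \\
  \hline
  $A \wedge B$      & $?L$ or $?R$ (challenger chooses)   & $A$, resp.\ $B$ \\
  $A \vee B$        & $?$                                 & $A$ or $B$ (defender chooses) \\
  $A \rightarrow B$ & assert $A$                          & assert $B$ \\
  $\forall x\,A$    & $?t$ (challenger picks a term $t$)  & $A[t/x]$ \\
\end{tabular}
```

Winning-strategy talk then sits on top of these local rules: B follows from A when the proponent of B can win every dialogue against an opponent prepared to concede A.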


1987, Vol 52 (1), pp. 89–110
Author(s):  
M. W. Bunder

It is well known that combinatory logic with unrestricted introduction and elimination rules for implication is inconsistent in the strong sense that an arbitrary term Y is provable. The simplest proof of this, now usually called Curry's paradox, involves, for an arbitrary term Y, a term X defined by X = Y(CPY). The fact that X = PXY = X ⊃ Y is an essential part of the proof. The paradox can be avoided by placing restrictions on the implication introduction rule or on the axioms from which it can be proved.

In this paper we determine the forms that must be taken by inconsistency proofs of systems of propositional calculus based on combinatory logic, with arbitrary restrictions on both the introduction and elimination rules for the connectives. Generally such a proof will involve terms without normal form and cut formulas, i.e. formulas formed by an introduction rule that are immediately removed by an elimination, with at most some equality steps intervening. (Such a sequence of steps we call a “cut”.)

The above applies not only to the strong form of inconsistency defined above, but also to the weak form of inconsistency defined by: all propositions are provable, if this can be represented in the system. Any inconsistency proof of this kind of system can be reduced to one where the major premise of the elimination rule involved in the cut, and its deduction, must also appear in the deduction of the minor premise involved in the cut.

Using this characterization of inconsistency proofs, we can put appropriate restrictions on certain introduction rules so that the systems, including a full classical propositional one, become provably consistent.
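For orientation, the core of Curry's paradox can be laid out as follows. The sketch assumes only what the abstract states: a term X satisfying X = PXY = X ⊃ Y, together with unrestricted ⊃-introduction and ⊃-elimination.

```latex
% Curry's paradox, sketched: from a term X with X = X \supset Y, the
% unrestricted implication rules prove the arbitrary term Y.
\[
\begin{array}{lll}
  1. & [X]         & \text{assumption} \\
  2. & X \supset Y & \text{from 1, since } X = X \supset Y \\
  3. & Y           & \text{from 2, 1 by } \supset\text{-elimination} \\
  4. & X \supset Y & \text{from 1--3 by } \supset\text{-introduction, discharging } X \\
  5. & X           & \text{from 4, since } X = X \supset Y \\
  6. & Y           & \text{from 4, 5 by } \supset\text{-elimination}
\end{array}
\]
% The formula X \supset Y introduced at step 4 is immediately the major
% premise of the elimination at step 6 (the equality step at 5 merely
% supplies the minor premise): an introduction removed at once by an
% elimination, i.e. a "cut" in the sense characterized above.
```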


Author(s):  
Neil Tennant

We explicate the different ways that a first-order sentence can be true (resp., false) in a model M, as formal objects, called (M-relative) truth-makers (resp., falsity-makers). M-relative truth-makers and falsity-makers are co-inductively definable, by appeal to the “atomic facts” in M, and to certain rules of verification and of falsification, collectively called rules of evaluation. Each logical operator has a rule of verification, much like an introduction rule; and a rule of falsification, much like an elimination rule. Applications of the rules (∀) and (∃) involve infinite furcation when the domain of M is infinite. But even in the infinite case, truth-makers and falsity-makers are tree-like objects whose branches are at most finitely long. A sentence φ is true (resp., false) in a model M (in the sense of Tarski) if and only if there exists π such that π is an M-relative truth-maker (resp., falsity-maker) for φ. With “ways of being true” explicated as these logical truth-makers, one can re-conceive logical consequence between given premises and a conclusion. It obtains just in case there is a suitable method for transforming M-relative truth-makers for the premises into an M-relative truth-maker for the conclusion, whatever the model M may be.
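As a quick illustration of the pattern of the evaluation rules (the notation is ours, not the paper's: V_M collects the M-relative truth-makers of a sentence), here is how a verification rule for conjunction and the infinitely furcating rule for the universal quantifier might be sketched.

```latex
% Sketch of two verification rules relative to a model M with domain D
% (notation ours, not the paper's).
%
% Conjunction: pair a truth-maker for each conjunct.
\[
  \frac{\pi_1 : V_M(\varphi) \qquad \pi_2 : V_M(\psi)}
       {(\pi_1, \pi_2) : V_M(\varphi \wedge \psi)}
\]
% Universal quantifier: one truth-maker per instance, hence infinitely many
% immediate subtrees when D is infinite; every branch of the resulting tree
% is nonetheless finitely long.
\[
  \frac{\pi_a : V_M(\varphi(a)) \ \text{ for each } a \in D}
       {\langle \pi_a \rangle_{a \in D} : V_M(\forall x\,\varphi(x))}
\]
```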


1966, Vol 27 (1), pp. 331–353
Author(s):  
Katuzi Ono

The PRIMITIVE LOGIC LO introduced in my former work is a logic having only two logical constants, IMPLICATION → and UNIVERSAL QUANTIFICATION ( ), with their usual inference rules, which are admitted even in the INTUITIONISTIC PREDICATE LOGIC LJ. LO is really a very simple logic, perhaps the simplest possible logic one can imagine, but it is very important because of its universal character. In fact, popular logics such as the LOWER CLASSICAL PREDICATE LOGIC LK, the INTUITIONISTIC PREDICATE LOGIC LJ, the MINIMAL PREDICATE LOGIC LM, etc. can be faithfully interpreted in it. Speaking frankly, I further expect that all the important logics can be interpreted faithfully in it and would disclose their intrinsic characteristics by being so interpreted. The main purpose of this paper is to show the universal character of the primitive logic LO by pointing out that a series of typical logics are faithfully interpretable in LO.
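The notion of faithful interpretation at work here can be put schematically as follows (a standard formulation; the translation map τ is schematic and not Ono's particular construction).

```latex
% Faithful interpretation, schematically: a translation tau of L-formulas
% into LO-formulas is faithful when provability is both preserved and
% reflected.
\[
  \Gamma \vdash_{L} A
  \quad\Longleftrightarrow\quad
  \tau(\Gamma) \vdash_{LO} \tau(A)
\]
% In this sense LK, LJ, LM, etc. each "live inside" LO via a suitable tau,
% which is what the claimed universal character of LO amounts to.
```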


2000, Vol 65 (3), pp. 979–1013
Author(s):  
Giovanni Sambin, Giulia Battilotti, Claudia Faggian

Abstract: We introduce a sequent calculus B for a new logic, named basic logic. The aim of basic logic is to find a structure in the space of logics. Classical, intuitionistic, quantum and non-modal linear logics are all obtained as extensions in a uniform way and in a single framework. We isolate three properties which characterize B positively: reflection, symmetry and visibility.

A logical constant obeys the principle of reflection if it is characterized semantically by an equation binding it with a metalinguistic link between assertions, and if its syntactic inference rules are obtained by solving that equation. All connectives of basic logic satisfy reflection.

To the control of weakening and contraction of linear logic, basic logic adds a strict control of contexts, by requiring that all active formulae in all rules are isolated, that is, visible. From visibility, cut-elimination follows. The full, geometric symmetry of basic logic induces known symmetries of its extensions, and adds a symmetry among them, producing the structure of a cube.
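To illustrate the principle of reflection with a familiar case (the example is ours and simplified; basic logic's official rules also enforce the visibility constraint on contexts): a conjunction-like connective & is bound by a definitional equation to the metalinguistic "and" between assertions, and its inference rules are read off by solving that equation.

```latex
% Reflection, illustrated with a conjunction-like connective & (example
% ours): the definitional equation ties & to the metalinguistic "and".
\[
  \Gamma \vdash A \,\&\, B
  \quad\Longleftrightarrow\quad
  \Gamma \vdash A \ \text{ and } \ \Gamma \vdash B
\]
% "Solving" the equation: the right-to-left direction yields the formation
% (introduction) rule, the left-to-right direction the reflection
% (elimination) rules.
\[
  \frac{\Gamma \vdash A \qquad \Gamma \vdash B}{\Gamma \vdash A \,\&\, B}
  \qquad
  \frac{\Gamma \vdash A \,\&\, B}{\Gamma \vdash A}
  \qquad
  \frac{\Gamma \vdash A \,\&\, B}{\Gamma \vdash B}
\]
```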


1988, Vol 53 (3), pp. 673–695
Author(s):  
Sidney C. Bailin

In this paper we present a normalization theorem for a natural deduction formulation of Zermelo set theory. Our result gets around M. Crabbé's counterexample to normalizability (Hallnäs [3]) by adding an inference rule of the form … and requiring that this rule be used wherever it is applicable. Alternatively, we can regard the result as pertaining to a modified notion of normalization, in which an inference … is never considered reducible if A is T ∈ T, even if R is an elimination rule and the major premise of R is the conclusion of an introduction rule. A third alternative is to regard (1) as a derived rule: using the general well-foundedness rule … we can derive (1). If we regard (2) as neutral with respect to the normality of derivations (i.e., (2) counts as neither an introduction nor an elimination rule), then the resulting proofs are normal.


1958, Vol 23 (3), pp. 289–308
Author(s):  
W. Craig, R. L. Vaught

By a theory we shall always mean one of first order, having finitely many non-logical constants. Then for theories with identity (as a logical constant, the theory being closed under deduction in first-order logic with identity), and likewise for theories without identity, one may distinguish the following three notions of axiomatizability. First, a theory may be recursively axiomatizable or, as we shall say simply, axiomatizable. Second, a theory may be finitely axiomatizable using additional predicates (f.a.+), in the syntactical sense introduced by Kleene [9]. Finally, the italicized phrase may also be interpreted semantically. The resulting notion will be called s.f.a.+. It is closely related to the model-theoretic notion PC introduced by Tarski [16], or rather, more strictly speaking, to PC∩ACδ.

For arbitrary theories with or without identity, it is easily seen that s.f.a.+ implies f.a.+, and it is known that f.a.+ implies axiomatizability. Thus it is natural to ask under what conditions the converse implications hold, since then the notions concerned coincide and one can pass from one to the other.

Kleene [9] has shown: (1) For arbitrary theories without identity, axiomatizability implies f.a.+. It also follows from his work that: (2) For theories with identity which have only infinite models, axiomatizability implies f.a.+.
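Summarizing the relations just stated between the three notions:

```latex
% The three notions of axiomatizability and the implications noted above:
% semantic finite axiomatizability with additional predicates (s.f.a.+)
% implies the syntactic notion (f.a.+), which in turn implies (recursive)
% axiomatizability.
\[
  \text{s.f.a.}^{+} \;\Longrightarrow\; \text{f.a.}^{+} \;\Longrightarrow\; \text{axiomatizable}
\]
% Kleene's results supply partial converses: axiomatizability already
% implies f.a.+ for theories without identity, and for theories with
% identity that have only infinite models.
```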


2020, pp. 108–116
Author(s):  
Paul Boghossian, Timothy Williamson

In response to Boghossian’s objections in Chapter 6, this chapter defends counterexamples offered by Paolo Casalegno and the author to an inferentialist account of what it is to understand a logical constant, on which Boghossian relied in his explanation of our entitlement to reason according to basic logical principles. The importance for understanding of non-inferential aspects of the use of logical constants is stressed, for example in the description of a perceived scene. Boghossian’s criteria for individuating concepts are also queried, as is the viability of hybrid accounts that mix inferential accounts of the use of some terms with non-inferential accounts of other terms.

