What is an inference rule?

1992 ◽  
Vol 57 (3) ◽  
pp. 1018-1045 ◽  
Author(s):  
Ronald Fagin ◽  
Joseph Y. Halpern ◽  
Moshe Y. Vardi

Abstract What is an inference rule? This question does not have a unique answer. One usually finds two distinct standard answers in the literature: validity inference (σ ⊢ᵥ φ if for every substitution τ, the validity of τ[σ] entails the validity of τ[φ]), and truth inference (σ ⊢ₜ φ if for every substitution τ, the truth of τ[σ] entails the truth of τ[φ]). In this paper we introduce a general semantic framework that allows us to investigate the notion of inference more carefully. Validity inference and truth inference are in some sense the extremal points in our framework. We investigate the relationship between various types of inference in our general framework, and consider the complexity of deciding whether an inference rule is sound, in the context of a number of logics of interest: classical propositional logic, a nonstandard propositional logic, various propositional modal logics, and first-order logic.
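
To make the contrast concrete, the two soundness notions can be written out as follows, together with the standard example separating them (a reconstruction from the abstract's definitions, not the paper's own notation):

```latex
% Validity inference: substitution instances of a valid premise
% have valid conclusions.
\sigma \vdash_v \varphi \;\iff\; \text{for every substitution } \tau:\;
  \models \tau[\sigma] \;\Rightarrow\; \models \tau[\varphi]

% Truth inference: truth is preserved pointwise, at every model.
\sigma \vdash_t \varphi \;\iff\; \text{for every } \tau \text{ and model } M:\;
  M \models \tau[\sigma] \;\Rightarrow\; M \models \tau[\varphi]

% A standard separating example is modal necessitation: from \varphi infer
% \Box\varphi. It is validity-sound but not truth-sound, since \varphi can
% be true at a world without \Box\varphi being true there.
```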

2010 ◽  
Vol 3 (2) ◽  
pp. 175-227 ◽  
Author(s):  
PETER MILNE

Various natural deduction formulations of classical, minimal, intuitionist, and intermediate propositional and first-order logics are presented and investigated with respect to satisfaction of the separation and subformula properties. The technique employed is, for the most part, semantic, based on general versions of the Lindenbaum and Lindenbaum–Henkin constructions. Careful attention is paid (i) to which properties of theories result in the presence of which rules of inference, and (ii) to restrictions on the sets of formulas to which the rules may be applied, restrictions determined by the formulas occurring as premises and conclusion of the invalid inference for which a counterexample is to be constructed. We obtain an elegant formulation of classical propositional logic with the subformula property and a singularly inelegant formulation of classical first-order logic with the subformula property, the latter, unfortunately, not a product of the strategy otherwise used throughout the article. Along the way, we arrive at an optimal strengthening of the subformula results for classical first-order logic obtained as consequences of normalization theorems by Dag Prawitz and Gunnar Stålmarck.
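
For reference, the two properties under investigation can be stated as follows (standard formulations, not quotations from the article):

```latex
% Subformula property: every formula occurring in a derivation of \varphi
% from assumptions \Gamma is a subformula of \varphi or of some member
% of \Gamma.
%
% Separation property: any provable inference has a proof using only the
% rules governing the connectives and quantifiers actually occurring in
% its premises and conclusion.
```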


Author(s):  
Gabriele Pulcini

Abstract In Schwichtenberg (Studies in logic and the foundations of mathematics, vol 90, Elsevier, pp 867–895, 1977), Schwichtenberg fine-tuned Tait’s technique (Tait in The syntax and semantics of infinitary languages, Springer, pp 204–236, 1968) so as to provide a simplified version of Gentzen’s original cut-elimination procedure for first-order classical logic (Gallier in Logic for computer science: foundations of automatic theorem proving, Courier Dover Publications, London, 2015). In this note we show that, limited to the case of classical propositional logic, the Tait–Schwichtenberg algorithm allows for a further simplification. The procedure offered here is implemented on Kleene’s sequent system G4 (Kleene in Mathematical logic, Wiley, New York, 1967; Smullyan in First-order logic, Courier corporation, London, 1995). The specific formulation of the logical rules for G4 allows us to provide bounds on the height of cut-free proofs just in terms of the logical complexity of their end-sequent.
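
The rule eliminated by such procedures is the cut rule; in a standard sequent-calculus formulation (not specific to G4):

```latex
% The cut formula A appears in the premises but not in the conclusion,
% which is what breaks the subformula property and motivates elimination.
\frac{\Gamma \vdash \Delta, A \qquad A, \Gamma' \vdash \Delta'}
     {\Gamma, \Gamma' \vdash \Delta, \Delta'}\ (\mathrm{cut})
```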


1985 ◽  
Vol 50 (2) ◽  
pp. 451-457 ◽  
Author(s):  
Ian Mason

In this paper we investigate the first-order metatheory of classical propositional logic. In the first section we prove that the first-order metatheory of classical propositional logic is undecidable. Thus as a mathematical object even the simplest of logics is, from a logical standpoint, quite complex. In fact it is of the same complexity as true first-order number theory. This result answers negatively a question of J. F. A. K. van Benthem (see [van Benthem and Doets 1983]) as to whether the interpolation theorem in some sense completes the metatheory of the calculus. Let us begin by motivating the question that we answer. In [van Benthem and Doets 1983] it is claimed that a folklore prejudice has it that interpolation was the final elementary property of first-order logic to be discovered. Even though other properties of the propositional calculus have been discovered since Craig's original paper [Craig 1957] (see for example [Reznikoff 1965]), there is a lot of evidence for the fundamental nature of the property. In abstract model theory, for example, one finds that very few logics have the interpolation property. There are two well-known open problems in this area:

1. Is there a logic satisfying the full compactness theorem as well as the interpolation theorem that is not equivalent to first-order logic even for finite models?

2. Is there a logic stronger than L(Q), the logic with the quantifier "there exist uncountably many", that is countably compact and has the interpolation property?
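
For context, Craig's interpolation theorem, the property under discussion, reads as follows in its propositional form (standard statement, allowing ⊤ and ⊥ as interpolants):

```latex
% If \models \varphi \to \psi, then there is an interpolant \chi built
% only from propositional variables common to \varphi and \psi with
\models \varphi \to \chi \quad\text{and}\quad \models \chi \to \psi
```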


Author(s):  
Tim Button ◽  
Sean Walsh

Chapters 6-12 are driven by questions about the ability to pin down mathematical entities and to articulate mathematical concepts. This chapter is driven by similar questions about the ability to pin down the semantic frameworks of language. It transpires that there are not just non-standard models, but non-standard ways of doing model theory itself. In more detail: whilst we normally outline a two-valued semantics which makes sentences True or False in a model, the inference rules for first-order logic are compatible with a four-valued semantics; or a semantics with countably many values; or what-have-you. The appropriate level of generality here is that of a Boolean-valued model, which we introduce. And the plurality of possible semantic values gives rise to perhaps the ‘deepest’ level of indeterminacy questions: How can humans pin down the semantic framework for their languages? We consider three different ways for inferentialists to respond to this question.
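
As an illustration of the point about many-valued semantics, here is a minimal sketch (my own, not the authors') of a four-valued Boolean-valued semantics. The four-element Boolean algebra is the product of two copies of the two-element one, so the operations act coordinatewise and every classically valid inference rule remains sound:

```python
from itertools import product

# The four-element Boolean algebra as pairs of classical truth values:
# (0,0) = bottom, (0,1), (1,0), (1,1) = top.
VALUES = list(product([0, 1], repeat=2))
TOP = (1, 1)

def neg(a):
    return tuple(1 - x for x in a)

def conj(a, b):
    return tuple(min(x, y) for x, y in zip(a, b))

def disj(a, b):
    return tuple(max(x, y) for x, y in zip(a, b))

def implies(a, b):
    return disj(neg(a), b)

# Example: modus ponens is sound -- whenever p and (p -> q) both take
# the top value, so does q.
for p, q in product(VALUES, repeat=2):
    if p == TOP and implies(p, q) == TOP:
        assert q == TOP
print("modus ponens is sound in the 4-valued semantics")
```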


2020 ◽  
Author(s):  
Michał Walicki

Abstract Graph normal form, introduced earlier for propositional logic, is shown to be a normal form also for first-order logic. It allows us to view the syntax of theories as digraphs and their semantics as kernels of these digraphs. Graphs are particularly well suited for studying circularity, and we provide some general means for verifying that circular or apparently circular extensions are conservative. Traditional syntactic means of ensuring conservativity, like definitional extensions or positive occurrences guaranteeing existence of fixed points, emerge as special cases.
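
A kernel of a digraph is an independent set K such that every vertex outside K has an edge into K. A brute-force sketch of the notion (my own illustration; kernel existence is NP-complete in general, so this is for small examples only):

```python
from itertools import combinations

def is_kernel(nodes, edges, k):
    """k is a kernel: independent, and every outside vertex reaches into k."""
    k = set(k)
    independent = all(not (u in k and v in k) for u, v in edges)
    absorbing = all(any((u, v) in edges and v in k for v in nodes)
                    for u in nodes if u not in k)
    return independent and absorbing

def find_kernels(nodes, edges):
    edges = set(edges)
    for r in range(len(nodes) + 1):
        for cand in combinations(nodes, r):
            if is_kernel(nodes, edges, cand):
                yield set(cand)

# A 2-cycle a -> b -> a has two kernels, {a} and {b}; an odd cycle
# a -> b -> c -> a has none (odd directed cycles are kernel-free).
print(list(find_kernels(["a", "b"], [("a", "b"), ("b", "a")])))
print(list(find_kernels(["a", "b", "c"],
                        [("a", "b"), ("b", "c"), ("c", "a")])))
```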


1991 ◽  
Vol 15 (2) ◽  
pp. 123-138 ◽  
Author(s):  
Joachim Biskup ◽  
Bernhard Convent

In this paper the relationship between dependency theory and first-order logic is explored in order to show how relational chase procedures (i.e., algorithms that decide inference problems for dependencies) can be interpreted as clever implementations of well-known refutation procedures of first-order logic with resolution and paramodulation. On the one hand, this alternative interpretation provides deeper insight into the theoretical foundations of chase procedures; on the other hand, it makes available an already well-established theory, with a great number of known results and techniques, for further investigations of the inference problem for dependencies. Our presentation is a detailed and careful elaboration of an idea originally outlined by Grant and Jacobs which, up to now, seems to have been disregarded by the database community although it definitely deserves more attention.
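
For readers unfamiliar with the chase, here is a minimal sketch (my own illustration, restricted to functional dependencies) of how a chase step equates symbols in a tableau until a fixpoint is reached:

```python
def chase_fd(tableau, fds):
    """Minimal chase with functional dependencies.

    tableau: list of rows (dicts from attribute to symbol).
    fds: list of (lhs_attrs, rhs_attr) functional dependencies.
    A chase step finds two rows agreeing on lhs but not on rhs and
    equates the two rhs symbols everywhere; repeat to fixpoint.
    """
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            for r1 in tableau:
                for r2 in tableau:
                    if all(r1[a] == r2[a] for a in lhs) and r1[rhs] != r2[rhs]:
                        old, new = r2[rhs], r1[rhs]
                        for row in tableau:          # equate symbols globally
                            for a in row:
                                if row[a] == old:
                                    row[a] = new
                        changed = True
    return tableau

# Example: the FD A -> B forces the symbols b1 and b2 to be identified.
t = [{"A": "a", "B": "b1"}, {"A": "a", "B": "b2"}]
print(chase_fd(t, [(["A"], "B")]))   # both rows end with the same B-symbol
```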


Author(s):  
Jan Gorzny ◽  
Ezequiel Postan ◽  
Bruno Woltzenlogel Paleo

Abstract Proofs are a key feature of modern propositional and first-order theorem provers. Proofs generated by such tools serve as explanations for the unsatisfiability of statements. However, these explanations are complicated by the fact that generated proofs are not necessarily as concise as possible. There are a wide variety of compression techniques for propositional resolution proofs but fewer compression techniques for first-order resolution proofs generated by automated theorem provers. This paper describes an approach to compressing first-order logic proofs based on lifting proof compression ideas used in propositional logic to first-order logic. The first approach lifted from propositional logic delays resolution with unit clauses, which are clauses that have a single literal. The second approach is partial regularization, which removes an inference $\eta$ when it is redundant in the sense that its pivot literal already occurs as the pivot of another inference on every path from $\eta$ to the root of the proof. This paper describes the generalization of the algorithms LowerUnits and RecyclePivotsWithIntersection (Fontaine et al., Compression of propositional resolution proofs via partial regularization, in Automated Deduction – CADE-23: 23rd International Conference on Automated Deduction, Wroclaw, Poland, July 31–August 5, 2011, pp. 237–251, Springer, 2011) from propositional logic to first-order logic. The generalized algorithms compress resolution proofs containing resolution and factoring inferences with unification. An empirical evaluation of these approaches is included.
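
A minimal sketch of the underlying operations (propositional case, my own illustration): resolution on a pivot literal, and the observation exploited by delaying unit resolutions, namely that resolving against a unit clause only ever deletes the complementary literal, so such steps can be postponed to the bottom of the proof:

```python
def resolve(c1, c2, pivot):
    """Resolve clauses (frozensets of literals; negation = '-' prefix)
    on pivot, which must occur positively in c1 and negated in c2."""
    neg = "-" + pivot
    assert pivot in c1 and neg in c2
    return (c1 - {pivot}) | (c2 - {neg})

# A unit clause has a single literal; resolving any clause against it
# just deletes the complementary literal:
unit = frozenset({"p"})
c = frozenset({"-p", "q", "r"})
print(resolve(unit, c, "p"))       # frozenset({'q', 'r'})

# Key idea behind delaying unit resolutions: if the same unit is resolved
# against many clauses in a proof, delete those steps and reintroduce the
# unit once near the root, shrinking the proof. (This sketch shows only
# the local resolution step.)
```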


1999 ◽  
Vol 64 (4) ◽  
pp. 1751-1773 ◽  
Author(s):  
Lauri Hella ◽  
Leonid Libkin ◽  
Juha Nurmonen

Abstract Many known tools for proving expressibility bounds for first-order logic are based on one of several locality properties. In this paper we characterize the relationship between those notions of locality. We note that Gaifman's locality theorem gives rise to two notions: one deals with sentences and one with open formulae. We prove that the former implies Hanf's notion of locality, which in turn implies Gaifman's locality for open formulae. Each of these implies the bounded degree property, which is one of the easiest tools for proving expressibility bounds. These results apply beyond the first-order case. We use them to derive expressibility bounds for first-order logic with unary quantifiers and counting. We also characterize the notions of locality on structures of small degree.
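
The locality notions are all phrased in terms of the Gaifman graph of a structure: its vertices are the elements, and two elements are adjacent when they co-occur in some tuple of a relation. A small sketch (my own) computing r-neighborhoods in that graph:

```python
from collections import deque
from itertools import combinations

def gaifman_edges(relations):
    """Edges of the Gaifman graph: elements co-occurring in some tuple."""
    edges = set()
    for tuples in relations:
        for t in tuples:
            for a, b in combinations(set(t), 2):
                edges.add(frozenset({a, b}))
    return edges

def ball(edges, center, r):
    """The r-neighborhood (ball of radius r) around center, by BFS."""
    seen, frontier = {center}, deque([(center, 0)])
    while frontier:
        v, d = frontier.popleft()
        if d == r:
            continue
        for e in edges:
            if v in e:
                (w,) = e - {v}
                if w not in seen:
                    seen.add(w)
                    frontier.append((w, d + 1))
    return seen

# Example: a binary relation forming the path 1-2-3-4.
E = [{(1, 2), (2, 3), (3, 4)}]
print(ball(gaifman_edges(E), 1, 2))   # {1, 2, 3}
```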


Author(s):  
VLADIMIR LIFSCHITZ

Abstract In the theory of answer set programming, two groups of rules are called strongly equivalent if, informally speaking, they have the same meaning in any context. The relationship between strong equivalence and the propositional logic of here-and-there allows us to establish strong equivalence by deriving rules of each group from rules of the other. In the process, rules are rewritten as propositional formulas. We extend this method of proving strong equivalence to an answer set programming language that includes operations on integers. The formula representing a rule in this language is a first-order formula that may contain comparison symbols among its predicate constants, and symbols for arithmetic operations among its function constants. The paper is under consideration for acceptance in TPLP.
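
The translation of rules into formulas mentioned here follows the standard pattern from the strong-equivalence literature: a rule's body becomes the antecedent of an implication, with negation as failure read as negation. For instance (my example, not the paper's):

```latex
% Rule (with negation as failure):   q :- p, not r.
% Corresponding propositional formula:
(p \land \neg r) \to q
% Two rule sets are strongly equivalent iff their formula translations
% are equivalent in the logic of here-and-there (a result due to
% Lifschitz, Pearce, and Valverde).
```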


Author(s):  
Shawn Hedman

The ability to reason and think in a logical manner forms the basis of learning for most mathematics, computer science, philosophy and logic students. Based on the author's teaching notes at the University of Maryland and aimed at a broad audience, this text covers the fundamental topics in classical logic in an extremely clear, thorough and accurate style that is accessible to all the above. Covering propositional logic, first-order logic, and second-order logic, as well as proof theory, computability theory, and model theory, the text also contains numerous carefully graded exercises and is ideal for a first or refresher course.

