The metatheory of the classical propositional calculus is not axiomatizable

1985 ◽  
Vol 50 (2) ◽  
pp. 451-457 ◽  
Author(s):  
Ian Mason

In this paper we investigate the first order metatheory of classical propositional logic. In the first section we prove that this metatheory is undecidable. Thus as a mathematical object even the simplest of logics is, from a logical standpoint, quite complex; in fact it is of the same complexity as true first order number theory. This result answers negatively a question of J. F. A. K. van Benthem (see [van Benthem and Doets 1983]) as to whether the interpolation theorem in some sense completes the metatheory of the calculus.

Let us begin by motivating the question that we answer. In [van Benthem and Doets 1983] it is claimed that a folklore prejudice has it that interpolation was the final elementary property of first order logic to be discovered. Even though other properties of the propositional calculus have been discovered since Craig's original paper [Craig 1957] (see for example [Reznikoff 1965]), there is a lot of evidence for the fundamental nature of the property. In abstract model theory, for example, one finds that very few logics have the interpolation property. There are two well-known open problems in this area:

1. Is there a logic satisfying the full compactness theorem as well as the interpolation theorem that is not equivalent to first order logic even for finite models?

2. Is there a logic stronger than L(Q), the logic with the quantifier 'there exist uncountably many', that is countably compact and has the interpolation property?
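For reference, the property at issue, Craig interpolation for the propositional calculus, can be stated as follows (a standard formulation, not quoted from the paper):

```latex
% Craig interpolation for classical propositional logic (standard statement):
% every valid implication has an interpolant built only from shared variables.
\[
  \vdash \varphi \rightarrow \psi
  \;\Longrightarrow\;
  \text{there is } \theta \text{ with }
  \operatorname{var}(\theta) \subseteq
  \operatorname{var}(\varphi) \cap \operatorname{var}(\psi),
  \quad \vdash \varphi \rightarrow \theta,
  \quad \vdash \theta \rightarrow \psi .
\]
```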

2010 ◽  
Vol 3 (2) ◽  
pp. 175-227 ◽  
Author(s):  
PETER MILNE

Various natural deduction formulations of classical, minimal, intuitionist, and intermediate propositional and first-order logics are presented and investigated with respect to satisfaction of the separation and subformula properties. The technique employed is, for the most part, semantic, based on general versions of the Lindenbaum and Lindenbaum–Henkin constructions. Careful attention is paid (i) to which properties of theories result in the presence of which rules of inference, and (ii) to restrictions on the sets of formulas to which the rules may be applied, restrictions determined by the formulas occurring as premises and conclusion of the invalid inference for which a counterexample is to be constructed. We obtain an elegant formulation of classical propositional logic with the subformula property and a singularly inelegant formulation of classical first-order logic with the subformula property, the latter, unfortunately, not a product of the strategy otherwise used throughout the article. Along the way, we arrive at an optimal strengthening of the subformula results for classical first-order logic obtained as consequences of normalization theorems by Dag Prawitz and Gunnar Stålmarck.


2020 ◽  
Author(s):  
Michał Walicki

Abstract Graph normal form, introduced earlier for propositional logic, is shown to be a normal form also for first-order logic. It allows one to view the syntax of theories as digraphs and their semantics as kernels of these digraphs. Graphs are particularly well suited for studying circularity, and we provide some general means for verifying that circular or apparently circular extensions are conservative. Traditional syntactic means of ensuring conservativity, such as definitional extensions or positive occurrences guaranteeing existence of fixed points, emerge as special cases.
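To make the graph-theoretic reading concrete: a kernel of a digraph is an independent set K of vertices such that every vertex outside K has an edge into K. The brute-force sketch below (our own illustration, not Walicki's construction; all names are ours) enumerates the kernels of small digraphs:

```python
from itertools import combinations

def is_kernel(vertices, edges, K):
    """K is a kernel iff it is independent (no edge inside K) and
    absorbing (every vertex outside K has an edge into K)."""
    K = set(K)
    independent = all(not (u in K and v in K) for (u, v) in edges)
    absorbing = all(v in K or any((v, w) in edges and w in K for w in vertices)
                    for v in vertices)
    return independent and absorbing

def kernels(vertices, edges):
    """Enumerate all kernels of a small digraph by exhaustive search."""
    edges = set(edges)
    for r in range(len(vertices) + 1):
        for K in combinations(vertices, r):
            if is_kernel(vertices, edges, K):
                yield set(K)

# A 2-cycle has two kernels; an odd cycle has none. Since kernels play the
# role of models in this framework, kernel-free digraphs correspond to
# inconsistent (liar-like) theories.
print(list(kernels(['a', 'b'], [('a', 'b'), ('b', 'a')])))           # [{'a'}, {'b'}]
print(list(kernels(['a', 'b', 'c'],
                   [('a', 'b'), ('b', 'c'), ('c', 'a')])))           # []
```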


10.29007/bmlf ◽  
2018 ◽  
Author(s):  
Matthias Baaz ◽  
Anela Lolic

First-order interpolation properties are notoriously hard to determine, even for logics where propositional interpolation is more or less obvious. One of the most prominent examples is first-order Gödel logic. Lyndon interpolation is a strengthening of the interpolation property in the sense that propositional variables or predicate symbols are only allowed to occur positively (negatively) in the interpolant if they occur positively (negatively) on both sides of the implication. Note that Lyndon interpolation is difficult to establish for first-order logics, as most proof-theoretic methods fail. In this paper we provide general derivability conditions for a first-order logic to admit Lyndon interpolation for the prenex ⊃ prenex fragment and apply the arguments to the prenex ⊃ prenex fragment of first-order Gödel logic.
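For orientation, the property being strengthened can be stated as follows (a standard formulation of Lyndon interpolation, not specific to this paper):

```latex
% Lyndon interpolation (standard statement): a Craig interpolant that also
% respects the polarity of predicate-symbol occurrences.
\[
  \vdash \varphi \supset \psi
  \;\Longrightarrow\;
  \text{there is } \theta \text{ with }
  \vdash \varphi \supset \theta
  \ \text{ and }\
  \vdash \theta \supset \psi ,
\]
% where every predicate symbol occurring positively (negatively) in \theta
% occurs positively (negatively) in both \varphi and \psi.
```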


1992 ◽  
Vol 57 (1) ◽  
pp. 33-52 ◽  
Author(s):  
Andrew M. Pitts

Abstract We prove the following surprising property of Heyting's intuitionistic propositional calculus, IpC. Consider the collection of formulas, ϕ, built up from propositional variables (p, q, r, …) and falsity (⊥) using conjunction (∧), disjunction (∨) and implication (→). Write ⊢ϕ to indicate that such a formula is intuitionistically valid. We show that for each variable p and formula ϕ there exists a formula Apϕ (effectively computable from ϕ), containing only variables not equal to p which occur in ϕ, and such that for all formulas ψ not involving p, ⊢ψ → Apϕ if and only if ⊢ψ → ϕ. Consequently quantification over propositional variables can be modelled in IpC, and there is an interpretation of the second order propositional calculus, IpC2, in IpC which restricts to the identity on first order propositions.

An immediate corollary is the strengthening of the usual interpolation theorem for IpC to the statement that there are least and greatest interpolant formulas for any given pair of formulas. The result also has a number of interesting consequences for the algebraic counterpart of IpC, the theory of Heyting algebras. In particular we show that a model of IpC2 can be constructed whose algebra of truth-values is equal to any given Heyting algebra.
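The classical analogue of Pitts' construction may help fix ideas: in classical propositional logic the quantifier Apϕ is computable by substitution, Apϕ ≡ ϕ[⊤/p] ∧ ϕ[⊥/p] (dually, ∃p.ϕ ≡ ϕ[⊤/p] ∨ ϕ[⊥/p]). Here is a minimal sketch of that classical case (our own illustration; Pitts' intuitionistic construction is far more delicate and is not obtained this way):

```python
# Formulas as nested tuples: ('var', 'p'), ('bot',),
# ('and', f, g), ('or', f, g), ('imp', f, g).
BOT = ('bot',)
TOP = ('imp', BOT, BOT)  # truth, definable as bot -> bot

def subst(f, p, g):
    """Replace every occurrence of the variable p in f by the formula g."""
    if f[0] == 'var':
        return g if f[1] == p else f
    if f[0] == 'bot':
        return f
    return (f[0], subst(f[1], p, g), subst(f[2], p, g))

def A(p, f):
    # Classically, the weakest p-free formula implying f: f[T/p] & f[F/p].
    return ('and', subst(f, p, TOP), subst(f, p, BOT))

def ev(f, env):
    """Classical truth-value of f under the assignment env."""
    if f[0] == 'var':
        return env[f[1]]
    if f[0] == 'bot':
        return False
    a, b = ev(f[1], env), ev(f[2], env)
    return {'and': a and b, 'or': a or b, 'imp': (not a) or b}[f[0]]

# A_p(p or q) is equivalent to q: (T or q) and (F or q) == q.
f = ('or', ('var', 'p'), ('var', 'q'))
for q in (False, True):
    assert ev(A('p', f), {'q': q}) == q
```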


1988 ◽  
Vol 53 (2) ◽  
pp. 554-570 ◽  
Author(s):  
Kosta Došen ◽  
Peter Schroeder-Heister

This paper is meant to be a comment on Beth's definability theorem. In it we shall make the following points.

Implicit definability as mentioned in Beth's theorem for first-order logic is a special case of a more general notion of uniqueness. If α is a nonlogical constant, Tα a set of sentences, α* an additional constant of the same syntactical category as α, and Tα* a copy of Tα with α* instead of α, then for implicit definability of α in Tα one has, in the case of predicate constants, to derive α(x1,…,xn) ↔ α*(x1,…,xn) from Tα ∪ Tα*, and similarly for constants of other syntactical categories. For uniqueness one considers sets of schemata Sα and derivability from instances of Sα ∪ Sα* in the language with both α and α*, thus allowing mixing of α and α* not only in logical axioms and rules, but also in nonlogical assumptions. In the first case, but not necessarily in the second one, explicit definability follows. It is crucial for Beth's theorem that mixing of α and α* is allowed only inside logic, not outside. This topic will be treated in §1.

Let the structural part of logic be understood roughly in the sense of Gentzen-style proof theory, i.e. as comprising only those rules which do not specifically involve logical constants. If we restrict mixing of α and α* to the structural part of logic, which we shall specify precisely, we obtain a different notion of implicit definability for which we can demonstrate a general definability theorem, where α is not confined to the syntactical categories of nonlogical expressions of first-order logic. This definability theorem is a consequence of an equally general interpolation theorem. This topic will be treated in §§2, 3, and 4.
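Spelled out for an n-ary predicate constant, the implicit-definability requirement described above reads (our transcription of the standard Beth-style condition):

```latex
\[
  T_\alpha \cup T_{\alpha^*} \;\vdash\;
  \forall x_1 \cdots \forall x_n\,
  \bigl(\alpha(x_1,\ldots,x_n) \leftrightarrow \alpha^*(x_1,\ldots,x_n)\bigr),
\]
% and Beth's theorem then yields an explicit definition
% \forall x_1 \cdots \forall x_n (\alpha(x_1,\ldots,x_n) \leftrightarrow \varphi)
% with \varphi not containing \alpha.
```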


Author(s):  
Jan Gorzny ◽  
Ezequiel Postan ◽  
Bruno Woltzenlogel Paleo

Abstract Proofs are a key feature of modern propositional and first-order theorem provers, serving as explanations for the unsatisfiability of statements. However, the proofs such tools generate are not necessarily as concise as possible, which complicates these explanations. There is a wide variety of compression techniques for propositional resolution proofs, but fewer for the first-order resolution proofs generated by automated theorem provers. This paper describes an approach to compressing first-order logic proofs by lifting proof compression ideas from propositional logic to first-order logic. The first technique lifted from propositional logic delays resolution with unit clauses, i.e. clauses that have a single literal. The second is partial regularization, which removes an inference $\eta $ when it is redundant in the sense that its pivot literal already occurs as the pivot of another inference on every path from $\eta $ to the root of the proof. This paper describes the generalization of the algorithms LowerUnits and RecyclePivotsWithIntersection (Fontaine et al., Compression of propositional resolution proofs via partial regularization. In Automated Deduction, CADE-23, 23rd International Conference on Automated Deduction, Wroclaw, Poland, July 31–August 5, 2011, pp. 237–251. Springer, 2011) from propositional logic to first-order logic. The generalized algorithms compress resolution proofs containing resolution and factoring inferences with unification. An empirical evaluation of these approaches is included.
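To illustrate the propositional starting point that the paper lifts to first order, here is a minimal sketch of the LowerUnits idea on ground resolution proofs (our own simplification, with our own names; the authors' algorithms additionally handle unification and factoring): unit clauses used as premises more than once are deleted from the proof and resolved back in once at the root.

```python
from dataclasses import dataclass
from typing import Optional

Clause = frozenset  # of integer literals; -n is the negation of n

@dataclass(frozen=True)
class Node:
    clause: Clause
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    pivot: int = 0  # literal in left.clause; its negation is in right.clause

def axiom(*lits):
    return Node(frozenset(lits))

def res(l, r, p):
    """Binary resolution: p must occur in l.clause and -p in r.clause."""
    assert p in l.clause and -p in r.clause
    return Node((l.clause - {p}) | (r.clause - {-p}), l, r, p)

def lowerable_units(root):
    """Unit clauses used as premises of more than one resolution step."""
    uses, seen = {}, set()
    def walk(n):
        if id(n) in seen or n.left is None:
            return
        seen.add(id(n))
        for prem in (n.left, n.right):
            if len(prem.clause) == 1:
                uses[prem] = uses.get(prem, 0) + 1
        walk(n.left); walk(n.right)
    walk(root)
    return [u for u, k in uses.items() if k > 1]

def lower_units(root):
    units = lowerable_units(root)
    memo = {}
    def fix(n):  # rebuild the proof with the lowered units deleted
        if id(n) in memo:
            return memo[id(n)]
        if n.left is None:
            out = n
        else:
            l, r = fix(n.left), fix(n.right)
            if n.left in units:             out = r
            elif n.right in units:          out = l
            elif n.pivot not in l.clause:   out = l  # pivot vanished upstream
            elif -n.pivot not in r.clause:  out = r
            else:                           out = res(l, r, n.pivot)
        memo[id(n)] = out
        return out
    new_root = fix(root)
    for u in units:  # resolve each lowered unit back in at the bottom
        (lit,) = u.clause
        if -lit in new_root.clause:
            new_root = res(u, new_root, lit)
    return new_root

# Unit {1} is used twice below; after lowering it is resolved only once,
# turning four resolution steps into three.
u  = axiom(1)
p1 = res(u, axiom(-1, 2), 1)        # {2}
p2 = res(u, axiom(-1, 3), 1)        # {3}
p3 = res(p1, axiom(-2, -3), 2)      # {-3}
proof = res(p2, p3, 3)              # {} : the empty clause
assert lower_units(proof).clause == frozenset()
```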


2004 ◽  
Vol 10 (1) ◽  
pp. 37-53 ◽  
Author(s):  
Jouko Väänänen

§1. Introduction. After the pioneering work of Mostowski [29] and Lindström [23] it was Jon Barwise's papers [2] and [3] that brought abstract model theory and generalized quantifiers to the attention of logicians in the early seventies. These papers were greeted with enthusiasm at the prospect that model theory could be developed by introducing a multitude of extensions of first order logic, and by proving abstract results about relationships holding between properties of these logics. Examples of such properties are:

κ-compactness. Any set of sentences of cardinality ≤ κ, every finite subset of which has a model, has itself a model.

Löwenheim-Skolem Theorem down to κ. If a sentence of the logic has a model, it has a model of cardinality at most κ.

Interpolation Property. If ϕ and ψ are sentences such that ⊨ ϕ → ψ, then there is θ such that ⊨ ϕ → θ, ⊨ θ → ψ and the vocabulary of θ is the intersection of the vocabularies of ϕ and ψ.

Lindström's famous theorem characterized first order logic as the maximal ℵ0-compact logic with the Downward Löwenheim-Skolem Theorem down to ℵ0. With his new concept of absolute logics Barwise was able to get similar characterizations of the infinitary languages Lκω. But hopes were quickly frustrated by difficulties arising left and right, and other areas of model theory came into focus, mainly stability theory. No new characterizations of logics comparable to the early characterization of first order logic given by Lindström and of infinitary logic by Barwise emerged. What was first called soft model theory turned out to be as hard as hard model theory.


2000 ◽  
Vol 6 (4) ◽  
pp. 447-462 ◽  
Author(s):  
Martin Otto

Abstract Lyndon's Interpolation Theorem asserts that for any valid implication between two purely relational sentences of first-order logic, there is an interpolant in which each relation symbol appears positively (negatively) only if it appears positively (negatively) in both the antecedent and the succedent of the given implication. We prove a similar, more general interpolation result with the additional requirement that, for some fixed tuple of unary predicates U, all formulae under consideration have all quantifiers explicitly relativised to one of the U. Under this stipulation, existential (universal) quantification over U contributes a positive (negative) occurrence of U.

It is shown how this single new interpolation theorem, obtained by a canonical and rather elementary model-theoretic proof, unifies a number of related results: the classical characterisation theorems concerning extensions (substructures) as well as those concerning monotonicity, and a many-sorted interpolation theorem focusing on positive vs. negative occurrences of predicates and on existentially vs. universally quantified sorts.
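The polarity bookkeeping for the relativising predicates can be read off from the standard abbreviations (our notation):

```latex
\[
  (\exists x \,{\in}\, U)\,\varphi \;:\equiv\; \exists x\,\bigl(U(x) \wedge \varphi\bigr),
  \qquad
  (\forall x \,{\in}\, U)\,\varphi \;:\equiv\; \forall x\,\bigl(U(x) \rightarrow \varphi\bigr),
\]
% so a relativised existential quantifier contributes a positive occurrence
% of U, while a relativised universal quantifier, whose U(x) sits in the
% antecedent of an implication, contributes a negative one.
```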


Author(s):  
Timothy Smiley

The predicate calculus is the dominant system of modern logic, having displaced the traditional Aristotelian syllogistic logic that had been the previous paradigm. Like Aristotle's, it is a logic of quantifiers – words like 'every', 'some' and 'no' that are used to express that a predicate applies universally or with some other distinctive kind of generality, for example 'everyone is mortal', 'someone is mortal', 'no one is mortal'. The weakness of syllogistic logic was its inability to represent the structure of complex predicates. Thus it could not cope with argument patterns like 'everything Fs and Gs, so everything Fs'. Nor could it cope with relations, because a logic of relations must be able to analyse cases where a quantifier is applied to a predicate that already contains one, as in 'someone loves everyone'. Remedying the weakness required two major innovations.

One was a logic of connectives – words like 'and', 'or' and 'if' that form complex sentences out of simpler ones. It is often studied as a distinct system: the propositional calculus. A proposition here is a true-or-false sentence and the guiding principle of propositional calculus is truth-functionality, meaning that the truth-value (truth or falsity) of a compound proposition is uniquely determined by the truth-values of its components. Its principal connectives are negation, conjunction, disjunction and a 'material' (that is, truth-functional) conditional. Truth-functionality makes it possible to compute the truth-values of propositions of arbitrary complexity in terms of their basic propositional constituents, and so develop the logic of tautology and tautological consequence (logical truth and consequence in virtue of the connectives).

The other innovation was the quantifier-variable notation. Variables are letters used to indicate things in an unspecific way; thus 'x is mortal' is read as predicating of an unspecified thing x what 'Socrates is mortal' predicates of Socrates. The connectives can now be used to form complex predicates as well as propositions, for example 'x is human and x is mortal'; while different variables can be used in different places to express relational predicates, for example 'x loves y'. The quantifier goes in front of the predicate it governs, with the relevant variable repeated beside it to indicate which positions are being generalized. These radical departures from the idiom of quantification in natural languages are needed to solve the further problem of ambiguity of scope. Compare, for example, the ambiguity of 'someone loves everyone' with the unambiguous alternative renderings 'there is an x such that for every y, x loves y' and 'for every y, there is an x such that x loves y'.

The result is a pattern of formal language based on a non-logical vocabulary of names of things and primitive predicates expressing properties and relations of things. The logical constants are the truth-functional connectives and the universal and existential quantifiers, plus a stock of variables construed as ranging over things. This is 'the' predicate calculus. A common option is to add the identity sign as a further logical constant, producing the predicate calculus with identity.

The first modern logic of quantification, Frege's of 1879, was designed to express generalizations not only about individual things but also about properties of individuals. It would nowadays be classified as a second-order logic, to distinguish it from the first-order logic described above. Second-order logic is much richer in expressive power than first-order logic, but at a price: first-order logic can be axiomatized, second-order logic cannot.
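Truth-functionality makes the tautology test mechanical: enumerate every assignment of truth-values to the basic constituents and evaluate the compound. A minimal sketch in Python (our own illustration of the idea, not part of the entry):

```python
from itertools import product

def ev(f, env):
    """Evaluate a formula: a variable name, ('not', f), or
    ('and'|'or'|'imp', f, g), under a truth assignment env."""
    if isinstance(f, str):
        return env[f]
    if f[0] == 'not':
        return not ev(f[1], env)
    a, b = ev(f[1], env), ev(f[2], env)
    return {'and': a and b, 'or': a or b, 'imp': (not a) or b}[f[0]]

def variables(f):
    """Collect the propositional variables occurring in f."""
    if isinstance(f, str):
        return {f}
    return set().union(*(variables(g) for g in f[1:]))

def is_tautology(f):
    """True iff f evaluates to True under every assignment."""
    vs = sorted(variables(f))
    return all(ev(f, dict(zip(vs, row)))
               for row in product((False, True), repeat=len(vs)))

# 'if p then (if q then p)' is a tautology; 'p or q' is not.
assert is_tautology(('imp', 'p', ('imp', 'q', 'p')))
assert not is_tautology(('or', 'p', 'q'))
```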

