Pretopologies and completeness proofs

1995 ◽  
Vol 60 (3) ◽  
pp. 861-878 ◽  
Author(s):  
Giovanni Sambin

Pretopologies were introduced in [S], and there shown to give a complete semantics for a propositional sequent calculus BL, here called basic linear logic, as well as for its extensions by structural rules, ex falso quodlibet or double negation. Immediately after Logic Colloquium '88, a conversation with Per Martin-Löf helped me to see how the pretopology semantics should be extended to predicate logic; the result now is a simple and fully constructive completeness proof for first order BL and virtually all its extensions, including the usual, or structured, intuitionistic and classical logic. Such a proof clearly illustrates the fact that stronger set-theoretic principles and classical metalogic are necessary only when completeness is sought with respect to a special class of models, such as the usual two-valued models.

To make the paper self-contained, I briefly review in §1 the definition of pretopologies; §2 deals with syntax and §3 with semantics. The completeness proof in §4, though similar in structure, is considerably simpler than that in [S], and this is why it is given in detail. In §5 it is shown how little is needed to obtain completeness for extensions of BL in the same language. Finally, in §6 connections with proofs with respect to more traditional semantics are briefly investigated, and some open problems are put forward.
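
For orientation, a pretopology in the sense of [S] can be sketched as follows (a minimal reconstruction from Sambin's formal-topology papers, not necessarily this paper's exact clauses): a commutative monoid (S, ·, 1) equipped with a cover relation ⊲ between elements and subsets of S such that

    a ∈ U                            ⟹   a ⊲ U         (reflexivity)
    a ⊲ U and b ⊲ V for all b ∈ U    ⟹   a ⊲ V         (transitivity)
    a ⊲ U                            ⟹   a·b ⊲ U·b     (stability)

where U·b = {u·b : u ∈ U}. Formulas are then interpreted as suitably saturated subsets of S, and a completeness proof in this style builds a canonical pretopology out of the syntax itself.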

1997 ◽  
Vol 62 (3) ◽  
pp. 755-807 ◽  
Author(s):  
Vincent Danos ◽  
Jean-Baptiste Joinet ◽  
Harold Schellinx

Abstract The main concern of this paper is the design of a noetherian and confluent normalization for LK2 (that is, classical second order predicate logic presented as a sequent calculus). The method we present is powerful: it allows us to recover as fragments formalisms as seemingly different as Girard's LC and Parigot's λμ, FD ([10, 12, 32, 36]), delineates other viable systems as well, and gives means to extend the Krivine/Leivant paradigm of 'programming-with-proofs' ([26, 27]) to classical logic. It is painless: we reduce strong normalization and confluence to the same properties for linear logic (for non-additive proof nets, to be precise) using appropriate embeddings (so-called decorations). And it is unifying: it organizes known solutions in a simple pattern that makes apparent the how and why of their making. A comparison of our method to that of embedding LK into LJ (intuitionistic sequent calculus) brings to the fore the latter's defects for these 'deconstructive purposes'.
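
The flavour of the decoration technique can be conveyed by Girard's original embedding of intuitionistic implication into linear logic (the textbook translation, recalled here only for orientation; the decorations of the paper are more refined):

    (A → B)°  =  !A° ⊸ B°

Contraction and weakening are thereby confined to !-marked formulas, so normalization and confluence for the source calculus can be inherited from the corresponding properties of linear proof nets.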


2018 ◽  
Vol 29 (8) ◽  
pp. 1177-1216
Author(s):  
CHUCK LIANG

This article presents a unified logic that combines classical logic, intuitionistic logic and affine linear logic (restricting contraction but not weakening). We show that this unification can be achieved semantically, syntactically and in the computational interpretation of proofs. It extends our previous work in combining classical and intuitionistic logics. Compared to linear logic, classical fragments of proofs are better isolated from non-classical fragments. We define a phase semantics for this logic that naturally extends the Kripke semantics of intuitionistic logic. We present a sequent calculus with novel structural rules, which entail a more elaborate procedure for cut elimination. Computationally, this system allows affine-linear interpretations of proofs to be combined with classical interpretations, such as the λμ calculus. We show how cut elimination must respect the boundaries between classical and non-classical modes of proof that correspond to delimited control effects.
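
The affine-linear component can be pictured through its structural rules: in a standard sequent presentation (recalled here for illustration, not necessarily the paper's exact rules), weakening is retained while contraction is rejected:

    Γ ⊢ C                       Γ, A, A ⊢ C
    ---------- (weakening)      ----------- (contraction: not allowed)
    Γ, A ⊢ C                    Γ, A ⊢ C

Marking where full classical structural behaviour may re-enter is, roughly, what the novel structural rules of the unified system must control.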


2002 ◽  
Vol 67 (1) ◽  
pp. 162-196 ◽  
Author(s):  
Jean-Baptiste Joinet ◽  
Harold Schellinx ◽  
Lorenzo Tortora De Falco

Abstract The present report is a somewhat lengthy addendum to [4], where the elimination of cuts from derivations in sequent calculus for classical logic was studied 'from the point of view of linear logic'. To that purpose a formulation of classical logic was used that, as in linear logic, distinguishes between multiplicative and additive versions of the binary connectives. The main novelty here is the observation that this type-distinction is not essential: we can allow classical sequent derivations to use any combination of additive and multiplicative introduction rules for each of the connectives, and still have strong normalization and confluence of tq-reductions. We give a detailed description of the simulation of tq-reductions by means of reductions of the interpretation of any given classical proof as a proof net of PN2 (the system of second order proof nets for linear logic), in which moreover all connectives can be taken to be of one type, e.g., multiplicative. We finally observe that dynamically the different logical cuts, as determined by the four possible combinations of introduction rules, are independent: it is not possible to simulate them internally, i.e., by only one specific combination, and structural rules.
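
For orientation, the two versions of right-introduction for conjunction are the standard ones:

    Γ ⊢ A, Δ    Γ′ ⊢ B, Δ′                       Γ ⊢ A, Δ    Γ ⊢ B, Δ
    ----------------------- (∧R multiplicative)  -------------------- (∧R additive)
    Γ, Γ′ ⊢ A ∧ B, Δ, Δ′                          Γ ⊢ A ∧ B, Δ

with the corresponding pair for the left rules; the report shows that derivations may mix all four combinations freely without losing strong normalization or confluence.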


2020 ◽  
Vol 30 (1) ◽  
pp. 157-174 ◽  
Author(s):  
Harley Eades III ◽  
Valeria de Paiva

Abstract Full intuitionistic linear logic (FILL) was first introduced by Hyland and de Paiva, and went against the then-current belief that it was not possible to incorporate all of the linear connectives, e.g. tensor, par and implication, into an intuitionistic linear logic. Bierman showed that their formalization of FILL did not enjoy cut elimination as such, but Bellin proposed a small change to the definition of FILL, regaining cut elimination, using proof nets. In this note we adopt Bellin's proposed change and give a direct proof of cut elimination for the sequent calculus. Then we show that a categorical model of FILL in the basic dialectica category is also a linear/non-linear model in the sense of Benton and a full tensor model of Melliès and Tabareau's tensorial logic. We give a double-negation translation of linear logic into FILL that explicitly uses par in addition to tensor. Lastly, we introduce a new library, to be used in the proof assistant Agda, for proving properties of dialectica categories.
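
To give a rough sense of how a translation can use par explicitly (an illustrative clause only, not the paper's actual translation), a disjunction can be sent to a genuine par instead of being encoded through tensor and negation:

    (A ∨ B)°  =  A° ⅋ B°        instead of        (A ∨ B)°  =  (A°⊥ ⊗ B°⊥)⊥

which is exactly the kind of clause that is unavailable in intuitionistic linear logics lacking par.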


Author(s):  
Scott C. Chase

Abstract The combination of the paradigms of shape algebras and predicate logic representations, used in a new method for describing designs, is presented. First-order predicate logic provides a natural, intuitive way of representing shapes and spatial relations in the development of complete computer systems for reasoning about designs. Shape algebraic formalisms have advantages over more traditional representations of geometric objects. Here we illustrate the definition of a large set of high-level design relations from a small set of simple structures and spatial relations, with examples from the domains of geographic information systems and architecture.
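
The style of definition at issue can be illustrated with a hypothetical clause (the predicate names are invented here, not taken from the paper): a high-level design relation is built from primitive spatial relations, as in

    adjacent(x, y)  ≡  shares_boundary(x, y) ∧ ¬overlaps(x, y)

so that reasoning about adjacency reduces to reasoning about the two primitives.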


Author(s):  
Lew Gordeev ◽  
Edward Hermann Haeusler

We upgrade [3] to a complete proof of the conjecture NP = PSPACE, known as one of the fundamental open problems in the mathematical theory of computational complexity; this proof is based on [2]. Since minimal propositional logic is known to be PSPACE-complete, and PSPACE is known to include NP, it suffices to show that every valid purely implicational formula ρ has a proof whose weight (= total number of symbols) and time complexity of the provability involved are both polynomial in the weight of ρ. As in [3], we use a proof-theoretic approach. Recall that in [3] we considered any valid ρ in question that had (by the definition of validity) a "short" tree-like proof π in the Hudelmaier-style cutfree sequent calculus for minimal logic. The "shortness" means that the height of π and the total weight of different formulas occurring in it are both polynomial in the weight of ρ. However, the size (= total number of nodes), and hence also the weight, of π could be exponential in that of ρ. To overcome this trouble we embedded π into Prawitz's proof system of natural deductions, whose nodes contain single formulas instead of sequents. As in π, the height and the total weight of different formulas of the resulting tree-like natural deduction ∂1 were polynomial, although the size of ∂1 could still be exponential, in the weight of ρ. In our next, crucial move, ∂1 was deterministically compressed into a "small", although multipremise, dag-like deduction ∂ whose horizontal levels contained only mutually different formulas, which made the whole weight polynomial in that of ρ. However, ∂ required a more complicated verification of the underlying provability of ρ. In this paper we present a nondeterministic compression of ∂ into a desired standard dag-like deduction ∂0 that deterministically proves ρ in time and space polynomial in the weight of ρ. Together with [3] this completes the proof of NP = PSPACE. Natural deductions are essential for our proof. Tree-to-dag horizontal compression of π merging equal sequents, instead of formulas, is possible but not sufficient, since the total number of different sequents in π might be exponential in the weight of ρ, even assuming that all formulas occurring in sequents are subformulas of ρ. On the other hand, we need Hudelmaier's cutfree sequent calculus in order to control both the height and the total weight of different formulas of the initial tree-like proof π, since standard Prawitz normalization, although providing natural deductions with the subformula property, does not preserve polynomial height. It is not yet clear whether we can omit references to π even in the proof of the weaker result NP = coNP.
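
The horizontal compression step can be pictured with a small illustrative sketch (all names here, such as Node and compress, are invented for exposition; this is not the authors' algorithm): at each horizontal level of a tree-like deduction, nodes carrying equal formulas are merged and premise edges are redirected, so that after compression a level holds at most one node per distinct formula.

    # Illustrative sketch only: merge equal formulas level by level,
    # turning a tree-like deduction into a dag-like one.
    class Node:
        def __init__(self, formula, premises=()):
            self.formula = formula          # single formula at this node
            self.premises = list(premises)  # premise subdeductions

    def levels(root):
        """Group nodes by height, i.e. distance from the root."""
        result, frontier = [], [root]
        while frontier:
            result.append(frontier)
            frontier = [p for n in frontier for p in n.premises]
        return result

    def compress(root):
        """Redirect premise edges to one canonical node per formula and
        level; duplicate nodes become unreachable and disappear."""
        lvs = levels(root)
        for i in range(len(lvs) - 1):
            canonical = {}
            for node in lvs[i + 1]:
                canonical.setdefault(node.formula, node)
            for node in lvs[i]:
                node.premises = [canonical[p.formula] for p in node.premises]
        return root

With formulas represented as strings this runs as written; the point of the sketch is only the invariant that bounds the total weight, namely that each level of the compressed deduction contains at most one node per distinct formula.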


Author(s):  
G.M. Bierman

Linear logic was introduced by Jean-Yves Girard in 1987. Like classical logic it satisfies the law of the excluded middle and the principle of double negation, but, unlike classical logic, it has non-degenerate models. Models of logics are often given only at the level of provability, in that they provide denotations of formulas. However, we are also interested in models which provide denotations of deductions, or proofs. Given such a model, two proofs are said to be equivalent if their denotations are equal. A model is said to be 'degenerate' if there are no formulas for which there exist at least two non-equivalent proofs. It is easy to see that models of classical logic are essentially degenerate because any formula is either true or false and so all proofs of a formula are considered equivalent. The intuitionistic approach to this problem involves altering the meaning of the logical connectives, but linear logic attacks the very connectives themselves, replacing them with more refined ones. Despite this there are simple translations between classical and linear logic. One can see the need for such a refinement in another way. Both classical and intuitionistic logics could be said to deal with static truths; both validate the rule of modus ponens: if A→B and A, then B; but both also validate the rule if A→B and A, then A∧B. In mathematics this is correct since a proposition, once verified, remains true – it persists. Many situations do not reflect such persistence but rather have an additional notion of causality. An implication A→B should reflect that a state B is accessible from a state A and, moreover, that state A is no longer available once the transition has been made. An example of this phenomenon is in chemistry, where an implication A→B represents a reaction of components A to yield B. Thus if two hydrogen atoms and one oxygen atom bond to form a water molecule, they are consumed in the process and are no longer part of the current state. Linear logic provides logical connectives to describe such refined interpretations.
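
In linear logic notation the chemistry example reads (standard notation, added here for illustration):

    H ⊗ H ⊗ O ⊸ H₂O

the two hydrogen atoms and the oxygen atom are consumed exactly once, whereas, classically, from the atoms and the implication one could conclude H₂O ∧ H ∧ H ∧ O, letting the reactants persist alongside the product.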


10.29007/p1fd ◽  
2018 ◽  
Author(s):  
Ozan Kahramanogullari

The deep inference presentation of multiplicative exponential linear logic (MELL) benefits from a rich combinatorial analysis, admitting many more proofs than its sequent calculus presentation. In the deep inference setting, all the sequent calculus proofs are preserved; many other proofs become available, and some of these proofs are much shorter. However, proof search in deep inference is subject to greater nondeterminism, and this nondeterminism constitutes a bottleneck for applications. We therefore address the problem of reducing nondeterminism in MELL by refining and extending a technique that we previously applied to multiplicative linear logic and classical logic. We show that, besides the nondeterminism in commutative contexts, the nondeterminism in exponential contexts can be reduced in a proof-theoretically clean manner. The method preserves the exponential speed-up in proof construction due to deep inference, exemplified by Statman tautologies. We validate the improvement in accessing the shorter proofs by experiments with our implementations.
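
Deep inference applies rules inside arbitrary contexts S{ }; the characteristic example is the switch rule of the calculus of structures (recalled here in a standard formulation, which may differ cosmetically from the system used in the paper):

    S{(A ⅋ B) ⊗ C}
    --------------  (switch)
    S{A ⅋ (B ⊗ C)}

Because S{ } may be any context, many interleavings of rule instances become available; this is the source of both the shorter proofs and the extra nondeterminism.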


2020 ◽  
Vol 23 (2) ◽  
pp. 447-473
Author(s):  
Ralf Busse

Abstract This paper develops a valid reconstruction in first-order predicate logic of Leibniz's argument for his complete concept definition of substance in §8 of the Discours de Métaphysique. Following G. Rodriguez-Pereyra, it construes the argument as resting on two substantial premises, the "merely verbal" Aristotelian definition and Leibniz's concept containment theory of truth, and it understands the resulting "real" definition as saying not that an entity is a substance iff its complete concept contains every predicate of that entity, but that it is a substance iff its complete concept contains every predicate of any subject to which that concept is truly attributable. An account is suggested of why Leibniz criticises the Aristotelian definition as merely nominal and how he takes his own definition to overcome this shortcoming: while on the Aristotelian basis the predication relation could generate endless chains, so that substances as endpoints of predication would be impossible, Leibniz's definition reveals lowest species as such endpoints, which he therefore identifies with individual substances. Since duplicate lowest species make no sense, the Identity of Indiscernibles for substances follows. The reading suggests a Platonist interpretation according to which substances do not so much have as are individual essences, natures or forms.
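
In rough symbolic form (an illustrative rendering with invented predicate names, not the paper's own formalization), the real definition described above reads:

    Sub(x)  ≡  ∀y ∀P ( Attr(C(x), y) ∧ P(y) → Cont(C(x), P) )

where C(x) is the complete concept of x, Attr(c, y) says that c is truly attributable to the subject y, and Cont(c, P) says that c contains the predicate P.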

