Multiple conclusion linear logic: cut elimination and more

2020 ◽ Vol 30 (1) ◽ pp. 157-174 ◽ Author(s): Harley Eades III, Valeria de Paiva

Abstract Full intuitionistic linear logic (FILL) was first introduced by Hyland and de Paiva, and it challenged the then-current belief that it was not possible to incorporate all of the linear connectives, e.g. tensor, par and implication, into an intuitionistic linear logic. Bierman showed that their formalization of FILL did not enjoy cut elimination as such, but Bellin proposed a small change to the definition of FILL that regains cut elimination, with a proof using proof nets. In this note we adopt Bellin’s proposed change and give a direct proof of cut elimination for the sequent calculus. Then we show that a categorical model of FILL in the basic dialectica category is also a linear/non-linear model of Benton and a full tensor model of Melliès and Tabareau’s tensorial logic. We give a double-negation translation of linear logic into FILL that explicitly uses par in addition to tensor. Lastly, we introduce a new library, to be used in the proof assistant Agda, for proving properties of dialectica categories.
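For orientation, the following is the generic two-sided, multiple-conclusion cut rule; it is shown only as a schematic reminder and is not FILL's exact formulation, whose rules additionally track dependencies between antecedent and succedent formulas (the point at which Bierman's observation and Bellin's repair apply).

\[
\frac{\Gamma \vdash \Delta, A \qquad A, \Gamma' \vdash \Delta'}
     {\Gamma, \Gamma' \vdash \Delta, \Delta'}\ (\mathrm{cut})
\]

Cut elimination, as proved directly in the paper, states that every sequent derivable with this rule is also derivable without it.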

1995 ◽ Vol 60 (3) ◽ pp. 861-878 ◽ Author(s): Giovanni Sambin

Pretopologies were introduced in [S], and there shown to give a complete semantics for a propositional sequent calculus BL, here called basic linear logic, as well as for its extensions by structural rules, ex falso quodlibet or double negation. Immediately after Logic Colloquium '88, a conversation with Per Martin-Löf helped me to see how the pretopology semantics should be extended to predicate logic; the result now is a simple and fully constructive completeness proof for first-order BL and virtually all its extensions, including the usual, or structured, intuitionistic and classical logic. Such a proof clearly illustrates the fact that stronger set-theoretic principles and classical metalogic are necessary only when completeness is sought with respect to a special class of models, such as the usual two-valued models. To make the paper self-contained, I briefly review in §1 the definition of pretopologies; §2 deals with syntax and §3 with semantics. The completeness proof in §4, though similar in structure, is considerably simpler than that in [S], and this is why it is given in detail. In §5 it is shown how little is needed to obtain completeness for extensions of BL in the same language. Finally, in §6 connections with completeness proofs with respect to more traditional semantics are briefly investigated, and some open problems are put forward.


2007 ◽ Vol 17 (3) ◽ pp. 485-526 ◽ Author(s): Herman Geuvers, Iris Loeb

In this paper, we introduce the formalism of deduction graphs as a generalisation of both Gentzen–Prawitz style natural deduction and Fitch style flag deduction. The advantage of this formalism is that, as with flag deductions (but not natural deduction), subproofs can be shared, but the linearisation used in flag deductions is avoided. Our deduction graphs have both nodes and boxes, which are collections of nodes that also form a node themselves. This is reminiscent of the bigraphs of Milner, where the link graph describes the nodes and edges and the place graph describes the nesting of nodes. We give a precise definition of deduction graphs, together with some illustrative examples. Furthermore, we analyse their computational behaviour by studying the process of cut-elimination and by defining translations from deduction graphs to simply typed lambda terms. From a slight variation of this translation, we conclude that the process of cut-elimination is strongly normalising. The translation to simple type theory removes quite a lot of structure, so we also propose a translation to a context calculus with lets that faithfully captures the structure of deduction graphs. The proof nets of linear logic also offer a graph-like presentation of natural deduction, and we point out some similarities between the two formalisms.
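To give a concrete sense of how cut-elimination steps become reductions under a translation into simply typed lambda terms, recall the standard Curry-Howard picture (this is the textbook detour conversion, not the paper's definition for deduction graphs): an implication introduction immediately followed by an implication elimination is removed by a conversion that, on the term side, is exactly a beta-step,

\[
(\lambda x^{A}.\, t)\, u \;\longrightarrow_{\beta}\; t[u/x],
\]

so strong normalization of the typed lambda calculus can be transferred back, via a suitable translation, to strong normalization of cut elimination; the paper obtains its result from a slight variation of its translation.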


2014 ◽ Vol 26 (3) ◽ pp. 367-423 ◽ Author(s): Luís Caires, Frank Pfenning, Bernardo Toninho

Throughout the years, several typing disciplines for the π-calculus have been proposed. Arguably, the most widespread of these typing disciplines consists of session types. Session types describe the input/output behaviour of processes and traditionally provide strong guarantees about this behaviour (i.e. deadlock-freedom and fidelity). While these systems exploit a fundamental notion of linearity, the precise connection between linear logic and session types has not been well understood. This paper proposes a type system for the π-calculus that corresponds to a standard sequent calculus presentation of intuitionistic linear logic, interpreting linear propositions as session types and thus providing a purely logical account of all key features and properties of session types. We show the deep correspondence between linear logic and session types by exhibiting a tight operational correspondence between cut-elimination steps and process reductions. We also discuss an alternative presentation of linear session types based on classical linear logic, and compare our development with other more traditional session type systems.
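To make the interpretation concrete, the following recalls the reading of linear propositions as session types that is standard in this line of work (recalled from memory rather than quoted from the paper, so details of presentation may differ):

\[
\begin{array}{ll}
A \otimes B & \text{output a channel of type } A \text{, then continue as } B\\
A \multimap B & \text{input a channel of type } A \text{, then continue as } B\\
\mathbf{1} & \text{terminate the session}\\
{!}A & \text{replicated (shared) server offering sessions of type } A\\
A \mathbin{\&} B & \text{offer an external choice between } A \text{ and } B\\
A \oplus B & \text{make an internal choice between } A \text{ and } B
\end{array}
\]

Under this reading, cut corresponds to parallel composition of two processes sharing a private channel, which is why cut-elimination steps match process reductions.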


2008 ◽ Vol 14 (2) ◽ pp. 240-257 ◽ Author(s): Jan von Plato

Abstract Gentzen writes in the published version of his doctoral thesis Untersuchungen über das logische Schliessen (Investigations into logical reasoning) that he was able to prove the normalization theorem only for intuitionistic natural deduction, but not for classical. To cover the latter, he developed classical sequent calculus and proved a corresponding theorem, the famous cut elimination result. Its proof was organized so that a cut elimination result for an intuitionistic sequent calculus came out as a special case, namely the one in which the sequents have at most one formula in the right, succedent part. Thus, there was no need for a direct proof of normalization for intuitionistic natural deduction. The only traces of such a proof in the published thesis are some convertibilities, such as when an implication introduction is followed by an implication elimination [1934–35, II.5.13]. It remained to Dag Prawitz in 1965 to work out a proof of normalization. Another, lesser-known proof was also given in 1965 by Andres Raggio. We found in February 2005 an early handwritten version of Gentzen's thesis, with exactly the above title, but with rather different contents: most remarkably, it contains a detailed proof of normalization for what became the standard system of natural deduction. The manuscript is located in the Paul Bernays collection at the ETH Zurich with the signum Hs. 974: 271. Bernays must have gotten it well before the time of his being expelled from Göttingen on the basis of the racial laws in April 1933.


2020 ◽ Vol 30 (1) ◽ pp. 239-256 ◽ Author(s): Max Kanovich, Stepan Kuznetsov, Andre Scedrov

Abstract The Lambek calculus can be considered a version of non-commutative intuitionistic linear logic. One of the interesting features of the Lambek calculus is the so-called ‘Lambek’s restriction’, i.e. the requirement that the antecedent of any provable sequent be non-empty. In this paper, we discuss ways of extending the Lambek calculus with the linear logic exponential modality while keeping Lambek’s restriction. Interestingly enough, we show that for any system equipped with a reasonable exponential modality the following holds: if the system enjoys cut elimination and substitution to the full extent, then the system necessarily violates Lambek’s restriction. Nevertheless, we show that two of the three conditions can be implemented. Namely, we design a system with Lambek’s restriction and cut elimination and another system with Lambek’s restriction and substitution. We prove that both calculi are undecidable, even if we take only one of the two divisions provided by the Lambek calculus. The system with cut elimination and substitution and without Lambek’s restriction is folklore and known to be undecidable.
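As a minimal illustration of Lambek's restriction (in one common notation, not necessarily that of the paper): the right rule for left division only applies when the remaining antecedent is non-empty,

\[
\frac{A, \Gamma \vdash B}{\Gamma \vdash A \backslash B}\ (\backslash\,\mathrm{R}),
\qquad \Gamma \neq \varnothing,
\]

so while $p \vdash p$ is an axiom, the empty-antecedent sequent $\vdash p \backslash p$, which would be derivable without the side condition, is not derivable. The tension described in the paper arises because a full-powered exponential modality, together with unrestricted cut and substitution, forces such empty-antecedent sequents to become derivable.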


2012 ◽ Vol 23 (1) ◽ pp. 38-144 ◽ Author(s): François Pottier

Abstract This paper presents a formal definition and machine-checked soundness proof for a very expressive type-and-capability system, that is, a low-level type system that keeps precise track of ownership and side effects. The programming language has first-class functions and references. The type system's features include the following: universal, existential, and recursive types; subtyping; a distinction between affine and unrestricted data; support for strong updates; support for naming values and heap fragments via singleton and group regions; a distinction between ordinary values (which exist at runtime) and capabilities (which do not); support for dynamic reorganizations of the ownership hierarchy by disassembling and reassembling capabilities; and support for temporarily or permanently hiding a capability via frame and anti-frame rules. One contribution of the paper is the definition of the type-and-capability system itself. We present the system as modularly as possible. In particular, at the core of the system, the treatment of affinity, in the style of dual intuitionistic linear logic, is formulated in terms of an arbitrary monotonic separation algebra, a novel axiomatization of resources, ownership, and the manner in which they evolve with time. Only the peripheral layers of the system are aware that we are dealing with a specific monotonic separation algebra, whose resources are references and regions. This semi-abstract organization should facilitate further extensions of the system with new forms of resources. The other main contribution is a machine-checked proof of type soundness. The proof is carried out in Wright and Felleisen's syntactic style. This offers evidence that this relatively simple-minded proof technique can scale up to systems of this complexity, and constitutes a viable alternative to more sophisticated semantic proof techniques. We do not claim that the syntactic technique is superior: we simply illustrate how it is used and highlight its strengths and shortcomings.
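Since the abstract centres on building the core of the system over an abstract algebra of resources, the following OCaml signature sketches, purely hypothetically, what such an interface could look like. It is not Pottier's monotonic separation algebra: the names (RESOURCE_ALGEBRA, compose, evolves) and the choice of operations are illustrative guesses, intended only to convey the idea of a partial composition of resources together with a relation describing how they may evolve over time.

(* Purely illustrative, hypothetical sketch; NOT Pottier's axiomatization. *)
module type RESOURCE_ALGEBRA = sig
  type t                            (* abstract resources *)
  val empty   : t                   (* the unit resource *)
  val compose : t -> t -> t option  (* partial, commutative composition;
                                       None when the resources conflict *)
  val evolves : t -> t -> bool      (* a preorder describing how a resource
                                       may legally change over time *)
end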


2007 ◽ Vol 17 (5) ◽ pp. 957-1027 ◽ Author(s): Carsten Führmann, David Pym

It is well known that weakening and contraction cause naive categorical models of the classical sequent calculus to collapse to Boolean lattices. In previous work, summarised briefly herein, we have provided a class of models called classical categories that is sound and complete and avoids this collapse by interpreting cut reduction by a poset enrichment. Examples of classical categories include Boolean lattices and the category of sets and relations, where both conjunction and disjunction are modelled by the set-theoretic product. In this article, which is self-contained, we present an improved axiomatisation of classical categories, together with a deep exploration of their structural theory. Observing that the collapse already happens in the absence of negation, we start with negation-free models called Dummett categories. Examples of these include, besides the classical categories mentioned above, the category of sets and relations, where both conjunction and disjunction are modelled by the disjoint union. We prove that Dummett categories are MIX, and that the partial order can be derived from hom-semilattices, which have a straightforward proof-theoretic definition. Moreover, we show that the Geometry-of-Interaction construction can be extended from multiplicative linear logic to classical logic by applying it to obtain a classical category from a Dummett category. Along the way, we gain detailed insights into the changes that proofs undergo during cut elimination in the presence of weakening and contraction.
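For reference, the MIX rule mentioned above is, in its usual one-sided sequent formulation,

\[
\frac{\vdash \Gamma \qquad \vdash \Delta}{\vdash \Gamma, \Delta}\ (\mathrm{mix}),
\]

which semantically amounts to asking for a natural map from the tensor (multiplicative conjunction) of two objects to their par (multiplicative disjunction); saying that Dummett categories 'are MIX' means that they validate this principle.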


2012 ◽ Vol 22 (3) ◽ pp. 409-449 ◽ Author(s): Satoshi Matsuoka

In this paper we propose a novel approach for analysing proof nets of Multiplicative Linear Logic (MLL) using coding theory. We define families of proof structures called PS-families and introduce a metric space for each family. In each family: (1) an MLL proof net is a true code element; and (2) a proof structure that is not an MLL proof net is a false (or corrupted) code element. The definition of our metrics elegantly reflects the duality of the multiplicative connectives. We show that in our framework one-error-detection is always possible but one-error-correction is always impossible. We also demonstrate the importance of our main result by presenting two proof-net enumeration algorithms for a given PS-family: the first searches proof nets naively and exhaustively without help from our main result, while the second uses our main result to carry out an intelligent search. In some cases, the first algorithm visits exponentially many proof structures, while the second visits only polynomially many.
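The coding-theoretic reading can be made precise with the classical relationship between minimum distance and error handling: a code whose codewords are pairwise at distance at least $d$ can detect up to $d-1$ errors and correct up to $\lfloor (d-1)/2 \rfloor$ errors, so

\[
\text{detecting one error requires } d \ge 2,
\qquad
\text{correcting one error requires } d \ge 3.
\]

Read through this lens (assuming the authors' metric behaves like a classical block-code distance, which the abstract suggests but is not verified here), the main result says that distinct MLL proof nets in a PS-family always lie at distance at least 2 from one another, yet some pair lies at distance only 2, which blocks correction.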


2000 ◽ Vol 65 (3) ◽ pp. 979-1013 ◽ Author(s): Giovanni Sambin, Giulia Battilotti, Claudia Faggian

Abstract We introduce a sequent calculus B for a new logic, named basic logic. The aim of basic logic is to find a structure in the space of logics. Classical, intuitionistic, quantum and non-modal linear logics are all obtained as extensions in a uniform way and in a single framework. We isolate three properties, which characterize B positively: reflection, symmetry and visibility. A logical constant obeys the principle of reflection if it is characterized semantically by an equation binding it with a metalinguistic link between assertions, and if its syntactic inference rules are obtained by solving that equation. All connectives of basic logic satisfy reflection. To the control of weakening and contraction of linear logic, basic logic adds a strict control of contexts, by requiring that all active formulae in all rules are isolated, that is, visible. From visibility, cut-elimination follows. The full, geometric symmetry of basic logic induces known symmetries of its extensions, and adds a symmetry among them, producing the structure of a cube.
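One familiar instance of a definitional equation of the kind solved by the reflection principle (shown here schematically, for additive conjunction in an ordinary sequent calculus, and not necessarily in the authors' exact formulation) is

\[
\Gamma \vdash A \mathbin{\&} B
\quad\Longleftrightarrow\quad
\Gamma \vdash A \ \text{ and } \ \Gamma \vdash B,
\]

which binds the connective $\&$ to the metalinguistic 'and' between the two assertions on the right; solving the equation, the right-to-left direction yields the usual right rule for $\&$, while the left-to-right direction is what the corresponding left rules must reflect into the calculus.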


2007 ◽ Vol 17 (2) ◽ pp. 341-359 ◽ Author(s): Michele Pagani

We study full completeness and syntactical separability of MLL proof nets with the mix rule. The general method we use consists of first addressing these two questions in the less restrictive framework of proof structures, and then adapting the results to proof nets. At the level of proof structures, we find a semantical characterisation of their interpretations in relational semantics, and define an observational equivalence that is proved to be the equivalence induced by cut elimination. Hence, we obtain a semantical characterisation (in coherent spaces) and an observational equivalence for the proof nets with the mix rule.

