deduction rules
Recently Published Documents

TOTAL DOCUMENTS: 46 (five years: 13)
H-INDEX: 7 (five years: 1)

2021 ◽ Vol 22 (3) ◽ pp. 1-29
Author(s): Simone Martini ◽ Andrea Masini ◽ Margherita Zorzi

We extend to natural deduction the approach of Linear Nested Sequents and of 2-Sequents. Formulas are decorated with a spatial coordinate, which allows a formulation of formal systems in the original spirit of natural deduction: only one introduction and one elimination rule per connective, no additional (structural) rule, no explicit reference to the accessibility relation of the intended Kripke models. We give systems for the normal modal logics from K to S4. For the intuitionistic versions of the systems, we define proof reduction, and prove proof normalization, thus obtaining a syntactical proof of consistency. For logics K and K4 we use existence predicates (à la Scott) for formulating sound deduction rules.
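The flavor of such position-decorated rules can be conveyed with a small sketch (an illustrative reconstruction in the 2-sequent spirit, not the paper's exact rules): each formula carries a positional index n, and the modal rules simply shift it.

```latex
% Box-introduction and Box-elimination, with formulas indexed by a position n.
% Side condition for Box-I (K-style): the premise A^{n+1} must not depend on
% open assumptions at position n+1 or deeper.
\[
\frac{A^{\,n+1}}{\Box A^{\,n}}\;\Box\text{-I}
\qquad\qquad
\frac{\Box A^{\,n}}{A^{\,n+1}}\;\Box\text{-E}
\]
```

As the abstract notes, there is one introduction and one elimination rule per connective, and no explicit accessibility relation: the index shift does the work that relational side conditions do in labelled systems.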


Author(s): Michael Kofi Afriyie ◽ Vincent Mwintieru Nofong ◽ John Wondoh ◽ Hamidu Abdel-Fatao

Periodic frequent patterns are frequent patterns which occur at periodic intervals in databases. They are useful in decision making where event occurrence intervals are vital. Traditional algorithms for discovering periodic frequent patterns, however, often report a large number of such patterns, most of which are often redundant as their periodic occurrences can be derived from other periodic frequent patterns. Using such redundant periodic frequent patterns in decision making would often be detrimental, if not trivial. This paper addresses the challenge of eliminating redundant periodic frequent patterns by employing the concept of deduction rules in mining and reporting only the set of non-redundant periodic frequent patterns. It subsequently proposes and develops a Non-redundant Periodic Frequent Pattern Miner (NPFPM) to achieve this purpose. Experimental analysis on benchmark datasets shows that NPFPM is efficient and can effectively prune the set of redundant periodic frequent patterns.
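The core idea can be sketched in a few lines of Python (a minimal illustration of periodic frequent pattern mining with one redundancy-deduction rule, not the NPFPM algorithm itself; the closedness-style rule below, which drops a pattern whose occurrences coincide with those of a proper superset, is an assumption for illustration):

```python
from itertools import combinations

def occurrences(db, pattern):
    """Timestamps (here: transaction indices) at which every item of the pattern appears."""
    return [t for t, txn in enumerate(db) if pattern <= txn]

def max_period(occs, last_ts):
    """Largest gap between consecutive occurrences, counting the database boundaries."""
    points = [-1] + occs + [last_ts]
    return max(b - a for a, b in zip(points, points[1:]))

def periodic_frequent(db, min_sup, max_per):
    """All itemsets occurring at least min_sup times with every gap at most max_per."""
    items = sorted(set().union(*db))
    result = {}
    for size in range(1, len(items) + 1):
        for combo in combinations(items, size):
            p = frozenset(combo)
            occs = occurrences(db, p)
            if len(occs) >= min_sup and max_period(occs, len(db) - 1) <= max_per:
                result[p] = tuple(occs)
    return result

def prune_redundant(pfps):
    """Deduction rule: drop a pattern if a proper superset has the same occurrences,
    since its periodicity is then derivable from the superset's."""
    return {p: occs for p, occs in pfps.items()
            if not any(p < q and occs == pfps[q] for q in pfps)}

db = [{'a', 'b'}, {'a', 'b'}, {'c'}, {'a', 'b'}]
pfps = periodic_frequent(db, min_sup=3, max_per=2)
print(prune_redundant(pfps))   # only {'a','b'} survives; {'a'} and {'b'} are deducible
```

Here {'a'} and {'b'} occur exactly where {'a','b'} does, so reporting them adds nothing: their periodic occurrences can be deduced, which is the redundancy the abstract targets.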



Author(s): Frank Appiah

This research poster is on provable forms based on syntactic theorems using the Kleene axiom schema. Enact Models I and II of propositional formulas from enactment logic are proven as theorems by deductive rules. The work proves by deduction rules that Enact Models I and II are model theorems in a machine-level interpretation. Enactprover is a machine program for reading and writing Kleene theorem-proving axioms based on enactment logic.


Author(s): Frank Appiah

This research proposal is on provable forms based on syntactic theorems using the Kleene axiom schema. Enact Models I and II of propositional formulas from enactment logic are proven as theorems by deductive rules. The work proves by deduction rules that Enact Models I and II are model theorems [1] in a machine-level interpretation. Enactprover is a machine program for reading and writing Kleene theorem-proving axioms based on enactment logic.


Symmetry ◽ 2020 ◽ Vol 12 (10) ◽ pp. 1584
Author(s): Wolfgang Schreiner ◽ William Steingartner ◽ Valerie Novitzká

We present a categorical formalization of a variant of first-order logic. Unlike other texts on this topic, the goal of this paper is to give a very transparent and self-contained account without requiring more background than basic logic and set theory. Our focus is to show how the semantics of first-order formulas can be derived from their usual deduction rules. For understanding the core ideas, it is not necessary to investigate the internal term structure of atomic formulas, thus we abstract atomic formulas to (syntactically opaque) relations; in this sense, our variant of first-order logic is “relational”. While the derived semantics is based on categorical principles (including the duality that arises when there are two symmetric ways of viewing a construction, with no principled reason to prefer one over the other), it is nevertheless “constructive” in that it describes explicit computations of the truth values of formulas. We demonstrate this by modeling the categorical semantics in the RISCAL (RISC Algorithm Language) system, which allows us to validate the core propositions by automatically checking them in finite models.
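The “constructive” reading, in which truth values of relational formulas are computed explicitly in a finite model, can be illustrated with a small evaluator (a hypothetical Python sketch, not RISCAL; the tuple-based formula encoding is an assumption made for the example):

```python
def holds(formula, model, env=None):
    """Evaluate a relational first-order formula in a finite model.
    model = (domain, relations); formula is a nested tuple AST."""
    env = env or {}
    domain, rels = model
    op = formula[0]
    if op == 'rel':                              # ('rel', name, var1, var2, ...)
        _, name, *vars_ = formula
        return tuple(env[v] for v in vars_) in rels[name]
    if op == 'not':
        return not holds(formula[1], model, env)
    if op == 'and':
        return holds(formula[1], model, env) and holds(formula[2], model, env)
    if op == 'or':
        return holds(formula[1], model, env) or holds(formula[2], model, env)
    if op == 'implies':
        return (not holds(formula[1], model, env)) or holds(formula[2], model, env)
    if op == 'forall':                           # ('forall', var, body)
        _, v, body = formula
        return all(holds(body, model, {**env, v: d}) for d in domain)
    if op == 'exists':
        _, v, body = formula
        return any(holds(body, model, {**env, v: d}) for d in domain)
    raise ValueError(f'unknown connective {op!r}')

# A three-element model with one binary relation R.
m = ([0, 1, 2], {'R': {(0, 1), (1, 2), (2, 0)}})
print(holds(('forall', 'x', ('exists', 'y', ('rel', 'R', 'x', 'y'))), m))  # True
print(holds(('exists', 'y', ('forall', 'x', ('rel', 'R', 'x', 'y'))), m))  # False
```

The two example formulas also show the familiar quantifier-order asymmetry: ∀x∃y R(x,y) holds in this model while ∃y∀x R(x,y) does not.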


2020 ◽ pp. 31-67
Author(s): Timothy Williamson

This chapter argues that the Suppositional Rule is a fallible heuristic, because it has inconsistent consequences. These arise in several ways: (i) it implies standard natural deduction rules for ‘if’, and analogous but incompatible rules for refutation in place of proof; (ii) it implies the equation of the probability of ‘If A, C’ with the conditional probability of C on A, which is subject to the trivialization results of David Lewis and others; (iii) its application to complex attitudes generates further inconsistencies. The Suppositional Rule is compared to inconsistent principles built into other linguistic practices: disquotation principles for ‘true’ and ‘false’ generate Liar-like paradoxes; tolerance principles for vague expressions generate sorites paradoxes. Their status as fallible, semantically invalid but mostly reliable heuristics is not immediately available to competent speakers.
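The tension in point (ii) is easy to see numerically: on a truth-functional reading, the probability of ‘If A, C’ is P(¬A ∨ C), which generally exceeds the conditional probability P(C | A). A small check (illustrative only, over an arbitrary four-world probability space chosen for the example):

```python
from fractions import Fraction

# Four equiprobable worlds; A and C are events (sets of worlds).
worlds = range(4)
prob = {w: Fraction(1, 4) for w in worlds}
A = {0, 1}          # A holds in worlds 0 and 1
C = {0, 2}          # C holds in worlds 0 and 2

def p(event):
    return sum(prob[w] for w in event)

# Truth-functional (material) conditional: 'If A, C' holds wherever A fails or C holds.
material = {w for w in worlds if w not in A or w in C}

p_material = p(material)                 # P(A -> C) on the material reading
p_conditional = p(A & C) / p(A)          # P(C | A)
print(p_material, p_conditional)         # 3/4 1/2
```

So no single proposition can in general have the conditional probability as its probability across all such spaces, which is the thrust of the Lewis-style trivialization results the chapter invokes.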


Author(s): Timothy Williamson

The book argues that our use of conditionals is governed by imperfectly reliable heuristics, in the psychological sense of fast and frugal (or quick and dirty) ways of assessing them. The primary heuristic is this: to assess ‘If A, C’, suppose A and on that basis assess C; whatever attitude you take to C conditionally on A (such as acceptance, rejection, or something in between) take unconditionally to ‘If A, C’. This heuristic yields both the equation of the probability of ‘If A, C’ with the conditional probability of C on A and standard natural deduction rules for the conditional. However, these results can be shown to make the heuristic implicitly inconsistent, and so less than fully reliable. There is also a secondary heuristic: pass conditionals freely from one context to another under normal conditions for acceptance of sentences on the basis of memory and testimony. The effect of the secondary heuristic is to undermine interpretations on which ‘if’ introduces a special kind of context-sensitivity. On the interpretation which makes best sense of the two heuristics, ‘if’ is simply the truth-functional conditional. Apparent counterexamples to truth-functionality are artefacts of reliance on the primary heuristic in cases where it is unreliable. The second half of the book concerns counterfactual conditionals, as expressed with ‘if’ and ‘would’. It argues that ‘would’ is an independently meaningful modal operator for contextually restricted necessity: the meaning of counterfactuals is simply that derived compositionally from the meanings of their constituents, including ‘if’ and ‘would’, making them contextually restricted strict conditionals.


2020 ◽ Vol 30 (1) ◽ pp. 62-117
Author(s): Colin Riba

This paper surveys a new perspective on tree automata and monadic second-order logic (MSO) on infinite trees. We show that the operations on tree automata used in the translations of MSO-formulae to automata underlying Rabin’s Tree Theorem (the decidability of MSO) correspond to the connectives of Intuitionistic Multiplicative Exponential Linear Logic (IMELL). Namely, we equip a variant of usual alternating tree automata (which we call uniform tree automata) with a fibered monoidal-closed structure which in particular handles a linear complementation of alternating automata. Moreover, this monoidal structure is actually Cartesian on non-deterministic automata, and an adaptation of a usual construction for the simulation of alternating automata by non-deterministic ones satisfies the deduction rules of the !(–) exponential modality of IMELL. (But this operation is unfortunately not a functor, because it does not preserve composition.) Our model of IMLL consists of categories of games which are based on usual categories of two-player linear sequential games called simple games, and which generalize usual acceptance games of tree automata. This model provides a realizability semantics, along the lines of the Curry–Howard proofs-as-programs correspondence, of a linear constructive deduction system for tree automata. This realizability semantics, which can be summarized with the slogan “automata as objects, strategies as morphisms,” satisfies an expected property of witness extraction from proofs of existential statements. Moreover, it makes it possible to combine realizers produced as interpretations of proofs with strategies witnessing (non-)emptiness of tree automata.

