An abstract form of the first epsilon theorem

2020 ◽  
Vol 30 (8) ◽  
pp. 1447-1468
Author(s):  
Matthias Baaz ◽  
Alexander Leitsch ◽  
Anela Lolic

Abstract We present a new method of computing Herbrand disjunctions. The most direct approach to date for computing Herbrand disjunctions is based on Hilbert's epsilon formalism (which is in fact also the oldest framework for proof theory). The algorithm for computing Herbrand disjunctions is an integral part of the proof of the extended first epsilon theorem. This paper introduces a more abstract form of epsilon proofs, the function variable proofs. This leads to a computationally improved version of the extended first epsilon theorem, which allows a non-elementary speed-up in the computation of Herbrand disjunctions. As an application, sequent calculus proofs are translated into function variable proofs, and a variant of the axiom of global choice is shown to be removable from proofs in von Neumann–Bernays–Gödel set theory.
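For orientation (an illustration of the objects involved, not drawn from the paper): a Herbrand disjunction replaces an existential quantifier by finitely many term instances, and Hilbert's epsilon operator names a canonical witness.

```latex
% Herbrand's theorem, existential case: a provable purely existential
% formula has a provable quantifier-free disjunction of term instances.
\[
  \vdash \exists x\, A(x)
  \quad\Longrightarrow\quad
  \vdash A(t_1) \lor \cdots \lor A(t_n)
\]
% Critical equivalence for Hilbert's epsilon operator:
% \varepsilon x.\,A(x) denotes a witness for A whenever one exists.
\[
  \exists x\, A(x) \;\leftrightarrow\; A\bigl(\varepsilon x.\, A(x)\bigr)
\]
```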

2013 ◽  
Vol 13 (01) ◽  
pp. 1350003 ◽  
Author(s):  
TOSHIYASU ARAI

We show that the existence of a weakly compact cardinal over Zermelo–Fraenkel set theory ZF is proof-theoretically reducible to iterations of Mostowski collapsings and Mahlo operations.
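As background (a standard definition from the set-theoretic literature, stated here for orientation rather than quoted from the paper), the Mahlo operation thins a class of ordinals to those in which the class is stationary:

```latex
\[
  \mathrm{M}(X) \;=\; \{\, \alpha : X \cap \alpha \text{ is stationary in } \alpha \,\}
\]
```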


2018 ◽  
Vol 6 (5) ◽  
pp. 447-458
Author(s):  
Yizhou Chen ◽  
Jiayang Wang

Abstract On the basis of rough set theory, the strengths of dynamic reduction over traditional non-dynamic methods are elaborated. A systematic account of dynamic reduction, from the sampling process to the generation of the reduct set, is presented. A new sampling method is introduced to avoid the defects of overly subjective sampling. To address the excessive time consumption of the traditional dynamic reduction process, a fast algorithm is proposed under the given constraint conditions. We also prove that the dynamic core possesses the essential characteristics of a reduction core, on the basis of a formalized definition of the multi-layered dynamic core.
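A minimal sketch of the classical (non-dynamic) notions the abstract builds on: the positive region of a decision table, and the reduction core, i.e., the attributes whose removal shrinks the positive region. The toy table and function names are illustrative, not the authors' algorithm.

```python
from itertools import groupby

def partition(rows, attrs):
    """Equivalence classes of the indiscernibility relation on attrs."""
    key = lambda i: tuple(rows[i][a] for a in attrs)
    idx = sorted(range(len(rows)), key=key)
    return [set(g) for _, g in groupby(idx, key=key)]

def positive_region(rows, attrs, decision):
    """Row indices whose attrs-class is consistent on the decision attribute."""
    pos = set()
    for block in partition(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            pos |= block
    return pos

def core(rows, attrs, decision):
    """Core = attributes whose removal shrinks the positive region."""
    full = positive_region(rows, attrs, decision)
    return {a for a in attrs
            if positive_region(rows, [b for b in attrs if b != a], decision) != full}

# Toy decision table: condition attributes a, b and decision attribute d.
rows = [
    {"a": 0, "b": 0, "d": "no"},
    {"a": 0, "b": 1, "d": "yes"},
    {"a": 1, "b": 1, "d": "yes"},
]
print(core(rows, ["a", "b"], "d"))  # -> {'b'}: dropping b merges rows that disagree on d
```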


Author(s):  
John P. Burgess

This article explores the role of logic in philosophical methodology, as well as its application in philosophy. The discussion gives roughly equal coverage to the seven branches of logic: elementary logic, set theory, model theory, recursion theory, proof theory, extraclassical logics, and anticlassical logics. Mathematical logic comprises set theory, model theory, recursion theory, and proof theory. Philosophical logic in the relevant sense is divided into the study of extensions of classical logic, such as modal or temporal or deontic or conditional logics, and the study of alternatives to classical logic, such as intuitionistic or quantum or partial or paraconsistent logics. The nonclassical consists of the extraclassical and the anticlassical, although the distinction is not clear-cut.


10.29007/p1fd ◽  
2018 ◽  
Author(s):  
Ozan Kahramanogullari

The deep inference presentation of multiplicative exponential linear logic (MELL) benefits from a rich combinatorial analysis, admitting many more proofs than its sequent calculus presentation. In the deep inference setting, all sequent calculus proofs are preserved; moreover, many other proofs become available, some of them much shorter. However, proof search in deep inference is subject to greater nondeterminism, and this nondeterminism is a bottleneck for applications. We address the problem of reducing nondeterminism in MELL by refining and extending a technique previously applied to multiplicative linear logic and classical logic. We show that, besides the nondeterminism in commutative contexts, the nondeterminism in exponential contexts can be reduced in a proof-theoretically clean manner. The method preserves the exponential speed-up in proof construction due to deep inference, exemplified by Statman tautologies. We validate the improved access to shorter proofs by experiments with our implementations.
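To illustrate the deep inference style (standard calculus-of-structures rules from the literature, not results of this paper), rules such as switch and promotion rewrite inside an arbitrary formula context S{ }, which is what enlarges the proof space relative to the sequent calculus:

```latex
% requires \usepackage{cmll} for \parr (par), \oc (!), \wn (?)
\[
  \mathsf{s}\;\frac{S\{(A \parr B) \otimes C\}}{S\{A \parr (B \otimes C)\}}
  \qquad
  \mathsf{p}\;\frac{S\{\oc(A \parr B)\}}{S\{\oc A \parr \wn B\}}
\]
```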


2019 ◽  
Vol 25 (4) ◽  
pp. 429-445
Author(s):  
ASAF KARAGILA

Abstract J. L. Krivine developed a new method based on realizability to construct models of set theory where the axiom of choice fails. We attempt to recreate his results in classical settings, i.e., symmetric extensions. We also provide a new condition for preserving well-ordered choice, and other particular types of choice, in the general setting of symmetric extensions.


Author(s):  
Wilfried Sieg

Proof theory is a branch of mathematical logic founded by David Hilbert around 1920 to pursue Hilbert’s programme. The problems addressed by the programme had already been formulated, in some sense, at the turn of the century, for example, in Hilbert’s famous address to the First International Congress of Mathematicians in Paris. They were closely connected to the set-theoretic foundations for analysis investigated by Cantor and Dedekind – in particular, to difficulties with the unrestricted notion of system or set; they were also related to the philosophical conflict with Kronecker on the very nature of mathematics. At that time, the central issue for Hilbert was the ‘consistency of sets’ in Cantor’s sense. Hilbert suggested that the existence of consistent sets, for example, the set of real numbers, could be secured by proving the consistency of a suitable, characterizing axiom system, but indicated only vaguely how to give such proofs model-theoretically.

Four years later, Hilbert departed radically from these indications and proposed a novel way of attacking the consistency problem for theories. This approach required, first of all, a strict formalization of mathematics together with logic; then, the syntactic configurations of the joint formalism would be considered as mathematical objects; finally, mathematical arguments would be used to show that contradictory formulas cannot be derived by the logical rules. This two-pronged approach of developing substantial parts of mathematics in formal theories (set theory, second-order arithmetic, finite type theory and still others) and of proving their consistency (or the consistency of significant sub-theories) was sharpened in lectures beginning in 1917 and then pursued systematically in the 1920s by Hilbert and a group of collaborators including Paul Bernays, Wilhelm Ackermann and John von Neumann. In particular, the formalizability of analysis in a second-order theory was verified by Hilbert in those very early lectures.

So it was possible to focus on the second prong, namely to establish the consistency of ‘arithmetic’ (second-order number theory and set theory) by elementary mathematical, ‘finitist’ means. This part of the task proved to be much more recalcitrant than expected, and only limited results were obtained. That the limitation was inevitable was explained in 1931 by Gödel’s theorems; indeed, they refuted the attempt to establish consistency on a finitist basis – as soon as it was realized that finitist considerations could be carried out in a small fragment of first-order arithmetic.

This led to the formulation of a general reductive programme. Gentzen and Gödel made the first contributions to this programme by establishing the consistency of classical first-order arithmetic – Peano arithmetic (PA) – relative to intuitionistic arithmetic – Heyting arithmetic. In 1936 Gentzen proved the consistency of PA relative to a quantifier-free theory of arithmetic that included transfinite recursion up to the first epsilon number, ε0; in his 1941 Yale lectures, Gödel proved the consistency of the same theory relative to a theory of computable functionals of finite type. These two fundamental theorems turned out to be most important for subsequent proof-theoretic work. Currently it is known how to analyse, in Gentzen’s style, strong subsystems of second-order arithmetic and set theory.
The first prong of proof-theoretic investigations, the actual formal development of parts of mathematics, has also been pursued – with a surprising result: the bulk of classical analysis can be developed in theories that are conservative over (fragments of) first-order arithmetic.
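For reference, the ordinal ε0 mentioned above is the least fixed point of base-ω ordinal exponentiation (a standard definition, added here for orientation):

```latex
\[
  \varepsilon_0
  \;=\; \min\{\, \alpha : \omega^{\alpha} = \alpha \,\}
  \;=\; \sup\{\, \omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots \,\}
\]
```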


1991 ◽  
Vol 02 (01) ◽  
pp. 418-422
Author(s):  
PETRI MÄHÖNEN ◽  
VEIKKO PUNKKA

A new method of numerically calculating the time evolution of a gravitational field in General Relativity is introduced. The vierbein (tetrad) formalism, a dynamic lattice, and massively parallelized computation are suggested, as they are expected to speed up the calculations considerably and to facilitate the solution of problems previously considered too hard to be solved, such as the time evolution of a system of two or more black holes or the structure of wormholes.
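For background (the standard relation of the tetrad formalism, not specific to this paper), the vierbein expresses the spacetime metric through the flat Minkowski metric:

```latex
\[
  g_{\mu\nu} \;=\; e^{a}{}_{\mu}\, e^{b}{}_{\nu}\, \eta_{ab},
  \qquad
  \eta_{ab} \;=\; \operatorname{diag}(-1,\,1,\,1,\,1)
\]
```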


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Tengfei Zhang ◽  
Fumin Ma ◽  
Jie Cao ◽  
Chen Peng ◽  
Dong Yue

Parallel attribute reduction is one of the most important topics in current research on rough set theory. Although some parallel algorithms are well documented, most of them still face challenges in effectively dealing with complex heterogeneous data comprising both categorical and numerical attributes. To address this problem, a novel attribute reduction algorithm based on neighborhood multigranulation rough sets was developed to process massive heterogeneous data in parallel. A MapReduce-based parallelization method for attribute reduction was proposed in the framework of neighborhood multigranulation rough sets. To improve reduction efficiency, hashing Map/Reduce functions were designed to speed up the positive-region calculation. On this basis, a fast parallel attribute reduction algorithm using MapReduce was developed. The effectiveness and superiority of this parallel algorithm were demonstrated by theoretical analysis and comparison experiments.
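A minimal single-machine sketch of the map/reduce pattern behind the positive-region computation the abstract describes; the hashed equivalence-class keys stand in for the paper's hashing Map/Reduce functions, and all names and data are illustrative rather than the authors' implementation.

```python
from collections import defaultdict

def map_phase(rows, cond_attrs, decision):
    """Map: hash each row to the key formed by its condition-attribute
    values, emitting the decision label as the value."""
    for row in rows:
        key = hash(tuple(row[a] for a in cond_attrs))  # hashed equivalence-class key
        yield key, row[decision]

def reduce_phase(pairs):
    """Reduce: an equivalence class contributes to the positive region
    iff all of its rows carry the same decision label."""
    classes = defaultdict(list)
    for key, label in pairs:
        classes[key].append(label)
    return sum(len(v) for v in classes.values() if len(set(v)) == 1)

# Toy decision table: condition attributes a, b and decision attribute d.
rows = [
    {"a": 0, "b": 0, "d": "no"},
    {"a": 0, "b": 0, "d": "yes"},   # conflicts with the row above
    {"a": 1, "b": 1, "d": "yes"},
]
print(reduce_phase(map_phase(rows, ["a", "b"], "d")))  # -> 1
```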


1975 ◽  
Vol 40 (2) ◽  
pp. 113-129 ◽  
Author(s):  
Harvey Friedman

This expository paper contains a list of 102 problems which, at the time of publication, are unsolved. These problems are distributed in four subdivisions of logic: model theory, proof theory and intuitionism, recursion theory, and set theory. They are written in the form of statements which we believe to be at least as likely as their negations. These should not be viewed as conjectures since, in some cases, we had no opinion as to which way the problem would go.

In each case where we believe a problem did not originate with us, we made an effort to pinpoint a source. Often this was a difficult matter, based on subjective judgments. When we were unable to pinpoint a source, we left a question mark. No inference should be drawn concerning the beliefs of the originator of a problem as to which way it will go (lest the originator be us).

The choice of these problems was based on five criteria. Firstly, we are only including problems which call for the truth value of a particular mathematical statement. A second criterion is the extent to which the concepts involved in the statements are concepts that are well known, well defined, and well understood, as well as having been extensively considered in the literature. A third criterion is the extent to which these problems have natural, simple and attractive formulations. A fourth criterion is the extent to which there is evidence that a real difficulty exists in finding a solution. Lastly and unavoidably, the extent to which these problems are connected with the author's research interests in mathematical logic.

