minimal logic
Recently Published Documents

TOTAL DOCUMENTS: 68 (five years: 14)
H-INDEX: 11 (five years: 1)

Author(s):  
Lew Gordeev ◽  
Edward Hermann Haeusler

In [3] we proved the conjecture NP = PSPACE by advanced proof-theoretic methods that combined Hudelmaier's cut-free sequent calculus for minimal logic (HSC) [5] with horizontal compression in the corresponding minimal Prawitz-style natural deduction (ND) [6]. In this Addendum we show how to prove the weaker result NP = coNP without referring to HSC. The underlying idea (due to the second author) is to omit full minimal logic and compress only "naive" normal tree-like ND refutations of the existence of Hamiltonian cycles in given non-Hamiltonian graphs, since the Hamiltonian cycle problem is NP-complete. Thus, loosely speaking, the proof of NP = coNP can be obtained by HSC-elimination from our proof of NP = PSPACE [3].
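The abstract above turns on the Hamiltonian cycle problem being NP-complete. The NP side of that claim can be recalled concretely: a claimed cycle is a certificate checkable in time polynomial in the graph size. A minimal sketch (representation and names are our own illustration, not the paper's):

```python
def is_hamiltonian_cycle(n, edges, cycle):
    """Check a claimed Hamiltonian cycle in an undirected graph.

    n      -- number of vertices, labelled 0..n-1
    edges  -- iterable of (u, v) pairs
    cycle  -- claimed vertex order; must visit every vertex exactly once
    """
    if sorted(cycle) != list(range(n)):
        return False  # not a permutation of the vertices
    edge_set = {frozenset(e) for e in edges}
    # every consecutive pair (wrapping around) must be an edge
    return all(frozenset((cycle[i], cycle[(i + 1) % n])) in edge_set
               for i in range(n))
```

The check runs in O(n + |edges|) time, which is what places the problem in NP; no polynomial-time procedure is known for *finding* such a cycle.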


2021 ◽  
Vol 27 (11) ◽  
pp. 1193-1202
Author(s):  
Ashot Baghdasaryan ◽  
Hovhannes Bolibekyan

There are three main problems for theorem proving with a standard cut-free system for first-order minimal logic. The first is the possibility of looping. Second, the system may generate proofs that are permutations of one another. Finally, during proof search one must choose which rules to apply and where to apply them. New systems with history mechanisms were introduced to solve the looping problem of automated theorem provers for first-order minimal logic. To address the rule-selection problem, recurrent neural networks are deployed to determine which formula from the context should be used in further steps. As a result, the time spent on theorem proving is reduced.
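The looping problem and the history mechanism can be made concrete with a toy backward prover for the purely implicational fragment, where a history of visited (context, goal) states prunes looping branches. This is only an illustrative sketch of the idea, not the cited system (which works in full first-order minimal logic and adds RNN-guided rule selection):

```python
def imp(a, b): return ('imp', a, b)
def atom(p): return ('atom', p)

def split(f):
    """Decompose B1 -> (B2 -> ... -> H) into ([B1, ..., Bn], H)."""
    prems = []
    while f[0] == 'imp':
        prems.append(f[1])
        f = f[2]
    return prems, f

def prove(gamma, goal, history=frozenset()):
    """Backward search for gamma |- goal in implicational minimal logic.

    history records the (context, goal) states already visited on this
    branch; revisiting one means the search is looping, so we prune.
    """
    state = (gamma, goal)
    if state in history:
        return False               # loop check via the history mechanism
    history = history | {state}
    if goal[0] == 'imp':           # right implication: premise joins the context
        return prove(gamma | {goal[1]}, goal[2], history)
    for h in gamma:                # atomic goal: backchain on a hypothesis
        prems, head = split(h)
        if head == goal and all(prove(gamma, p, history) for p in prems):
            return True
    return False
```

For example, composition `(p->q) -> ((q->r) -> (p->r))` is provable, while Peirce's law `((p->q)->p) -> p` fails, as expected in minimal logic. Without the history check, goals such as `p` under the context `{q->p, p->q}` would loop forever.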


2021 ◽  
Vol 27 (11) ◽  
pp. 1149-1151
Author(s):  
Nelson Baloian ◽  
José Pino

Modern technologies and various domains of human activity increasingly rely on data science to develop smarter and autonomous systems. This trend has already changed the whole landscape of the global economy, which is becoming more AI-driven. Massive production of data by humans and machines, its availability for feasible processing with the advent of deep learning infrastructures, and advancements in reliable information transfer capacities open unbounded horizons for societal progress in the near future. Quite naturally, this also brings new challenges for science and industry. In that context, the Internet of Things (IoT) is an enormous factory of monitoring and data generation. It enables countless devices to act as sensors which record and manipulate data, while requiring efficient algorithms to derive actionable knowledge. Billions of end-users equipped with smart mobile phones also produce immensely large volumes of data, be it about user interaction or indirect telemetry such as location coordinates. Social networks represent another kind of data-intensive source, with both structured and unstructured components, containing valuable information about the world's connectivity, dynamism, and more. Last but not least, to help businesses run smoothly, today's cloud computing infrastructures and applications are also serviced and managed by measuring huge amounts of data, which is leveraged in various predictive and automation tasks for healthy performance and permanent availability. All these technology areas, experts, and practitioners therefore face innovation challenges in building novel methodologies, accurate models, and systems for data-driven solutions that are effective and efficient. In view of the complexity of contemporary neural network architectures and the models with millions of parameters they derive, one such challenge concerns the explainability of machine learning models.
Explainability refers to the ability of a model to give information, interpretable by humans, about the reasons for a decision made or a recommendation released. These challenges can only be met with a mix of basic research, process modeling, and simulation under uncertainty using qualitative and quantitative methods from the involved sciences, taking into account international standards and adequate evaluation methods. Based on a successful funded collaboration between the American University of Armenia, the University of Duisburg-Essen, and the University of Chile, a network was built in previous years, and in September 2020 a group of researchers gathered (although virtually) for the 2nd CODASSCA workshop on "Collaborative Technologies and Data Science in Smart City Applications". The event attracted 25 paper submissions dealing with the problems and challenges mentioned above. The studies are in specialized areas and disclose novel solutions and approaches based on existing theories suitably applied. The authors of the best papers published in the conference proceedings on Collaborative Technologies and Data Science in Artificial Intelligence Applications by Logos edition Berlin were invited to submit significantly extended and improved versions of their contributions for a journal special issue of J.UCS. There was also an open J.UCS call so that any author could submit papers on the highlighted subject. For this volume we selected the submissions dealing with more theoretical issues; these were rigorously reviewed in three rounds, and six papers were selected for publication.
The editors would like to express their gratitude to the J.UCS foundation for accepting the special issue in their journal; to the German Research Foundation (DFG), the German Academic Exchange Service (DAAD), and the universities and sponsors involved for funding the common activities; to the editors of the CODASSCA 2020 proceedings for their ongoing encouragement and support; to the authors for their contributions; and to the anonymous reviewers for their invaluable support.
The paper "Incident Management for Explainable and Automated Root Cause Analysis in Cloud Data Centers" by Arnak Poghosyan, Ashot Harutyunyan, Naira Grigoryan, and Nicholas Kushmerick addresses an increasingly important problem on the way towards autonomous (self-X) systems: intelligent management of modern cloud environments, with an emphasis on explainable AI. It demonstrates techniques and methods that greatly help in the automated discovery of explicit conditions leading to data center incidents. The paper "Temporal Accelerators: Unleashing the Potential of Embedded FPGAs" by Christopher Cichiwskyj and Gregor Schiele presents an approach for executing computational tasks that can be split into sequential sub-tasks. It divides accelerators into multiple smaller parts and uses the reconfiguration capabilities of the FPGA to execute the parts according to a task graph. This improves the energy consumption and the cost of using FPGAs in IoT devices. The paper "On Recurrent Neural Network based Theorem Prover for First Order Minimal Logic" by Ashot Baghdasaryan and Hovhannes Bolibekyan investigates the use of recurrent neural networks to determine the order of proof search in a sequent calculus for first-order minimal logic with a history mechanism. It demonstrates reduced proof search times in automated theorem proving. The paper "Incremental Autoencoders for Text Streams Clustering in Social Networks" by Amal Rekik and Salma Jamoussi proposes a deep learning method to identify trending topics in a social network.
It is built on detecting changes in streams of tweets. The method is experimentally validated to outperform relevant data stream algorithms in identifying "hot" topics. The paper "E-Capacity–Equivocation Region of Wiretap Channel" by Mariam Haroutunian studies a secure communication problem over the wiretap channel, where information transfer from the source to a legitimate receiver must be kept maximally secret from an eavesdropper. This information-theoretic study generalizes the capacity-equivocation region and secrecy-capacity function of the wiretap channel subject to an error exponent criterion, thus deriving new and extended fundamental limits of reliable and secure communication in the presence of a wiretapper. The paper "Leveraging Multifaceted Proximity Measures among Developers in Predicting Future Collaborations to Improve the Social Capital of Software Projects" by Amit Kumar and Sonali Agarwal targets improving the social capital of individual software developers and projects using machine learning. The authors' approach applies network proximity and developer activity features to build a classifier for predicting future collaborations among developers and for generating relevant recommendations.


Author(s):  
Amy Felty ◽  
Carlos Olarte ◽  
Bruno Xavier

Abstract: Linear logic (LL) has been used as a foundation (and inspiration) for the development of programming languages, logical frameworks, and models of concurrency. LL's cut-elimination and the completeness of focusing are two of its fundamental properties that have been exploited in such applications. This paper formalizes the proof of cut-elimination for focused LL. For that, we propose a set of five cut-rules that allows us to prove cut-elimination directly on the focused system. We also encode the inference rules of other logics as LL theories and formalize the conditions under which those logics enjoy cut-elimination. We then obtain, for free, cut-elimination for first-order classical logic, intuitionistic logic, and variants of LL. We also use the LL metatheory to formalize the relative completeness of natural deduction and sequent calculus in first-order minimal logic. Hence, we propose a framework that can be used to formalize fundamental properties of logical systems specified as LL theories.
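For orientation, the rule whose admissibility is at stake is the standard cut rule, stated here in the usual one-sided sequent form for linear logic; the paper's five cut-rules are refinements of it adapted to the focused system:

\[
\frac{\;\vdash \Gamma, A \qquad \vdash \Delta, A^{\perp}\;}{\vdash \Gamma, \Delta}\ \mathsf{cut}
\]

Eliminating cut shows that any proof using this detour through an intermediate formula $A$ can be rewritten into a direct, cut-free proof of $\vdash \Gamma, \Delta$.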


2021 ◽  
Vol 62 (5) ◽  
pp. 876-881
Author(s):  
L. L. Maksimova ◽  
V. F. Yun

2021 ◽  
Vol 62 (5) ◽  
pp. 1084-1090
Author(s):  
L. L. Maksimova ◽  
V. F. Yun

Author(s):  
Lew Gordeev ◽  
Edward Hermann Haeusler

We upgrade [3] to a complete proof of the conjecture NP = PSPACE, known as one of the fundamental open problems in the mathematical theory of computational complexity; this proof is based on [2]. Since minimal propositional logic is known to be PSPACE-complete, while PSPACE is known to include NP, it suffices to show that every valid purely implicational formula ρ has a proof whose weight (= total number of symbols) and time complexity of the provability involved are both polynomial in the weight of ρ. As in [3], we use a proof-theoretic approach. Recall that in [3] we considered any valid ρ in question that had (by the definition of validity) a "short" tree-like proof π in the Hudelmaier-style cut-free sequent calculus for minimal logic. "Shortness" means that the height of π and the total weight of the different formulas occurring in it are both polynomial in the weight of ρ. However, the size (= total number of nodes), and hence also the weight, of π could be exponential in that of ρ. To overcome this obstacle we embedded π into Prawitz's proof system of natural deductions, whose nodes carry single formulas instead of sequents. As in π, the height and the total weight of the different formulas of the resulting tree-like natural deduction ∂1 were polynomial, although the size of ∂1 could still be exponential, in the weight of ρ. In our next, crucial move, ∂1 was deterministically compressed into a "small", although multipremise, dag-like deduction ∂ whose horizontal levels contained only mutually different formulas, which made the whole weight polynomial in that of ρ. However, ∂ required a more complicated verification of the underlying provability of ρ. In this paper we present a nondeterministic compression of ∂ into a desired standard dag-like deduction ∂0 that deterministically proves ρ in time and space polynomial in the weight of ρ. Together with [3], this completes the proof of NP = PSPACE. Natural deductions are essential for our proof.
Tree-to-dag horizontal compression of π merging equal sequents, instead of formulas, is possible but not sufficient, since the total number of different sequents in π might be exponential in the weight of ρ, even assuming that all formulas occurring in sequents are subformulas of ρ. On the other hand, we need Hudelmaier's cut-free sequent calculus in order to control both the height and the total weight of the different formulas of the initial tree-like proof π, since standard Prawitz normalization, although it provides natural deductions with the subformula property, does not preserve polynomial height. It is not yet clear whether we can omit references to π even in the proof of the weaker result NP = coNP.
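The horizontal compression step described above, which merges equal formulas occurring on the same horizontal level of a tree-like deduction, can be caricatured in a few lines. This is only an illustrative sketch (string labels stand in for formulas, and the real construction must also repair the inference structure after merging, which is precisely what makes verification nontrivial):

```python
class Node:
    """A deduction node: a label (standing in for a formula) plus premises."""
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def count_nodes(root):
    """Count distinct nodes reachable from root (works on trees and dags)."""
    seen, stack = set(), [root]
    while stack:
        n = stack.pop()
        if id(n) not in seen:
            seen.add(id(n))
            stack.extend(n.children)
    return len(seen)

def compress(root):
    """Merge nodes carrying the same label on the same level.

    The resulting dag has mutually different labels on each horizontal
    level, so its size is bounded by (number of levels) x (number of
    distinct labels) -- polynomial whenever both factors are.
    """
    canon = {}  # (depth, label) -> canonical node for that level
    def walk(node, depth):
        key = (depth, node.label)
        if key not in canon:
            rep = Node(node.label)
            canon[key] = rep
            rep.children = [walk(c, depth + 1) for c in node.children]
        return canon[key]
    return walk(root, 0)
```

On a tree whose two subtrees repeat the same labels level by level, `compress` collapses each level to its distinct labels, turning an exponential-size tree into a dag of polynomial size.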


2020 ◽  
Vol 59 (7-8) ◽  
pp. 905-924
Author(s):  
Hannes Diener ◽  
Maarten McKubre-Jordens
