Theorem Provers
Recently Published Documents

Total documents: 187 (five years: 17)
H-index: 15 (five years: 2)

2021
Author(s): Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, Tim Rocktäschel

Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs). These neuro-symbolic models can induce interpretable rules and learn representations from data via back-propagation, while providing logical explanations for their predictions. However, they are restricted by their computational complexity, as they need to consider all possible proof paths for explaining a goal, rendering them unfit for large-scale applications. We present Conditional Theorem Provers (CTPs), an extension to NTPs that learns an optimal rule-selection strategy via gradient-based optimisation. We show that CTPs are scalable and yield state-of-the-art results on the CLUTRR dataset, which tests the systematic generalisation of neural models by training them to reason over smaller graphs and evaluating them on larger ones. Finally, CTPs show better link-prediction results on standard benchmarks in comparison with other neural-symbolic models, while remaining explainable. All source code and datasets are available online at https://github.com/uclnlp/ctp.
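
To make the rule-selection idea concrete, here is a minimal backward-chaining sketch in which a goal-conditioned selector, standing in for the module CTPs learn by gradient-based optimisation, decides which rules to expand instead of enumerating the whole rule base. All names, the discrete selector, and the restriction to chain rules are illustrative simplifications, not the authors' implementation:

```python
from typing import Callable, List, Set, Tuple

Fact = Tuple[str, str, str]      # (relation, subject, object)
Rule = Tuple[str, List[str]]     # head relation plus a chain of body relations

def prove(goal: Fact, facts: Set[Fact], entities: Set[str],
          select_rules: Callable[[Fact], List[Rule]], depth: int = 2) -> bool:
    """Backward chaining over chain rules r(X, Y) :- b1(X, Z), b2(Z, Y).
    `select_rules` plays the part of the learned, goal-conditioned
    selector: only the rules it proposes are expanded."""
    if goal in facts:
        return True
    if depth == 0:
        return False
    rel, x, y = goal
    for head, body in select_rules(goal):
        if head != rel:
            continue
        if len(body) == 1 and prove((body[0], x, y), facts, entities,
                                    select_rules, depth - 1):
            return True
        if len(body) == 2:
            for z in entities:   # bind the intermediate variable Z
                if (prove((body[0], x, z), facts, entities, select_rules, depth - 1)
                        and prove((body[1], z, y), facts, entities, select_rules, depth - 1)):
                    return True
    return False

# Toy kinship task: the selector proposes a single rule for "grandpaOf"
# instead of scoring the goal against every rule in the knowledge base.
facts = {("fatherOf", "abe", "homer"), ("fatherOf", "homer", "bart")}
rules = [("grandpaOf", ["fatherOf", "fatherOf"])]
select = lambda goal: rules if goal[0] == "grandpaOf" else []
print(prove(("grandpaOf", "abe", "bart"), facts,
            {"abe", "homer", "bart"}, select))   # True
```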


2021, Vol. 5 (OOPSLA), pp. 1–28
Author(s): Chandrakana Nandi, Max Willsey, Amy Zhu, Yisu Remy Wang, Brett Saiki, ...

Many compilers, synthesizers, and theorem provers rely on rewrite rules to simplify expressions or prove equivalences. Developing rewrite rules can be difficult: rules may be subtly incorrect, profitable rules are easy to miss, and rulesets must be rechecked or extended whenever semantics are tweaked. Large rulesets can also be challenging to apply: redundant rules slow down rule-based search and frustrate debugging. This paper explores how equality saturation, a promising technique that uses e-graphs to apply rewrite rules, can also be used to infer rewrite rules. E-graphs can compactly represent the exponentially large sets of enumerated terms and potential rewrite rules. We show that equality saturation efficiently shrinks both sets, leading to faster synthesis of smaller, more general rulesets. We prototyped these strategies in a tool dubbed Ruler. Compared to a similar tool built on CVC4, Ruler synthesizes 5.8× smaller rulesets 25× faster, without compromising on proving power. In an end-to-end case study, we show that Ruler-synthesized rules perform as well as those crafted by domain experts and address a longstanding issue in a popular open-source tool.
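
To give a feel for the enumerate-and-group half of such a pipeline, the toy below buckets small arithmetic terms by their values on a handful of sample inputs and proposes candidate rules within each bucket. Ruler itself uses e-graphs and equality saturation to keep both the term set and the ruleset small, and verifies candidates before accepting them; everything below is an illustrative simplification:

```python
import itertools
from collections import defaultdict

SAMPLES = [-2, -1, 0, 1, 3]          # inputs used to fingerprint terms

def terms(depth):
    """Enumerate arithmetic terms over one variable x, up to `depth`."""
    if depth == 0:
        yield from ("x", "0", "1")
        return
    yield from terms(depth - 1)
    for op, a, b in itertools.product("+*", terms(depth - 1), terms(depth - 1)):
        yield f"({a} {op} {b})"

def fingerprint(term):
    """Evaluate the term on all sample inputs (its characteristic vector)."""
    return tuple(eval(term, {}, {"x": v}) for v in SAMPLES)

buckets = defaultdict(list)
for t in set(terms(1)):
    buckets[fingerprint(t)].append(t)

for group in buckets.values():       # terms that agree on every sample
    group.sort(key=len)              # shortest term becomes the RHS
    for lhs in group[1:]:
        print(f"{lhs}  ==>  {group[0]}")   # candidate rule, e.g. (x * 1) ==> x
```

Candidates produced this way are only conjectures (agreement on samples is not a proof); Ruler's verification step, elided here, is what makes the synthesized rules sound.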


Author(s): Martin Suda

Abstract: We re-examine the topic of machine-learned clause selection guidance in saturation-based theorem provers. The central idea, recently popularized by the ENIGMA system, is to learn a classifier for recognizing clauses that appeared in previously discovered proofs. In subsequent runs, clauses classified positively are prioritized for selection. We propose several improvements to this approach and experimentally confirm their viability. For the demonstration, we use a recursive neural network to classify clauses based on their derivation history and the presence or absence of automatically supplied theory axioms therein. The automatic theorem prover Vampire guided by the network achieves a 41% improvement on a relevant subset of SMT-LIB in a real-time evaluation.
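
Schematically, the learned guidance slots into the given-clause loop as a scoring function over the passive clause set. The sketch below is a generic illustration only: the classifier, the clause encoding (sets of literals), and the inference function are stand-ins, and Suda's recursive network additionally consumes each clause's derivation history rather than the clause alone:

```python
import heapq, itertools

def saturate(axioms, negated_conjecture, infer, classify, max_steps=10_000):
    """Given-clause loop. `infer(given, active)` yields newly derived
    clauses; `classify(clause)` returns a score in [0, 1], where higher
    means 'more likely to appear in a proof' (the learned guidance)."""
    age = itertools.count()                      # tie-breaker on equal scores
    passive, active = [], set()
    for c in axioms + [negated_conjecture]:
        heapq.heappush(passive, (-classify(c), next(age), c))
    for _ in range(max_steps):
        if not passive:
            return "saturated"                   # search space exhausted
        _, _, given = heapq.heappop(passive)     # best-scored clause first
        if not given:                            # empty clause: refutation
            return "proof found"
        active.add(given)
        for new in infer(given, active):
            heapq.heappush(passive, (-classify(new), next(age), new))
    return "timeout"
```

In practice, provers interleave such a learned queue with traditional age and weight queues, so that clauses the model scores poorly are not starved indefinitely.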


Author(s): Petra Hozzová, Laura Kovács, Andrei Voronkov

Abstract: Integers are ubiquitous in programming and therefore also in applications of program analysis and verification. Such applications often require some sort of inductive reasoning. In this paper we analyze the challenge of automating inductive reasoning with integers. We introduce inference rules for integer induction within the saturation framework of first-order theorem proving. We implemented these rules in the theorem prover Vampire and evaluated our work against other state-of-the-art theorem provers. Our results demonstrate the strength of our approach by solving new problems coming from program analysis and mathematical properties of integers.
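
For intuition, the kind of formula such an inference can introduce into the search is an upward induction schema over the integers with a symbolic base b; the schema below is a standard illustration, not necessarily the exact rule format used in the paper:

```latex
\[
\bigl( L[b] \;\land\; \forall x\, \bigl( x \ge b \land L[x] \rightarrow L[x+1] \bigr) \bigr)
\;\rightarrow\;
\forall x\, \bigl( x \ge b \rightarrow L[x] \bigr)
\]
```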


Author(s): Alexander Bentkamp, Jasmin Blanchette, Sophie Tourret, Petar Vukmirović

Abstract: We recently designed two calculi as stepping stones towards superposition for full higher-order logic: Boolean-free λ-superposition and superposition for first-order logic with interpreted Booleans. Stepping on these stones, we finally reach a sound and refutationally complete calculus for higher-order logic with polymorphism, extensionality, Hilbert choice, and Henkin semantics. In addition to the complexity of combining the calculus's two predecessors, new challenges arise from the interplay between λ-terms and Booleans. Our implementation in Zipperposition outperforms all other higher-order theorem provers and is on a par with an earlier, pragmatic prototype of Booleans in Zipperposition.
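
For orientation, two of the ingredients named above have standard one-line formulations in higher-order logic (shown as they are usually stated, not as inference rules of the calculus): functional extensionality, and Hilbert choice via the choice operator ε.

```latex
\[
(\forall x.\; f\,x = g\,x) \;\rightarrow\; f = g
\qquad\qquad
(\exists x.\; p\,x) \;\rightarrow\; p\,(\varepsilon\,p)
\]
```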


Author(s): Jan Gorzny, Ezequiel Postan, Bruno Woltzenlogel Paleo

Abstract: Proofs are a key feature of modern propositional and first-order theorem provers. Proofs generated by such tools serve as explanations for the unsatisfiability of statements. However, such proofs are not necessarily as concise as possible, which complicates their use as explanations. There is a wide variety of compression techniques for propositional resolution proofs, but fewer compression techniques for first-order resolution proofs generated by automated theorem provers. This paper describes an approach to compressing first-order logic proofs based on lifting proof compression ideas used in propositional logic to first-order logic. The first approach lifted from propositional logic delays resolution with unit clauses, i.e., clauses that have a single literal. The second approach is partial regularization, which removes an inference η when it is redundant in the sense that its pivot literal already occurs as the pivot of another inference on every path from η to the root of the proof. This paper describes the generalization of the algorithms LowerUnits and RecyclePivotsWithIntersection (Fontaine et al., Compression of Propositional Resolution Proofs via Partial Regularization. In Automated Deduction – CADE-23: 23rd International Conference on Automated Deduction, Wrocław, Poland, July 31–August 5, 2011, pp. 237–251. Springer, 2011) from propositional logic to first-order logic. The generalized algorithms compress resolution proofs containing resolution and factoring inferences with unification. An empirical evaluation of these approaches is included.
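
The propositional starting point is easy to sketch. Below is a compact, illustrative rendition of the LowerUnits idea: resolutions against a unit clause used several times are postponed and performed once, at the root. Proofs are trees rather than DAGs here for brevity, and the unification bookkeeping the first-order generalization requires is omitted; the names and representation are assumptions of this sketch, not the paper's:

```python
from dataclasses import dataclass

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

@dataclass(frozen=True)
class Node:
    clause: frozenset          # literals, e.g. frozenset({"-p", "q"})
    premises: tuple = ()       # empty for input clauses

def resolve(a, b):
    for lit in a.clause:
        if neg(lit) in b.clause:
            return Node((a.clause - {lit}) | (b.clause - {neg(lit)}), (a, b))
    raise ValueError("premises share no pivot")

def lower_units(root, units):
    """Rebuild the proof without resolutions against `units`, then
    resolve each postponed unit exactly once at the bottom."""
    def strip(n):
        if not n.premises:
            return n
        left, right = map(strip, n.premises)
        if left.clause in units:
            return right                     # postpone the unit
        if right.clause in units:
            return left
        return resolve(left, right)
    out = strip(root)
    for u in units:                          # reintroduce each unit once
        if any(neg(l) in out.clause for l in u):
            out = resolve(Node(u), out)
    return out

# The unit {p} is used twice; lowering it saves one resolution step.
p, c1, c2 = (Node(frozenset(s)) for s in ({"p"}, {"-p", "q"}, {"-p", "-q"}))
proof = resolve(resolve(c1, p), resolve(c2, p))
compressed = lower_units(proof, {frozenset({"p"})})
print(compressed.clause)                     # frozenset() -- still a refutation
```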


2020, Vol. 30 (04), pp. 2050013
Author(s): Santiago Hernández-Orozco, Francisco Hernández-Quiroz, Hector Zenil, Wilfried Sieg

There are many examples of failed strategies whose intention is to optimize a process but which instead produce worse results than no strategy at all. Many fall under the loose umbrella of the "no free lunch theorem". In this paper we present an example in which a simple (but admittedly naive) strategy intended to shorten proof lengths in the propositional calculus produces results that are significantly worse than those achieved without any method for shortening proofs. This contrasts with what was to be expected intuitively, namely no improvement in the length of the proofs. Another surprising result is how early the naive strategy failed. We set up an experiment in which we sample random classical propositional theorems and then feed them to two very popular automatic theorem provers (AProS and Prover9). We then compare the lengths of the proofs obtained under two methods: (1) applying the theorem provers with no additional information; (2) adding new (redundant) axioms to the provers. The second method produced even longer proofs than the first one.
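
The setup is easy to reconstruct in outline. The sketch below is a schematic reconstruction, not the authors' code: `run_prover` is a hypothetical wrapper around AProS or Prover9 (their real command-line interfaces are not modelled here), while the formula sampling and tautology check are self-contained:

```python
import itertools, random

random.seed(0)
ATOMS = ("p", "q", "r")
OPS = {"->": lambda a, b: (not a) or b,
       "&": lambda a, b: a and b,
       "|": lambda a, b: a or b}

def random_formula(depth):
    """Sample a random propositional formula as a nested tuple."""
    if depth == 0:
        return random.choice(ATOMS)
    return (random.choice(list(OPS)),
            random_formula(depth - 1), random_formula(depth - 1))

def holds(f, env):
    if isinstance(f, str):
        return env[f]
    op, a, b = f
    return OPS[op](holds(a, env), holds(b, env))

def is_tautology(f):
    """Brute-force truth-table check over the three atoms."""
    return all(holds(f, dict(zip(ATOMS, vals)))
               for vals in itertools.product([True, False], repeat=len(ATOMS)))

def run_prover(goal, extra_axioms=()):
    """Hypothetical wrapper around AProS / Prover9; returns proof length."""
    raise NotImplementedError("shell out to the actual prover here")

theorems = [f for f in (random_formula(3) for _ in range(500)) if is_tautology(f)]
redundant_axiom = ("->", "p", ("->", "q", "p"))   # already derivable

# The comparison itself would be, per theorem:
#   plain  = run_prover(goal)
#   padded = run_prover(goal, extra_axioms=(redundant_axiom,))
# The paper's finding is that the padded runs yield *longer* proofs.
```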


2020, DOI: 10.29007/3r41
Author(s): Jan Gorzny, Ezequiel Postan, Bruno Woltzenlogel Paleo

Proofs are a key feature of modern propositional and first-order theorem provers. Proofs generated by such tools serve as explanations for the unsatisfiability of statements. However, such proofs are not necessarily as concise as possible, which complicates their use as explanations. There is a wide variety of compression techniques for propositional resolution proofs, but fewer compression techniques for first-order resolution proofs generated by automated theorem provers. This paper describes an approach to compressing first-order logic proofs based on lifting proof compression ideas used in propositional logic to first-order logic. An empirical evaluation of the approach is included.


2020, Vol. 34 (10), pp. 13919–13920
Author(s): Agnieszka Słowik, Chaitanya Mangla, Mateja Jamnik, Sean B. Holden, Lawrence C. Paulson

Modern theorem provers utilise a wide array of heuristics to control the search space explosion, thereby requiring optimisation of a large set of parameters. An exhaustive search in this multi-dimensional parameter space is intractable in most cases, yet the performance of the provers is highly dependent on the parameter assignment. In this work, we introduce a principled probabilistic framework for heuristic optimisation in theorem provers. We present results using a heuristic for premise selection and the Archive of Formal Proofs (AFP) as a case study.
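
One simple probabilistic instantiation of the idea, for illustration only: treat each candidate configuration's success rate (goal proved within the time limit) as a Bernoulli variable with a Beta posterior, and spend the evaluation budget by Thompson sampling. The paper's framework is more general than this sketch, and both the parameter name and the simulated `evaluate` function below are invented stand-ins for actual prover runs:

```python
import random

random.seed(0)

# Candidate parameter assignments for a prover heuristic (illustrative).
CONFIGS = [("age_weight_ratio", r) for r in (0.1, 0.3, 0.5, 0.7)]

def evaluate(config, problem):
    """Stand-in for 'run the prover with `config` on `problem` and report
    whether it proved the goal in time'; simulated here for illustration."""
    _, ratio = config
    return random.random() < 0.2 + 0.4 * ratio      # fake success rates

def tune(configs, problems, budget=2000):
    wins = {c: 1 for c in configs}                  # Beta(1, 1) priors
    losses = {c: 1 for c in configs}
    for _ in range(budget):
        # Thompson step: sample a plausible success rate per config from
        # its posterior and evaluate the apparent best.
        c = max(configs, key=lambda k: random.betavariate(wins[k], losses[k]))
        if evaluate(c, random.choice(problems)):
            wins[c] += 1
        else:
            losses[c] += 1
    return max(configs, key=lambda k: wins[k] / (wins[k] + losses[k]))

print(tune(CONFIGS, problems=list(range(100))))  # highest success rate wins
```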

