Shortening of Proof Length is Elusive for Theorem Provers

2020 ◽ Vol 30 (04) ◽ pp. 2050013
Author(s):  
Santiago Hernández-Orozco ◽  
Francisco Hernández-Quiroz ◽  
Hector Zenil ◽  
Wilfried Sieg

There are many examples of failed strategies that were intended to optimize a process but instead produce worse results than no strategy at all. Many fall under the loose umbrella of the “no free lunch theorem”. In this paper we present an example in which a simple (admittedly naive) strategy intended to shorten proof lengths in the propositional calculus produces results significantly worse than those achieved without any attempt to shorten proofs. This contrasts with what one would intuitively expect, namely no improvement in proof length. Another surprising result is how early the naive strategy failed. We set up an experiment in which we sample random classical propositional theorems and feed them to two very popular automatic theorem provers (AProS and Prover9). We then compare the lengths of the proofs obtained under two methods: (1) applying the theorem provers with no additional information; (2) supplying the provers with additional (redundant) axioms. The second method produced even longer proofs than the first.
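The comparison lends itself to a small script. The following is a minimal sketch, not the authors' actual pipeline: it assumes a local Prover9 binary named prover9 on the PATH, uses one fixed tautology instead of randomly sampled theorems, and counts the numbered steps in Prover9's proof report as a rough proxy for proof length.

# Minimal sketch (assumed setup, not the paper's code): prove the same
# propositional goal with and without a redundant extra axiom and compare
# the number of steps Prover9 reports in its proof.
import re
import subprocess
import tempfile

def prover9_input(goal, extra_axioms=()):
    """Build a Prover9 input proving `goal`, optionally with extra (redundant) axioms."""
    axioms = "\n".join(f"  {a}." for a in extra_axioms)
    return (
        "formulas(assumptions).\n"
        f"{axioms}\n"
        "end_of_list.\n\n"
        "formulas(goals).\n"
        f"  {goal}.\n"
        "end_of_list.\n"
    )

def proof_steps(p9_input):
    """Run Prover9 on the given input and count numbered steps in the first proof."""
    with tempfile.NamedTemporaryFile("w", suffix=".in", delete=False) as f:
        f.write(p9_input)
        path = f.name
    out = subprocess.run(["prover9", "-f", path],
                         capture_output=True, text=True).stdout
    proof = re.search(r"= PROOF =(.*?)= end of proof =", out, re.S)
    if proof is None:
        return None  # no proof found within Prover9's default limits
    return len(re.findall(r"^\s*\d+ ", proof.group(1), re.M))

goal = "(p -> q) | (q -> p)"          # a classical propositional tautology
plain = proof_steps(prover9_input(goal))
padded = proof_steps(prover9_input(goal, extra_axioms=["p | -p"]))
print("no extra axioms:", plain, "steps; with redundant axiom:", padded, "steps")

The paper's experiment runs the same comparison over many randomly sampled theorems and also with AProS; the step count above is only a crude stand-in for the proof-length measure used by the authors.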

10.29007/n6j7 ◽ 2018
Author(s):  
Simon Cruanes

We argue that automatic theorem provers should become more versatile and should be able to tackle problems expressed in richer input formats. Salient research directions include (i) developing tight combinations of SMT solvers and first-order provers; (ii) adding better handling of theories in first-order provers; (iii) adding support for inductive proving; (iv) adding support for user-defined theories and functions; and (v) bringing to the provers some basic abilities to deal with logics beyond first-order, such as higher-order logic.


Author(s):  
Lidong Wu

The No-Free-Lunch theorem is an interesting and important theoretical result in machine learning. Based on the philosophy of the No-Free-Lunch theorem, we discuss at length the limitations of data-driven approaches to solving NP-hard problems.
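For context, one standard formulation of the theorem (the Wolpert–Macready version for black-box search; the paper's own formulation may differ) says that, summed over all possible objective functions, any two search algorithms generate the same distribution of observed objective values:

% Assumed standard statement (Wolpert & Macready, 1997), not necessarily the form used in the paper.
\[
  \sum_{f} P\bigl(d_m^{y} \mid f, m, a_1\bigr)
  \;=\; \sum_{f} P\bigl(d_m^{y} \mid f, m, a_2\bigr)
\]
% f ranges over all objective functions on a finite search space, m is the number of
% distinct points sampled, d_m^y is the sequence of observed objective values, and
% a_1, a_2 are any two black-box search algorithms.

Averaged over all problems, then, no heuristic (data-driven or otherwise) outperforms any other.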


Episteme ◽ 2019 ◽ pp. 1-15
Author(s):  
Gerhard Schurz

White (2015) proposes an a priori justification of the reliability of inductive prediction methods based on his thesis of induction-friendliness. It asserts that there are by far more induction-friendly event sequences than induction-unfriendly ones. In this paper I contrast White's thesis with the famous no free lunch (NFL) theorem. I explain two versions of this theorem: the strong NFL theorem, which applies to binary predictions, and the weak NFL theorem, which applies to real-valued predictions. I show that both versions refute the thesis of induction-friendliness. In the conclusion I argue that an a priori justification of the reliability of induction based on a uniform probability distribution over possible event sequences is impossible. In the outlook I consider two alternative approaches: (i) justification externalism and (ii) optimality justifications.
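As a point of reference, the calculation at the heart of the strong NFL theorem can be sketched as follows (a standard formulation, assumed here; Schurz's exact statement may differ): under the uniform probability measure over binary event sequences, every prediction method has the same expected success rate as blind guessing.

% Sketch (assumed standard formulation): let m be any prediction method mapping a
% finite history (e_1, ..., e_n) of bits to a predicted next bit m(e_1, ..., e_n).
% Under the uniform measure, e_{n+1} is independent of the history, so
\[
  P\bigl(e_{n+1} = m(e_1,\dots,e_n)\bigr)
  \;=\; \sum_{(e_1,\dots,e_n)\in\{0,1\}^n} P(e_1,\dots,e_n)\cdot\frac{1}{2}
  \;=\; \frac{1}{2}.
\]
% Hence, averaged under the uniform measure, no method predicts better than chance.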

