From equivalence queries to PAC learning: The case of implication theories

2020 ◽ Vol 127 ◽ pp. 1-16
Author(s): Ramil Yarullin ◽ Sergei Obiedkov


1995 ◽ Vol 2 ◽ pp. 501-539
Author(s): W. W. Cohen

We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable if an additional "base case" oracle is assumed. These results immediately imply the PAC-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs.
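To make the equivalence-query model concrete, here is a minimal sketch of the protocol the abstract builds on: the learner proposes a hypothesis, and an oracle either accepts it or returns a counterexample that is used to refine the hypothesis. The sketch uses monotone conjunctions rather than the recursive logic programs studied in the paper, and the oracle and target below are illustrative assumptions, not Cohen's algorithm.

```python
# Minimal sketch of learning with equivalence queries. The target
# class (monotone conjunctions) and the brute-force oracle are
# illustrative; the paper's setting is recursive logic programs.
from itertools import product

def make_oracle(target_vars, n):
    """Equivalence oracle for a hidden monotone conjunction over n
    boolean variables: returns None if the hypothesis is equivalent
    to the target, otherwise a counterexample assignment."""
    def oracle(hyp_vars):
        for x in product([0, 1], repeat=n):
            target_val = all(x[i] for i in target_vars)
            hyp_val = all(x[i] for i in hyp_vars)
            if target_val != hyp_val:
                return x  # counterexample
        return None  # hypothesis is equivalent to the target
    return oracle

def learn_monotone_conjunction(oracle, n):
    """Start from the conjunction of all n variables; each positive
    counterexample lets us drop the variables it sets to 0."""
    hyp = set(range(n))
    while (x := oracle(hyp)) is not None:
        # x satisfies the target but not the hypothesis, so any
        # variable that is 0 in x cannot be in the target conjunction.
        hyp = {i for i in hyp if x[i] == 1}
    return hyp

target = {0, 2}                      # hidden concept: x0 AND x2
oracle = make_oracle(target, n=4)
print(learn_monotone_conjunction(oracle, n=4))   # -> {0, 2}
```

Since the hypothesis always stays a superset of the target, every counterexample is positive and strictly shrinks the hypothesis, so the learner needs at most n equivalence queries; this query-counting argument is what connects equivalence-query learnability to PAC-learnability.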


Author(s): Hanrui Zhang ◽ Vincent Conitzer

Specifying the objective function that an AI system should pursue can be challenging. Especially when the decisions to be made by the system have a moral component, input from multiple stakeholders is often required. We consider approaches that query the stakeholders about their judgments on individual examples and then aggregate these judgments into a general policy. We propose a formal learning-theoretic framework for this setting. We then give general results on how to translate classical results from PAC learning into results in our framework. Subsequently, we show that in some settings, better results can be obtained by working directly in our framework. Finally, we discuss how our model can be extended in a variety of ways for future research.
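As a toy illustration of the query-then-aggregate pipeline the abstract describes (our sketch, not the paper's framework), suppose judgments are binary and are combined by majority vote before a simple threshold policy is fit to the aggregated labels; all stakeholder models and features below are made up.

```python
# Hypothetical sketch: query each stakeholder for a binary judgment,
# aggregate by majority vote, then fit a threshold policy to the
# aggregated labels. The stakeholders and features are illustrative.
import random

random.seed(0)

stakeholders = [
    lambda x: int(x > 0.45),   # lenient judge
    lambda x: int(x > 0.50),   # moderate judge
    lambda x: int(x > 0.60),   # strict judge
]

def majority(judgments):
    """Aggregate binary judgments into a single label."""
    return int(2 * sum(judgments) >= len(judgments))

examples = [random.random() for _ in range(50)]
labels = [majority([judge(x) for judge in stakeholders]) for x in examples]

# Learn the policy: pick the candidate cutoff with the fewest
# disagreements against the aggregated judgments.
errors, cutoff = min(
    (sum(int(x > c) != y for x, y in zip(examples, labels)), c)
    for c in examples
)
print(f"learned cutoff ~ {cutoff:.2f}, training errors = {errors}")
```

Here a majority of the three judges approves exactly when x > 0.50, so the learned cutoff recovers the aggregate norm even though no single stakeholder was queried about the policy directly.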


2020 ◽ Vol 69
Author(s): Benjamin Fish ◽ Lev Reyzin

In the problem of learning a class ratio from unlabeled data, which we call CR learning, the training data is unlabeled, and only the ratios, or proportions, of examples receiving each label are given. The goal is to learn a hypothesis that predicts the proportions of labels on the distribution underlying the sample. This model of learning is applicable to a wide variety of settings, including predicting the number of votes for candidates in political elections from polls. In this paper, we formally define this model, resolve foundational questions regarding the computational complexity of CR learning, and characterize its relationship to PAC learning. Among our results, we show, perhaps surprisingly, that under standard complexity assumptions, what can be efficiently CR-learned for finite VC classes is a strict subset of what can be efficiently PAC-learned. We also show that there exist classes of functions whose CR learnability is independent of ZFC, the standard set-theoretic axioms. This implies that CR learning cannot be characterized by a simple combinatorial parameter in the way that PAC learning is characterized by VC dimension.
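A small sketch of the CR-learning objective under our own simplifying assumptions (a finite class of threshold hypotheses on [0, 1]; this is not the paper's construction): the learner sees only the fraction of positively labeled examples and picks the hypothesis whose predicted positive rate matches that fraction best.

```python
# Hypothetical sketch of CR learning over a finite class of threshold
# functions: only the fraction of positive labels is observed, and we
# pick the hypothesis whose predicted label proportion matches it best.
import random

random.seed(1)

# Unlabeled sample from the underlying distribution.
sample = [random.random() for _ in range(1000)]

# What the learner is told: only the class ratio (fraction labeled 1),
# here produced by a hidden threshold of 0.3.
observed_ratio = sum(x > 0.3 for x in sample) / len(sample)

# Finite hypothesis class: thresholds on [0, 1].
hypotheses = [i / 100 for i in range(101)]

def predicted_ratio(threshold, data):
    """Fraction of the sample a threshold hypothesis labels positive."""
    return sum(x > threshold for x in data) / len(data)

# CR objective: match the predicted proportion to the observed one.
best = min(hypotheses,
           key=lambda t: abs(predicted_ratio(t, sample) - observed_ratio))
print(f"observed ratio = {observed_ratio:.3f}, learned threshold = {best}")
```

Note that many different hypotheses can match the observed proportion equally well without agreeing on individual examples, which hints at why efficient CR learning can be strictly weaker than PAC learning.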

