exact learning
Recently Published Documents

TOTAL DOCUMENTS: 72 (five years: 15)
H-INDEX: 11 (five years: 1)

Entropy ◽  
2022 ◽  
Vol 24 (1) ◽  
pp. 116
Author(s):  
Mikhail Moshkov

In this paper, based on results from rough set theory, test theory, and exact learning, we investigate decision trees over infinite sets of binary attributes represented as infinite binary information systems. We define the notion of a problem over an information system and study three Shannon-type functions that characterize the worst-case dependence of the minimum depth of a decision tree solving a problem on the number of attributes in the problem's description. The three functions correspond to (i) decision trees using attributes, (ii) decision trees using hypotheses (an analog of equivalence queries from exact learning), and (iii) decision trees using both attributes and hypotheses. The first function has two possible types of behavior: logarithmic and linear (this result follows from more general results published by the author earlier). The second and third functions have three possible types of behavior: constant, logarithmic, and linear (these results were published by the author earlier without the proofs given in the present paper). Based on these results, we divide the set of all infinite binary information systems into four complexity classes; within each class, the type of behavior of each of the three functions is the same.
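To make the studied quantities concrete (the notation below is ours, a simplification of the paper's definitions): for an infinite binary information system U and each query regime * ∈ {a, h, ah} (attributes only, hypotheses only, or both), the corresponding Shannon-type function maps n to the worst-case minimum depth over problems whose descriptions use at most n attributes:

\[
h_U^{(\ast)}(n) \;=\; \max_{z \,:\, \dim z \,\le\, n} \;\; \min_{\Gamma \text{ solves } z} \operatorname{depth}(\Gamma),
\qquad \ast \in \{\mathrm{a},\, \mathrm{h},\, \mathrm{ah}\}.
\]

In this notation, the classification above says that h_U^{(a)} is always either Θ(log n) or Θ(n), while h_U^{(h)} and h_U^{(ah)} may additionally be O(1).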


Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1703
Author(s):  
Shouta Sugahara ◽  
Maomi Ueno

Earlier studies have shown that the classification accuracies of Bayesian networks (BNs) obtained by maximizing the conditional log likelihood (CLL) of a class variable, given the feature variables, were higher than those obtained by maximizing the marginal likelihood (ML). However, the differences between the performances of the two scores in those studies may be attributable to their use of approximate learning algorithms rather than exact ones. This paper compares the classification accuracies of BNs with approximate learning using the CLL to those with exact learning using the ML. The results demonstrate that the classification accuracies of BNs obtained by maximizing the ML are higher than those obtained by maximizing the CLL for large datasets. However, they also demonstrate that the accuracies of exactly learned BNs using the ML are much worse than those of other methods when the sample size is small and the class variable has numerous parents. To resolve this problem, we propose an exact learning augmented naive Bayes classifier (ANB), which constrains the class variable to have no parents. The proposed method is guaranteed to asymptotically estimate the same class posterior as that of the exactly learned BN. Comparison experiments demonstrated the superior performance of the proposed method.
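For orientation, the two scores being contrasted have the following standard forms (generic definitions, not notation from the paper); for data D = {(x^{(i)}, y^{(i)})}_{i=1}^{N}, a structure B, and parameters θ:

\[
\mathrm{CLL}(B \mid D) \;=\; \sum_{i=1}^{N} \log P_B\bigl(y^{(i)} \mid \mathbf{x}^{(i)}\bigr),
\qquad
\mathrm{ML}(B \mid D) \;=\; \int P(D \mid B, \theta)\, p(\theta \mid B)\, d\theta.
\]

The ML integrates the parameters out and decomposes over parent-child families, which is what makes exact structure search feasible; the CLL targets the class posterior directly but lacks this decomposability, which is one reason earlier studies paired it with approximate search.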


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 587
Author(s):  
Srinivasan Arunachalam ◽  
Sourav Chakraborty ◽  
Troy Lee ◽  
Manaswi Paraashar ◽  
Ronald de Wolf

We present two new results about exact learning by quantum computers. First, we show how to exactly learn a k-Fourier-sparse n-bit Boolean function from O(k^{1.5} (log k)^2) uniform quantum examples for that function. This improves over the bound of Θ̃(kn) uniformly random classical examples (Haviv and Regev, CCC'15). Additionally, we provide a possible direction for improving our Õ(k^{1.5}) upper bound by proving an improvement of Chang's lemma for k-Fourier-sparse Boolean functions. Second, we show that if a concept class C can be exactly learned using Q quantum membership queries, then it can also be learned using O(Q^2 log Q log|C|) classical membership queries. This improves the previous-best simulation result (Servedio and Gortler, SICOMP'04) by a log Q factor.
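Written out, with Õ and Θ̃ hiding polylogarithmic factors, the two improvements read:

\[
\underbrace{O\bigl(k^{1.5} (\log k)^{2}\bigr)}_{\text{uniform quantum examples}}
\;\;\text{vs.}\;\;
\underbrace{\tilde{\Theta}(kn)}_{\text{uniform classical examples}},
\qquad
Q \text{ quantum MQs} \;\Longrightarrow\; O\bigl(Q^{2} \log Q \log |C|\bigr) \text{ classical MQs}.
\]

Note that the quantum sample bound depends only on the sparsity k, not on the input length n.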


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1580
Author(s):  
Mohammad Azad ◽  
Igor Chikalov ◽  
Shahid Hussain ◽  
Mikhail Moshkov

In this paper, we consider decision trees that use two types of queries: queries based on one attribute each and queries based on hypotheses about the values of all attributes. Such decision trees are similar to the ones studied in exact learning, where membership and equivalence queries are allowed. We present dynamic programming algorithms for minimizing the depth and the number of nodes of such decision trees and discuss the results of computer experiments on various data sets and randomly generated Boolean functions. Decision trees with hypotheses generally have lower complexity, i.e., they are more understandable and more suitable as a means of knowledge representation.
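A minimal sketch of the depth-minimization recursion for such trees, in our own toy rendering over a small finite decision table (not the authors' implementation; iterating over all 2^n hypotheses is exponential and is for illustration only):

# Toy sketch: minimum depth of a decision tree that may use attribute
# queries and hypothesis queries over a small binary decision table.
from functools import lru_cache
from itertools import product

# Decision table: attribute tuple -> decision (example values, ours).
DECISION = {(0, 0, 0): 0, (0, 1, 1): 1, (1, 0, 1): 1, (1, 1, 0): 0}

@lru_cache(maxsize=None)
def min_depth(rows):
    if len({DECISION[r] for r in rows}) <= 1:
        return 0                       # one decision left: a leaf
    n = len(next(iter(rows)))
    best = float("inf")
    # Attribute query on attribute i: the answer splits rows by r[i].
    for i in range(n):
        parts = [frozenset(r for r in rows if r[i] == b) for b in (0, 1)]
        if all(parts):                 # informative only if both answers occur
            best = min(best, 1 + max(min_depth(p) for p in parts))
    # Hypothesis query h (the analog of an equivalence query): the oracle
    # either confirms h (the tree terminates) or returns a counterexample
    # "attribute i has value 1 - h[i]".
    for h in product((0, 1), repeat=n):
        branches = [frozenset(r for r in rows if r[i] != h[i]) for i in range(n)]
        if any(b == rows for b in branches):
            continue                   # useless hypothesis: a reply may give no progress
        worst = max((min_depth(b) for b in branches if b), default=0)
        best = min(best, 1 + worst)
    return best

print(min_depth(frozenset(DECISION)))  # depth of an optimal tree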


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 808
Author(s):  
Mohammad Azad ◽  
Igor Chikalov ◽  
Shahid Hussain ◽  
Mikhail Moshkov

In this paper, we consider decision trees that use both conventional queries, based on one attribute each, and queries based on hypotheses about the values of all attributes. Such decision trees are similar to those studied in exact learning, where membership and equivalence queries are allowed. We present a greedy algorithm based on entropy for constructing such decision trees and discuss the results of computer experiments on various data sets and randomly generated Boolean functions.
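For intuition, the greedy step for attribute queries can be sketched as follows (a standard expected-entropy rendering, not the paper's code; hypothesis queries can be scored by the same criterion over the oracle's possible answers):

# Greedy choice of the next attribute query by minimum expected entropy.
from collections import Counter
from math import log2

def entropy(rows, decision):
    # Shannon entropy of the decision distribution over the given rows.
    counts = Counter(decision[r] for r in rows)
    return -sum(c / len(rows) * log2(c / len(rows)) for c in counts.values())

def best_attribute(rows, decision):
    # Pick the binary attribute whose split minimizes expected entropy.
    n = len(next(iter(rows)))
    def expected_entropy(i):
        parts = [[r for r in rows if r[i] == b] for b in (0, 1)]
        return sum(len(p) / len(rows) * entropy(p, decision)
                   for p in parts if p)
    return min(range(n), key=expected_entropy)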


2021 ◽  
Author(s):  
Benedict Irwin

We present a collection of mathematical tools and emphasise a fundamental representation of analytic functions. Connecting these concepts leads to a framework for 'exact learning', in which an unknown numeric distribution could in principle be assigned an exact mathematical description. This is a new perspective on machine learning with potential applications in all domains of the mathematical sciences, and the generalised representations presented here have not yet been widely considered in the context of machine learning and data analysis. The moments of a multivariate function or distribution are extracted using a Mellin transform, and the generalised form of the coefficients is trained assuming a highly generalised Mellin-Barnes integral representation. The resulting functions use far fewer parameters than contemporary machine learning methods, and any implementation that connects these concepts successfully will likely carry across to non-exact problems and provide approximate solutions. We compare the equations for the exact learning method with those for a neural network, which leads to a new perspective on understanding what a neural network may be learning and on interpreting the parameters of those networks.
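Concretely, in the univariate case the moments and the representation the abstract refers to take the following standard forms (our rendering; the paper works with generalised, multivariate versions):

\[
M[f](s) \;=\; \int_{0}^{\infty} x^{s-1} f(x)\, dx,
\qquad
f(x) \;=\; \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} M[f](s)\, x^{-s}\, ds,
\]

so 'exact learning' in this sense amounts to fitting a closed-form ansatz for M[f](s), for instance a ratio of gamma functions as in a Mellin-Barnes integrand, and then reading f off by inversion.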


Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1142
Author(s):  
Zhigao Guo ◽  
Anthony C. Constantinou

Score-based algorithms that learn Bayesian network (BN) structures provide solutions ranging from different levels of approximate learning to exact learning. Approximate solutions exist because exact learning is generally not applicable to networks of moderate or higher complexity; in general, they sacrifice accuracy for speed, where the aim is to minimise the loss in accuracy and maximise the gain in speed. While some approximate algorithms are optimised to handle thousands of variables, they may still be unable to learn such high-dimensional structures. Some of the most efficient score-based algorithms cast the structure learning problem as a combinatorial optimisation over candidate parent sets. This paper explores a strategy for pruning the size of candidate parent sets, which could form part of existing score-based algorithms as an additional pruning phase aimed at high-dimensionality problems. The results illustrate how different levels of pruning affect learning speed relative to the loss in accuracy in terms of model fitting, and show that aggressive pruning may be required to produce approximate solutions for high-complexity problems.
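As one concrete and lossless instance of candidate-parent-set pruning, offered for illustration rather than as the paper's strategy: a well-known rule in exact score-based learning discards a parent set whenever some proper subset of it scores at least as well, since the superset can then never appear in an optimal network.

# Prune dominated candidate parent sets for one variable.
from itertools import combinations

def prune_candidates(scores):
    # scores: dict mapping frozenset of parents -> local score (higher is better).
    kept = {}
    # Process candidates in order of increasing size so that every
    # surviving dominating subset is already in `kept`.
    for parents, s in sorted(scores.items(), key=lambda kv: len(kv[0])):
        dominated = any(
            frozenset(sub) in kept and kept[frozenset(sub)] >= s
            for k in range(len(parents))
            for sub in combinations(parents, k)
        )
        if not dominated:
            kept[parents] = s
    return kept

The aggressive variants the paper studies trade this guarantee away, pruning more than is provably safe in exchange for speed.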


Author(s):  
Cosimo Persia ◽  
Ana Ozaki

We investigate learnability of possibilistic theories from entailments in light of Angluin’s exact learning model. We consider cases in which only membership, only equivalence, and both kinds of queries can be posed by the learner. We then show that, for a large class of problems, polynomial time learnability results for classical logic can be transferred to the respective possibilistic extension. In particular, it follows from our results that the possibilistic extension of propositional Horn theories is exactly learnable in polynomial time. As polynomial time learnability in the exact model is transferable to the classical probably approximately correct (PAC) model extended with membership queries, our work also establishes such results in this model.
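Schematically, learning from entailments in Angluin's model runs as the following loop (names and signatures are ours, purely illustrative):

# Generic Angluin-style exact learning loop with membership (MQ) and
# equivalence (EQ) oracles; `refine` poses MQs to repair the hypothesis.
def exact_learn(mq, eq, initial_hypothesis, refine):
    # mq(statement) -> bool: 'Does the target theory entail statement?'
    # eq(h) -> (True, None) if h is equivalent to the target,
    #          else (False, counterexample).
    h = initial_hypothesis
    while True:
        correct, counterexample = eq(h)
        if correct:
            return h                        # h entails exactly what the target entails
        h = refine(h, counterexample, mq)   # membership queries guide the repair

Polynomial-time learnability means this loop terminates after polynomially many queries, each computed in polynomial time; the paper's transfer result lifts such guarantees from a classical logic to its possibilistic extension.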


2020 ◽  
Vol 34 (03) ◽  
pp. 2959-2966
Author(s):  
Ana Ozaki ◽  
Cosimo Persia ◽  
Andrea Mazzullo

We investigate the complexity of learning query inseparable ℰℒℋ ontologies in a variant of Angluin's exact learning model. Given a fixed data instance A* and a query language 𝒬, we are interested in computing an ontology ℋ that entails the same queries as a target ontology 𝒯 on A*; that is, ℋ and 𝒯 are inseparable w.r.t. A* and 𝒬. The learner is allowed to pose two kinds of questions. The first is 'Does (𝒯,A) ⊨ q?', with A an arbitrary data instance and q a query in 𝒬, which an oracle answers with 'yes' or 'no'. The second is 'Are ℋ and 𝒯 inseparable w.r.t. A* and 𝒬?'. If so, the learning process finishes; otherwise, the learner receives a pair (A*, q) with q ∈ 𝒬 such that (𝒯,A*) ⊨ q and (ℋ,A*) ⊭ q (or vice versa). We then analyse conditions under which query inseparability is preserved if A* changes. Finally, we consider the PAC learning model and a setting where the algorithms learn from a batch of classified data, limiting interactions with the oracles.
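The protocol can be pictured as the following oracle interface (a schematic of ours, with A_star standing in for A* and Q for 𝒬; not code from the paper):

# Schematic interface for the two oracle calls in the protocol above.
from typing import Optional, Tuple

class InseparabilityOracle:
    def entails(self, data_instance, query) -> bool:
        # 'Does (T, A) |= q?' for an arbitrary data instance A and q in Q.
        raise NotImplementedError

    def inseparable(self, hypothesis) -> Optional[Tuple[object, object]]:
        # 'Are H and T inseparable w.r.t. A_star and Q?'
        # Returns None if so (learning ends); otherwise a pair (A_star, q)
        # on which exactly one of H and T entails q.
        raise NotImplementedError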


2020 ◽  
Vol 12 (1) ◽  
pp. 1-25 ◽  
Author(s):  
Montserrat Hermo ◽  
Ana Ozaki