Computational Learning Theory
Recently Published Documents

TOTAL DOCUMENTS: 76 (FIVE YEARS: 9)
H-INDEX: 11 (FIVE YEARS: 1)

2021, Vol 5 (OOPSLA), pp. 1-28
Author(s): Ruyi Ji, Jingtao Xia, Yingfei Xiong, Zhenjiang Hu

The generalizability of programming-by-example (PBE) solvers is key to their empirical synthesis performance. Despite its importance, studies of generalizability in PBE solvers are still limited: in theory, few existing solvers provide guarantees on generalizability, and in practice, there is a lack of PBE solvers with satisfactory generalizability on important domains such as conditional linear integer arithmetic (CLIA). In this paper, we adopt a concept from computational learning theory, Occam learning, and perform a comprehensive study of synthesis through unification (STUN), a state-of-the-art framework for synthesizing programs with nested if-then-else operators. We prove that Eusolver, a state-of-the-art STUN solver, does not satisfy the conditions of Occam learning, and we then design a novel STUN solver, PolyGen, whose generalizability is theoretically guaranteed by Occam learning. We evaluate PolyGen on the CLIA domain and demonstrate that it significantly outperforms two state-of-the-art PBE solvers on CLIA, Eusolver and Euphony, in both generalizability and efficiency.
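As background, the standard notion of Occam learning (due to Blumer et al.) can be sketched as follows; the precise condition the paper imposes on STUN solvers may differ, so the bound below should be read only as the generic textbook definition.

```latex
% Generic definition of an Occam algorithm (Blumer et al.); \alpha and \beta
% are constants with \alpha \ge 0 and 0 \le \beta < 1.
An algorithm $L$ is an $(\alpha,\beta)$-Occam algorithm for a concept class
$\mathcal{C}$ if, given $m$ examples of length at most $n$ labelled by a target
concept $c \in \mathcal{C}$, it outputs a hypothesis $h$ that is consistent
with the examples and satisfies
\[
  \operatorname{size}(h) \;\le\; \bigl(n \cdot \operatorname{size}(c)\bigr)^{\alpha} \cdot m^{\beta}.
\]
% Occam's razor theorem: any such algorithm PAC-learns \mathcal{C}; succinct
% consistent hypotheses generalize, which is the sense in which Occam learning
% yields a generalizability guarantee for a synthesizer.
```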


Author(s): Oliver Markgraf, Daniel Stan, Anthony W. Lin

Abstract We study the problem of learning a finite union of integer (axis-aligned) hypercubes over the d-dimensional integer lattice, i.e., hypercubes whose edges are parallel to the coordinate axes. This is a natural generalization of the classic computational learning theory problem of learning rectangles. We provide a learning algorithm with access to a minimally adequate teacher (i.e., membership and equivalence oracles) that solves this problem in polynomial time for any fixed dimension d. When the dimension is not fixed, the problem subsumes the problem of learning DNF Boolean formulas, a central open problem in the field. We also provide extensions that handle infinite hypercubes in the union, and we show how subset queries can improve the performance of the learning algorithm in practice. The problem has a natural application to monadic decomposition of quantifier-free integer linear arithmetic formulas, which has been actively studied in recent years. In particular, a finite union of integer hypercubes corresponds to a finite disjunction of monadic predicates over integer linear arithmetic (without modulo constraints). Our experiments suggest that our learning algorithms substantially outperform existing algorithms.
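The minimally adequate teacher (MAT) setting mentioned above can be illustrated with a generic learning loop. The sketch below is not the authors' algorithm: the interface names membership and equivalence and the naive counterexample handling are illustrative assumptions only.

```python
# Illustrative MAT (minimally adequate teacher) loop for learning a finite
# union of axis-aligned integer hypercubes. The teacher answers membership
# queries ("is this point in the target set?") and equivalence queries
# ("is this hypothesis correct? if not, return a counterexample").
# Generic sketch only, not the polynomial-time algorithm from the paper.

from typing import Callable, List, Optional, Tuple

Point = Tuple[int, ...]
Cube = List[Tuple[int, int]]      # per-dimension (low, high) bounds
Hypothesis = List[Cube]           # finite union of hypercubes


def covers(cube: Cube, point: Point) -> bool:
    """Is the point inside the given hypercube?"""
    return all(lo <= x <= hi for x, (lo, hi) in zip(point, cube))


def mat_learn(membership: Callable[[Point], bool],
              equivalence: Callable[[Hypothesis], Optional[Point]],
              max_rounds: int = 1000) -> Hypothesis:
    """Refine the hypothesised union of cubes from teacher counterexamples."""
    hypothesis: Hypothesis = []
    for _ in range(max_rounds):
        cex = equivalence(hypothesis)
        if cex is None:                       # teacher accepts the hypothesis
            return hypothesis
        if membership(cex):
            # Positive counterexample: cover it with a degenerate cube.
            # A real learner would grow this cube via membership queries.
            hypothesis.append([(x, x) for x in cex])
        else:
            # Negative counterexample: discard cubes that wrongly cover it.
            hypothesis = [c for c in hypothesis if not covers(c, cex)]
    return hypothesis
```

In the paper's setting the substantive work lies in how positive counterexamples are generalized into full hypercubes using membership queries; the degenerate single-point cubes above merely keep the sketch self-contained.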


2020, Vol 34 (3), pp. 317-327
Author(s): Ana Ozaki

Abstract The quest to acquire a formal representation of the knowledge of a domain of interest has attracted researchers with various backgrounds into a diverse field called ontology learning. We highlight classical machine learning and data mining approaches that have been proposed for (semi-)automating the creation of description logic (DL) ontologies. These are based on association rule mining, formal concept analysis, inductive logic programming, computational learning theory, and neural networks. We provide an overview of each approach and of how it has been adapted to deal with DL ontologies. Finally, we discuss the benefits and limitations of each approach for learning DL ontologies.


2019, Vol 5 (3), pp. eaau1946
Author(s): Andrea Rocchetto, Scott Aaronson, Simone Severini, Gonzalo Carvacho, Davide Poderini, ...

The number of parameters describing a quantum state is well known to grow exponentially with the number of particles. This scaling limits our ability to characterize and simulate the evolution of arbitrary states to systems with no more than a few qubits. However, from a computational learning theory perspective, it can be shown that quantum states can be approximately learned using a number of measurements growing linearly with the number of qubits. Here, we experimentally demonstrate this linear scaling in optical systems with up to 6 qubits. Our results highlight the power of computational learning theory to investigate quantum information, provide the first experimental demonstration that quantum states can be "probably approximately learned" with access to a number of copies of the state that scales linearly with the number of qubits, and pave the way to probing quantum states at new, larger scales.
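The gap between the two scalings is easy to quantify; the count below is the generic state-vector parameter count and is not a figure taken from the experiment itself.

```latex
% Generic comparison of description sizes (not experiment-specific):
% an n-qubit pure state is a vector of 2^n complex amplitudes, so full
% tomography must estimate on the order of 2^n real parameters, e.g.
% n = 6 gives 2^6 = 64 amplitudes and n = 50 gives 2^{50} \approx 10^{15},
% whereas in the PAC setting the number of sampled measurements needed for
% a fixed accuracy grows only linearly in n.
\[
  \underbrace{2^{\,n}}_{\text{parameters of a generic $n$-qubit state}}
  \qquad\text{vs.}\qquad
  \underbrace{m = O(n)}_{\text{measurements for approximate (PAC) learning}}
\]
```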


2018, Vol 18 (7&8), pp. 541-552
Author(s): Andrea Rocchetto

The exponential scaling of the wave function is a fundamental property of quantum systems, with far-reaching implications for our ability to process quantum information. A problem where these implications are particularly relevant is quantum state tomography. State tomography, whose objective is to obtain an approximate description of a quantum system, can be analysed in the framework of computational learning theory. In this model, Aaronson (2007) showed that quantum states are Probably Approximately Correct (PAC)-learnable with sample complexity linear in the number of qubits. However, it is conjectured that in general quantum states require an exponential amount of computation to be learned. Here, using results from the literature on the efficient classical simulation of quantum systems, we show that stabiliser states are efficiently PAC-learnable. Our results solve an open problem formulated by Aaronson (2007) and establish a connection between the classical simulation of quantum systems and efficient learnability.
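One reason stabiliser states are a plausible target for efficient learning is that they admit a succinct classical description: n commuting Pauli generators instead of 2^n amplitudes. The sketch below illustrates this representation as background only; the helper name paulis_commute is hypothetical, and this is not code from the paper.

```python
# Background sketch (not from the paper): an n-qubit stabiliser state is fixed
# by n commuting Pauli generators, i.e. roughly n^2 symbols, instead of the
# 2^n complex amplitudes of a generic state vector. The Bell state
# (|00> + |11>)/sqrt(2) is stabilised by the generators XX and ZZ.

def paulis_commute(p: str, q: str) -> bool:
    """Two Pauli strings commute iff they anticommute on an even number of qubits."""
    anticommuting_sites = sum(
        1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b
    )
    return anticommuting_sites % 2 == 0


bell_generators = ["XX", "ZZ"]              # 2 generators for a 2-qubit state
assert paulis_commute(*bell_generators)     # a valid stabiliser group commutes

# Description size for n qubits: n generators of n Pauli letters each (~n^2
# symbols) versus 2^n amplitudes for an arbitrary pure state.
n = 50
print(f"stabiliser description: ~{n * n} symbols; generic state: {2 ** n} amplitudes")
```

The paper's contribution is to turn this representational compactness, via known results on the efficient classical simulation of stabiliser circuits, into a computationally efficient PAC learner.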

