Learning Union of Integer Hypercubes with Queries

Author(s):
Oliver Markgraf, Daniel Stan, Anthony W. Lin

Abstract
We study the problem of learning a finite union of integer (axis-aligned) hypercubes over the d-dimensional integer lattice, i.e., hypercubes whose edges are parallel to the coordinate axes. This is a natural generalization of the classic problem in computational learning theory of learning rectangles. We provide a learning algorithm with access to a minimally adequate teacher (i.e., membership and equivalence oracles) that solves this problem in polynomial time for any fixed dimension d. Over a non-fixed dimension, the problem subsumes the problem of learning DNF boolean formulas, a central open problem in the field. We also provide extensions to handle infinite hypercubes in the union, and show how subset queries can improve the performance of the learning algorithm in practice. Our problem has a natural application to the problem of monadic decomposition of quantifier-free integer linear arithmetic formulas, which has been actively studied in recent years. In particular, a finite union of integer hypercubes corresponds to a finite disjunction of monadic predicates over integer linear arithmetic (without modulo constraints). Our experiments suggest that our learning algorithms substantially outperform the existing algorithms.
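To make the target concept class concrete, here is a minimal Python sketch of a finite union of axis-aligned integer hypercubes together with the membership oracle a minimally adequate teacher would answer. This is not the paper's learning algorithm; the names (`Hypercube`, `membership_oracle`) and the toy target are our own illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hypercube:
    """Axis-aligned integer hypercube: corner c and side length s,
    containing every integer point x with c[i] <= x[i] <= c[i] + s."""
    corner: tuple
    side: int

    def contains(self, point):
        return all(c <= x <= c + self.side
                   for c, x in zip(self.corner, point))

def membership_oracle(union, point):
    """Membership query: is `point` in the union of hypercubes?"""
    return any(cube.contains(point) for cube in union)

# Toy target in dimension d = 2: two overlapping hypercubes.
target = [Hypercube((0, 0), 3), Hypercube((2, 2), 4)]
assert membership_oracle(target, (1, 1))      # inside the first cube
assert membership_oracle(target, (5, 6))      # inside the second cube
assert not membership_oracle(target, (9, 9))  # outside both
```

An equivalence oracle would compare a hypothesis union against such a target and return a counterexample point on which they disagree; the learner drives both oracles to converge in polynomially many queries for fixed d.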

1996
Vol. 8 (7)
pp. 1341-1390
Author(s):  
David H. Wolpert

This is the first of two papers that use off-training set (OTS) error to investigate the assumption-free relationship between learning algorithms. This first paper discusses the senses in which there are no a priori distinctions between learning algorithms. (The second paper discusses the senses in which there are such distinctions.) In this first paper it is shown, loosely speaking, that for any two algorithms A and B, there are “as many” targets (or priors over targets) for which A has lower expected OTS error than B as vice versa, for loss functions like zero-one loss. In particular, this is true if A is cross-validation and B is “anti-cross-validation” (choose the learning algorithm with largest cross-validation error). This paper ends with a discussion of the implications of these results for computational learning theory. It is shown that one cannot say: if empirical misclassification rate is low, the Vapnik-Chervonenkis dimension of your generalizer is small, and the training set is large, then with high probability your OTS error is small. Other implications for “membership queries” algorithms and “punting” algorithms are also discussed.
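To illustrate the cross-validation versus "anti-cross-validation" comparison discussed above, here is a hedged Python sketch. The learners, data, and scoring are invented for illustration and the sketch makes no claim about off-training-set behavior; it only shows the two selection rules the abstract contrasts.

```python
import random

def cv_error(learner, data, folds=5):
    """Average validation error of `learner` over k folds of `data`."""
    data = data[:]               # work on a copy
    random.shuffle(data)
    size = len(data) // folds
    errors = []
    for k in range(folds):
        val = data[k * size:(k + 1) * size]
        train = data[:k * size] + data[(k + 1) * size:]
        predict = learner(train)
        errors.append(sum(predict(x) != y for x, y in val) / len(val))
    return sum(errors) / folds

def select(learners, data, anti=False):
    """Cross-validation keeps the learner with the LOWEST CV error;
    'anti-cross-validation' perversely keeps the HIGHEST."""
    pick = max if anti else min
    return pick(learners, key=lambda L: cv_error(L, data))

# Two toy constant learners on labeled pairs (x, y); purely illustrative.
def majority(train):
    label = int(2 * sum(y for _, y in train) >= len(train))
    return lambda x: label

def always_zero(train):
    return lambda x: 0

data = [(i, int(i % 4 != 0)) for i in range(100)]    # 75% of labels are 1
best = select([majority, always_zero], data)              # cross-validation
worst = select([majority, always_zero], data, anti=True)  # anti-cross-validation
```

Wolpert's point is that, averaged over all possible targets, neither selection rule outperforms the other on off-training-set error, even though the second looks absurd.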


2021
Vol. 1815 (1)
pp. 012014
Author(s):
Xiaoguang Sheng, Qirui Yang, Yu Han, Ying Wang

2007
Vol. 7 (8)
pp. 730-737
Author(s):
I.H. Kim

Fuchs and Sasaki defined the quantumness of a set of quantum states in \cite{Quantumness}, which is related to the fidelity loss incurred when the states are transmitted through a classical channel. In \cite{Fuchs}, Fuchs showed that in $d$-dimensional Hilbert space the minimum quantumness is $\frac{2}{d+1}$, and that this value is achieved by the set of all rays in the space. He left an open problem: can fewer than $d^2$ states achieve this bound? Recently, in a different context, Scott introduced the concept of a generalized $t$-design in \cite{GenSphet}, a natural generalization of the spherical $t$-design. In this paper, we show that the lower bound on the quantumness is achieved if and only if the states form a generalized 2-design. As a corollary, we show that the bound can only be achieved if the number of states is at least $d^2$, answering the open problem. Furthermore, we show that a minimal ensemble achieving this bound is a symmetric informationally complete POVM (SIC-POVM). This establishes an equivalence between SIC-POVMs and minimal ensembles achieving the minimal quantumness.
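As a numerical aside, the following Python sketch checks the standard frame-potential criterion for equal-weight complex projective 2-designs: $n$ unit vectors in $d$ dimensions form a 2-design iff $\frac{1}{n^2}\sum_{i,j}|\langle\psi_i|\psi_j\rangle|^4 = \frac{2}{d(d+1)}$. This is the textbook Welch-bound test, not a construction from the paper; the helper names are ours, and the $d = 2$ example (the six Pauli eigenstates, a complete set of mutually unbiased bases) is a well-known 2-design.

```python
import numpy as np

def frame_potential(states):
    """Mean fourth power of overlaps: (1/n^2) sum_{i,j} |<psi_i|psi_j>|^4."""
    G = states @ states.conj().T            # Gram matrix of overlaps
    return float(np.mean(np.abs(G) ** 4))

def is_projective_2design(states, tol=1e-10):
    """Equal-weight unit vectors form a complex projective 2-design
    iff the frame potential meets the Welch bound 2 / (d (d + 1))."""
    d = states.shape[1]
    return abs(frame_potential(states) - 2 / (d * (d + 1))) < tol

# d = 2 example: the six eigenvectors of the Pauli X, Y, Z operators.
s = 1 / np.sqrt(2)
states = np.array([
    [1, 0], [0, 1],             # Z eigenbasis
    [s, s], [s, -s],            # X eigenbasis
    [s, 1j * s], [s, -1j * s],  # Y eigenbasis
])
print(is_projective_2design(states))  # True
```

A SIC-POVM in dimension $d$ consists of exactly $d^2$ such vectors and passes the same test, which is the minimal size the paper's corollary establishes.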

