Dynamical intricacy and average sample complexity for random bundle transformations

2022 ◽  
Vol 63 (1) ◽  
pp. 012701
Author(s):  
Kexiang Yang ◽  
Ercai Chen ◽  
Xiaoyao Zhou
2017 ◽  
Vol 33 (3) ◽  
pp. 369-418 ◽  
Author(s):  
Karl Petersen ◽  
Benjamin Wilson

2021 ◽  
Vol 20 (8) ◽  
Author(s):  
Wooyeong Song ◽  
Marcin Wieśniak ◽  
Nana Liu ◽  
Marcin Pawłowski ◽  
Jinhyoung Lee ◽  
...  

2020 ◽  
Vol 415 ◽  
pp. 286-294
Author(s):  
Hassan Hafez-Kolahi ◽  
Shohreh Kasaei ◽  
Mahdiyeh Soleymani-Baghshah

2012 ◽  
Vol 91 (1) ◽  
pp. 1-42 ◽  
Author(s):  
Lena Chekina ◽  
Dan Gutfreund ◽  
Aryeh Kontorovich ◽  
Lior Rokach ◽  
Bracha Shapira

2008 ◽  
Vol 8 (3&4) ◽  
pp. 345-358
Author(s):  
M. Hayashi ◽  
A. Kawachi ◽  
H. Kobayashi

One of the central issues in the hidden subgroup problem is to bound the sample complexity, i.e., the number of identical samples of coset states necessary and sufficient to solve the problem. In this paper, we present general bounds on the sample complexity of the identification and decision versions of the hidden subgroup problem. As a consequence of these bounds, we show that the sample complexity of both the decision and identification versions is $\Theta(\log|\mathcal{H}|/\log p)$ for a candidate set $\mathcal{H}$ of hidden subgroups in the case where the candidate nontrivial subgroups have the same prime order $p$, which implies that the decision version is at least as hard as the identification version in this case. In particular, this holds for important cases such as the dihedral and symmetric hidden subgroup problems. Moreover, the upper bound for identification is attained by a variant of the pretty good measurement, which implies that the pretty good measurement is quite useful for identifying hidden subgroups over an arbitrary group with optimal sample complexity.
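As a concrete instance of the abstract's bound (a sketch, assuming the standard formulation of the dihedral hidden subgroup problem, which is not spelled out in the abstract itself), the candidate set for the dihedral group $D_N$ can be taken to be its $N$ reflection subgroups, each of prime order $p = 2$:

```latex
\[
\Theta\!\left(\frac{\log|\mathcal{H}|}{\log p}\right)
= \Theta\!\left(\frac{\log N}{\log 2}\right)
= \Theta(\log N),
\]
```

so in this case $\Theta(\log N)$ identical coset-state samples suffice and are required.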


2018 ◽  
Vol 7 (3) ◽  
pp. 581-604 ◽  
Author(s):  
Armin Eftekhari ◽  
Michael B Wakin ◽  
Rachel A Ward

Abstract: Leverage scores, loosely speaking, reflect the importance of the rows and columns of a matrix. Ideally, given the leverage scores of a rank-r matrix $M\in \mathbb{R}^{n\times n}$, that matrix can be reliably completed from just $O(rn\log^2 n)$ samples if the samples are chosen randomly from a non-uniform distribution induced by the leverage scores. In practice, however, the leverage scores are often unknown a priori. As such, the sample complexity in uniform matrix completion—using uniform random sampling—increases to $O(\eta(M)\cdot rn\log^2 n)$, where $\eta(M)$ is the largest leverage score of M. In this paper, we propose a two-phase algorithm called MC2 for matrix completion: in the first phase, the leverage scores are estimated from uniform random samples, and in the second phase the matrix is resampled non-uniformly based on the estimated leverage scores and then completed. For well-conditioned matrices, the total sample complexity of MC2 is no worse than that of uniform matrix completion, and for certain classes of well-conditioned matrices—namely, reasonably coherent matrices whose leverage scores exhibit mild decay—MC2 requires substantially fewer samples. Numerical simulations suggest that the algorithm outperforms uniform matrix completion on a broad class of matrices and, in particular, is much less sensitive to the condition number than our theory currently requires.
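The two quantities driving MC2—row/column leverage scores and the non-uniform sampling distribution they induce—can be sketched in a few lines of numpy. This is an illustrative sketch only (function names and the random rank-r test matrix are assumptions, not the authors' implementation), and it omits the completion step itself:

```python
import numpy as np

def leverage_scores(M, r):
    """Row and column leverage scores of the best rank-r approximation of M.
    The i-th row score is ||U_r[i, :]||^2, where M ~= U_r S_r V_r^T is the
    truncated SVD; each score lies in [0, 1] and each side sums to r."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    row_scores = np.sum(U[:, :r] ** 2, axis=1)
    col_scores = np.sum(Vt[:r, :] ** 2, axis=0)
    return row_scores, col_scores

def sample_entries(row_scores, col_scores, m, rng):
    """Draw m entry indices (i, j) with probability proportional to
    row_scores[i] * col_scores[j], i.e., leverage-induced sampling."""
    p_rows = row_scores / row_scores.sum()
    p_cols = col_scores / col_scores.sum()
    rows = rng.choice(len(p_rows), size=m, p=p_rows)
    cols = rng.choice(len(p_cols), size=m, p=p_cols)
    return list(zip(rows, cols))

rng = np.random.default_rng(0)
n, r = 50, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r matrix
rs, cs = leverage_scores(M, r)
# Leverage scores of a rank-r matrix sum to r on each side.
print(round(rs.sum(), 6), round(cs.sum(), 6))  # → 3.0 3.0
samples = sample_entries(rs, cs, 200, rng)
```

In MC2 the first phase would estimate these scores from uniformly sampled entries rather than from the full matrix; the sketch above computes them exactly, which is only possible when M is known.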

