Optimal sample complexity of subgradient descent for amplitude flow via non-Lipschitz matrix concentration

2021 ◽  
Vol 19 (7) ◽  
pp. 2035-2047
Author(s):  
Paul Hand ◽  
Oscar Leong ◽  
Vladislav Voroninski
2008 ◽  
Vol 8 (3&4) ◽  
pp. 345-358
Author(s):  
M. Hayashi ◽  
A. Kawachi ◽  
H. Kobayashi

One of the central issues in the hidden subgroup problem is to bound the sample complexity, i.e., the number of identical samples of coset states necessary and sufficient to solve the problem. In this paper, we present general bounds on the sample complexity of the identification and decision versions of the hidden subgroup problem. As a consequence of these bounds, we show that the sample complexity of both the decision and identification versions is $\Theta(\log|\mathcal{H}|/\log p)$ for a candidate set $\mathcal{H}$ of hidden subgroups in the case where the candidate nontrivial subgroups have the same prime order $p$, which implies that the decision version is at least as hard as the identification version in this case. In particular, this holds for important cases such as the dihedral and symmetric hidden subgroup problems. Moreover, the upper bound for identification is attained by a variant of the pretty good measurement. This implies that the pretty good measurement is quite useful for identifying hidden subgroups over an arbitrary group with optimal sample complexity.
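As a quick illustration of the bound (our instantiation, not quoted from the paper): in the dihedral hidden subgroup problem over $D_N$, the standard candidate set consists of the $N$ order-2 subgroups generated by reflections, so $|\mathcal{H}| = N$ and $p = 2$, and the bound gives a sample complexity of $\Theta(\log N/\log 2) = \Theta(\log N)$ coset states.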


2017 ◽  
Vol 3 (1) ◽  
Author(s):  
Shelby Kimmel ◽  
Cedric Yen-Yu Lin ◽  
Guang Hao Low ◽  
Maris Ozols ◽  
Theodore J. Yoder

2018 ◽  
Vol 8 (3) ◽  
pp. 577-619 ◽  
Author(s):  
Navid Ghadermarzy ◽  
Yaniv Plan ◽  
Özgür Yilmaz

We study the problem of estimating a low-rank tensor when we have noisy observations of a subset of its entries. A rank-$r$, order-$d$, $N \times N \times \cdots \times N$ tensor, where $r=O(1)$, has $O(dN)$ free variables. On the other hand, prior to our work, the best sample complexity achieved in the literature was $O\left(N^{\frac{d}{2}}\right)$, obtained by solving a tensor nuclear-norm minimization problem. In this paper, we consider the ‘M-norm’, an atomic norm whose atoms are rank-1 sign tensors. We also consider a generalization of the matrix max-norm to tensors, which results in a quasi-norm that we call ‘max-qnorm’. We prove that solving an M-norm constrained least squares (LS) problem results in nearly optimal sample complexity for low-rank tensor completion (TC). A similar result holds for the max-qnorm as well. Furthermore, we show that these bounds are nearly minimax rate-optimal. We also provide promising numerical results for max-qnorm constrained TC, showing improved recovery compared to matricization and alternating LS.
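For a concrete point of comparison, here is a minimal sketch of the alternating LS baseline that the abstract compares against, for an order-3 CP model. It is not the paper's M-norm or max-qnorm constrained estimator; the function name `als_complete` and all parameter choices are illustrative assumptions.

```python
# Minimal alternating-least-squares (ALS) tensor completion for an order-3,
# rank-r CP model T ≈ sum_l a_l ⊗ b_l ⊗ c_l, fit only on observed entries.
import numpy as np

def als_complete(shape, omega, vals, r=2, iters=50, seed=0):
    """omega: (m, 3) int array of observed indices; vals: (m,) observed values."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, r)) for n in shape)
    for _ in range(iters):
        for axis, F in enumerate((A, B, C)):
            # Factor rows of the other two modes, matched to each observation.
            others = [(A, B, C)[m][omega[:, m]] for m in range(3) if m != axis]
            design = others[0] * others[1]        # (m, r): e.g. rows b_j * c_k
            for i in range(shape[axis]):          # row-wise least squares update
                mask = omega[:, axis] == i
                if mask.any():
                    F[i], *_ = np.linalg.lstsq(design[mask], vals[mask], rcond=None)
    return np.einsum('il,jl,kl->ijk', A, B, C)

# Toy usage: recover a random rank-2 tensor from 30% of its entries.
N, r = 15, 2
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((N, r)) for _ in range(3))
T = np.einsum('il,jl,kl->ijk', A0, B0, C0)
omega = np.argwhere(rng.random((N, N, N)) < 0.3)
T_hat = als_complete((N, N, N), omega, T[tuple(omega.T)], r=r)
print('relative error:', np.linalg.norm(T_hat - T) / np.linalg.norm(T))
```

The row-wise least-squares update is the simplest correct form of ALS; practical implementations batch these solves, but the estimator being iterated is the same.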


2020 ◽  
Vol 67 (6) ◽  
pp. 1-42
Author(s):  
Hassan Ashtiani ◽  
Shai Ben-David ◽  
Nicholas J. A. Harvey ◽  
Christopher Liaw ◽  
Abbas Mehrabian ◽  
...  

Author(s):  
Junyu Zhang ◽  
Lin Xiao ◽  
Shuzhong Zhang

The cubic regularized Newton method of Nesterov and Polyak has become increasingly popular for nonconvex optimization because of its capability of finding an approximate local solution with a second-order guarantee and its low iteration complexity. Several recent works extend this method to the setting of minimizing the average of N smooth functions by replacing the exact gradients and Hessians with subsampled approximations. It is shown that the total Hessian sample complexity can be made sublinear in N per iteration by leveraging stochastic variance reduction techniques. We present an adaptive variance reduction scheme for a subsampled Newton method with cubic regularization and show that the expected Hessian sample complexity is $\mathcal{O}(N + N^{2/3}\epsilon^{-3/2})$ for finding an $(\epsilon, \sqrt{\epsilon})$-approximate local solution (in terms of first- and second-order guarantees, respectively). Moreover, we show that the same Hessian sample complexity is retained with fixed sample sizes if exact gradients are used. The techniques of our analysis differ from previous works in that we do not rely on high-probability bounds based on matrix concentration inequalities. Instead, we derive and utilize new bounds on the third and fourth order moments of the average of random matrices, which are of independent interest.
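To make the setting concrete, here is a minimal sketch of one plain subsampled cubic-regularized Newton step with exact gradients and uniform Hessian subsampling; it does not implement the paper's adaptive variance-reduction scheme, and the function names, constants, and inner solver are illustrative assumptions.

```python
# One subsampled cubic-regularized Newton step for f(x) = (1/N) sum_i f_i(x):
# average Hessians over a uniform subsample, then (approximately) solve the
# cubic model  min_s  g's + s'Hs/2 + (M/6)||s||^3  by gradient descent.
import numpy as np

def cubic_newton_step(grad, hess_i, x, N, sample_size, M=10.0, inner_iters=200, lr=0.01):
    g = grad(x)                                        # exact gradient
    idx = np.random.default_rng(0).choice(N, size=sample_size, replace=False)
    H = sum(hess_i(x, i) for i in idx) / sample_size   # subsampled Hessian
    s = np.zeros_like(x)
    for _ in range(inner_iters):
        # grad of the cubic model: g + Hs + (M/2)||s|| s
        model_grad = g + H @ s + 0.5 * M * np.linalg.norm(s) * s
        s -= lr * model_grad
    return x + s

# Toy usage: N random quadratics f_i(x) = x'Q_i x / 2 + b_i'x.
d, N = 5, 100
rng = np.random.default_rng(1)
Qs = rng.standard_normal((N, d, d)); Qs = (Qs + Qs.transpose(0, 2, 1)) / 2
bs = rng.standard_normal((N, d))
grad = lambda x: (Qs @ x).mean(axis=0) + bs.mean(axis=0)
hess_i = lambda x, i: Qs[i]
x_next = cubic_newton_step(grad, hess_i, np.zeros(d), N, sample_size=20)
```

The variance-reduction results in the abstract concern how `sample_size` can shrink across iterations while keeping the $(\epsilon, \sqrt{\epsilon})$ guarantee; this sketch fixes it for simplicity.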


2019 ◽  
Vol 18 (05) ◽  
pp. 861-886
Author(s):  
Huiping Li ◽  
Song Li ◽  
Yu Xia

In this paper, we consider the noisy phase retrieval problem, which occurs in many different areas of science and physics. The PhaseMax algorithm is an efficient convex method for tackling the phase retrieval problem. Building on this algorithm, we propose two extended formulations of PhaseMax, namely, PhaseMax with bounded and non-negative noise and PhaseMax with outliers, to deal with the phase retrieval problem under different noise corruptions. We then prove that these extended algorithms can stably recover real signals from independent sub-Gaussian measurements with optimal sample complexity. In particular, these results remain valid in the noiseless case. These results guarantee that a broad range of random measurements, such as Bernoulli measurements with erasures, can be applied to reconstruct the original signals by these extended PhaseMax algorithms. Finally, we demonstrate the effectiveness of our extended PhaseMax algorithms through numerical simulations. We find that, with the same initialization, the extended PhaseMax algorithm outperforms the Truncated Wirtinger Flow method and robustly recovers the signal from corrupted measurements.
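As an illustration, here is a minimal sketch of the basic noiseless PhaseMax program for real signals, maximize $\langle \hat{x}_0, x \rangle$ subject to $|\langle a_i, x \rangle| \le b_i$, solved with cvxpy. The anchor vector, problem sizes, and variable names are illustrative assumptions, and the paper's bounded-noise and outlier variants (which modify the constraints) are not reproduced here.

```python
# Basic PhaseMax: a linear program over the phaseless measurement constraints.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 120                                   # m ≈ 6n measurements (illustrative)
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))                  # sub-Gaussian (here Gaussian) measurements
b = np.abs(A @ x_true)                           # phaseless magnitudes

x_anchor = x_true + 0.5 * rng.standard_normal(n) # crude stand-in for a spectral initializer
x = cp.Variable(n)
problem = cp.Problem(cp.Maximize(x_anchor @ x), [cp.abs(A @ x) <= b])
problem.solve()

x_rec = x.value
sign = np.sign(x_rec @ x_true)                   # recovery is only up to global sign
print("relative error:", np.linalg.norm(sign * x_rec - x_true) / np.linalg.norm(x_true))
```

The convexity comes from relaxing the nonconvex equality $|\langle a_i, x \rangle| = b_i$ to an inequality and pushing the solution outward against the constraints via the linear anchor objective.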


2016 ◽  
Vol 64 (21) ◽  
pp. 5549-5556 ◽  
Author(s):  
Yanjun Li ◽  
Kiryung Lee ◽  
Yoram Bresler
