A Corrected Tensor Nuclear Norm Minimization Method for Noisy Low-Rank Tensor Completion

2019 ◽  
Vol 12 (2) ◽  
pp. 1231-1273 ◽  
Author(s):  
Xiongjun Zhang ◽  
Michael K. Ng

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 131888-131901
Author(s):  
Xi-Le Zhao ◽  
Xin Nie ◽  
Yu-Bang Zheng ◽  
Teng-Yu Ji ◽  
Ting-Zhu Huang

Author(s):  
Holger Rauhut ◽  
Željka Stojanac

Abstract We study extensions of compressive sensing and low-rank matrix recovery to the recovery of tensors of low rank from incomplete linear information. While the reconstruction of low-rank matrices via nuclear norm minimization is rather well-understood by now, almost no theory is available so far for the extension to higher-order tensors, due to various theoretical and computational difficulties arising for tensor decompositions. In fact, nuclear norm minimization for matrix recovery is a tractable convex relaxation approach, but the extension of the nuclear norm to tensors is in general NP-hard to compute. In this article, we introduce convex relaxations of the tensor nuclear norm which are computable in polynomial time via semidefinite programming. Our approach is based on theta bodies, a concept from real computational algebraic geometry which is similar to that of the better-known Lasserre relaxations. We introduce polynomial ideals which are generated by the second-order minors corresponding to different matricizations of the tensor (where the tensor entries are treated as variables) such that the nuclear norm ball is the convex hull of the algebraic variety of the ideal. The theta body of order k for such an ideal generates a new norm which we call the θk-norm. We show that in the matrix case, these norms reduce to the standard nuclear norm. For tensors of order three or higher, however, we indeed obtain new norms. The sequence of the corresponding unit-θk-norm balls converges asymptotically to the unit tensor nuclear norm ball. By providing the Gröbner basis for the ideals, we explicitly give semidefinite programs for the computation of the θk-norm and for the minimization of the θk-norm under an affine constraint. Finally, numerical experiments for order-three tensor recovery via θ1-norm minimization suggest that our approach successfully reconstructs tensors of low rank from incomplete linear (random) measurements.
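
To make the ideal construction concrete, here is a small symbolic sketch (an illustration written for this listing, not code from the paper): for a 2×2×2 tensor whose entries x_ijk are treated as polynomial variables, it enumerates the degree-two generators the abstract refers to, namely the 2×2 minors of the three matricizations. The tensor size, the variable names, and the use of SymPy are choices made here; the paper's full construction may involve additional generators and the associated Gröbner bases.

```python
# A small symbolic illustration (not code from the paper): list the degree-two
# generators mentioned in the abstract, i.e. the 2x2 minors of the three
# matricizations of an order-3 tensor whose entries are treated as variables.
# The 2x2x2 size and the names x_ijk are choices made here for readability.
import itertools
import sympy as sp

n = 2
# Symbolic entries x_{ijk} of an n x n x n tensor.
X = {(i, j, k): sp.symbols(f"x_{i}{j}{k}")
     for i in range(n) for j in range(n) for k in range(n)}

def unfolding(mode):
    """Mode-`mode` matricization: rows indexed by that mode, columns by the other two."""
    cols = list(itertools.product(range(n), repeat=2))
    rows = []
    for i in range(n):
        row = []
        for (a, b) in cols:
            idx = [a, b]
            idx.insert(mode, i)   # put the row index back into position `mode`
            row.append(X[tuple(idx)])
        rows.append(row)
    return sp.Matrix(rows)

generators = set()
for mode in range(3):
    M = unfolding(mode)
    for r in itertools.combinations(range(M.rows), 2):
        for c in itertools.combinations(range(M.cols), 2):
            generators.add(sp.expand(M[list(r), list(c)].det()))  # 2x2 minor

for g in sorted(generators, key=str):
    print(g)
```

The common zero set of these minors is the set of 2×2×2 tensors all of whose unfoldings have rank at most one, i.e. the tensors of CP rank at most one; restricting that variety to unit Frobenius norm and taking the convex hull gives the tensor nuclear norm ball described in the abstract.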


2018 ◽  
Vol 8 (3) ◽  
pp. 577-619 ◽  
Author(s):  
Navid Ghadermarzy ◽  
Yaniv Plan ◽  
Özgür Yilmaz

Abstract We study the problem of estimating a low-rank tensor when we have noisy observations of a subset of its entries. A rank-$r$, order-$d$, $N \times N \times \cdots \times N$ tensor, where $r=O(1)$, has $O(dN)$ free variables. On the other hand, prior to our work, the best sample complexity achieved in the literature was $O\left(N^{\frac{d}{2}}\right)$, obtained by solving a tensor nuclear-norm minimization problem. In this paper, we consider the ‘M-norm’, an atomic norm whose atoms are rank-1 sign tensors. We also consider a generalization of the matrix max-norm to tensors, which results in a quasi-norm that we call the ‘max-qnorm’. We prove that solving an M-norm constrained least squares (LS) problem results in nearly optimal sample complexity for low-rank tensor completion (TC). A similar result holds for the max-qnorm as well. Furthermore, we show that these bounds are nearly minimax rate-optimal. We also provide promising numerical results for max-qnorm constrained TC, showing improved recovery compared to matricization and alternating LS.
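
As a rough illustration of the constrained-LS viewpoint, the sketch below is a heuristic stand-in, not the authors' algorithm or code. Under the factorization definition of the max-qnorm (an infimum over CP factorizations of the product of the factors' (2,∞)-norms), any order-3 tensor admitting a factorization whose factor rows all have 2-norm at most R has max-qnorm at most R³. So a cheap surrogate for max-qnorm constrained least squares is masked alternating least squares on the CP factors, with each row clipped back into a radius-R ball after every update. The sizes, rank, radius R, sampling rate, and seed are illustrative choices.

```python
# A heuristic sketch (not the paper's method): masked alternating least squares
# for low-rank tensor completion, with every factor row kept in a Euclidean ball
# of radius R. Keeping all row norms <= R keeps the estimate inside a max-qnorm
# ball of radius R**3; the clipping is a crude surrogate for the constraint.
import numpy as np

rng = np.random.default_rng(0)
N, rank, R = 20, 3, 3.0

def clip_rows(U, radius):
    """Project each row of U onto the Euclidean ball of the given radius."""
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    return U * np.minimum(1.0, radius / np.maximum(norms, 1e-12))

# Ground-truth CP factors with rows inside the same ball, so the constraint is feasible.
A0, B0, C0 = (clip_rows(rng.standard_normal((N, rank)), R) for _ in range(3))
T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
mask = rng.random((N, N, N)) < 0.2            # observe ~20% of the entries
Y = T * mask

def als_update(Yt, maskt, V, W, radius):
    """Masked LS update of the leading-mode factor of Yt, then row clipping.
    Yt[i, j, k] is modelled as sum_r U[i, r] * V[j, r] * W[k, r]."""
    design = np.einsum("jr,kr->jkr", V, W).reshape(-1, V.shape[1])  # Khatri-Rao rows
    U = np.zeros((Yt.shape[0], V.shape[1]))
    for i in range(Yt.shape[0]):
        obs = maskt[i].reshape(-1)
        U[i] = np.linalg.lstsq(design[obs], Yt[i].reshape(-1)[obs], rcond=None)[0]
    return clip_rows(U, radius)

A, B, C = (rng.standard_normal((N, rank)) for _ in range(3))
for _ in range(30):
    A = als_update(Y, mask, B, C, R)
    B = als_update(Y.transpose(1, 0, 2), mask.transpose(1, 0, 2), A, C, R)
    C = als_update(Y.transpose(2, 0, 1), mask.transpose(2, 0, 1), A, B, R)

T_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
print("relative recovery error:", np.linalg.norm(T_hat - T) / np.linalg.norm(T))
```

The recovery quality of this clipped ALS heuristic varies with the seed, rank, and radius; the paper's guarantees concern the M-norm and max-qnorm constrained estimators themselves, not this surrogate.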

