Truncated Nuclear Norm Minimization for Image Restoration Based on Iterative Support Detection

2014 ◽  
Vol 2014 ◽  
pp. 1-17 ◽  
Author(s):  
Yilun Wang ◽  
Xinhua Su

Recovering a large matrix from limited measurements is a challenging task arising in many real applications, such as image inpainting, compressive sensing, and medical imaging; these problems are mostly formulated as low-rank matrix approximation problems. Because the rank operator is nonconvex and discontinuous, most recent theoretical studies use the nuclear norm as a convex relaxation, and the low-rank matrix recovery problem is solved by minimizing a nuclear norm regularized problem. However, a major limitation of nuclear norm minimization is that all the singular values are minimized simultaneously, so the rank may not be well approximated (Hu et al., 2013). Accordingly, in this paper, we propose a new multistage algorithm that combines the Truncated Nuclear Norm Regularization (TNNR) of Hu et al., 2013, with the iterative support detection (ISD) of Wang and Yin, 2010, to overcome this limitation. Beyond the matrix completion problems considered by Hu et al., 2013, the proposed method can also be extended to general low-rank matrix recovery problems. Extensive experiments validate the superiority of our new algorithms over other state-of-the-art methods.
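
The truncation idea can be illustrated with a minimal, self-contained sketch (not the paper's multistage ISD-based algorithm): a truncated singular-value-thresholding iteration for matrix completion that leaves the largest r singular values untouched and shrinks only the trailing ones. The parameters r, tau, and n_iter below are illustrative assumptions.

```python
import numpy as np

def truncated_svt_complete(M, mask, r=2, tau=1.0, n_iter=200):
    """Hedged sketch: matrix completion in the spirit of truncated nuclear norm
    regularization. The largest r singular values are kept untouched and only the
    remaining ones are soft-thresholded; observed entries are then re-imposed.
    This is a simplified iteration, not the multistage ISD algorithm of the paper."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s_shrunk = s.copy()
        s_shrunk[r:] = np.maximum(s[r:] - tau, 0.0)   # shrink only trailing singular values
        X = (U * s_shrunk) @ Vt
        X[mask] = M[mask]                             # keep the observed entries fixed
    return X

# Usage on a synthetic rank-2 matrix with roughly 50% observed entries.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
mask = rng.random(A.shape) < 0.5
X_hat = truncated_svt_complete(A, mask, r=2, tau=1.0)
print(np.linalg.norm(X_hat - A) / np.linalg.norm(A))  # relative reconstruction error
```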

2013 ◽  
Vol 718-720 ◽  
pp. 2308-2313
Author(s):  
Lu Liu ◽  
Wei Huang ◽  
Di Rong Chen

Minimizing the nuclear norm has recently been considered as the convex relaxation of the rank minimization problem, which arises in many applications such as the Netflix challenge. A closely related nonconvex relaxation, Schatten norm minimization, has been proposed to replace the NP-hard rank minimization. In this paper, an algorithm based on Majorization Minimization is proposed to solve Schatten norm minimization. The numerical experiments show that Schatten norm minimization recovers low-rank matrices from fewer measurements than nuclear norm minimization. The numerical results also indicate that our algorithm gives a more accurate reconstruction.
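
A hedged sketch of a Majorization-Minimization style scheme for Schatten-type quasi-norm minimization in the matrix completion setting: each term σ^p is majorized by its linearization at the current singular values, so one MM step reduces to a weighted singular-value soft-thresholding. The parameters p, lam, eps, and n_iter are assumptions for illustration, not the authors' settings.

```python
import numpy as np

def schatten_p_mm_complete(M, mask, p=0.5, lam=1.0, eps=1e-3, n_iter=100):
    """Hedged sketch: matrix completion via an MM-style iteratively reweighted
    singular-value thresholding for the Schatten-p quasi-norm (0 < p < 1)."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        # Majorize sigma^p by its linearization at the current singular values,
        # which yields a weighted soft-thresholding step on the spectrum.
        w = p * (s + eps) ** (p - 1.0)
        s_new = np.maximum(s - lam * w, 0.0)
        X = (U * s_new) @ Vt
        X[mask] = M[mask]          # enforce the observed entries
    return X

# Usage on a synthetic rank-3 matrix observed at roughly 40% of its entries.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 60))
mask = rng.random(A.shape) < 0.4
X_hat = schatten_p_mm_complete(A, mask)
print(np.linalg.norm(X_hat - A) / np.linalg.norm(A))  # relative reconstruction error
```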


Author(s):  
Shuang Li ◽  
Hassan Mansour ◽  
Michael B Wakin

Abstract One of the classical approaches for estimating the frequencies and damping factors in a spectrally sparse signal is the MUltiple SIgnal Classification (MUSIC) algorithm, which exploits the low-rank structure of an autocorrelation matrix. Low-rank matrices have also received considerable attention recently in the context of optimization algorithms with partial observations, and nuclear norm minimization (NNM) has been widely used as a popular heuristic of rank minimization for low-rank matrix recovery problems. On the other hand, it has been shown that NNM can be viewed as a special case of atomic norm minimization (ANM), which has achieved great success in solving line spectrum estimation problems. However, as far as we know, the general ANM (not NNM) considered in many existing works can only handle frequency estimation in undamped sinusoids. In this work, we aim to fill this gap and deal with damped spectrally sparse signal recovery problems. In particular, inspired by the dual analysis used in ANM, we offer a novel optimization-based perspective on the classical MUSIC algorithm and propose an algorithm for spectral estimation that involves searching for the peaks of the dual polynomial corresponding to a certain NNM problem, and we show that this algorithm is in fact equivalent to MUSIC itself. Building on this connection, we also extend the classical MUSIC algorithm to the missing data case. We provide exact recovery guarantees for our proposed algorithms and quantify how the sample complexity depends on the true spectral parameters. In particular, we provide a parameter-specific recovery bound for low-rank matrix recovery of jointly sparse signals rather than use certain incoherence properties as in existing literature. Simulation results also indicate that the proposed algorithms significantly outperform some relevant existing methods (e.g., ANM) in frequency estimation of damped exponentials.
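
For reference, the classical full-data MUSIC procedure that the abstract builds on can be sketched in a few lines: form a sample correlation matrix from the signal, take its noise subspace, and locate the peaks of the resulting pseudospectrum. The subarray length and grid size below are common heuristics rather than values from the paper, and the sketch covers only the undamped, fully observed case.

```python
import numpy as np

def music_pseudospectrum(x, model_order, grid_size=2048):
    """Classical MUSIC sketch (assumed parameters): pseudospectrum of a 1-D
    signal x containing `model_order` complex sinusoids."""
    n = len(x)
    m = n // 2  # subarray length for the correlation matrix (a common heuristic)
    # Build a sliding-window data matrix and form the sample correlation matrix.
    X = np.column_stack([x[i:i + m] for i in range(n - m + 1)])
    R = X @ X.conj().T / X.shape[1]
    # Noise subspace: eigenvectors associated with the smallest eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(R)
    noise = eigvecs[:, : m - model_order]
    freqs = np.linspace(0.0, 1.0, grid_size, endpoint=False)
    a = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))  # steering vectors
    # Pseudospectrum: reciprocal of the energy projected onto the noise subspace.
    proj = np.sum(np.abs(noise.conj().T @ a) ** 2, axis=0)
    return freqs, 1.0 / proj

# Usage: two undamped complex sinusoids plus a little noise.
rng = np.random.default_rng(0)
t = np.arange(128)
x = np.exp(2j * np.pi * 0.12 * t) + 0.8 * np.exp(2j * np.pi * 0.31 * t) \
    + 0.05 * rng.standard_normal(128)
f, p = music_pseudospectrum(x, model_order=2)
# Crude peak picking: local maxima of the pseudospectrum, keep the two largest.
peaks = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
top = peaks[np.argsort(p[peaks])[-2:]]
print(np.sort(f[top]))  # estimates near 0.12 and 0.31
```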


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Lingchen Kong ◽  
Levent Tunçel ◽  
Naihua Xiu

Low-rank matrix recovery (LMR) is a rank minimization problem subject to linear equality constraints, and it arises in many fields such as signal and image processing, statistics, computer vision, and system identification and control. This class of optimization problems is generally NP-hard. A popular approach replaces the rank function with the nuclear norm of the matrix variable. In this paper, we extend and characterize the concept of s-goodness for a sensing matrix in sparse signal recovery (proposed by Juditsky and Nemirovski (Math Program, 2011)) to linear transformations in LMR. Using the two characteristic s-goodness constants, γ_s and γ̂_s, of a linear transformation, we derive necessary and sufficient conditions for a linear transformation to be s-good. Moreover, we establish the equivalence of s-goodness and the null space properties. Therefore, s-goodness is a necessary and sufficient condition for exact s-rank matrix recovery via nuclear norm minimization.
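
For readers unfamiliar with the notation, the two problems discussed above and the s-goodness property can be stated as follows (a standard restatement, not taken verbatim from the paper):

```latex
% LMR and its nuclear norm relaxation, with A a linear transformation.
\begin{align*}
  \text{(LMR)}\quad & \min_{X \in \mathbb{R}^{m \times n}} \ \operatorname{rank}(X)
      \quad \text{s.t.} \quad \mathcal{A}(X) = b, \\
  \text{(NNM)}\quad & \min_{X \in \mathbb{R}^{m \times n}} \ \|X\|_{*}
      \quad \text{s.t.} \quad \mathcal{A}(X) = b,
\end{align*}
% Here \|X\|_{*} denotes the sum of the singular values of X. The linear
% transformation A is called s-good when every matrix X of rank at most s is the
% unique optimal solution of (NNM) for its own measurements b = A(X).
```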


Author(s):  
Holger Rauhut ◽  
Željka Stojanac

Abstract We study extensions of compressive sensing and low-rank matrix recovery to the recovery of tensors of low rank from incomplete linear information. While the reconstruction of low-rank matrices via nuclear norm minimization is rather well understood by now, almost no theory is available so far for the extension to higher-order tensors, due to various theoretical and computational difficulties arising for tensor decompositions. In fact, nuclear norm minimization for matrix recovery is a tractable convex relaxation approach, but the extension of the nuclear norm to tensors is in general NP-hard to compute. In this article, we introduce convex relaxations of the tensor nuclear norm which are computable in polynomial time via semidefinite programming. Our approach is based on theta bodies, a concept from real computational algebraic geometry which is similar to that of the better-known Lasserre relaxations. We introduce polynomial ideals which are generated by the second-order minors corresponding to different matricizations of the tensor (where the tensor entries are treated as variables) such that the nuclear norm ball is the convex hull of the algebraic variety of the ideal. The theta body of order k for such an ideal generates a new norm which we call the θk-norm. We show that in the matrix case, these norms reduce to the standard nuclear norm. For tensors of order three or higher, however, we indeed obtain new norms. The sequence of the corresponding unit θk-norm balls converges asymptotically to the unit tensor nuclear norm ball. By providing the Gröbner basis for the ideals, we explicitly give semidefinite programs for the computation of the θk-norm and for the minimization of the θk-norm under an affine constraint. Finally, numerical experiments for order-three tensor recovery via θ1-norm minimization suggest that our approach successfully reconstructs tensors of low rank from incomplete linear (random) measurements.
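
The θk-norm itself requires the semidefinite programs described in the article and is not attempted here; as a simpler point of comparison, the following sketch computes the nuclear norms of the three matricizations of an order-3 tensor, the standard tractable surrogate that the new norms are intended to improve upon. Plain numpy, with an illustrative rank-one example.

```python
import numpy as np

def matricization_nuclear_norms(T):
    """Hedged sketch: nuclear norms of the mode-1/2/3 unfoldings of an order-3
    tensor. This is the usual tractable surrogate for tensor rank, not the
    theta-body (θk) norm introduced in the article."""
    norms = []
    for mode in range(3):
        # Mode-k unfolding: bring mode k to the front and flatten the rest.
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        norms.append(np.linalg.norm(M, ord='nuc'))
    return norms

# Usage: a random rank-1 tensor, whose unfoldings are all rank one.
rng = np.random.default_rng(2)
a, b, c = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
T = np.einsum('i,j,k->ijk', a, b, c)
print(matricization_nuclear_norms(T))
```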


Geophysics ◽  
2019 ◽  
Vol 84 (1) ◽  
pp. V21-V32 ◽  
Author(s):  
Zhao Liu ◽  
Jianwei Ma ◽  
Xueshan Yong

Prestack seismic data denoising is an important step in seismic processing due to the development of prestack time migration. Reduced-rank filtering is a state-of-the-art method for prestack seismic denoising that uses the predictability between neighboring traces for each single frequency. Different from the original way of embedding a low-rank matrix based on the Hankel or Toeplitz transform, we have developed a new joint denoising method for multishot gathers in a line survey, which uses a new way of rearranging the data into a low-rank matrix. Inspired by video denoising, each single-shot record in the line survey can be viewed as a frame in a video sequence. Due to the high redundancy and similar event structure among the shot gathers, similar patches can be selected from different shot gathers in the line survey to form a low-rank matrix. Seismic denoising is then formulated as a low-rank minimization problem that can be further relaxed into a nuclear-norm minimization problem. A fast algorithm, called orthogonal rank-one matrix pursuit, is used to solve the nuclear-norm minimization; it avoids computing a full singular value decomposition. Our method is validated using synthetic and field data, in comparison with [Formula: see text] deconvolution and singular spectrum analysis methods.
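
A hedged sketch of an orthogonal rank-one matrix pursuit style completion step on a (patch) matrix: each iteration adds the top singular pair of the masked residual as a new rank-one atom and refits all atom weights by least squares on the observed entries. For brevity the sketch calls a full SVD, whereas in practice only the leading singular pair is needed (e.g., via power iteration); matrix sizes, the number of atoms, and the synthetic data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def or1mp_complete(M, mask, n_atoms=10):
    """Hedged sketch of an orthogonal rank-one matrix pursuit style iteration for
    matrix completion: greedily add rank-one atoms from the residual, then refit
    all atom weights by least squares restricted to the observed entries."""
    atoms, X = [], np.zeros_like(M)
    for _ in range(n_atoms):
        R = np.where(mask, M - X, 0.0)                  # residual on observed entries
        # Only the leading singular pair is needed; a full SVD is used here for brevity.
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        atoms.append(np.outer(U[:, 0], Vt[0]))          # new rank-one atom
        A = np.column_stack([a[mask] for a in atoms])   # atoms sampled at observed entries
        theta, *_ = np.linalg.lstsq(A, M[mask], rcond=None)
        X = sum(t * a for t, a in zip(theta, atoms))
    return X

# Usage on a small synthetic low-rank "gather" with missing samples.
rng = np.random.default_rng(3)
D = rng.standard_normal((40, 4)) @ rng.standard_normal((4, 40))
mask = rng.random(D.shape) < 0.6
print(np.linalg.norm(or1mp_complete(D, mask) - D) / np.linalg.norm(D))
```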

