An accelerated IRNN (iteratively reweighted nuclear norm) algorithm for nonconvex nonsmooth low-rank minimization problems

Author(s):  
Duy Nhat Phan ◽  
Thuy Ngoc Nguyen
2016 ◽  
Vol 25 (2) ◽  
pp. 829-839 ◽  
Author(s):  
Canyi Lu ◽  
Jinhui Tang ◽  
Shuicheng Yan ◽  
Zhouchen Lin

Geophysics ◽  
2019 ◽  
Vol 84 (1) ◽  
pp. V21-V32 ◽  
Author(s):  
Zhao Liu ◽  
Jianwei Ma ◽  
Xueshan Yong

Prestack seismic data denoising is an important step in seismic processing, owing to the development of prestack time migration. Reduced-rank filtering is a state-of-the-art method for prestack seismic denoising that uses the predictability between neighboring traces at each single frequency. Departing from the usual way of embedding a low-rank matrix via the Hankel or Toeplitz transform, we have developed a new joint denoising method for multishot gathers in a line survey, based on a new way of rearranging the data into a matrix of low rank. Inspired by video denoising, each single-shot record in the line survey can be viewed as a frame in a video sequence. Owing to the high redundancy and similar event structure among the shot gathers, similar patches can be selected from different shot gathers in the line survey and rearranged into a low-rank matrix. Seismic denoising is then formulated as a low-rank minimization problem, which can be further relaxed to a nuclear-norm minimization problem. A fast algorithm, called orthogonal rank-one matrix pursuit, is used to solve the nuclear-norm minimization; it avoids computing a full singular value decomposition. Our method is validated on synthetic and field data, in comparison with [Formula: see text] deconvolution and singular spectrum analysis methods.
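The solver named above, orthogonal rank-one matrix pursuit, avoids a full SVD by extracting one rank-one component per iteration. The following is only an illustrative sketch of that idea, a plain greedy rank-one deflation using power iteration on the residual, not the authors' implementation:

```python
import numpy as np

def greedy_rank_one(Y, rank, power_iters=50):
    """Greedy rank-one deflation: approximate Y by `rank` rank-one
    terms, each found by power iteration on the residual, so no
    full SVD is ever computed."""
    rng = np.random.default_rng(0)
    R = Y.astype(float).copy()          # residual
    approx = np.zeros_like(R)
    for _ in range(rank):
        v = rng.standard_normal(Y.shape[1])
        v /= np.linalg.norm(v)
        for _ in range(power_iters):    # power iteration on R^T R
            u = R @ v
            u /= np.linalg.norm(u)
            v = R.T @ u
            v /= np.linalg.norm(v)
        sigma = u @ R @ v               # leading singular value of R
        approx += sigma * np.outer(u, v)
        R -= sigma * np.outer(u, v)
    return approx
```

Each pass needs only a few matrix-vector products instead of a full decomposition, which is what makes this family of methods attractive for large patch matrices.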


2020 ◽  
Author(s):  
Yunyi Li ◽  
Li Liu ◽  
Yu Zhao ◽  
Xiefeng Cheng ◽  
Guan Gui

Group sparse representation (GSR) based methods have led to great success in various image recovery tasks by converting them into low-rank matrix minimization problems. As a widely used convex surrogate of the rank function, the nuclear norm usually leads to an over-shrinking problem, since the standard soft-thresholding operator shrinks all singular values equally. To improve the performance of traditional sparse-representation-based image compressive sensing (CS), we propose a generalized CS framework based on the GSR model, leading to a nonconvex nonsmooth low-rank minimization problem. The popular [Formula: see text]-norm and an M-estimator are employed as data-fidelity terms for the standard and robust image CS problems, respectively. To better approximate the rank of the group matrix, a family of nuclear norms is employed to address the over-shrinking problem. Moreover, we propose a flexible and effective iterative weighting strategy to control the contribution of each singular value. We then develop an iteratively reweighted nuclear norm algorithm for our generalized framework within an alternating direction method of multipliers (ADMM) framework, namely GSR-ADMM-IRNN. Experimental results demonstrate that the proposed CS framework achieves favorable reconstruction performance compared with current state-of-the-art methods, and that the robust CS framework suppresses outliers effectively.
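The heart of an iteratively reweighted nuclear norm step is weighted singular-value thresholding: small singular values receive large weights and are shrunk harder, countering the over-shrinking of uniform soft-thresholding. A toy denoising sketch, where the weight rule `lam / (sigma + eps)` and the fixed-point update are illustrative assumptions rather than the paper's exact scheme:

```python
import numpy as np

def irnn_denoise(Y, lam=0.5, eps=1e-2, n_iter=10):
    """Toy sketch of iteratively reweighted singular-value
    thresholding: weights come from the previous iterate's
    singular values, so small ones are shrunk harder."""
    X = Y.astype(float).copy()
    for _ in range(n_iter):
        s_prev = np.linalg.svd(X, compute_uv=False)
        w = lam / (s_prev + eps)            # small sigma -> large weight
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - w, 0.0)          # weighted soft-thresholding
        X = (U * s) @ Vt
    return X
```

On a matrix with one dominant and one tiny singular value, the tiny one is driven to zero while the dominant one loses almost nothing, which is exactly the behavior uniform soft-thresholding cannot produce.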


2013 ◽  
Vol 718-720 ◽  
pp. 2308-2313
Author(s):  
Lu Liu ◽  
Wei Huang ◽  
Di Rong Chen

Minimizing the nuclear norm has recently been considered as the convex relaxation of the rank minimization problem, which arises in many applications such as the Netflix challenge. A closer nonconvex relaxation, Schatten norm minimization, has been proposed to replace the NP-hard rank minimization. In this paper, an algorithm based on majorization-minimization is proposed to solve Schatten norm minimization. Numerical experiments show that Schatten norm minimization recovers low-rank matrices from fewer measurements than nuclear norm minimization, and the results also indicate that our algorithm gives a more accurate reconstruction.
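For Schatten-p norms with 0 &lt; p &lt; 1, a majorization-minimization step bounds each sigma^p by its tangent line, so every iteration reduces to a weighted singular-value thresholding. A toy matrix-completion sketch of that idea (the smoothing constant, the choice of lam, and the data re-imposition step are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def schatten_mm_complete(Y, mask, p=0.5, lam=0.2, n_iter=30):
    """MM sketch: majorize sum(sigma^p) at the current iterate by
    its tangent, a weighted nuclear norm with weights
    p * sigma^(p-1); minimize via weighted soft-thresholding,
    then re-impose the observed entries."""
    X = np.where(mask, Y, 0.0).astype(float)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        w = p * (s + 1e-3) ** (p - 1.0)     # tangent weights of sigma^p
        s = np.maximum(s - lam * w, 0.0)    # weighted soft-thresholding
        X = (U * s) @ Vt
        X = np.where(mask, Y, X)            # keep observed entries fixed
    return X
```

Because the weight grows as sigma shrinks, noise-level singular values are annihilated while large ones are barely biased, and the missing entries are filled by the resulting low-rank approximation.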


2020 ◽  
pp. 1-19
Author(s):  
Yun Cai

This paper considers block sparse recovery and rank minimization problems from incomplete linear measurements. We study the weighted [Formula: see text] [Formula: see text] norms as a nonconvex metric for recovering block sparse signals and low-rank matrices. Based on the block [Formula: see text]-restricted isometry property (abbreviated as block [Formula: see text]-RIP) and the matrix [Formula: see text]-RIP, we prove that weighted [Formula: see text] minimization guarantees exact recovery of block sparse signals and low-rank matrices. We also give stable recovery results for approximately block sparse signals and approximately low-rank matrices in the noisy measurement case. Our results provide theoretical support for block sparse recovery and rank minimization problems.
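The exact weighted norms are left to the elided formulas, so as a generic stand-in for the block-sparse recovery problem the abstract studies, here is ISTA with block soft-thresholding for the plain (unweighted) model: whole blocks with small l2 energy are zeroed at each step. All parameter choices below are illustrative assumptions:

```python
import numpy as np

def block_ista(A, y, block_size, lam=0.1, n_iter=200):
    """Generic block-sparse recovery sketch: proximal gradient
    (ISTA) on 0.5*||Ax - y||^2 + lam * sum of block l2 norms."""
    m, n = A.shape
    x = np.zeros(n)
    t = 1.0 / np.linalg.norm(A, 2) ** 2      # step from the Lipschitz constant
    for _ in range(n_iter):
        g = x - t * A.T @ (A @ x - y)        # gradient step on the data term
        for b in range(0, n, block_size):    # block soft-thresholding
            blk = g[b:b + block_size]
            nrm = np.linalg.norm(blk)
            g[b:b + block_size] = max(0.0, 1.0 - t * lam / (nrm + 1e-12)) * blk
        x = g
    return x
```

The block threshold shrinks each block's l2 norm by `t * lam` and zeroes blocks below that level, which is what distinguishes block-sparse from plain sparse recovery.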



2016 ◽  
Vol 2016 ◽  
pp. 1-13
Author(s):  
Wanping Yang ◽  
Jinkai Zhao ◽  
Fengmin Xu

The constrained rank minimization problem has applications in many fields, including machine learning, control, and signal processing. In this paper, we consider the convex constrained rank minimization problem. By introducing a new variable and penalizing an equality constraint in the objective function, we reformulate the convex objective with a rank constraint as a difference of convex functions based on closed-form solutions, which can be cast as a DC program. A stepwise linear approximation algorithm is provided for solving the reformulated model. The performance of our method is tested by applying it to affine rank minimization and max-cut problems. Numerical results demonstrate that the method is effective and achieves high recoverability; the max-cut results show that the method is feasible and provides better lower bounds and lower-rank solutions than an improved approximation algorithm using semidefinite programming, close to the results of the latest research.
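One standard DC decomposition for a rank constraint, consistent in spirit with the reformulation described above though not necessarily the authors' exact construction, penalizes the nuclear norm minus the Ky Fan k-norm (the sum of the k largest singular values), a difference of convex functions that vanishes exactly when rank(X) &lt;= k. Each DCA step linearizes the concave part at the current iterate; a minimal sketch with an assumed quadratic data term:

```python
import numpy as np

def dc_rank_projection(Y, k, rho=1.0, n_iter=20):
    """DC sketch for min 0.5*||X - Y||^2 + rho*(||X||_* - ||X||_(k)):
    linearize the concave Ky Fan part at X, then solve the convex
    subproblem by singular-value soft-thresholding."""
    X = Y.astype(float).copy()
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        W = U[:, :k] @ Vt[:k, :]            # subgradient of the Ky Fan k-norm
        # convex subproblem: soft-threshold the shifted data
        U2, s2, Vt2 = np.linalg.svd(Y + rho * W, full_matrices=False)
        X = (U2 * np.maximum(s2 - rho, 0.0)) @ Vt2
    return X
```

On the top-k singular directions the shift `rho * W` exactly cancels the threshold, so large singular values pass through unshrunk while the rest are clipped, giving a rank-k solution without the bias of plain nuclear-norm shrinkage.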


2018 ◽  
Vol 35 (11) ◽  
pp. 1549-1566 ◽  
Author(s):  
Zhichao Xue ◽  
Jing Dong ◽  
Yuxin Zhao ◽  
Chang Liu ◽  
Ryad Chellali
