Linear Total Variation Approximate Regularized Nuclear Norm Optimization for Matrix Completion

2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Xu Han ◽  
Jiasong Wu ◽  
Lu Wang ◽  
Yang Chen ◽  
Lotfi Senhadji ◽  
...  

Matrix completion, which estimates missing values in visual data, is an important topic in computer vision. Most recent studies have focused on low-rank matrix approximation via the nuclear norm. However, visual data such as images are rich in texture, which may not be well approximated by a low-rank constraint. In this paper, we propose a novel matrix completion method that combines the nuclear norm with a local geometric regularizer to solve the matrix completion problem for redundant texture images. We mainly consider one of the most commonly used graph regularizers: the total variation norm, a widely used measure for enforcing intensity continuity and recovering a piecewise smooth image. Experimental results show that the proposed method obtains encouraging results on real texture images compared with state-of-the-art methods.
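
As a rough illustration of the idea described above, the sketch below alternates nuclear-norm shrinkage (singular value thresholding) with a smoothing step standing in for the total variation regularizer, while keeping the observed entries fixed. It is a minimal sketch, not the authors' algorithm; the smoothing step uses a quadratic (Laplacian) surrogate rather than a true TV proximal operator, and the parameters tau, lam and n_iters are illustrative choices.

```python
# Minimal sketch: nuclear-norm shrinkage + a TV-like smoothing step for matrix
# completion. Not the authors' exact algorithm; parameters are illustrative.
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def smooth(X, lam):
    """One smoothing step using a discrete Laplacian (quadratic surrogate for TV)."""
    lap = np.zeros_like(X)
    lap[1:-1, :] += X[2:, :] - 2 * X[1:-1, :] + X[:-2, :]
    lap[:, 1:-1] += X[:, 2:] - 2 * X[:, 1:-1] + X[:, :-2]
    return X + lam * lap

def complete(M_obs, mask, tau=5.0, lam=0.1, n_iters=200):
    """Alternate low-rank shrinkage, smoothing, and data consistency."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iters):
        X = svt(X, tau)
        X = smooth(X, lam)
        X[mask] = M_obs[mask]   # keep the observed entries fixed
    return X
```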

2018 ◽  
Vol 68 ◽  
pp. 76-87 ◽  
Author(s):  
Jing Dong ◽  
Zhichao Xue ◽  
Jian Guan ◽  
Zi-Fa Han ◽  
Wenwu Wang

2013 ◽  
Vol 756-759 ◽  
pp. 3977-3981 ◽  
Author(s):  
Hua Xing Yu ◽  
Xiao Fei Zhang ◽  
Jian Feng Li ◽  
De Ben

In this paper, we address the angle estimation problem in a linear array with some ill sensors (partially well sensors), which only work well at random times. The array output is therefore missing some values, and this can be regarded as a low-rank matrix completion problem because the number of sources is smaller than the number of sensors. The array output, corrupted by missing values and noise, can be completed via the OptSpace method, and the angles can then be estimated from the completed output. The proposed algorithm works well for an array with some ill sensors; moreover, it is suitable for non-uniform linear arrays. Simulation results illustrate the performance of the algorithm.
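
For illustration, the sketch below treats the partially missing array output as a low-rank completion problem and fills it with a simple hard-impute loop (used here as a stand-in for OptSpace, which performs a more elaborate manifold optimization). The array geometry, source angles, noise level and the choice of which sensors are ill are assumptions made for this example, not values from the paper.

```python
# Hard-impute style completion of a partially observed array output matrix,
# as a stand-in for OptSpace. All simulation parameters below are illustrative.
import numpy as np

def low_rank_complete(Y, mask, rank, n_iters=100):
    """Iteratively project onto rank-`rank` matrices, re-imposing observed entries."""
    X = np.where(mask, Y, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        X[mask] = Y[mask]                          # keep the observed data
    return X

# Example: M sensors, K sources (K < M), T snapshots; two "ill" sensors miss samples.
M, K, T = 10, 2, 200
rng = np.random.default_rng(0)
angles = np.deg2rad([10.0, 25.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))  # ULA steering matrix
S = (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))) / np.sqrt(2)
Y = A @ S + 0.05 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
mask = np.ones((M, T), dtype=bool)
mask[[3, 7], :] = rng.random(T) > 0.5        # ill sensors report only some snapshots
Y_hat = low_rank_complete(Y, mask, rank=K)   # completed output, ready for e.g. MUSIC
```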


Author(s):  
Andrew D McRae ◽  
Mark A Davenport

This paper considers the problem of estimating a low-rank matrix from the observation of all or a subset of its entries in the presence of Poisson noise. When we observe all entries, this is a problem of matrix denoising; when we observe only a subset of the entries, this is a problem of matrix completion. In both cases, we exploit an assumption that the underlying matrix is low-rank. Specifically, we analyse several estimators, including a constrained nuclear-norm minimization program, nuclear-norm regularized least squares and a non-convex constrained low-rank optimization problem. We show that for all three estimators, with high probability, we have an upper error bound (in the Frobenius norm error metric) that depends on the matrix rank, the fraction of the elements observed and the maximal row and column sums of the true matrix. We furthermore show that the above results are minimax optimal (within a universal constant) in classes of matrices with low rank and bounded row and column sums. We also extend these results to handle the case of matrix multinomial denoising and completion.
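
The nuclear-norm regularized least-squares estimator mentioned above can be approximated with a generic proximal-gradient (ISTA) loop in which the proximal step is singular value soft-thresholding. The sketch below is such a generic loop under an ordinary squared loss, not the exact Poisson-aware estimator analysed in the paper; the step size and penalty lam are illustrative.

```python
# Generic proximal-gradient sketch of nuclear-norm regularized least squares:
# minimize 0.5 * ||P_Omega(X - Y)||_F^2 + lam * ||X||_*. Parameters illustrative.
import numpy as np

def svt(X, tau):
    """Proximal operator of tau * ||.||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nn_regularized_ls(Y, mask, lam, step=1.0, n_iters=300):
    """Proximal gradient (ISTA): gradient step on the data fit, then SVT."""
    X = np.zeros_like(Y, dtype=float)
    for _ in range(n_iters):
        grad = np.where(mask, X - Y, 0.0)     # gradient of the observed-entry loss
        X = svt(X - step * grad, step * lam)  # shrinkage (proximal) step
    return X
```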


2011 ◽  
Vol 39 (5) ◽  
pp. 2302-2329 ◽  
Author(s):  
Vladimir Koltchinskii ◽  
Karim Lounici ◽  
Alexandre B. Tsybakov

Author(s):  
Takeshi Teshima ◽  
Miao Xu ◽  
Issei Sato ◽  
Masashi Sugiyama

We consider the problem of recovering a low-rank matrix from its clipped observations. Clipping arises in many scientific areas and obstructs statistical analyses. On the other hand, matrix completion (MC) methods can recover a low-rank matrix from various information deficits by using the principle of low-rank completion. However, the current theoretical guarantees for low-rank MC do not apply to clipped matrices, because the deficit depends on the underlying values. Therefore, the feasibility of clipped matrix completion (CMC) is not trivial. In this paper, we first provide a theoretical guarantee for the exact recovery of CMC by using a trace-norm minimization algorithm. Furthermore, we propose practical CMC algorithms by extending ordinary MC methods. Our extension is to use the squared hinge loss in place of the squared loss to reduce the penalty of overestimation on clipped entries. We also propose a novel regularization term tailored for CMC. It is a combination of two trace-norm terms, and we theoretically bound the recovery error under this regularization. We demonstrate the effectiveness of the proposed methods through experiments on both synthetic and benchmark data for recommendation systems.
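
To make the squared-hinge idea concrete, the sketch below penalizes underestimation on entries observed at a known clipping ceiling C with a squared hinge max(0, C - X)^2, uses the ordinary squared loss elsewhere, and applies singular value soft-thresholding as the trace-norm proximal step. This is a minimal illustration under assumed parameters (C, lam, step, n_iters), not the authors' exact CMC algorithm or their combined two-term regularizer.

```python
# Sketch of clipped matrix completion with a squared hinge loss on clipped entries.
# Illustrative only; parameters and the single trace-norm penalty are assumptions.
import numpy as np

def cmc_loss_grad(X, Y, clipped, observed, C):
    """Squared loss on ordinary entries, squared hinge 0.5*max(0, C - X)^2 on clipped ones."""
    ordinary = observed & ~clipped
    grad = np.zeros_like(X)
    grad[ordinary] = X[ordinary] - Y[ordinary]        # d/dX of 0.5*(X - Y)^2
    grad[clipped] = -np.maximum(0.0, C - X[clipped])  # d/dX of 0.5*max(0, C - X)^2
    return grad

def clipped_mc(Y, observed, C, lam=1.0, step=0.5, n_iters=300):
    """Proximal gradient with singular value soft-thresholding as the trace-norm prox."""
    clipped = observed & (Y >= C)                 # entries observed at the ceiling
    X = np.where(observed, Y, 0.0).astype(float)
    for _ in range(n_iters):
        X = X - step * cmc_loss_grad(X, Y, clipped, observed, C)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(np.maximum(s - step * lam, 0.0)) @ Vt
    return X
```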

