Enhancing Matrix Completion Using a Modified Second-Order Total Variation

2018 · Vol 2018 · pp. 1-12
Author(s): Wendong Wang, Jianjun Wang

In this paper, we propose a new method for the matrix completion problem. Unlike most existing matrix completion methods, which pursue only the low rank of the underlying matrices, the proposed method optimizes low rank and smoothness simultaneously, so that the two priors reinforce each other and yield better performance. In particular, with the introduction of a modified second-order total variation, the proposed method becomes very competitive even when compared with recently emerged matrix completion methods that also combine the low-rank and smoothness priors of matrices. An efficient algorithm is developed to solve the induced optimization problem. Extensive experiments further confirm the superior performance of the proposed method over many state-of-the-art methods.
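As a rough illustration of how a low-rank prior and a smoothness prior can be optimized jointly, the following NumPy sketch alternates a gradient step on the data-fit term and a quadratic second-order difference penalty with singular value thresholding. The penalty, step size, and parameter names are illustrative assumptions, not the authors' modified second-order total variation model.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal map of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete_lowrank_smooth(M, mask, tau=1.0, lam=0.1, step=0.1, n_iter=300):
    """Proximal-gradient sketch (not the paper's algorithm) for
        0.5*||mask*(X - M)||_F^2 + (lam/2)*||D2 X||_F^2 + tau*||X||_*,
    where D2 takes second-order differences along the rows, a crude stand-in
    for a second-order total-variation smoothness prior."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        grad_fit = mask * (X - M)                     # gradient of the data-fit term
        D2X = X[2:, :] - 2 * X[1:-1, :] + X[:-2, :]   # second-order differences
        grad_tv = np.zeros_like(X)
        grad_tv[2:, :] += D2X
        grad_tv[1:-1, :] -= 2 * D2X
        grad_tv[:-2, :] += D2X                        # apply D2^T to D2X
        X = svt(X - step * (grad_fit + lam * grad_tv), step * tau)
    return X
```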

2019 · Vol 17 (05) · pp. 689-713
Author(s): Xueying Zeng, Lixin Shen, Yuesheng Xu, Jian Lu

The low-rank matrix completion problem, which aims to recover a matrix from partial observations of its entries, has received much attention in fields such as image processing and machine learning. The rank of a matrix may be measured by the ℓ₀ norm of the vector of its singular values. Because of the nonconvexity and discontinuity of the ℓ₀ norm, solving the low-rank matrix completion problem, which is NP-hard, poses computational challenges. In this paper, we propose a constrained matrix completion model in which a novel nonconvex continuous rank surrogate is used to approximate the rank function of a matrix, promote low rank of the recovered matrix, and address the computational challenges. The proposed rank surrogate differs from the convex nuclear norm and other state-of-the-art nonconvex surrogates in that it alleviates the discontinuity and nonconvexity of the rank function through a local relaxation of the ℓ₀ norm, and it therefore possesses several desirable properties. These properties ensure that it accurately approximates the rank function when an appropriate relaxation parameter is chosen. We further develop an efficient iterative algorithm to solve the resulting model and propose strategies for automatically updating the relaxation parameter to ensure global convergence in practice and to speed up the algorithm. We establish theoretical convergence results for the proposed algorithm. Experimental results demonstrate significant improvements of the proposed model and the associated algorithm over state-of-the-art methods in both recoverability and computational efficiency.
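To make the idea of a continuous, nonconvex rank surrogate concrete, here is a small sketch that applies a generic smoothed-ℓ₀ function to the singular values; the specific surrogate s²/(s² + ε) is an assumption chosen for illustration, not the surrogate proposed in the paper.

```python
import numpy as np

def smoothed_rank(X, eps):
    """A continuous, nonconvex surrogate for rank(X): replace the l0 "norm"
    of the singular values by sum_i s_i^2 / (s_i^2 + eps), which tends to the
    exact rank as the relaxation parameter eps -> 0.
    (Generic smoothed-l0 surrogate, assumed for illustration only.)"""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s**2 / (s**2 + eps)))

# Example: a rank-2 matrix; the surrogate approaches 2 as eps shrinks.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
for eps in (1.0, 1e-2, 1e-4):
    print(eps, smoothed_rank(X, eps))
```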


2020 · Vol 34 (04) · pp. 3906-3913
Author(s): Robert Ganian, Iyad Kanj, Sebastian Ordyniak, Stefan Szeider

We consider a fundamental matrix completion problem in which we are given an incomplete matrix and a set of constraints modeled as a CSP instance. The goal is to complete the matrix subject to the input constraints so that the completed matrix can be clustered into few subspaces of low rank. This problem generalizes several problems in data mining and machine learning, including the problem of completing a matrix into one of minimum rank. Beyond its ubiquitous applications in machine learning, the problem has strong connections to information theory via binary linear codes, and variants of it have been studied extensively from that perspective. We formalize the problem and study its classical and parameterized complexity, drawing a detailed complexity landscape with respect to several natural parameters that are desirably small and with respect to several well-studied CSP fragments.


Author(s): Jean Walrand

Abstract Online learning algorithms update their estimates as additional observations are made. Section 12.1 explains a simple example: online linear regression. The stochastic gradient projection algorithm is a general technique for updating estimates based on additional observations; it is widely used in machine learning. Section 12.2 presents the theory behind that algorithm. When analyzing large amounts of data, one faces the problems of identifying the most relevant data and of using the available data efficiently. Section 12.3 explains three examples of how these questions are addressed: the LASSO algorithm, compressed sensing, and the matrix completion problem. Section 12.4 discusses deep neural networks, for which the stochastic gradient projection algorithm is easy to implement.
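A minimal sketch of the online linear regression update described in Section 12.1, assuming a squared-error loss and a fixed step size; the projection step of the full stochastic gradient projection algorithm is omitted, and the data and parameter names are made up for the example.

```python
import numpy as np

def online_linear_regression(stream, dim, step=0.01):
    """One stochastic-gradient step per observation: after seeing (x, y),
    move theta along the negative gradient of the squared prediction error.
    (The stochastic gradient *projection* algorithm would additionally project
    theta back onto a feasible set after each step; that step is omitted.)"""
    theta = np.zeros(dim)
    for x, y in stream:
        theta -= step * (x @ theta - y) * x
    return theta

# Toy usage: noisy samples of y = 2*x1 - x2.
rng = np.random.default_rng(0)
data = [(x, 2 * x[0] - x[1] + 0.1 * rng.standard_normal())
        for x in rng.standard_normal((2000, 2))]
print(online_linear_regression(data, dim=2))  # roughly [2, -1]
```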


2021
Author(s): Ren Wang, Pengzhi Gao, Meng Wang

Abstract This paper studies the robust matrix completion problem for time-varying models. Leveraging the low-rank property and the temporal information of the data, we develop novel methods to recover the original data from partially observed and corrupted measurements. We show that the reconstruction performance can be improved if one further exploits the information about the sparse corruptions in addition to the temporal correlations among a sequence of matrices. The dynamic robust matrix completion problem is formulated as a nonconvex optimization problem, and the recovery error is quantified analytically and proved to decay at the same order as that of the state-of-the-art method when there is no corruption. A fast iterative algorithm with a convergence guarantee to a stationary point is proposed to solve the nonconvex problem. Experiments on synthetic data and a real video dataset demonstrate the effectiveness of our method.
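As a rough single-frame illustration of separating a low-rank component from sparse corruptions on partially observed entries, one could use a proximal-gradient sketch such as the one below. The temporal coupling across a sequence of matrices, which is the paper's main contribution, is not modeled here, and all parameter values are assumptions.

```python
import numpy as np

def soft(x, tau):
    """Entrywise soft thresholding: proximal map of tau * l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal map of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def robust_complete(M, mask, tau=1.0, lam=0.05, step=0.4, n_iter=300):
    """Fit the observed entries of M with a low-rank part L plus a sparse
    corruption part S (a static sketch, not the paper's dynamic method):
        0.5*||mask*(L + S - M)||_F^2 + tau*||L||_* + lam*||S||_1."""
    L = np.zeros_like(M, dtype=float)
    S = np.zeros_like(M, dtype=float)
    for _ in range(n_iter):
        R = mask * (L + S - M)              # residual on observed entries
        L = svt(L - step * R, step * tau)   # proximal step for the nuclear norm
        S = soft(S - step * R, step * lam)  # proximal step for the l1 penalty
    return L, S
```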


2018 · Vol 2018 · pp. 1-12
Author(s): Minhui Wang, Chang Tang, Jiajia Chen

Drug-target interactions play an important role in biomedical drug discovery and development. However, determining them experimentally is expensive and time-consuming, so developing computational techniques for drug-target interaction prediction is urgent and of practical significance. In this work, we propose an effective computational model of dual Laplacian graph regularized matrix completion, referred to as DLGRMC, to infer unknown drug-target interactions. Specifically, DLGRMC transforms drug-target interaction prediction into a matrix completion problem, in which the potential interactions between drugs and targets are obtained from the prediction scores after the matrix completion procedure. In DLGRMC, the pairwise chemical-structure similarities of drugs and the pairwise genomic-sequence similarities of targets are fully exploited through a dual Laplacian graph regularization term; that is, drugs with similar chemical structures are more likely to interact with similar targets, and targets with similar genomic sequences are more likely to interact with similar drugs. In addition, during the matrix completion process, a binary indicator matrix marking the indices of the observed drug-target interactions is used to preserve the experimentally confirmed interactions. Furthermore, we develop an alternating iterative strategy based on the Augmented Lagrange Multiplier algorithm to solve the constrained matrix completion problem. We evaluate DLGRMC on five benchmark datasets, and the results show that it outperforms several state-of-the-art approaches in terms of AUPR values and PR curves under 10-fold cross validation. Case studies also demonstrate that DLGRMC can successfully predict most of the experimentally validated drug-target interactions.
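The combination of a low-rank term with the two graph regularizers can be written down compactly. The sketch below is a generic formulation assumed for illustration (the symbols F, Y, W, L_d, L_t and the weights mu, lam_d, lam_t are not taken from the paper); it shows how a drug-side Laplacian acts on the rows of the score matrix and a target-side Laplacian on its columns.

```python
import numpy as np

def dual_laplacian_objective(F, Y, W, L_d, L_t, mu, lam_d, lam_t):
    """Illustrative objective in the spirit of dual graph regularized completion.

    F   : predicted drug-target score matrix (drugs x targets)
    Y   : observed interaction matrix; W is a binary indicator of known entries
    L_d : graph Laplacian built from drug chemical-structure similarities
    L_t : graph Laplacian built from target genomic-sequence similarities
    (Generic formulation; the exact DLGRMC model may weight terms differently.)"""
    data_fit = 0.5 * np.linalg.norm(W * (F - Y), "fro") ** 2
    nuclear = mu * np.linalg.svd(F, compute_uv=False).sum()  # nuclear norm of F
    smooth_d = lam_d * np.trace(F.T @ L_d @ F)   # similar drugs -> similar rows
    smooth_t = lam_t * np.trace(F @ L_t @ F.T)   # similar targets -> similar columns
    return data_fit + nuclear + smooth_d + smooth_t
```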


Author(s): Juan Geng, Laisheng Wang, Xiuyu Wang

Abstract In the matrix completion problem, most methods handle the nuclear norm model by relaxing it to a nuclear norm regularized least squares problem. In this paper, we propose a new unconstrained model for the matrix completion problem based on the nuclear norm and an indicator function, and we design a proximal point algorithm (PPA-IF) to solve it. The convergence of the algorithm is then established rigorously. Finally, we report numerical results for solving noiseless and noisy matrix completion problems and for image reconstruction.
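To show the two building blocks of such a nuclear-norm-plus-indicator model, here is a simple alternating sketch: singular value thresholding for the nuclear norm term and projection onto the set of matrices that agree with the observed entries for the indicator term. This is not the PPA-IF iteration itself, only an illustration of the two proximal maps involved; the threshold and iteration count are assumptions.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal map of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(M, mask, tau=1.0, n_iter=500):
    """Alternate the two proximal maps of the nuclear-norm + indicator model:
    SVT for the nuclear norm, and projection onto
    {X : X[mask] == M[mask]} for the indicator of the observed entries.
    (A basic sketch of the building blocks, not the PPA-IF iteration.)"""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        X = svt(X, tau)            # proximal map of the nuclear norm
        X = np.where(mask, M, X)   # enforce agreement on observed entries
    return X
```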

