Bayesian inference for adaptive low rank and sparse matrix estimation

2018 ◽  
Vol 291 ◽  
pp. 71-83 ◽  
Author(s):  
Xixi Jia ◽  
Xiangchu Feng ◽  
Weiwei Wang ◽  
Chen Xu ◽  
Lei Zhang

2021 ◽
Author(s):  
Christian Borgs ◽  
Jennifer T. Chayes ◽  
Devavrat Shah ◽  
Christina Lee Yu

Matrix estimation or completion has served as a canonical mathematical model for recommendation systems. More recently, it has emerged as a fundamental building block for data analysis, as a first step to denoise the observations and predict missing values. Since the dawn of e-commerce, similarity-based collaborative filtering has been used as a heuristic for matrix estimation. At its core, it encodes typical human behavior: you ask your friends to recommend what you may like or dislike. Algorithmically, friends are similar “rows” or “columns” of the underlying matrix. The traditional heuristic for computing similarities between rows places costly requirements on the density of observed entries. In “Iterative Collaborative Filtering for Sparse Matrix Estimation” by Christian Borgs, Jennifer T. Chayes, Devavrat Shah, and Christina Lee Yu, the authors introduce an algorithm that computes similarities in sparse datasets by comparing expanded local neighborhoods in the associated data graph: in effect, you ask friends of your friends to recommend what you may like or dislike. This work provides bounds on the maximum entry-wise error of the estimate for low-rank and approximately low-rank matrices, which are stronger than the aggregate mean-squared-error bounds found in classical works. The algorithm is also interpretable, scalable, and amenable to distributed implementation.
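The neighborhood-expansion idea can be made concrete. The Python sketch below is illustrative only, not the authors' implementation: the names `build_graph`, `neighborhood_profile`, and `estimate_entry`, the `radius` parameter, and the Gaussian similarity kernel are all assumptions. It builds the bipartite data graph from the observed entries, summarizes each row by the values seen within a few hops of it, and estimates a missing entry by a similarity-weighted average over the rows that did rate that column.

```python
import numpy as np
from collections import defaultdict

def build_graph(obs):
    """obs maps (row, col) -> observed value. Returns adjacency lists
    of the bipartite data graph linking rows to the columns they rate."""
    row_adj, col_adj = defaultdict(list), defaultdict(list)
    for (i, j) in obs:
        row_adj[i].append(j)
        col_adj[j].append(i)
    return row_adj, col_adj

def neighborhood_profile(u, obs, row_adj, col_adj, radius=2):
    """Average observed value seen at each column reached within
    `radius` row->column hops from row u (BFS on the bipartite graph)."""
    frontier_rows, seen_rows = {u}, {u}
    profile = defaultdict(list)
    for _ in range(radius):
        next_rows = set()
        for i in frontier_rows:
            for j in row_adj[i]:
                profile[j].append(obs[(i, j)])
                for k in col_adj[j]:
                    if k not in seen_rows:
                        seen_rows.add(k)
                        next_rows.add(k)
        frontier_rows = next_rows
    return {j: float(np.mean(vals)) for j, vals in profile.items()}

def similarity(pu, pv):
    """Compare two profiles on their common columns; an illustrative
    Gaussian kernel of the mean squared difference."""
    common = set(pu) & set(pv)
    if not common:
        return 0.0
    d = np.mean([(pu[j] - pv[j]) ** 2 for j in common])
    return float(np.exp(-d))

def estimate_entry(u, j, obs, profiles, col_adj):
    """Weighted average of column j's observed values over rows similar
    to u -- asking friends of friends for a recommendation."""
    w, s = 0.0, 0.0
    for v in col_adj[j]:
        sim = similarity(profiles[u], profiles[v])
        w += sim * obs[(v, j)]
        s += sim
    return w / s if s > 0 else np.nan

# Usage: graph = build_graph(obs)
#        profiles = {i: neighborhood_profile(i, obs, *graph)
#                    for i in {i for (i, _) in obs}}
```

Because the profiles are gathered at radius 2 or more, two rows can be compared even when they share no directly rated column, which is exactly where the classical overlap-based heuristic breaks down on sparse data.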


Author(s):  
Sampurna Biswas ◽  
Sunrita Poddar ◽  
Soura Dasgupta ◽  
Raghuraman Mudumbai ◽  
Mathews Jacob

2014 ◽  
Vol 3 (2) ◽  
pp. 231-250 ◽  
Author(s):  
Sheng-Long Zhou ◽  
Nai-Hua Xiu ◽  
Zi-Yan Luo ◽  
Ling-Chen Kong

2018 ◽  
Vol 15 (8) ◽  
pp. 118-125 ◽
Author(s):  
Junsheng Mu ◽  
Xiaojun Jing ◽  
Hai Huang ◽  
Ning Gao

Axioms ◽  
2018 ◽  
Vol 7 (3) ◽  
pp. 51 ◽  
Author(s):  
Carmela Scalone ◽  
Nicola Guglielmi

In this article we present and discuss a two-step methodology to find the closest low-rank completion of a large sparse matrix. Given a large sparse matrix M, the method consists of fixing the rank to r and then looking for the closest rank-r matrix X to M, where the distance is measured in the Frobenius norm. A key element in the solution of this matrix nearness problem is the use of a constrained gradient system of matrix differential equations. The obtained results, compared with those of different approaches, show that the method behaves correctly and is competitive with the ones available in the literature.
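As a rough illustration of the gradient-flow idea, the sketch below is a minimal stand-in, not the authors' constrained ODE system: it enforces the rank-r constraint by construction through a plain factored parametrization X = A Bᵀ, integrates the resulting gradient system with forward Euler, and the step size `h`, iteration count, and random initialization are assumptions.

```python
import numpy as np

def low_rank_completion(M, mask, r, h=1e-2, steps=10000, seed=0):
    """Flow X = A @ B.T downhill on f(X) = 0.5 * ||P_Omega(X - M)||_F^2,
    discretized with forward Euler. M: (m, n) array (values outside
    `mask` are ignored); mask: boolean (m, n) array of observed entries;
    r: target rank."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    A = rng.standard_normal((m, r)) / np.sqrt(r)
    B = rng.standard_normal((n, r)) / np.sqrt(r)
    for _ in range(steps):
        R = mask * (A @ B.T - M)   # residual on observed entries only
        A += h * (-R @ B)          # Euler step for d/dt A = -grad_A f
        B += h * (-R.T @ A)        # Euler step for d/dt B = -grad_B f
    return A @ B.T                 # rank-r (at most) completion found

# Tiny usage example on a synthetic rank-2 matrix, 40% observed:
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
    mask = rng.random(truth.shape) < 0.4
    X = low_rank_completion(truth * mask, mask, r=2)
    fit = np.linalg.norm(mask * (X - truth)) / np.linalg.norm(mask * truth)
    print("relative misfit on observed entries:", fit)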

