Robust Asymmetric Bayesian Adaptive Matrix Factorization

Author(s):  
Xin Guo ◽  
Boyuan Pan ◽  
Deng Cai ◽  
Xiaofei He

Low-rank matrix factorizations (LRMF) have attracted much attention due to their wide range of applications in computer vision, such as image inpainting and video denoising. Most existing methods assume that the loss between an observed measurement matrix and its bilinear factorization follows a symmetric distribution, such as the Gaussian or Gamma families. However, in real-world situations this assumption is often too idealized, because pictures taken under varying illumination and angles may suffer from multi-peaked, asymmetric, and irregular noise. To address these problems, this paper assumes that the loss follows a mixture of Asymmetric Laplace distributions and proposes the robust Asymmetric Laplace Adaptive Matrix Factorization model (ALAMF) under a Bayesian matrix factorization framework. The Laplace assumption makes our model more robust, and the asymmetric attribute makes it more flexible and adaptable to real-world noise. A variational method is then devised for model inference. We compare ALAMF with other state-of-the-art matrix factorization methods on both synthetic data sets and real-world applications. The experimental results demonstrate the effectiveness of our proposed approach.
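As a rough illustration of the loss the abstract describes (not the authors' full mixture model or variational inference), the sketch below fits a low-rank factorization under a single asymmetric Laplace likelihood, whose negative log-likelihood on a residual is the pinball loss. The rank, asymmetry parameter tau, step size, and synthetic noise are illustrative assumptions.

```python
# Minimal sketch: low-rank factorization X ~ U V^T with an asymmetric Laplace
# (pinball) loss on the residuals, fitted by plain subgradient descent.
import numpy as np

def pinball(residual, tau):
    """Pinball (check) loss: NLL of a zero-mode asymmetric Laplace, up to scale."""
    return np.where(residual >= 0, tau * residual, (tau - 1.0) * residual)

def asymmetric_lrmf(X, rank=3, tau=0.3, lr=5e-3, n_iters=3000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(n_iters):
        R = X - U @ V.T                        # residual matrix
        G = np.where(R >= 0, tau, tau - 1.0)   # d pinball / d residual
        grad_U, grad_V = -G @ V, -G.T @ U      # chain rule through R = X - U V^T
        U -= lr * grad_U
        V -= lr * grad_V
    return U, V

# Usage: recover a rank-3 matrix corrupted by one-sided (asymmetric) noise.
rng = np.random.default_rng(1)
X_true = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
X_noisy = X_true + rng.exponential(0.5, size=X_true.shape)
U, V = asymmetric_lrmf(X_noisy, rank=3, tau=0.3)
print("mean abs error:", np.abs(X_true - U @ V.T).mean(),
      "pinball loss:", pinball(X_noisy - U @ V.T, 0.3).sum())
```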

Author(s):  
Kavitha G L

We deal with real-world images that contain numerous faces captioned with corresponding names, and these captions may be wrongly annotated. The face naming technique we propose exploits such weakly labeled image data and aims at labeling each face in an image accurately in a self-regulated manner. This is a challenging task because of the very large appearance variation in the images, as well as the potential mismatch between images and their captions. This paper introduces a method called Refined Low-Rank Regularization (RLRR), which productively employs the weakly named image information to determine a low-rank matrix obtained by examining the subspace structures of the reconstructed data; from this reconstruction a discriminative matrix is deduced. In addition, the Large Margin Nearest Neighbor (LMNN) method is used to label an image, which leads to another kernel matrix based on the Mahalanobis distances of the data. The two consistent facial matrices can be fused to enhance the quality of each other and are used in a new iterative method to infer the name of each facial image. Experimental results on synthetic and real-world data sets validate the effectiveness of the proposed method.
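As a loosely related illustration (not the paper's RLRR/LMNN pipeline), the sketch below obtains a low-rank reconstruction of a face-feature matrix via singular value thresholding and then combines two affinity matrices with a convex weight as a crude stand-in for the fusion step; the threshold, the placeholder kernel, and the fusion weight are assumptions.

```python
# Minimal sketch: low-rank reconstruction plus a simple affinity fusion.
import numpy as np

def low_rank_reconstruction(F, threshold=1.0):
    """Soft-threshold the singular values of F (nuclear-norm style shrinkage)."""
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    s_shrunk = np.maximum(s - threshold, 0.0)
    return (U * s_shrunk) @ Vt

def fuse_affinities(A, B, alpha=0.5):
    """Convex combination of two (max-normalized) affinity matrices."""
    norm = lambda M: M / (np.abs(M).max() + 1e-12)
    return alpha * norm(A) + (1 - alpha) * norm(B)

# Usage with random stand-in data: 100 faces, 64-dimensional features.
rng = np.random.default_rng(0)
F = rng.standard_normal((100, 64))
L = low_rank_reconstruction(F, threshold=2.0)
A_lowrank = L @ L.T            # affinity from the low-rank reconstruction
A_metric = F @ F.T             # placeholder for a metric-learning (LMNN-style) kernel
print(fuse_affinities(A_lowrank, A_metric).shape)
```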


Author(s):  
K Sobha Rani

Collaborative filtering suffers from the problems of data sparsity and cold start, which dramatically degrade recommendation performance. To help resolve these issues, we propose TrustSVD, a trust-based matrix factorization technique. By analyzing social trust data from four real-world data sets, we conclude that not only the explicit but also the implicit influence of both ratings and trust should be taken into consideration in a recommendation model. Hence, we build on top of SVD++, a state-of-the-art recommendation algorithm that inherently involves the explicit and implicit influence of rated items, by further incorporating both the explicit and implicit influence of trusted users on the prediction of items for an active user. To our knowledge, this work is the first to extend SVD++ with social trust information. Experimental results on the four data sets demonstrate that our approach TrustSVD achieves better accuracy than ten other counterparts and can better handle the data sparsity and cold-start issues.
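A sketch of the kind of prediction rule the abstract describes: SVD++'s rating predictor augmented with an implicit-feedback term contributed by the users whom the active user trusts. The variable names, shapes, and normalization below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an SVD++-style predictor extended with trusted users.
import numpy as np

def trust_svd_predict(mu, b_u, b_j, p_u, q_j, Y_rated, W_trusted):
    """Predict user u's rating of item j.

    mu        : global rating mean
    b_u, b_j  : user and item biases
    p_u, q_j  : latent factors of the user and item (length-k vectors)
    Y_rated   : (|I_u|, k) implicit factors of items the user has rated
    W_trusted : (|T_u|, k) latent factors of users the user trusts
    """
    implicit_items = Y_rated.sum(axis=0) / np.sqrt(max(len(Y_rated), 1))
    implicit_trust = W_trusted.sum(axis=0) / np.sqrt(max(len(W_trusted), 1))
    return mu + b_u + b_j + q_j @ (p_u + implicit_items + implicit_trust)

# Usage with toy factors (k = 8 latent dimensions).
rng = np.random.default_rng(0)
k = 8
print(trust_svd_predict(3.5, 0.1, -0.2,
                        rng.normal(size=k), rng.normal(size=k),
                        rng.normal(size=(12, k)), rng.normal(size=(4, k))))
```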


Author(s):  
Daniel Povey ◽  
Gaofeng Cheng ◽  
Yiming Wang ◽  
Ke Li ◽  
Hainan Xu ◽  
...  

Author(s):  
Yinlei Hu ◽  
Bin Li ◽  
Falai Chen ◽  
Kun Qu

Unsupervised clustering is a fundamental step of single-cell RNA sequencing data analysis, and this need has inspired several clustering methods for classifying cells in single-cell RNA sequencing data. However, accurate prediction of the cell clusters remains a substantial challenge. In this study, we propose a new algorithm for single-cell RNA sequencing data clustering based on Sparse Optimization and low-rank matrix factorization (scSO). We applied our scSO algorithm to analyze multiple benchmark datasets and showed that the cluster number predicted by scSO was close to the number of reference cell types and that most cells were correctly classified. Our scSO algorithm is available at https://github.com/QuKunLab/scSO. Overall, this study demonstrates a potent cell clustering approach that can help researchers distinguish cell types in single-cell RNA sequencing data.
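As a rough stand-in for the pipeline the abstract outlines (not the authors' scSO algorithm), the sketch below clusters cells by factorizing a cells-by-genes count matrix into nonnegative low-rank factors and assigning each cell to its dominant component. The log transform, rank, and synthetic data are illustrative assumptions.

```python
# Minimal sketch: low-rank NMF of an expression matrix used for cell clustering.
import numpy as np
from sklearn.decomposition import NMF

def cluster_cells(expression, n_clusters=5, seed=0):
    """expression: (n_cells, n_genes) nonnegative count matrix."""
    X = np.log1p(expression)                     # simple variance-stabilizing transform
    model = NMF(n_components=n_clusters, init="nndsvd",
                max_iter=500, random_state=seed)
    W = model.fit_transform(X)                   # (n_cells, n_clusters) cell loadings
    return W.argmax(axis=1)                      # label = dominant low-rank component

# Usage with synthetic counts: 300 cells, 1500 genes, 3 planted groups
# distinguished by disjoint blocks of highly expressed marker genes.
rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(300, 1500)).astype(float)
for g in range(3):
    counts[g * 100:(g + 1) * 100, g * 500:(g + 1) * 500] += rng.poisson(
        5.0, size=(100, 500))
print(np.bincount(cluster_cells(counts, n_clusters=3)))
```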


Algorithmica ◽  
2009 ◽  
Vol 56 (3) ◽  
pp. 313-332 ◽  
Author(s):  
Epameinondas Fritzilas ◽  
Martin Milanič ◽  
Sven Rahmann ◽  
Yasmin A. Rios-Solis

2020 ◽  
Author(s):  
Sajad Fathi Hafshejani ◽  
Saeed Vahidian ◽  
Zahra Moaberfard ◽  
Reza Alikhani ◽  
Bill Lin

Low-rank matrix factorization problems such as nonnegative matrix factorization (NMF) can be categorized as clustering or dimension reduction techniques. The latter denotes techniques designed to find a representation of a high-dimensional dataset in a lower-dimensional manifold without a significant loss of information; if such a representation exists, it ought to retain the most relevant features of the dataset. Many linear dimensionality reduction techniques can be formulated as a matrix factorization. In this paper, we combine the conjugate gradient (CG) method with the Barzilai-Borwein (BB) gradient method and propose a BB-scaled CG method for NMF problems. The new method does not require computing or storing matrices associated with the Hessian of the objective function. Moreover, adopting a suitable BB step size along with a proper nonmonotone strategy, controlled by the convex parameter $\eta_k$, yields a new algorithm that can significantly improve the CPU time, efficiency, and number of function evaluations. A convergence result is established, and numerical comparisons on both synthetic and real-world datasets show that the proposed method is efficient and outperforms existing methods.
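The sketch below illustrates only the Barzilai-Borwein ingredient, on a plain alternating projected-gradient NMF solver rather than the paper's CG-BB method; the rank, iteration counts, and the crude step-size clipping (standing in for the nonmonotone strategy) are assumptions.

```python
# Minimal sketch: alternating projected gradient for
# min_{W,H >= 0} 0.5 * ||X - W H||_F^2 with BB step sizes per block.
import numpy as np

def bb_step(x, x_prev, g, g_prev, fallback=1e-3):
    """BB1 step (s^T s)/(s^T y), crudely clipped in place of a nonmonotone safeguard."""
    s, y = (x - x_prev).ravel(), (g - g_prev).ravel()
    denom = s @ y
    return float(np.clip((s @ s) / denom, 1e-6, 1.0)) if denom > 1e-12 else fallback

def nmf_bb(X, rank=10, n_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W, H = rng.random((m, rank)), rng.random((rank, n))
    gW_prev = (W @ H - X) @ H.T; W_prev = W.copy()
    gH_prev = W.T @ (W @ H - X); H_prev = H.copy()
    W = np.maximum(W - 1e-3 * gW_prev, 0.0)        # one plain gradient step to seed BB
    H = np.maximum(H - 1e-3 * gH_prev, 0.0)
    for _ in range(n_iters):
        gW = (W @ H - X) @ H.T
        alpha = bb_step(W, W_prev, gW, gW_prev)
        W_prev, gW_prev = W.copy(), gW
        W = np.maximum(W - alpha * gW, 0.0)        # project onto the nonnegative orthant
        gH = W.T @ (W @ H - X)
        beta = bb_step(H, H_prev, gH, gH_prev)
        H_prev, gH_prev = H.copy(), gH
        H = np.maximum(H - beta * gH, 0.0)
    return W, H

# Usage: factor a random nonnegative 100 x 80 matrix at rank 10.
X = np.random.default_rng(1).random((100, 80))
W, H = nmf_bb(X, rank=10)
print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```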


Author(s):  
Jun Zhou ◽  
Longfei Li ◽  
Ziqi Liu ◽  
Chaochao Chen

Recently, the Factorization Machine (FM) has become increasingly popular for recommendation systems due to its effectiveness in finding informative interactions between features. Usually, the weights for the interactions are learned as a low-rank weight matrix, formulated as the inner product of two low-rank matrices; this low-rank structure helps improve the generalization ability of the Factorization Machine. However, choosing the rank properly usually requires running the algorithm many times with different ranks, which is clearly inefficient for large-scale datasets. To alleviate this issue, we propose an Adaptive Boosting framework for the Factorization Machine (AdaFM), which can adaptively search for a proper rank for different datasets without re-training. Instead of using a fixed rank, the proposed algorithm gradually increases its rank according to its performance until the performance no longer improves. Extensive experiments are conducted to validate the proposed method on multiple large-scale datasets, and the results demonstrate that it can be more effective than state-of-the-art Factorization Machines.
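The sketch below shows the low-rank interaction structure the abstract refers to: a 2-way factorization machine whose pairwise weights form the matrix V V^T, so model capacity is set by the number of columns (the rank) of V. The commented rank-growth loop, with its hypothetical train_fm and validation_error helpers, only illustrates the adaptive idea and is not the authors' AdaFM procedure.

```python
# Minimal sketch: factorization-machine prediction with low-rank pairwise weights.
import numpy as np

def fm_predict(x, w0, w, V):
    """2-way FM: w0 + w^T x + sum_{i<j} (V V^T)_{ij} x_i x_j."""
    linear = w0 + w @ x
    # O(nk) identity: sum_{i<j} <v_i, v_j> x_i x_j
    #   = 0.5 * (||V^T x||^2 - sum_i ||v_i||^2 x_i^2)
    pairwise = 0.5 * (np.sum((V.T @ x) ** 2) - np.sum((V ** 2).T @ (x ** 2)))
    return linear + pairwise

# Usage with toy parameters: 20 features, rank-4 interaction factors.
rng = np.random.default_rng(0)
n, k = 20, 4
print(fm_predict(rng.random(n), 0.1, rng.normal(size=n), rng.normal(size=(n, k))))

# Illustrative rank-growth loop (hypothetical train_fm / validation_error helpers):
# rank, best = 2, np.inf
# while True:
#     model = train_fm(train_data, rank)       # returns (w0, w, V) with V of shape (n, rank)
#     err = validation_error(model, valid_data)
#     if err >= best:                          # stop once a larger rank no longer helps
#         break
#     best, rank = err, rank + 2
```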

