Probabilistic Non-Negative Matrix Factorization with Binary Components

Mathematics, 2021, Vol. 9 (11), pp. 1189
Author(s): Xindi Ma, Jie Gao, Xiaoyu Liu, Taiping Zhang, Yuanyan Tang

Non-negative matrix factorization finds a basis matrix and a weight matrix whose product approximates a given non-negative matrix. It has proven to be a powerful low-rank decomposition technique for non-negative multivariate data. However, its performance largely depends on the assumption of a fixed number of features. This work proposes a new probabilistic non-negative matrix factorization that factorizes a non-negative matrix into a constrained low-rank factor matrix and a non-negative weight matrix. To learn the latent binary features and the number of features automatically, deterministic Indian buffet process variational inference is introduced to obtain the binary factor matrix. Further, the weight matrix is assigned an exponential prior. To approximate the true posterior distributions of the two factor matrices, a variational Bayesian exponential-Gaussian inference model is established. Comparative experiments on synthetic and real-world datasets show the efficacy of the proposed method.
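As an illustrative aside, and not the paper's method, the minimal sketch below shows only the factorization target the abstract describes: X ≈ Z W with a binary factor matrix Z and a non-negative weight matrix W. It fixes the feature number K and uses naive alternating updates, whereas the paper infers the feature number with an Indian buffet process prior and variational Bayesian inference; all function and variable names here are hypothetical.

import numpy as np

def binary_nmf_sketch(X, K, n_iter=30, seed=0):
    """Fit X ~ Z @ W with binary Z and non-negative W by naive alternating updates."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    Z = rng.integers(0, 2, size=(N, K)).astype(float)  # binary factor matrix
    W = rng.random((K, D))                              # non-negative weight matrix

    for _ in range(n_iter):
        # Multiplicative rule keeps W non-negative (standard NMF-style update for fixed Z).
        W *= (Z.T @ X) / (Z.T @ Z @ W + 1e-9)

        # Coordinate-wise update of Z: keep whichever of {0, 1} lowers the row residual.
        for n in range(N):
            for k in range(K):
                errs = []
                for candidate in (0.0, 1.0):
                    Z[n, k] = candidate
                    errs.append(np.linalg.norm(X[n] - Z[n] @ W))
                Z[n, k] = float(np.argmin(errs))
    return Z, W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    Z_true = rng.integers(0, 2, size=(100, 4)).astype(float)
    W_true = rng.random((4, 20))
    X = Z_true @ W_true
    Z, W = binary_nmf_sketch(X, K=4)
    print("relative error:", np.linalg.norm(X - Z @ W) / np.linalg.norm(X))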

2018, Vol. 20 (10), pp. 2659-2669
Author(s): Jiandong Tian, Zhi Han, Weihong Ren, Xiai Chen, Yandong Tang

2020
Author(s): Sajad Fathi Hafshejani, Saeed Vahidian, Zahra Moaberfard, Reza Alikhani, Bill Lin

Low-rank matrix factorization problems such as non-negative matrix factorization (NMF) can be viewed as clustering or dimensionality reduction techniques. The latter refers to techniques that represent a high-dimensional dataset on a lower-dimensional manifold without significant loss of information; if such a representation exists, it should retain the most relevant features of the dataset. Many linear dimensionality reduction techniques can be formulated as a matrix factorization. In this paper, we combine the conjugate gradient (CG) method with the Barzilai-Borwein (BB) gradient method and propose a BB-scaled CG method for NMF problems. The new method does not require computing or storing matrices associated with the Hessian of the objective function. Moreover, adopting a suitable BB step size together with a nonmonotone strategy governed by the convex-combination parameter $\eta_k$ yields an algorithm that significantly improves CPU time, efficiency, and the number of function evaluations. A convergence result is established, and numerical comparisons on both synthetic and real-world datasets show that the proposed method is efficient and outperforms existing methods.
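A minimal sketch, not the authors' BB-scaled conjugate gradient with its nonmonotone strategy, but the ingredient it builds on: a Barzilai-Borwein (BB1) step size inside a projected gradient solver for the NMF subproblem min_{H >= 0} ||X - W H||_F^2. All function and variable names are hypothetical.

import numpy as np

def grad_H(X, W, H):
    # Gradient of 0.5 * ||X - W H||_F^2 with respect to H.
    return W.T @ (W @ H - X)

def bb_projected_gradient(X, W, H0, n_iter=100, alpha0=1e-3):
    """Projected gradient for the H-subproblem with a BB1 step size (illustrative only)."""
    H = H0.copy()
    H_prev, G_prev = None, None
    for _ in range(n_iter):
        G = grad_H(X, W, H)
        if H_prev is None:
            alpha = alpha0
        else:
            S = H - H_prev          # change in iterate
            Y = G - G_prev          # change in gradient
            sy = np.sum(S * Y)      # Frobenius inner product
            alpha = np.sum(S * S) / sy if sy > 0 else alpha0  # BB1 step size
        H_prev, G_prev = H, G
        H = np.maximum(H - alpha * G, 0.0)  # project onto the non-negative orthant
    return H

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W_true = rng.random((50, 5))
    H_true = rng.random((5, 40))
    X = W_true @ H_true
    H = bb_projected_gradient(X, W_true, rng.random((5, 40)))
    print("relative error:", np.linalg.norm(X - W_true @ H) / np.linalg.norm(X))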


2017, Vol. 19 (5), pp. 969-983
Author(s): Hengyou Wang, Yigang Cen, Zhihai He, Ruizhen Zhao, Yi Cen, ...

Author(s): Chen Chen, Baochang Zhang, Alessio Del Bue, Vittorio Murino
