A Two-Phase Algorithm for Robust Symmetric Non-Negative Matrix Factorization

Symmetry ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 1757
Author(s):  
Bingjie Li ◽  
Xi Shi ◽  
Zhenyue Zhang

As a special class of non-negative matrix factorization, symmetric non-negative matrix factorization (SymNMF) has been widely used in the machine learning field to mine the hidden non-linear structure of data. Due to the non-negativity constraint and the non-convexity of SymNMF, the efficiency of existing methods is generally unsatisfactory. To tackle this issue, we propose a two-phase algorithm that solves the SymNMF problem efficiently. In the first phase, we drop the non-negativity constraint of SymNMF and propose a new model with penalty terms that control the negative components of the factor. Unlike in previous methods, the factor sequence in this phase is not required to be non-negative, so fast unconstrained optimization algorithms, such as the conjugate gradient method, can be used. In the second phase, we return to the constrained SymNMF problem, taking the non-negative part of the first-phase solution as the initial point. To achieve faster convergence, we propose an interpolation projected gradient (IPG) method for SymNMF, which is much more efficient than the classical projected gradient method. Our two-phase algorithm is easy to implement, with convergence guaranteed in both phases. Numerical experiments show that it outperforms existing methods on synthetic data and unsupervised clustering tasks.
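The abstract outlines the two phases but not the exact model, so the following is only a minimal Python sketch of the idea: it assumes the standard SymNMF objective ||A - HH^T||_F^2, a quadratic penalty on the negative part of H in phase one, and a fixed-step classical projected gradient in phase two. The penalty weight `lam`, the CG settings, and the step size are illustrative assumptions, and the paper's IPG interpolation step is omitted; this is not the authors' implementation.

```python
# Hedged sketch of the two-phase idea (NOT the authors' exact method).
import numpy as np
from scipy.optimize import minimize

def two_phase_symnmf(A, r, lam=10.0, pg_iters=200, step=1e-3, seed=0):
    """Approximate a symmetric non-negative A (n x n) by H @ H.T with H >= 0."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    h0 = rng.random((n, r)).ravel()

    # Phase 1: drop the constraint; penalize the negative part of H.
    def f(h):
        H = h.reshape(n, r)
        R = A - H @ H.T
        neg = np.minimum(H, 0.0)            # negative components of the factor
        return np.sum(R * R) + lam * np.sum(neg * neg)

    def grad(h):
        H = h.reshape(n, r)
        R = A - H @ H.T
        # d/dH ||A - HH^T||_F^2 = -4 R H for symmetric A; penalty adds 2*lam*min(H,0)
        return (-4.0 * R @ H + 2.0 * lam * np.minimum(H, 0.0)).ravel()

    res = minimize(f, h0, jac=grad, method="CG")       # unconstrained CG solve
    H = np.maximum(res.x.reshape(n, r), 0.0)           # non-negative part -> Phase 2 init

    # Phase 2: classical projected gradient on the constrained problem
    # (the paper's IPG method adds an interpolation step, omitted here).
    for _ in range(pg_iters):
        G = -4.0 * (A - H @ H.T) @ H
        H = np.maximum(H - step * G, 0.0)              # project onto H >= 0
    return H
```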

Mathematics ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 540
Author(s):  
Soodabeh Asadi ◽  
Janez Povh

This article uses the projected gradient (PG) method for the non-negative matrix factorization (NMF) problem, where one or both matrix factors must have orthonormal columns or rows. We penalize the orthonormality constraints and apply the PG method via a block coordinate descent approach: at each iteration, one matrix factor is fixed and the other is updated by moving along the steepest descent direction computed from the penalized objective function and projecting onto the space of non-negative matrices. Our method is tested on two sets of synthetic data for various values of the penalty parameters. The performance is compared to the well-known multiplicative update (MU) method of Ding (2006) and to a modified, globally convergent variant of the MU algorithm recently proposed by Mirzal (2014). We provide extensive numerical results, coupled with appropriate visualizations, which demonstrate that our method is very competitive and usually outperforms the other two methods.
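As a concrete illustration of one block coordinate descent sweep, here is a hedged Python sketch assuming the model min ||A - WH||_F^2 + alpha ||HH^T - I||_F^2 with W, H >= 0 (orthonormal rows of H). The penalty weight `alpha` and the fixed step sizes are illustrative assumptions; the paper's actual step-size rule along the steepest descent direction is not reproduced.

```python
# Hedged sketch of one penalized projected-gradient sweep (a sketch under
# the stated assumptions, not the paper's implementation).
import numpy as np

def pg_sweep(A, W, H, alpha=1.0, step_w=1e-3, step_h=1e-3):
    """One BCD sweep for min ||A - WH||_F^2 + alpha ||H H^T - I||_F^2, W, H >= 0."""
    # Fix H, update W: gradient of the fit term only.
    Gw = 2.0 * (W @ H - A) @ H.T
    W = np.maximum(W - step_w * Gw, 0.0)   # project onto non-negative matrices
    # Fix W, update H: fit gradient plus the orthonormality-penalty gradient.
    k = H.shape[0]
    Gh = 2.0 * W.T @ (W @ H - A) + 4.0 * alpha * (H @ H.T - np.eye(k)) @ H
    H = np.maximum(H - step_h * Gh, 0.0)
    return W, H
```

Repeating `pg_sweep` until the objective stalls gives the full method; the multiplicative-update baselines of Ding (2006) and Mirzal (2014) replace the additive projected step with elementwise multiplicative factors.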


2019 ◽  
Vol 36 (02) ◽  
pp. 1940008
Author(s):  
Jun Fan ◽  
Liqun Wang ◽  
Ailing Yan

In this paper, we employ the sparsity-constrained least squares method to reconstruct sparse signals from noisy measurements in the high-dimensional case and establish the existence of an optimal solution under certain conditions. We propose an inexact sparse-projected gradient method for the numerical computation and discuss its convergence. Moreover, we present numerical results that demonstrate the efficiency of the proposed method.
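The abstract does not spell out the iteration, so below is a minimal sketch of a sparse-projected gradient loop for min ||Ax - b||_2^2 subject to ||x||_0 <= s, using an exact hard-thresholding projection onto the s largest-magnitude entries. The paper's inexact projection rule and its convergence conditions are not reproduced, and the step size 1/||A||_2^2 is an illustrative choice.

```python
# Hedged sketch of a sparse-projected gradient method (plain hard
# thresholding; the paper's "inexact" projection is an assumption gap).
import numpy as np

def sparse_pg(A, b, s, step=None, iters=500):
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(n)
    for _ in range(iters):
        g = A.T @ (A @ x - b)                   # gradient of 0.5 * ||Ax - b||^2
        y = x - step * g                        # gradient step
        keep = np.argsort(np.abs(y))[-s:]       # indices of the s largest entries
        x = np.zeros(n)
        x[keep] = y[keep]                       # project onto ||x||_0 <= s
    return x
```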

