Orthogonal Nonnegative Matrix Factorization: Multiplicative Updates on Stiefel Manifolds

Author(s): Jiho Yoo, Seungjin Choi

2012, Vol. 24(4), pp. 1085-1105
Author(s): Nicolas Gillis, François Glineur

Nonnegative matrix factorization (NMF) is a data analysis technique used in a great variety of applications such as text mining, image processing, hyperspectral data analysis, computational biology, and clustering. In this letter, we consider two well-known algorithms designed to solve NMF problems: the multiplicative updates of Lee and Seung and the hierarchical alternating least squares of Cichocki et al. We propose a simple way to significantly accelerate these schemes, based on a careful analysis of the computational cost needed at each iteration, while preserving their convergence properties. This acceleration technique can also be applied to other algorithms, which we illustrate on the projected gradient method of Lin. The efficiency of the accelerated algorithms is empirically demonstrated on image and text data sets and compares favorably with a state-of-the-art alternating nonnegative least squares algorithm.
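To make the acceleration concrete, here is a minimal NumPy sketch of the idea behind the accelerated multiplicative updates: the expensive matrix products shared by successive updates of one factor (for example WᵀV, which dominates the per-iteration cost) are computed once and reused across several cheap inner updates of that factor. The fixed `inner_iters` and the stabilizing `eps` are illustrative assumptions; the paper instead chooses the number of inner updates from an explicit cost analysis and a stopping criterion.

```python
import numpy as np

def accelerated_mu_nmf(V, r, outer_iters=100, inner_iters=5, eps=1e-10, seed=0):
    """NMF by multiplicative updates with inner-loop reuse of shared products."""
    m, n = V.shape
    rng = np.random.default_rng(seed)
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(outer_iters):
        # The O(mnr) product W.T @ V is computed once, then reused across
        # several O(nr^2) inner updates of H.
        WtV, WtW = W.T @ V, W.T @ W
        for _ in range(inner_iters):
            H *= WtV / (WtW @ H + eps)
        # Symmetric treatment for W.
        VHt, HHt = V @ H.T, H @ H.T
        for _ in range(inner_iters):
            W *= VHt / (W @ HHt + eps)
    return W, H
```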


2011, Vol. 23(9), pp. 2421-2456
Author(s): Cédric Févotte, Jérôme Idier

This letter describes algorithms for nonnegative matrix factorization (NMF) with the β-divergence (β-NMF). The β-divergence is a family of cost functions parameterized by a single shape parameter β that takes the Euclidean distance, the Kullback-Leibler divergence, and the Itakura-Saito divergence as special cases (β = 2, 1, and 0, respectively). The proposed algorithms are based on a surrogate auxiliary function (a local majorization of the criterion function). We first describe a majorization-minimization (MM) algorithm that leads to multiplicative updates, which differ from standard heuristic multiplicative updates by a β-dependent power exponent. The monotonicity of the heuristic algorithm can, however, be proven for β ∈ (0, 1) using the proposed auxiliary function. Then we introduce the concept of the majorization-equalization (ME) algorithm, which produces updates that move along constant level sets of the auxiliary function and lead to larger steps than MM. Simulations on synthetic and real data illustrate the faster convergence of the ME approach. The letter also describes how the proposed algorithms can be adapted to two common variants of NMF: penalized NMF (when a penalty function of the factors is added to the criterion function) and convex NMF (when the dictionary is assumed to belong to a known subspace).
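A minimal NumPy sketch of the MM multiplicative updates described above, assuming the β-dependent exponent γ(β) = 1/(2−β) for β < 1, 1 for 1 ≤ β ≤ 2, and 1/(β−1) for β > 2; at β = 2 and β = 1 the update reduces to the classical Euclidean and KL multiplicative rules. The iteration count and the stabilizing `eps` are illustrative choices, not the paper's.

```python
import numpy as np

def beta_nmf_mm(V, r, beta=1.0, n_iter=200, eps=1e-10, seed=0):
    """MM multiplicative updates for beta-NMF with exponent gamma(beta)."""
    if beta < 1:
        gamma = 1.0 / (2.0 - beta)
    elif beta <= 2:
        gamma = 1.0          # heuristic and MM updates coincide here
    else:
        gamma = 1.0 / (beta - 1.0)
    m, n = V.shape
    rng = np.random.default_rng(seed)
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Heuristic multiplicative ratio, raised to the MM exponent gamma.
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V) / (W.T @ WH ** (beta - 1) + eps)) ** gamma
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T / (WH ** (beta - 1) @ H.T + eps)) ** gamma
    return W, H
```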


2014, Vol. 26(6), pp. 1128-1168
Author(s): Karthik Devarajan, Vincent C. K. Cheung

Nonnegative matrix factorization (NMF) by the multiplicative updates algorithm is a powerful machine learning method for decomposing a high-dimensional nonnegative matrix V into two nonnegative matrices, W and H, where V ≈ WH. It has been successfully applied in the analysis and interpretation of large-scale data arising in neuroscience, computational biology, and natural language processing, among other areas. A distinctive feature of NMF is its nonnegativity constraints that allow only additive linear combinations of the data, thus enabling it to learn parts that have distinct physical representations in reality. In this letter, we describe an information-theoretic approach to NMF for signal-dependent noise based on the generalized inverse Gaussian (GIG) model. Specifically, we propose three novel algorithms in this setting, each based on multiplicative updates, and prove monotonicity of updates using the EM algorithm. In addition, we develop algorithm-specific measures to evaluate their goodness of fit on data. Our methods are demonstrated using experimental data from electromyography studies, as well as simulated data in the extraction of muscle synergies, and compared with existing algorithms for signal-dependent noise.
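The GIG-based updates themselves are not given in the abstract, so the sketch below is not the authors' algorithm; it shows the standard multiplicative updates for the Kullback-Leibler cost, which correspond to a Poisson observation model, a canonical case of signal-dependent noise in which the variance grows with the signal mean.

```python
import numpy as np

def kl_nmf(V, r, n_iter=200, eps=1e-10, seed=0):
    """Multiplicative updates for the KL cost (Poisson noise baseline).

    Not the GIG-based algorithms of the letter; shown only as a standard
    reference point for signal-dependent noise.
    """
    m, n = V.shape
    rng = np.random.default_rng(seed)
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        WH = W @ H + eps
        # Denominator W.T @ ones(m, n) reduces to the column sums of W.
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        # Denominator ones(m, n) @ H.T reduces to the row sums of H.
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```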

