Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks

Author(s): Daniel Povey, Gaofeng Cheng, Yiming Wang, Ke Li, Hainan Xu, ...

Author(s): Yinlei Hu, Bin Li, Falai Chen, Kun Qu

Abstract: Unsupervised clustering is a fundamental step in the analysis of single-cell RNA sequencing data, and several clustering methods have been developed to classify cells in such data. However, accurately predicting the cell clusters remains a substantial challenge. In this study, we propose a new algorithm for single-cell RNA sequencing data clustering based on sparse optimization and low-rank matrix factorization (scSO). We applied scSO to multiple benchmark datasets and showed that the cluster number predicted by scSO was close to the number of reference cell types and that most cells were correctly classified. The scSO algorithm is available at https://github.com/QuKunLab/scSO. Overall, this study demonstrates a potent cell clustering approach that can help researchers distinguish cell types in single-cell RNA sequencing data.
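
As a purely illustrative sketch of the general approach (low-rank factorization of the expression matrix followed by clustering in the factor space), the snippet below factorizes a hypothetical cells-by-genes count matrix with NMF and clusters the per-cell loadings with k-means. It is not the scSO algorithm from the repository above: the preprocessing, the function name `lowrank_cluster`, and the fixed factor and cluster counts are all assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

def lowrank_cluster(X_counts, n_factors=10, n_clusters=8, seed=0):
    """Illustrative clustering of cells via low-rank factorization.

    X_counts: nonnegative cells-by-genes count matrix (hypothetical input).
    Note: unlike scSO, the number of clusters is fixed here, not inferred.
    """
    # Log-normalize raw counts, a common scRNA-seq preprocessing step.
    X = np.log1p(X_counts)
    # Low-rank factorization: X ~= W @ H, with W holding per-cell loadings.
    W = NMF(n_components=n_factors, init="nndsvda",
            max_iter=500, random_state=seed).fit_transform(X)
    # Cluster cells in the low-dimensional factor space.
    return KMeans(n_clusters=n_clusters, n_init=10,
                  random_state=seed).fit_predict(W)
```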


Algorithmica, 2009, Vol 56 (3), pp. 313-332
Author(s): Epameinondas Fritzilas, Martin Milanič, Sven Rahmann, Yasmin A. Rios-Solis

2020
Author(s): Sajad Fathi Hafshejani, Saeed Vahidian, Zahra Moaberfard, Reza Alikhani, Bill Lin

Low-rank matrix factorization problems such as nonnegative matrix factorization (NMF) can be categorized as clustering or dimension-reduction techniques. The latter are techniques designed to find a representation of a high-dimensional dataset on a lower-dimensional manifold without a significant loss of information; if such a representation exists, it should retain the most relevant features of the dataset. Many linear dimensionality reduction techniques can be formulated as a matrix factorization. In this paper, we combine the conjugate gradient (CG) method with the Barzilai-Borwein (BB) gradient method and propose a BB-scaling CG method for NMF problems. The new method does not require computing or storing matrices associated with the Hessian of the objective function. Moreover, adopting a suitable BB step size along with a proper nonmonotone strategy, governed by the convex parameter $\eta_k$, yields an algorithm that significantly improves CPU time, efficiency, and the number of function evaluations. A convergence result is established, and numerical comparisons on both synthetic and real-world datasets show that the proposed method is efficient and performs favorably against existing methods.
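
To make the Barzilai-Borwein ingredient concrete, here is a minimal sketch of a BB1 step size used inside an alternating projected-gradient update for NMF (minimizing $\tfrac12\|X - WH\|_F^2$ under nonnegativity). It is not the authors' BB-scaling CG method: there is no CG direction, no nonmonotone strategy, and no $\eta_k$ parameter, and all names and defaults are assumptions for illustration.

```python
import numpy as np

def bb_projected_gradient_nmf(X, rank, n_iter=200, seed=0):
    """Sketch: alternating projected-gradient NMF with a BB1 step size."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W, H = rng.random((m, rank)), rng.random((rank, n))

    def bb_step(P, P_old, G, G_old, fallback=1e-3):
        # BB1 step: <s, s> / <s, y> with s = P - P_old, y = G - G_old;
        # fall back to a small fixed step when the denominator is unsafe.
        s, y = (P - P_old).ravel(), (G - G_old).ravel()
        sy = s @ y
        return (s @ s) / sy if sy > 1e-12 else fallback

    W_old, H_old = W.copy(), H.copy()
    GW_old = (W @ H - X) @ H.T
    GH_old = W.T @ (W @ H - X)
    for _ in range(n_iter):
        GW = (W @ H - X) @ H.T            # gradient of 0.5*||X - WH||_F^2 in W
        aW = bb_step(W, W_old, GW, GW_old)
        W_old, GW_old = W.copy(), GW
        W = np.maximum(W - aW * GW, 0.0)  # projected (nonnegative) BB step

        GH = W.T @ (W @ H - X)            # gradient in H
        aH = bb_step(H, H_old, GH, GH_old)
        H_old, GH_old = H.copy(), GH
        H = np.maximum(H - aH * GH, 0.0)
    return W, H
```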


2019, Vol 335, pp. 143-152
Author(s): Shengxiang Gao, Zhengtao Yu, Taisong Jin, Ming Yin
