Online Low-Rank Representation Learning for Joint Multi-Subspace Recovery and Clustering

2018 · Vol 27 (1) · pp. 335-348
Author(s): Bo Li, Risheng Liu, Junjie Cao, Jie Zhang, Yu-Kun Lai, ...
2018 · Vol 55 (7) · pp. 071002
Author(s): Chu Jinghui, Gu Huimin, Su Yuting

Author(s): Zhao Zhang, Jiahuan Ren, Haijun Zhang, Zheng Zhang, Guangcan Liu, ...

Low-rank coding-based representation learning is powerful for discovering and recovering the subspace structures in data and has achieved impressive performance; however, it still cannot capture deep hidden information because of its single-layer structure. In this article, we investigate deep low-rank representation of images in a progressive way by presenting a novel strategy that extends existing single-layer latent low-rank models to multiple layers. Technically, we propose a new progressive Deep Latent Low-Rank Fusion Network (DLRF-Net) to uncover deep features and the clustering structures embedded in latent subspaces. The basic idea of DLRF-Net is to progressively refine the principal and salient features of each layer from those of previous layers by fusing the clustering and projective subspaces, respectively, which can potentially learn more accurate features and subspaces. To obtain deep hidden information, DLRF-Net feeds the shallow features of each layer into the subsequent layers and then recovers hierarchical information and deeper features by aggregating the subspaces in each layer of the network. As such, the representation learning of the deeper layers can also remove noise and discover the underlying clean subspaces, which we verify by simulations. Notably, the DLRF-Net framework is general and applies to most existing latent low-rank representation models, i.e., existing latent low-rank models can easily be extended to the multilayer scenario using DLRF-Net. Extensive results on real databases show that our framework delivers enhanced performance over other related techniques.
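For intuition, here is a minimal Python sketch of the layer-wise idea, assuming numpy and a cheap closed-form stand-in for an actual latent low-rank solver. Everything in it (the SVD-based subspace construction, the `rank` parameter, the equal-weight fusion) is an illustrative assumption, not the paper's DLRF-Net: each layer splits its input into a clustering-subspace part H @ Z and a projective-subspace part L @ H, fuses the two, and feeds the result to the next layer.

```python
import numpy as np

def latent_lowrank_layer(H, rank):
    """One heavily simplified latent low-rank layer.

    Latent LRR decomposes the input as H ~= H @ Z + L @ H, where Z is a
    low-rank coefficient matrix (clustering subspace) and L is a low-rank
    projection (projective/salient subspace).  A real layer would solve
        min ||Z||_* + ||L||_* + lam * ||E||_1   s.t.  H = H@Z + L@H + E
    with an inexact-ALM solver; here we use a cheap closed-form stand-in
    built from the top-`rank` singular vectors of H.
    """
    U, S, Vt = np.linalg.svd(H, full_matrices=False)
    Z = Vt[:rank].T @ Vt[:rank]        # clustering subspace: principal features H @ Z
    L = U[:, :rank] @ U[:, :rank].T    # projective subspace: salient features  L @ H
    return H @ Z, L @ H

def progressive_fusion(X, n_layers=3, rank=2):
    """Cascade the layers: the fused output of layer k is the input of
    layer k+1.  Equal-weight fusion is an assumption for illustration."""
    H = X
    for _ in range(n_layers):
        principal, salient = latent_lowrank_layer(H, rank)
        H = 0.5 * (principal + salient)
    return H

# toy usage: two noisy rank-1 clusters of 10-dimensional points
rng = np.random.default_rng(0)
X = np.hstack([np.outer(rng.standard_normal(10), rng.standard_normal(12)),
               np.outer(rng.standard_normal(10), rng.standard_normal(12))])
X += 0.05 * rng.standard_normal(X.shape)
print(np.linalg.matrix_rank(progressive_fusion(X, rank=2), tol=1e-6))  # ~2: noise suppressed
```

The design point the sketch captures is the cascade itself: because each layer re-fits low-rank subspaces to the previous layer's fused output, noise that survives one layer can be suppressed by the next.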


2020 · Vol 10
Author(s): Conghai Lu, Juan Wang, Jinxing Liu, Chunhou Zheng, Xiangzhen Kong, ...

2018 · Vol 27 (07) · pp. 1860013
Author(s): Swair Shah, Baokun He, Crystal Maung, Haim Schweitzer

Principal Component Analysis (PCA) is a classical dimensionality reduction technique that computes a low-rank representation of the data. Recent studies have shown how to compute this low-rank representation from most of the data, excluding a small amount of outlier data. We show how to convert this problem into graph search and describe an algorithm that solves it optimally by applying a variant of the A* algorithm to search for the outliers. The results obtained by our algorithm are optimal in terms of accuracy and are more accurate than those of the current state-of-the-art algorithms, which are shown not to be optimal. This comes at the cost of running time, which is typically slower than the state of the art. We also describe a related variant of the A* algorithm that runs much faster than the optimal variant and produces a solution guaranteed to be near-optimal. This variant is shown experimentally to be more accurate than the current state of the art, with comparable running time.
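The graph-search framing is easy to make concrete even without the paper's admissible heuristic. The following is a hedged Python sketch (numpy; the function names and toy data are ours, not the authors' code): states are subsets of excluded rows, each edge excludes one more row, and the cost of a state is the PCA residual of the rows that remain. Since the sketch omits the heuristic that lets the paper's A* prune safely, it falls back to exhausting the tree in best-first order, which is optimal but exponential.

```python
import heapq
import numpy as np

def pca_residual(X, rank):
    """Squared reconstruction error of the best rank-`rank` approximation of X."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s[rank:] ** 2))

def search_outliers(X, n_outliers, rank):
    """Best-first search over which rows of X to exclude as outliers.

    States are increasing tuples of excluded row indices; expanding a state
    appends one larger index, so every subset is generated exactly once.
    The priority is the PCA residual of the remaining rows.  Because that
    residual only shrinks as more rows are removed, the first goal popped is
    not guaranteed optimal, so this sketch exhausts the frontier (brute
    force); the paper's admissible A* heuristic is what prunes this tree
    while keeping the optimality guarantee.
    """
    n = X.shape[0]
    best_err, best_set = float("inf"), None
    frontier = [(pca_residual(X, rank), ())]
    while frontier:
        err, removed = heapq.heappop(frontier)
        if len(removed) == n_outliers:          # goal: k rows excluded
            if err < best_err:
                best_err, best_set = err, removed
            continue
        start = removed[-1] + 1 if removed else 0
        for i in range(start, n):
            child = removed + (i,)
            keep = [j for j in range(n) if j not in child]
            heapq.heappush(frontier, (pca_residual(X[keep], rank), child))
    return best_set, best_err

# toy usage: 12 rank-2 inliers plus 2 gross outliers appended at the end
rng = np.random.default_rng(1)
inliers = rng.standard_normal((12, 2)) @ rng.standard_normal((2, 5))
X = np.vstack([inliers, 10.0 * rng.standard_normal((2, 5))])
print(search_outliers(X, n_outliers=2, rank=2))   # expect excluded rows (12, 13)
```

An admissible heuristic here would be any lower bound on the residual that must remain after the outstanding removals; with the zero bound used above, the search degenerates to brute force, which is exactly the gap the paper's A* variants close.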

