DLRF-Net: A Progressive Deep Latent Low-Rank Fusion Network for Hierarchical Subspace Discovery

Author(s): Zhao Zhang, Jiahuan Ren, Haijun Zhang, Zheng Zhang, Guangcan Liu, et al.

Low-rank coding-based representation learning is powerful for discovering and recovering the subspace structures in data, and it has achieved impressive performance; however, it cannot capture deep hidden information because of its single-layer structure. In this article, we investigate the deep low-rank representation of images in a progressive way by presenting a novel strategy that extends existing single-layer latent low-rank models into multiple layers. Technically, we propose a new progressive Deep Latent Low-Rank Fusion Network (DLRF-Net) to uncover deep features and the clustering structures embedded in latent subspaces. The basic idea of DLRF-Net is to progressively refine the principal and salient features in each layer from the previous layers by fusing the clustering and projective subspaces, respectively, which can potentially learn more accurate features and subspaces. To obtain deep hidden information, DLRF-Net feeds the shallow features of each layer into the subsequent layer. It then recovers hierarchical information and deeper features by congregating the subspaces in each layer of the network. As such, the representation learning of the deeper layers can remove noise and discover the underlying clean subspaces, which we verify by simulations. Notably, the DLRF-Net framework is general and applicable to most existing latent low-rank representation models; that is, existing single-layer latent low-rank models can be easily extended to the multilayer scenario using DLRF-Net. Extensive results on real databases show that our framework delivers enhanced performance over other related techniques.
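For intuition, here is a minimal NumPy sketch of the layer-wise wiring described above. The per-layer solver is a deliberate simplification: instead of the regularized latent low-rank problem with a noise term that DLRF-Net actually refines, it uses the simplest closed-form solution of the noiseless LatLRR model X = XZ + LX (an assumption made purely for illustration, not the authors' solver); the names latlrr_noiseless and progressive_latlrr are hypothetical.

import numpy as np

def latlrr_noiseless(X, rank=None):
    # One member of the closed-form solution family of noiseless LatLRR
    # (X = XZ + LX): from the skinny SVD X = U S V^T, take
    # Z = 0.5 V V^T and L = 0.5 U U^T, so that XZ + LX = X exactly.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    if rank is not None:                 # optional truncation to a known rank
        U, Vt = U[:, :rank], Vt[:rank]
    Z = 0.5 * (Vt.T @ Vt)                # clustering (coefficient) subspace
    L = 0.5 * (U @ U.T)                  # projective (feature) subspace
    return Z, L

def progressive_latlrr(X, n_layers=3, rank=None):
    # Feed each layer's salient features L_k X_k into the next layer,
    # collecting the per-layer subspaces for later fusion.
    Xk, Zs, Ls = X, [], []
    for _ in range(n_layers):
        Z, L = latlrr_noiseless(Xk, rank)
        Zs.append(Z)
        Ls.append(L)
        Xk = L @ Xk                      # shallow features -> next layer input
    return Zs, Ls, Xk

# Toy usage: 100 samples of dimension 50 lying in a rank-5 subspace.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 100))
Zs, Ls, deep = progressive_latlrr(X, n_layers=3, rank=5)
print(deep.shape)                        # (50, 100): refined deep features

The sketch only shows how the salient features L_k X_k flow forward; the actual network additionally fuses the clustering subspaces Z_k and the projective subspaces L_k across layers to refine both representations.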

Author(s): Zhao Zhang, Jiahuan Ren, Zheng Zhang, Guangcan Liu

Low-rank representation is powerful for recovering and clustering subspace structures, but it cannot capture deep hierarchical information because of its single-layer mode. In this paper, we present a new and effective strategy for extending single-layer latent low-rank models into multiple layers, and propose a new progressive Deep Latent Low-Rank Fusion Network (DLRF-Net) to uncover the deep features and structures embedded in input data. The basic idea of DLRF-Net is to refine features progressively from the previous layers by fusing the subspaces in each layer, which can potentially yield accurate features and subspaces for representation. To learn deep information, DLRF-Net feeds the shallow features of the preceding layers into subsequent layers. It then recovers deeper features and hierarchical information by congregating the projective subspaces and clustering subspaces, respectively, in each layer. Thus, one can learn hierarchical subspaces, remove noise, and discover the underlying clean subspaces. Note that most existing latent low-rank coding models can be extended to multiple layers using DLRF-Net. Extensive results show that our network delivers enhanced performance over other related frameworks.


2018, Vol. 27 (1), pp. 335-348. Author(s): Bo Li, Risheng Liu, Junjie Cao, Jie Zhang, Yu-Kun Lai, et al.

2015, Vol. 27 (9), pp. 1915-1950. Author(s): Hongyang Zhang, Zhouchen Lin, Chao Zhang, Junbin Gao

Recovering the intrinsic low-dimensional subspaces from data distributed on them is a key preprocessing step for many applications. In recent years, much work has modeled subspace recovery as low-rank minimization problems. We find that some representative models, such as robust principal component analysis (R-PCA), robust low-rank representation (R-LRR), and robust latent low-rank representation (R-LatLRR), are deeply connected. More specifically, we discover that once a solution to one of the models is obtained, the solutions to the other models can be obtained in closed form. Since R-PCA is the simplest, our discovery makes it the center of low-rank subspace recovery models. Our work has two important implications. First, R-PCA has a solid theoretical foundation: under certain conditions, globally optimal solutions to these low-rank models can be found with overwhelming probability, although the models are nonconvex. Second, we can obtain significantly faster algorithms for these models by solving R-PCA first. The computation cost can be further cut by applying low-complexity randomized algorithms, for example, our novel ℓ2,1 filtering algorithm, to R-PCA. Although a formal proof of our ℓ2,1 filtering algorithm is not yet available, experiments verify the advantages of our algorithm over other state-of-the-art methods based on the alternating direction method.
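To make the R-PCA-centric reduction concrete, below is a hedged NumPy sketch. It solves R-PCA with the standard inexact augmented Lagrange multiplier (ALM) scheme of Lin, Chen, and Ma, then reads off an R-LRR representation as the shape-interaction matrix Z = V V^T of the recovered low-rank term A. That final mapping is our reading of the closed-form connection the abstract refers to, stated as an assumption rather than the paper's exact formulation; the function names rpca_ialm and rlrr_from_rpca are hypothetical.

import numpy as np

def rpca_ialm(X, lam=None, tol=1e-7, max_iter=500):
    # Inexact ALM for R-PCA: min ||A||_* + lam * ||E||_1  s.t.  X = A + E.
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(X, 2)
    Y = X / max(norm2, np.abs(X).max() / lam)    # standard dual initialization
    mu, mu_bar, rho = 1.25 / norm2, 1.25 / norm2 * 1e7, 1.5
    A, E = np.zeros_like(X), np.zeros_like(X)
    for _ in range(max_iter):
        # Low-rank step: singular value thresholding on X - E + Y/mu.
        U, s, Vt = np.linalg.svd(X - E + Y / mu, full_matrices=False)
        A = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Sparse step: entrywise soft thresholding on X - A + Y/mu.
        T = X - A + Y / mu
        E = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        R = X - A - E                            # primal residual
        Y += mu * R
        mu = min(rho * mu, mu_bar)
        if np.linalg.norm(R, 'fro') <= tol * np.linalg.norm(X, 'fro'):
            break
    return A, E

def rlrr_from_rpca(X):
    # Assumed closed-form mapping: R-LRR coefficients Z = V V^T, where
    # A = U S V^T is the skinny SVD of the R-PCA low-rank term.
    A, E = rpca_ialm(X)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = max(int((s > 1e-8 * s[0]).sum()), 1)     # numerical rank of A
    Z = Vt[:r].T @ Vt[:r]                        # shape-interaction matrix
    return Z, E

# Toy usage: rank-4 data with ~5% sparse corruptions.
rng = np.random.default_rng(1)
clean = rng.standard_normal((60, 4)) @ rng.standard_normal((4, 80))
spikes = (rng.random((60, 80)) < 0.05) * 5.0 * rng.standard_normal((60, 80))
Z, E = rlrr_from_rpca(clean + spikes)
print(Z.shape)                                   # (80, 80) representation

The design point is the one the abstract makes: only the R-PCA step costs real computation; the R-LRR representation then follows from a single SVD of the recovered low-rank term, rather than from a separate iterative solver.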


2018, Vol. 55 (7), pp. 071002. Author(s): Chu Jinghui (褚晶辉), Gu Huimin (顾慧敏), Su Yuting (苏育挺)

2020, Vol. 10. Author(s): Conghai Lu, Juan Wang, Jinxing Liu, Chunhou Zheng, Xiangzhen Kong, et al.
