Multiple kernel low-rank representation-based robust multi-view subspace clustering

Author(s): Xiaoqian Zhang, Zhenwen Ren, Huaijiang Sun, Keqiang Bai, Xinghua Feng, ...
2017, Vol 127, pp. 46-57

Author(s): Jie Chen, Hua Mao, Yongsheng Sang, Zhang Yi
2021, Vol 12 (4), pp. 1-25

Author(s): Stanley Ebhohimhen Abhadiomhen, Zhiyang Wang, Xiangjun Shen, Jianping Fan

Multi-view subspace clustering (MVSC) finds a shared structure in the latent low-dimensional subspaces of multi-view data to enhance clustering performance. Nonetheless, we observe that most existing MVSC methods neglect the diversity in multi-view data by considering only the common knowledge to find a shared structure, either directly or by merging the different similarity matrices learned for each view. In the presence of noise, this predefined shared structure becomes a biased representation of the different views. Thus, in this article, we propose an MVSC method based on coupled low-rank representation to address this limitation. Our method first obtains a low-rank representation for each view, constrained to be a linear combination of a view-specific representation and a shared representation, while simultaneously encouraging sparsity in the view-specific one. Then, it uses the k-block diagonal regularizer to learn a manifold recovery matrix for each view through the respective low-rank matrices, recovering more of the manifold structure from them. In this way, the proposed method can find an ideal similarity matrix by approximating the clustering projection matrices obtained from the recovery structures. This similarity matrix denotes our clustering structure with exactly k connected components, enforced by a rank constraint on the similarity matrix's relaxed Laplacian matrix, which avoids spectral post-processing of the low-dimensional embedding matrix. The core of our idea is that we introduce dynamic approximation into the low-rank representation, allowing the clustering structure and the shared representation to guide each other toward cleaner low-rank matrices and, in turn, a better clustering structure. Our approach is therefore notably different from existing methods, in which the local manifold structure of the data is captured in advance. Extensive experiments on six benchmark datasets show that our method outperforms 10 state-of-the-art methods across six evaluation metrics.
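
The rank constraint mentioned in the abstract can be made concrete: for an n x n similarity matrix, the graph it induces has exactly k connected components precisely when its Laplacian has rank n - k, i.e., a zero eigenvalue of multiplicity k, so cluster labels fall out without spectral post-processing. Below is a minimal sketch of that final step only, not the authors' method; it assumes numpy, and the toy block-diagonal similarity matrix and function names are ours.

```python
# Minimal sketch (not the paper's implementation): verify that a similarity
# matrix S has exactly k connected components by counting the (near-)zero
# eigenvalues of its graph Laplacian.
import numpy as np

def laplacian_component_count(S, tol=1e-8):
    """Number of connected components of the graph with affinity matrix S,
    read off as the multiplicity of the zero eigenvalue of its Laplacian."""
    W = (np.abs(S) + np.abs(S.T)) / 2   # symmetrize the affinity matrix
    L = np.diag(W.sum(axis=1)) - W      # unnormalized graph Laplacian
    eigvals = np.linalg.eigvalsh(L)     # real eigenvalues, sorted ascending
    return int(np.sum(eigvals < tol))   # multiplicity of eigenvalue 0

# Toy similarity: two fully connected groups of three samples each, so the
# Laplacian should have exactly k = 2 zero eigenvalues.
block = np.ones((3, 3))
S = np.block([[block, np.zeros((3, 3))],
              [np.zeros((3, 3)), block]])
print(laplacian_component_count(S))  # -> 2, i.e. k connected components
```

In the method described above, this property is enforced during optimization (on the relaxed Laplacian) rather than merely checked afterwards as in this sketch.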


2020, Vol 24 (20), pp. 15317-15326
Author(s): Xiaofang Liu, Jun Wang, Dansong Cheng, Daming Shi, Yongqiang Zhang

2017, Vol 2017, pp. 1-9
Author(s): Wenjia Niu, Kewen Xia, Baokai Zu, Jianchuan Bai

Unlike the Support Vector Machine (SVM), Multiple Kernel Learning (MKL) allows a dataset to choose useful kernels according to its distribution characteristics rather than relying on a single fixed kernel. It has been shown in the literature that MKL achieves superior recognition accuracy compared with SVM, but at the expense of time-consuming computations, which creates analytical and computational difficulties in solving MKL algorithms. To overcome this issue, we first develop a novel kernel approximation approach for MKL and then propose an efficient Low-Rank MKL (LR-MKL) algorithm based on Low-Rank Representation (LRR). It is well acknowledged that LRR can reduce dimensionality while retaining the data features under a global low-rank constraint. Furthermore, we extend the binary-class MKL to multiclass MKL using a pairwise strategy. Finally, the recognition accuracy and efficiency of LR-MKL are verified on the Yale, ORL, LSVT, and Digit datasets. Experimental results show that the proposed LR-MKL algorithm is an efficient kernel-weight allocation method for MKL and substantially boosts the performance of MKL.
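
The two ingredients the abstract combines can be illustrated briefly: a convex combination of base kernels with simplex-constrained weights, and a low-rank approximation of the resulting kernel matrix. The sketch below is not the paper's LR-MKL algorithm: it fixes the kernel weights rather than learning them, and it substitutes a plain truncated eigendecomposition for the LRR-based approximation, since the abstract does not spell out the algorithm. The RBF kernels, bandwidths, and function names are our own placeholders; only numpy is assumed.

```python
# Minimal sketch (assumptions noted above): weighted kernel combination as in
# MKL, followed by a rank-r approximation of the combined PSD kernel matrix.
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian RBF kernel matrix for the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def combined_kernel(X, gammas, weights):
    """Weighted sum of base kernels; weights normalized onto the simplex."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * rbf_kernel(X, g) for w, g in zip(weights, gammas))

def low_rank_approx(K, r):
    """Rank-r approximation of a PSD kernel matrix via its top r eigenpairs."""
    vals, vecs = np.linalg.eigh(K)          # eigenvalues in ascending order
    vals, vecs = vals[-r:], vecs[:, -r:]    # keep the r largest eigenpairs
    return (vecs * vals) @ vecs.T           # V diag(vals) V^T

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
K = combined_kernel(X, gammas=[0.1, 1.0, 10.0], weights=[0.5, 0.3, 0.2])
K_r = low_rank_approx(K, r=10)
print(np.linalg.norm(K - K_r) / np.linalg.norm(K))  # relative approx. error
```

Working with the rank-r factor instead of the full n x n kernel matrix is what makes the low-rank route cheaper: downstream solvers touch an n x r factor rather than the dense kernel.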


2015, Vol 21 (6), pp. 1569-1581
Author(s): Wu He, Jim X. Chen, Weihua Zhang
