Schatten-q regularizer constrained low rank subspace clustering model

2016 ◽  
Vol 182 ◽  
pp. 36-47 ◽  
Author(s):  
Xiujun Zhang ◽  
Chen Xu ◽  
Xiaoli Sun ◽  
George Baciu

2021 ◽  
pp. 1-14
Author(s):  
Qingjiang Xiao ◽  
Shiqiang Du ◽  
Yao Yu ◽  
Yixuan Huang ◽  
Jinmei Song

In recent years, the tensor nuclear norm based on the tensor Singular Value Decomposition (t-SVD) has achieved remarkable progress in multi-view subspace clustering. However, most existing clustering methods still have two shortcomings: (a) they treat all singular values equally, which is not meaningful in practical applications; and (b) they often ignore that real-world data samples usually lie in multiple nonlinear subspaces. To address these shortcomings, we propose a hyper-Laplacian regularized multi-view subspace clustering model, named JWHMSC, that jointly performs representation learning under a weighted tensor nuclear norm constraint. Specifically, in the JWHMSC model, first, to capture the global structure across views, the subspace representation matrices of all views are stacked into a low-rank constrained tensor. Second, hyper-Laplacian graph regularization is adopted to preserve the local geometric structure embedded in the high-dimensional ambient space. Third, to exploit the prior information carried by the singular values, the weighted tensor nuclear norm (WTNN) based on the t-SVD is introduced to treat singular values differently, allowing JWHMSC to capture the sample distribution of the classification information more accurately. Finally, representation learning, the WTNN constraint, and the hyper-Laplacian graph regularization are integrated into a single framework to obtain the overall optimal solution of the algorithm. Experimental results on eight benchmark datasets show that the proposed JWHMSC performs well in multi-view clustering compared with state-of-the-art methods.
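The weighted singular-value treatment described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it performs t-SVD-style shrinkage by taking an FFT along the third tensor mode, soft-thresholding the singular values of each frontal slice with weights inversely proportional to their magnitude (one common weighting choice), and transforming back.

```python
import numpy as np

def weighted_tsvd_shrink(T, tau, eps=1e-6):
    """Weighted singular-value shrinkage in the t-SVD sense (sketch).

    FFT along the third mode, SVD of each frontal slice in the Fourier
    domain, then shrink each singular value by a weight inversely
    proportional to its magnitude, so large (informative) singular
    values are penalized less than small (noise-like) ones.
    """
    n1, n2, n3 = T.shape
    Tf = np.fft.fft(T, axis=2)
    Xf = np.zeros_like(Tf)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Tf[:, :, k], full_matrices=False)
        w = tau / (s + eps)            # larger singular value -> smaller weight
        s_shr = np.maximum(s - w, 0.0) # weighted soft-thresholding
        Xf[:, :, k] = (U * s_shr) @ Vh
    return np.real(np.fft.ifft(Xf, axis=2))
```

With a uniform weight this reduces to the ordinary tensor nuclear norm proximal step; the inverse-magnitude weighting is what makes the penalty treat singular values differently.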


2021 ◽  
Author(s):  
Shuqin Wang ◽  
Yongyong Chen ◽  
Yigang Cen ◽  
Linna Zhang ◽  
Viacheslav Voronin

Author(s):  
Yongyong Chen ◽  
Xiaolin Xiao ◽  
Chong Peng ◽  
Guangming Lu ◽  
Yicong Zhou

2017 ◽  
Vol 127 ◽  
pp. 46-57 ◽  
Author(s):  
Jie Chen ◽  
Hua Mao ◽  
Yongsheng Sang ◽  
Zhang Yi

2021 ◽  
Vol 12 (4) ◽  
pp. 1-25
Author(s):  
Stanley Ebhohimhen Abhadiomhen ◽  
Zhiyang Wang ◽  
Xiangjun Shen ◽  
Jianping Fan

Multi-view subspace clustering (MVSC) finds a shared structure in the latent low-dimensional subspaces of multi-view data to enhance clustering performance. Nonetheless, we observe that most existing MVSC methods neglect the diversity in multi-view data: they consider only the common knowledge to find a shared structure, either directly or by merging the similarity matrices learned for each view. In the presence of noise, this predefined shared structure becomes a biased representation of the different views. In this article, we therefore propose an MVSC method based on coupled low-rank representation to address this limitation. Our method first obtains a low-rank representation for each view, constrained to be a linear combination of a view-specific representation and a shared representation, while simultaneously encouraging sparsity of the view-specific one. It then uses a k-block diagonal regularizer to learn a manifold recovery matrix for each view from the respective low-rank matrices, recovering more of the manifold structure from them. In this way, the proposed method can find an ideal similarity matrix by approximating the clustering projection matrices obtained from the recovered structures. This similarity matrix encodes our clustering structure with exactly k connected components, obtained by imposing a rank constraint on the relaxed Laplacian matrix of the similarity matrix, which avoids spectral post-processing of the low-dimensional embedding matrix. The core of our idea is to introduce dynamic approximation into the low-rank representation so that the clustering structure and the shared representation guide each other toward cleaner low-rank matrices, which in turn yield a better clustering structure. Our approach is thus notably different from existing methods, in which the local manifold structure of the data is captured in advance. Extensive experiments on six benchmark datasets show that our method outperforms 10 similar state-of-the-art methods on six evaluation metrics.
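The rank constraint mentioned in the abstract rests on a standard spectral-graph fact: the multiplicity of the zero eigenvalue of a graph Laplacian equals the number of connected components, so forcing rank(L) = n - k forces exactly k clusters. A minimal sketch of that check (a hypothetical helper, not the paper's code):

```python
import numpy as np

def num_connected_components(S, tol=1e-8):
    """Count connected components of a similarity graph via its Laplacian.

    The number of (numerically) zero eigenvalues of the unnormalized
    Laplacian of a symmetric affinity matrix equals the number of
    connected components of the induced graph.
    """
    W = (np.abs(S) + np.abs(S).T) / 2  # symmetrize the affinities
    L = np.diag(W.sum(axis=1)) - W     # unnormalized graph Laplacian
    eigvals = np.linalg.eigvalsh(L)
    return int(np.sum(eigvals < tol))
```

When the learned similarity matrix satisfies the rank constraint, this count equals k and the clusters can be read off the components directly, with no k-means step on an embedding.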

