LogDet Rank Minimization with Application to Subspace Clustering

2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Zhao Kang ◽  
Chong Peng ◽  
Jie Cheng ◽  
Qiang Cheng

Low-rank matrices are desired in many machine learning and computer vision problems. Most recent studies use the nuclear norm as a convex surrogate of the rank operator. However, the nuclear norm simply adds all singular values together, so the rank may not be well approximated in practical problems. In this paper, we propose using a log-determinant (LogDet) function as a smooth and closer, though nonconvex, approximation to rank for obtaining a low-rank representation in subspace clustering. An augmented Lagrange multiplier strategy is applied to iteratively optimize the LogDet-based nonconvex objective function on potentially large-scale data. By using the angular information of the principal directions of the resulting low-rank representation, an affinity graph matrix is constructed for spectral clustering. Experimental results on motion segmentation and face clustering data demonstrate that the proposed method often outperforms state-of-the-art subspace clustering algorithms.
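The gap between the two surrogates can be seen numerically. Below is a minimal NumPy sketch (not the paper's implementation) comparing the nuclear norm with one common LogDet surrogate of the form sum_i log(1 + sigma_i^2/delta), where delta is an assumed smoothing parameter: for a low-rank matrix the LogDet value grows slowly with the magnitude of the singular values, while the nuclear norm scales linearly with them.

```python
import numpy as np

def nuclear_norm(X):
    # Sum of singular values: the convex surrogate of rank.
    return np.linalg.svd(X, compute_uv=False).sum()

def logdet_rank(X, delta=1.0):
    # LogDet surrogate: sum_i log(1 + sigma_i^2 / delta).
    # Large singular values are damped by the log, so the surrogate
    # tracks rank more closely than the nuclear norm does.
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(np.log1p(s ** 2 / delta))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank 3
print(np.linalg.matrix_rank(X), nuclear_norm(X), logdet_rank(X))
```

Scaling X by a large constant inflates the nuclear norm proportionally but changes the LogDet value only logarithmically, which is the sense in which it is a "closer" approximation to rank.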

2020 ◽  
Vol 34 (04) ◽  
pp. 3930-3937 ◽  
Author(s):  
Quanxue Gao ◽  
Wei Xia ◽  
Zhizhen Wan ◽  
Deyan Xie ◽  
Pu Zhang

Low-rank representation based on tensor Singular Value Decomposition (t-SVD) has achieved impressive results for multi-view subspace clustering, but it does not deal well with noise and illumination changes embedded in multi-view data. The main reason is that all singular values contribute equally to the tensor nuclear norm based on t-SVD, which does not make sense in the presence of noise and illumination changes. To improve robustness and clustering performance, we study the weighted tensor nuclear norm based on t-SVD and develop an efficient algorithm to optimize the weighted tensor nuclear norm minimization (WTNNM) problem. We further apply the WTNNM algorithm to multi-view subspace clustering by exploiting the high-order correlations embedded in different views. Extensive experimental results reveal that our WTNNM method outperforms several state-of-the-art multi-view subspace clustering methods.
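The core operation behind weighted tensor nuclear norm minimization is a weighted singular value shrinkage, applied slice-by-slice in the Fourier domain in t-SVD-based solvers. The matrix-level building block can be sketched as follows (a NumPy illustration under assumed non-decreasing weights, not the authors' full t-SVD solver):

```python
import numpy as np

def weighted_svt(M, thresholds):
    # Proximal step of the weighted nuclear norm: shrink each singular
    # value by its own threshold w_i * tau, so large (informative)
    # singular values can be penalized less than noise-related ones.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - thresholds, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

M = np.diag([5.0, 3.0, 1.0])
# A smaller threshold on the leading singular value preserves it.
out = weighted_svt(M, np.array([0.5, 1.0, 2.0]))
print(np.round(np.diag(out), 2))  # [4.5, 2.0, 0.0]
```

With weights that are non-decreasing in the singular value index, this shrinkage is the exact proximal operator of the weighted nuclear norm; unequal weights are exactly what lets the model keep dominant structure while suppressing components dominated by noise or illumination change.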


2021 ◽  
Vol 15 ◽  
pp. 174830262199962
Author(s):  
Cong-Zhe You ◽  
Zhen-Qiu Shu ◽  
Hong-Hui Fan

Low-Rank Representation (LRR) and Sparse Subspace Clustering (SSC) are considered hot topics among subspace clustering algorithms. SSC induces sparsity by minimizing the l1-norm of the representation matrix, while LRR promotes a low-rank structure by minimizing the nuclear norm. In this paper, we consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise. We pose this as a non-convex optimization problem in which the goal is to decompose the corrupted data matrix into the sum of a clean, self-expressive dictionary and a noise matrix. We propose a new algorithm, named Low-Rank and Sparse Subspace Clustering with a Clean dictionary (LRS2C2), which combines SSC and LRR, since the representation is often both sparse and low-rank. The effectiveness of the proposed algorithm is demonstrated through experiments on motion segmentation and image clustering.
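Since the target representation is both sparse and low-rank, solvers for this kind of objective alternate between the proximal operators of the l1 norm and the nuclear norm. A minimal NumPy sketch of these two building blocks (illustrative only, not the LRS2C2 algorithm itself):

```python
import numpy as np

def soft_threshold(C, tau):
    # Proximal operator of tau * ||C||_1 (the SSC-style sparsity step).
    return np.sign(C) * np.maximum(np.abs(C) - tau, 0.0)

def svt(C, tau):
    # Singular value thresholding: proximal operator of tau * ||C||_*
    # (the LRR-style low-rank step).
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

C = np.array([[3.0, 0.2], [0.1, 2.0]])
print(soft_threshold(C, 0.5))  # small off-diagonal entries are zeroed
```

An alternating scheme applies these steps to the self-expressive coefficient matrix inside an augmented Lagrangian loop; the soft-threshold zeroes weak inter-subspace connections while SVT keeps the representation low-rank.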


2021 ◽  
Vol 16 ◽  
pp. 155892502110084
Author(s):  
Chunlei Li ◽  
Ban Jiang ◽  
Zhoufeng Liu ◽  
Yan Dong ◽  
Shuili Tang ◽  
...  

In textile production, automatic defect detection plays a key role in controlling product quality. Because fabric images have complex texture features, traditional detection methods adapt poorly and have low detection accuracy. The low-rank representation model can divide an image into a low-rank background and a sparse object, and has proven suitable for fabric defect detection. However, how to effectively characterize fabric texture remains problematic in this kind of method. Moreover, most such methods adopt a nuclear norm optimization algorithm to solve the low-rank model, which treats every singular value in the matrix equally; in fabric defect detection, however, different singular values of the feature matrix carry different information. In this paper, we propose a novel fabric defect detection method based on deep-handcrafted features and a weighted low-rank matrix representation. Feature characterization is effectively improved by fusing the global deep feature extracted by a VGG network with handcrafted low-level features. Moreover, a weighted low-rank representation model is constructed that treats matrix singular values differently through different weights, so the most distinguishing features of the fabric texture are preserved, which efficiently highlights the defect and suppresses the background. Qualitative and quantitative experiments on two public datasets show that our proposed method outperforms state-of-the-art methods.
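The background/defect split described above is typically computed by alternating a singular value shrinkage on the background with a soft-thresholding of the residual. A naive, unweighted NumPy sketch of this idea (illustrative only; the paper's model applies weighted shrinkage on fused VGG and handcrafted features):

```python
import numpy as np

def lowrank_sparse_split(D, lam=0.1, tau=1.0, iters=50):
    # Alternate shrinkage steps: L captures the repetitive fabric
    # background (low rank), S captures localized defects (sparse).
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # low-rank step
        S = np.sign(D - L) * np.maximum(np.abs(D - L) - lam, 0.0)  # sparse step
    return L, S

bg = np.outer(np.ones(20), np.linspace(0.0, 1.0, 20))   # rank-1 "texture"
D = bg.copy()
D[5, 5] += 5.0                                          # one defect pixel
L, S = lowrank_sparse_split(D)
print(np.unravel_index(np.abs(S).argmax(), S.shape))    # defect location
```

Because a single bright pixel cannot be absorbed by a low-rank background, it lands in the sparse component S, which is then thresholded to produce the defect map.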


2019 ◽  
Vol 340 ◽  
pp. 211-221 ◽  
Author(s):  
Xian Fang ◽  
Zhixin Tie ◽  
Feiyang Song ◽  
Jialiang Yang

2021 ◽  
pp. 1-14
Author(s):  
Qingjiang Xiao ◽  
Shiqiang Du ◽  
Yao Yu ◽  
Yixuan Huang ◽  
Jinmei Song

In recent years, the tensor nuclear norm based on tensor Singular Value Decomposition (t-SVD) has achieved remarkable progress in multi-view subspace clustering. However, most existing clustering methods still have the following shortcomings: (a) treating all singular values equally has no justification in practical applications; (b) they often ignore that real-world data samples usually lie in multiple nonlinear subspaces. To address these shortcomings, we propose a hyper-Laplacian regularized multi-view subspace clustering model that joins representation learning with a weighted tensor nuclear norm constraint, named JWHMSC. Specifically, in the JWHMSC model, the subspace representation matrices of all views are first stacked into a low-rank constrained tensor to capture the global structure between different views. Second, hyper-Laplacian graph regularization is adopted to preserve the local geometric structure embedded in the high-dimensional ambient space. Third, considering the prior information of singular values, the weighted tensor nuclear norm (WTNN) based on t-SVD is introduced to treat singular values differently, which allows JWHMSC to capture the sample distribution of classification information more accurately. Finally, representation learning, the WTNN constraint, and the hyper-Laplacian graph regularization constraint are integrated into one framework to obtain the overall optimal solution of the algorithm. Experimental results on eight benchmark datasets show the good performance of the proposed JWHMSC method in multi-view clustering compared with state-of-the-art methods.
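The hyper-Laplacian regularizer is built from a hypergraph in which each sample and its nearest neighbours form one hyperedge. A minimal NumPy sketch of the standard normalized hypergraph Laplacian L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} (an illustration of the usual construction, not the exact JWHMSC code):

```python
import numpy as np

def hyper_laplacian(X, k=3):
    # One hyperedge per sample: the sample plus its k nearest neighbours.
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    H = np.zeros((n, n))                                 # vertex x hyperedge incidence
    for j in range(n):
        H[np.argsort(d2[j])[:k + 1], j] = 1.0
    w = np.ones(n)                                       # unit hyperedge weights
    Dv = H @ w                                           # vertex degrees
    De = H.sum(axis=0)                                   # hyperedge degrees
    inv_sqrt_Dv = 1.0 / np.sqrt(Dv)
    Theta = (inv_sqrt_Dv[:, None] * H) @ np.diag(w / De) @ H.T * inv_sqrt_Dv[None, :]
    return np.eye(n) - Theta                             # normalized hypergraph Laplacian

rng = np.random.default_rng(0)
L = hyper_laplacian(rng.standard_normal((12, 2)))
print(np.linalg.eigvalsh(L)[0])   # smallest eigenvalue is ~0
```

The regularization term trace(Z L Z^T) on a representation Z then penalizes representations that vary within a hyperedge, preserving the local geometry that a pairwise graph Laplacian can miss.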


2021 ◽  
Vol 15 ◽  
pp. 174830262098369
Author(s):  
Cong-Zhe You ◽  
Zhen-Qiu Shu ◽  
Hong-Hui Fan

Low-Rank Representation (LRR) and Sparse Subspace Clustering (SSC) are considered as the hot topics of subspace clustering algorithms. SSC induces the sparsity through minimizing the [Formula: see text]-norm of the data matrix while LRR promotes a low-rank structure through minimizing the nuclear norm. In this paper, considering the problem of fitting a union of subspace to a collection of data points drawn from one more subspaces and corrupted by noise, we pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise. We propose a new algorithm, named Low-Rank and Sparse Subspace Clustering with a Clean dictionary (LRS2C2), by combining SSC and LRR, as the representation is often both sparse and low-rank. The effectiveness of the proposed algorithm is demonstrated through experiments on motion segmentation and image clustering.


2017 ◽  
Vol 127 ◽  
pp. 46-57 ◽  
Author(s):  
Jie Chen ◽  
Hua Mao ◽  
Yongsheng Sang ◽  
Zhang Yi

2021 ◽  
Vol 12 (4) ◽  
pp. 1-25
Author(s):  
Stanley Ebhohimhen Abhadiomhen ◽  
Zhiyang Wang ◽  
Xiangjun Shen ◽  
Jianping Fan

Multi-view subspace clustering (MVSC) finds a shared structure in latent low-dimensional subspaces of multi-view data to enhance clustering performance. Nonetheless, we observe that most existing MVSC methods neglect the diversity in multi-view data by considering only the common knowledge to find a shared structure, either directly or by merging different similarity matrices learned for each view. In the presence of noise, this predefined shared structure becomes a biased representation of the different views. Thus, in this article, we propose an MVSC method based on coupled low-rank representation to address this limitation. Our method first obtains a low-rank representation for each view, constrained to be a linear combination of the view-specific representation and the shared representation, while simultaneously encouraging the sparsity of the view-specific one. Then, it uses the k-block diagonal regularizer to learn a manifold recovery matrix for each view through the respective low-rank matrices, recovering more manifold structure from them. In this way, the proposed method can find an ideal similarity matrix by approximating the clustering projection matrices obtained from the recovered structures. This similarity matrix denotes our clustering structure with exactly k connected components, obtained by applying a rank constraint on the similarity matrix's relaxed Laplacian matrix to avoid spectral post-processing of the low-dimensional embedding matrix. The core of our idea is to introduce dynamic approximation into the low-rank representation so that the clustering structure and the shared representation guide each other to learn cleaner low-rank matrices, which lead to a better clustering structure. Our approach is therefore notably different from existing methods, in which the local manifold structure of the data is captured in advance. Extensive experiments on six benchmark datasets show that our method outperforms 10 similar state-of-the-art methods on six evaluation metrics.
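The rank constraint on the relaxed Laplacian exploits a classical spectral-graph fact: a similarity graph has exactly k connected components if and only if its Laplacian has exactly k zero eigenvalues, i.e. rank(L) = n - k. A small NumPy check of this property (illustrative; not the coupled low-rank optimization itself):

```python
import numpy as np

def n_components(W, tol=1e-9):
    # Number of connected components = multiplicity of the zero
    # eigenvalue of the graph Laplacian L = D - W.
    W = (np.abs(W) + np.abs(W).T) / 2            # symmetrize the similarity
    L = np.diag(W.sum(axis=1)) - W
    return int((np.linalg.eigvalsh(L) < tol).sum())

# Block-diagonal similarity with two blocks -> two components.
W = np.zeros((4, 4))
W[0, 1] = W[1, 0] = 1.0
W[2, 3] = W[3, 2] = 1.0
print(n_components(W))  # 2
```

Enforcing rank(L) = n - k during learning therefore makes the similarity matrix itself k-block diagonal, so cluster labels can be read off its connected components without any spectral post-processing.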


2020 ◽  
Author(s):  
Sajad Fathi Hafshejani ◽  
Saeed Vahidian ◽  
Zahra Moaberfard ◽  
Reza Alikhani ◽  
Bill Lin

Low-rank matrix factorization problems such as nonnegative matrix factorization (NMF) can be categorized as clustering or dimension reduction techniques. The latter denotes techniques designed to find representations of a high-dimensional dataset in a lower-dimensional manifold without significant loss of information. If such a representation exists, the features ought to contain the most relevant features of the dataset. Many linear dimensionality reduction techniques can be formulated as a matrix factorization. In this paper, we combine the conjugate gradient (CG) method with the Barzilai-Borwein (BB) gradient method and propose a BB-scaled CG method for NMF problems. The new method does not require computing or storing matrices associated with the Hessian of the objective function. Moreover, adopting a suitable BB step size along with a proper nonmonotone strategy, governed by the convex combination parameter $\eta_k$, yields a new algorithm that can significantly improve CPU time, efficiency, and the number of function evaluations. A convergence result is established, and numerical comparisons on both synthetic and real-world datasets show that the proposed method is efficient compared with existing methods and demonstrate the superiority of our algorithms.
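A minimal projected-gradient NMF solver with a Barzilai-Borwein step can be sketched as follows. This is an illustrative simplification (no CG direction or nonmonotone line search, and the step-size clipping bounds are ad hoc), not the authors' algorithm; it only shows how the BB step alpha = <s, s> / <s, y> replaces any Hessian computation.

```python
import numpy as np

def bb_step(x_new, x_old, g_new, g_old, fallback=1e-3):
    # Barzilai-Borwein step: alpha = <s, s> / <s, y> with
    # s = x_new - x_old and y = g_new - g_old. No Hessian is formed.
    s = (x_new - x_old).ravel()
    y = (g_new - g_old).ravel()
    sy = s @ y
    return float(np.clip((s @ s) / sy, 1e-6, 1e2)) if sy > 1e-12 else fallback

def nmf_bb(V, r, iters=300, seed=0):
    # Alternating projected gradient for min ||V - WH||_F^2, W, H >= 0.
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r))
    H = rng.random((r, V.shape[1]))
    aW = aH = 1e-3
    for _ in range(iters):
        gW = (W @ H - V) @ H.T
        W_new = np.maximum(W - aW * gW, 0.0)        # project onto W >= 0
        gW_new = (W_new @ H - V) @ H.T
        aW, W = bb_step(W_new, W, gW_new, gW), W_new
        gH = W.T @ (W @ H - V)
        H_new = np.maximum(H - aH * gH, 0.0)        # project onto H >= 0
        gH_new = W.T @ (W @ H_new - V)
        aH, H = bb_step(H_new, H, gH_new, gH), H_new
    return W, H

rng = np.random.default_rng(1)
V = rng.random((30, 4)) @ rng.random((4, 20))       # nonnegative, rank-4 data
W, H = nmf_bb(V, 4)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # small relative error
```

For the quadratic subproblem in each block the BB ratio lies between the inverse of the largest and smallest curvatures, which is why a cheap two-vector formula can stand in for Hessian information.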

