Affinity Matrix Learning through Subspace Clustering for Tolling Zone Definition

Author(s): Antonis F. Lentzakis, Ravi Seshadri, Moshe Ben-Akiva
2017, Vol 2017, pp. 1-9

Author(s): Binbin Zhang, Weiwei Wang, Xiangchu Feng

Subspace clustering aims to group a set of data points drawn from a union of subspaces according to the subspaces from which they were drawn. It has become a popular method for recovering the low-dimensional structure underlying a high-dimensional dataset. State-of-the-art methods construct an affinity matrix from a self-representation of the dataset and then apply spectral clustering to obtain the final result. These methods show that the sparsity and grouping effect of the affinity matrix are important for recovering the low-dimensional structure. In this work, we propose a weighted sparse penalty and a weighted grouping-effect penalty for modeling the self-representation of data points. Experimental results on the Extended Yale B, USPS, and Berkeley 500 image segmentation datasets show that the proposed model is more effective than state-of-the-art methods at revealing the subspace structure underlying high-dimensional datasets.
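For context, here is a minimal sketch of the generic self-representation pipeline this abstract builds on: code each point over the remaining points with a plain sparse ($\ell_1$) penalty, symmetrize the coefficients into an affinity matrix, and hand it to spectral clustering. The paper's weighted sparse and weighted grouping-effect penalties are not reproduced here; the `alpha` value and the toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def self_representation_affinity(X, alpha=0.01):
    """Sparse self-representation: each column of X is coded over the others.
    X: (d, n) data matrix, one sample per column."""
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        idx = np.arange(n) != i               # exclude the point itself
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(X[:, idx], X[:, i])
        C[idx, i] = lasso.coef_
    return np.abs(C) + np.abs(C).T            # symmetric affinity matrix

# Toy example: points from two 2-D subspaces of R^5.
rng = np.random.default_rng(0)
U1, U2 = rng.standard_normal((5, 2)), rng.standard_normal((5, 2))
X = np.hstack([U1 @ rng.standard_normal((2, 30)),
               U2 @ rng.standard_normal((2, 30))])
W = self_representation_affinity(X)
labels = SpectralClustering(n_clusters=2, affinity='precomputed',
                            random_state=0).fit_predict(W)
```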


2017, Vol 89, pp. 67-72
Author(s): Daming Shi, Jun Wang, Dansong Cheng, Junbin Gao

Author(s): Yuanyuan Chen, Lei Zhang, Zhang Yi

Low rank representation (LRR) is widely used to construct a good affinity matrix for clustering data drawn from a union of multiple linear subspaces. However, the LRR problem is not easy to solve in closed form, and the augmented Lagrange multiplier (ALM) method is usually applied instead; ALM takes a relatively long time on real-world data. To solve the LRR problem efficiently, we propose an efficient low rank representation (eLRR) algorithm. Given a contaminated dataset, we propose a novel way to solve for the LRR of the data: we establish a theorem that directly gives an approximate solution to our LRR optimization problem. Thus, we can construct a good affinity matrix for subspace clustering. Experimental results on several public databases verify the efficiency and effectiveness of our method.
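The eLRR theorem itself is not stated in the abstract, but the flavor of a closed-form shortcut can be seen in the classical noiseless LRR result: for $\min_C \|C\|_*$ s.t. $X = XC$, the minimizer is $C^* = V_r V_r^\top$, where $V_r$ collects the right singular vectors of $X$. The sketch below uses that fact, with an energy-based truncation as an assumed heuristic for contaminated data; it is not the paper's algorithm.

```python
import numpy as np

def lrr_closed_form(X, energy=0.95):
    """Closed-form solution of noiseless LRR (min ||C||_* s.t. X = XC):
    C* = V_r V_r^T, with V_r the leading right singular vectors of X.
    Truncating by retained spectral energy is a cheap approximation for
    contaminated data (a stand-in for the eLRR theorem, which the
    abstract does not spell out)."""
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    keep = np.cumsum(s**2) / np.sum(s**2) <= energy
    keep[0] = True                      # always keep at least one component
    Vr = Vt[keep].T
    return Vr @ Vr.T                    # (n, n) low rank representation

# The affinity matrix for spectral clustering is then, as usual,
# W = np.abs(C) + np.abs(C).T.
```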


2021, Vol 2021, pp. 1-13
Author(s): Wenjuan Zhang, Xiangchu Feng, Feng Xiao, Yunmei Chen

Most sparse or low-rank subspace clustering methods treat the construction of the affinity matrix and the final clustering as two independent steps. We propose to integrate the affinity matrix and the data labels into a single minimization model, so that they interact with and promote each other, ultimately improving clustering performance. Furthermore, a block diagonal representation matrix is the structure most preferred for subspace clustering. We define a folded concave penalty (FCP) based norm to approximate the rank function and apply it to the combination of the label matrix and the representation vector; this FCP-based regularization term effectively enforces the block diagonal structure of the representation matrix. We minimize the difference between the $\ell_1$ norm and the $\ell_2$ norm of the label vector so that it has only one nonzero element, since each data point belongs to exactly one subspace. The index of that nonzero element identifies the subspace from which the point is drawn and is determined by a variant of graph Laplacian regularization. Experiments on several popular datasets show that our method achieves better clustering results than several state-of-the-art methods.
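The $\ell_1 - \ell_2$ device is easy to verify numerically: $\|y\|_1 \ge \|y\|_2$ always holds, with equality exactly when $y$ has at most one nonzero entry, so penalizing the difference drives label vectors toward one-hot form. A small check (the example vectors are arbitrary):

```python
import numpy as np

def l1_minus_l2(y):
    """||y||_1 - ||y||_2 >= 0, with equality iff y has at most one
    nonzero entry -- the property the abstract exploits to force
    one-hot label vectors."""
    return np.abs(y).sum() - np.linalg.norm(y)

one_hot = np.array([0.0, 0.0, 1.0, 0.0])
soft    = np.array([0.2, 0.1, 0.6, 0.1])
print(l1_minus_l2(one_hot))   # 0.0  (exactly one nonzero entry)
print(l1_minus_l2(soft))      # ~0.352, strictly positive
```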


Author(s): John Lipor, David Hong, Yan Shuo Tan, Laura Balzano

Subspace clustering is the unsupervised grouping of points lying near a union of low-dimensional linear subspaces. Algorithms based directly on geometric properties of such data tend to provide poor empirical performance, lack theoretical guarantees, or depend heavily on their initialization. We present a novel geometric approach to the subspace clustering problem that leverages ensembles of the $K$-subspaces (KSS) algorithm via the evidence accumulation clustering framework. Our algorithm, referred to as ensemble $K$-subspaces (EKSS), forms a co-association matrix whose $(i,j)$th entry is the number of times points $i$ and $j$ are clustered together over several runs of KSS with random initializations. We prove general recovery guarantees for any algorithm that forms an affinity matrix with entries close to a monotonic transformation of pairwise absolute inner products. We then show that a specific instance of EKSS produces an affinity matrix with entries of this form, and hence our proposed algorithm can provably recover subspaces under conditions similar to those of state-of-the-art algorithms. This is, to the best of our knowledge, the first recovery guarantee for evidence accumulation clustering and for KSS variants. We show on synthetic data that our method performs well in the traditionally challenging settings of subspaces with large intersection, subspaces with small principal angles, and noisy data. Finally, we evaluate our algorithm on six common benchmark datasets and show that, unlike existing methods, EKSS achieves excellent empirical performance with both a small and a large number of points per subspace.
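A compact sketch of the EKSS construction described above, assuming a basic KSS inner loop (random orthonormal bases, assignment by projection norm, SVD refit); the thresholding of the co-association matrix used in the paper's full algorithm is omitted, and the parameter names are illustrative.

```python
import numpy as np

def kss(X, K, d, n_iters=10, rng=None):
    """One run of K-subspaces (KSS) with random initialization.
    X: (D, n) data matrix; K subspaces of dimension d."""
    rng = rng or np.random.default_rng()
    D, n = X.shape
    # Random orthonormal bases via QR of Gaussian matrices.
    bases = [np.linalg.qr(rng.standard_normal((D, d)))[0] for _ in range(K)]
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iters):
        # Assign each point to the subspace with the largest projection norm.
        scores = np.stack([np.linalg.norm(U.T @ X, axis=0) for U in bases])
        labels = scores.argmax(axis=0)
        # Refit each basis from its assigned points via truncated SVD.
        for k in range(K):
            Xk = X[:, labels == k]
            if Xk.shape[1] >= d:
                bases[k] = np.linalg.svd(Xk, full_matrices=False)[0][:, :d]
    return labels

def ekss_affinity(X, K, d, B=50, seed=0):
    """Co-association matrix: entry (i, j) is the fraction of the B
    KSS runs in which points i and j land in the same cluster."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    A = np.zeros((n, n))
    for _ in range(B):
        labels = kss(X, K, d, rng=rng)
        A += (labels[:, None] == labels[None, :])
    return A / B
```

The resulting matrix can be fed to spectral clustering with a precomputed affinity, exactly as in the first sketch.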


2018, Vol 355 (8), pp. 3795-3811
Author(s): Ming Yin, Zongze Wu, Deyu Zeng, Panshuo Li, Shengli Xie

Author(s): Boyue Wang, Yongli Hu, Junbin Gao, Yanfeng Sun, Baocai Yin

Inspired by the success of low rank representation and sparse subspace clustering, recent methods impose low rank and sparse constraints on the affinity matrix simultaneously to improve performance. However, doing so amounts to a trade-off between the two constraints. In this paper, we propose a novel Cascaded Low Rank and Sparse Representation (CLRSR) method for subspace clustering, which seeks a sparse representation on top of a previously learned low rank latent representation. To make the proposed method suitable for multi-dimensional or image-set data, we extend CLRSR to Grassmann manifolds. An effective solution and its convergence analysis are also provided. Excellent experimental results demonstrate that the proposed method is more robust than other state-of-the-art clustering methods on image-set data.
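As a rough illustration of the cascade (Euclidean case only; the Grassmann-manifold extension is beyond a sketch), one can first reduce the data to a low rank latent representation and then learn a sparse self-representation on top of it. The SVD truncation and Lasso below are assumptions standing in for the paper's actual formulation:

```python
import numpy as np
from sklearn.linear_model import Lasso

def cascaded_lr_sparse(X, energy=0.95, alpha=0.01):
    """Cascade sketch: stage 1 builds a low rank latent version of X
    (simple SVD truncation here, not the paper's formulation); stage 2
    learns a sparse self-representation of that latent data."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1   # components kept
    L = (U[:, :r] * s[:r]) @ Vt[:r]             # low rank latent data
    n = L.shape[1]
    C = np.zeros((n, n))
    for i in range(n):                          # sparse coding on L
        idx = np.arange(n) != i
        C[idx, i] = Lasso(alpha=alpha, max_iter=5000).fit(
            L[:, idx], L[:, i]).coef_
    return np.abs(C) + np.abs(C).T              # affinity for spectral clustering
```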

