Discriminative Subspace Learning for Cross-view Classification with Simultaneous Local and Global Alignment

Author(s):  
Ao Li ◽  
Yu Ding ◽  
Deyun Chen ◽  
Guanglu Sun ◽  
Hailong Jiang
Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-14

Recently, cross-view feature learning has become a hot topic in machine learning owing to the wide application of multiview data. Nevertheless, the distribution discrepancy across views means that instances from the same class but different views can lie farther apart than instances from different classes within the same view. To address this problem, in this paper, we develop a novel cross-view discriminative feature subspace learning method inspired by layered visual perception in humans. First, the proposed method uses a separable low-rank self-representation model to disentangle the class and view structure layers, respectively. Second, a local alignment is constructed with two designed graphs to guide the subspace decomposition in a pairwise way. Finally, a global discriminative constraint on the distribution center of each view is imposed to further improve the alignment. Extensive cross-view classification experiments on several public datasets show that the proposed method is more effective than existing feature learning methods.
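The abstract does not spell out how the separable low-rank self-representation model is solved. As background only, a minimal sketch of singular-value thresholding (SVT), the standard proximal step used in nuclear-norm-regularized low-rank subproblems of this kind (the function name `svt` and the toy data are illustrative, not from the paper):

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: the proximal operator of
    tau * nuclear norm. Shrinks singular values toward zero,
    producing a low-rank estimate of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# Toy check: a rank-1 matrix plus small noise is driven back to rank 1,
# because the noise singular values fall below the threshold tau.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(20), rng.standard_normal(15)
X = np.outer(u, v) + 0.01 * rng.standard_normal((20, 15))
Z = svt(X, tau=1.0)
print(np.linalg.matrix_rank(Z, tol=1e-6))  # → 1
```

The same proximal step appears inside ADMM-style solvers for most low-rank representation models; the paper's separable two-layer variant would apply it per structure layer.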


2020 ◽  
Vol 50 (5) ◽  
pp. 2138-2151 ◽  
Author(s):  
Dong Zhang ◽  
Yunlian Sun ◽  
Qiaolin Ye ◽  
Jinhui Tang

Author(s):  
Haoliang Yuan ◽  
Loi Lei Lai

Subspace learning (SL) is an important technique for extracting discriminative features for hyperspectral image (HSI) classification. However, in practical applications, some acquired HSIs are contaminated with considerable noise during the imaging process. In this case, most existing SL methods yield limited performance in the subsequent classification procedure. In this paper, we propose a robust subspace learning (RSL) method, which utilizes a local linear regression and a supervised regularization function simultaneously. To effectively incorporate the spatial information, a local linear regression is used to recover the data from the noisy data over a spatial neighborhood set. The recovered data not only reduce the noise effect but also encode spectral-spatial information. To utilize the label information, a supervised regularization function based on the Fisher criterion is used to learn a discriminative subspace from the recovered data. To optimize RSL, we develop an efficient iterative algorithm. Extensive experimental results demonstrate that RSL greatly outperforms many existing SL methods when the HSI data contain considerable noise.
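The Fisher-criterion regularizer the abstract refers to reduces, in its plain form, to an LDA-style generalized eigenproblem on within-class and between-class scatter. A minimal sketch under that assumption (the function name, the ridge term, and the synthetic two-class data are illustrative, not the paper's RSL formulation):

```python
import numpy as np

def fisher_subspace(X, y, dim):
    """Projection maximizing between-class over within-class scatter
    (the Fisher criterion). X: (n_samples, n_features), y: int labels."""
    mu = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))  # within-class scatter
    Sb = np.zeros_like(Sw)                   # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mu)[:, None]
        Sb += len(Xc) * (d @ d.T)
    # Solve the Sw^{-1} Sb eigenproblem; a small ridge keeps Sw invertible.
    evals, evecs = np.linalg.eig(
        np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), Sb))
    order = np.argsort(-evals.real)
    return evecs[:, order[:dim]].real

# Two Gaussian classes separated along the first coordinate.
rng = np.random.default_rng(1)
X0 = rng.standard_normal((30, 5)) + np.array([3.0, 0, 0, 0, 0])
X1 = rng.standard_normal((30, 5)) - np.array([3.0, 0, 0, 0, 0])
X = np.vstack([X0, X1])
y = np.array([0] * 30 + [1] * 30)
W = fisher_subspace(X, y, dim=1)
proj = X @ W  # class means are well separated along the learned direction
```

RSL applies this kind of criterion to the regression-recovered data rather than the raw noisy pixels, which is what couples the spatial denoising and the supervised subspace steps.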


Author(s):  
Kan Xie ◽  
Wei Liu ◽  
Yue Lai ◽  
Weijun Li

Subspace learning has been widely utilized to extract discriminative features for classification tasks such as face recognition, even when facial images are occluded or corrupted. However, the performance of most existing methods degrades significantly when the data are contaminated with severe noise, especially when the magnitude of the gross corruption can be arbitrarily large. To this end, in this paper, a novel discriminative subspace learning method is proposed based on the well-known low-rank representation (LRR). Specifically, a discriminant low-rank representation and the projection subspace are learned simultaneously, in a supervised way. To avoid the deviation from the original solution caused by relaxation, we adopt the Schatten [Formula: see text]-norm and [Formula: see text]-norm instead of the nuclear norm and [Formula: see text]-norm, respectively. Experimental results on two well-known databases, i.e. PIE and ORL, demonstrate that the proposed method achieves better classification scores than state-of-the-art approaches.
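The exponent in the Schatten norm is elided in this extract, so the sketch below treats it as a free parameter `p`. The Schatten p-norm is simply the l_p norm of the singular values; p=1 recovers the nuclear norm, and p&lt;1 (a quasi-norm) penalizes rank more tightly, which is the usual motivation for replacing the nuclear-norm relaxation:

```python
import numpy as np

def schatten_norm(X, p):
    """Schatten p-norm: the l_p norm of the singular values of X.
    p=1 is the nuclear norm; smaller p approximates rank more tightly."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s ** p) ** (1.0 / p))

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
nuc = schatten_norm(A, 1.0)  # coincides with the nuclear norm
print(np.isclose(nuc, np.linalg.norm(A, 'nuc')))  # → True
```

For p&lt;1 the resulting objective is nonconvex, which is why methods in this family pair it with iterative reweighting or generalized thresholding rather than plain SVT.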


2019 ◽  
Vol 26 (1) ◽  
pp. 154-158 ◽  
Author(s):  
A. Venkata Subramanyam ◽  
Vanshika Gupta ◽  
Rahul Ahuja

2021 ◽  
Author(s):  
Guowan Shao ◽  
Chunjiang Peng ◽  
Wenchu Ou ◽  
Kai Duan

Dimensionality reduction plays an important role in pattern recognition and computer vision. Recursive discriminative subspace learning with an L1-norm distance constraint (RDSL) was proposed to robustly extract features from contaminated data, using the L1-norm and slack variables to accomplish this goal. However, its performance may decline when many outliers are present. Moreover, the method ignores the global structure of the data. In this paper, we propose cutting L1-norm distance discriminant analysis with sample reconstruction (C-L1-DDA) to solve these two problems. We apply the cutting L1-norm to measure within-class and between-class distances, so that outliers are strongly suppressed. Moreover, we use the cutting squared L2-norm to measure reconstruction errors. In this way, outliers are constrained and the global structure of the data is approximately preserved. Finally, we give an alternating iterative algorithm to extract feature vectors. Experimental results on two publicly available real databases verify the feasibility and effectiveness of the proposed method.
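The abstract does not define the "cutting" operation; a common reading is a capped (clipped) distance, where each coordinate's contribution is bounded so that a single wild outlier coordinate cannot dominate. A minimal sketch under that assumption (the function name `capped_l1`, the cap `eps`, and the toy vectors are illustrative, not the paper's exact formulation):

```python
import numpy as np

def capped_l1(x, y, eps):
    """Capped ('cutting') L1 distance: per-coordinate absolute
    differences are clipped at eps before summing, bounding the
    influence any single outlier coordinate can have."""
    return float(np.minimum(np.abs(x - y), eps).sum())

a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 100.0])   # last coordinate is a gross outlier
print(capped_l1(a, b, eps=2.0))   # → 4.0  (1 + 1 + min(100, 2))
```

Plugging such a capped distance into the within-class and between-class terms of a discriminant objective is what makes the resulting criterion robust, at the cost of convexity, hence the alternating iterative solver the abstract mentions.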

