discriminative subspace
Recently Published Documents


TOTAL DOCUMENTS: 47 (FIVE YEARS: 17)

H-INDEX: 13 (FIVE YEARS: 2)

2021
Author(s): Guowan Shao, Chunjiang Peng, Wenchu Ou, Kai Duan

Linear discriminant analysis (LDA) is sensitive to noise, and its performance may therefore decline greatly. The recursive discriminative subspace learning method with an L1-norm distance constraint (RDSL) formulates LDA with the maximum margin criterion and becomes robust to noise by applying the L1-norm and slack variables. However, the method considers only inter-class separation and intra-class compactness, ignoring the intra-class manifold structure and the global structure of the data. In this paper, we present L1-norm distance discriminant analysis with multiple adaptive graphs and sample reconstruction (L1-DDA) to deal with this problem. We use multiple adaptive graphs to preserve the intra-class manifold structure and simultaneously apply a sample reconstruction technique to preserve the global structure of the data. Moreover, we use an alternating iterative technique to obtain the projection vectors. Experimental results on three real databases demonstrate that our method obtains better classification performance than RDSL.
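For context, the sketch below illustrates the classical maximum-margin-criterion projection that the abstract builds on, i.e., the leading eigenvectors of S_b - S_w. It is a minimal illustration only: the L1-norm distances, slack variables, adaptive graphs, and sample reconstruction terms of RDSL/L1-DDA are not reproduced, and all names (X, y, n_dims) are assumptions rather than taken from the paper.

# Minimal sketch (not the authors' code): maximum-margin-criterion discriminant
# analysis, J(W) = tr(W^T (S_b - S_w) W), which the abstract extends with
# L1-norm distances, adaptive graphs, and sample reconstruction.
import numpy as np

def mmc_projection(X, y, n_dims):
    """Return the top eigenvectors of S_b - S_w (maximum margin criterion)."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    S_b = np.zeros((d, d))
    S_w = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        diff = (mean_c - mean_all)[:, None]
        S_b += Xc.shape[0] * diff @ diff.T       # between-class scatter
        S_w += (Xc - mean_c).T @ (Xc - mean_c)   # within-class scatter
    # Eigenvectors with the largest eigenvalues of S_b - S_w span the subspace.
    eigvals, eigvecs = np.linalg.eigh(S_b - S_w)
    return eigvecs[:, np.argsort(eigvals)[::-1][:n_dims]]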


2021
Author(s): Guowan Shao, Chunjiang Peng, Wenchu Ou, Kai Duan

Dimensionality reduction plays an important role in pattern recognition and computer vision. Recursive discriminative subspace learning with an L1-norm distance constraint (RDSL) was proposed to robustly extract features from contaminated data, using the L1-norm and slack variables to accomplish this goal. However, its performance may decline when too many outliers are present. Moreover, the method ignores the global structure of the data. In this paper, we propose cutting L1-norm distance discriminant analysis with sample reconstruction (C-L1-DDA) to solve these two problems. We apply the cutting L1-norm to measure within-class and between-class distances, so that outliers are strongly suppressed. Moreover, we use the cutting squared L2-norm to measure reconstruction errors. In this way, outliers are constrained and the global structure of the data is approximately preserved. Finally, we give an alternating iterative algorithm to extract feature vectors. Experimental results on two publicly available real databases verify the feasibility and effectiveness of the proposed method.
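As a rough illustration of the "cutting" idea described above, the snippet below shows a capped L1 distance in which each sample's contribution is clipped at a threshold, so distant outliers cannot dominate the sum. The exact definition and solver used in C-L1-DDA are not given in the abstract; the function name and the threshold eps are assumptions for illustration only.

# Minimal sketch (an assumption, not the paper's definition): a capped
# ("cutting") L1 distance that limits each sample's contribution to at most eps.
import numpy as np

def capped_l1_distance(X, center, eps):
    """Sum of L1 distances from the rows of X to `center`, each capped at eps."""
    dists = np.abs(X - center).sum(axis=1)
    return np.minimum(dists, eps).sum()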


Author(s): Xiaobin Zhi, Tongjun Yu, Longtao Bi, Yalan Li

Complexity, 2020, Vol 2020, pp. 1-14
Author(s): Ao Li, Yu Ding, Xunjiang Zheng, Deyun Chen, Guanglu Sun, ...

Recently, cross-view feature learning has become a hot topic in machine learning due to the wide application of multiview data. However, the distribution discrepancy between views can cause instances of the same class from different views to lie farther apart than instances of different classes within the same view. To address this problem, in this paper we develop a novel cross-view discriminative feature subspace learning method inspired by layered visual perception in humans. First, the proposed method uses a separable low-rank self-representation model to disentangle the class and view structure layers. Second, a local alignment is constructed with two designed graphs to guide the subspace decomposition in a pairwise way. Finally, a global discriminative constraint on the distribution center of each view is imposed to further improve the alignment. Extensive cross-view classification experiments on several public datasets show that the proposed method is more effective than other existing feature learning methods.
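For readers unfamiliar with low-rank self-representation models, the sketch below shows singular value thresholding, the standard proximal step for the nuclear-norm terms with which such models are typically optimized. It is a generic building block offered as an assumption for illustration, not the authors' algorithm.

# Minimal sketch: singular value thresholding (SVT), the proximal operator of
# the nuclear norm commonly used when solving low-rank self-representation
# models. Offered as background, not as the method proposed in the paper.
import numpy as np

def svt(M, tau):
    """Shrink the singular values of M by tau (prox of tau * nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt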


2020, Vol 50 (5), pp. 2138-2151
Author(s): Dong Zhang, Yunlian Sun, Qiaolin Ye, Jinhui Tang

2020
Author(s): Yipeng Zhang, Yiming Zhang, Bo Du, Chao Zhang, Xiaoyang Guo, ...
