Deep subspace learning for expression recognition driven by a two-phase representation classifier

2019 ◽  
Vol 14 (3) ◽  
pp. 437-444
Author(s):  
Zhe Sun ◽  
Raymond Chiong ◽  
Zhengping Hu ◽  
Shufang Li

2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Ziqiang Wang ◽  
Xia Sun ◽  
Lijun Sun ◽  
Yuchun Huang

In many image classification applications, it is common to extract multiple visual features from different views to describe an image. Since different visual features have their own statistical properties and discriminative powers, the conventional way to handle such multiview data is to concatenate the feature vectors into a single new feature vector. However, this simple concatenation strategy not only ignores the complementary nature of the different views, but also suffers from the “curse of dimensionality.” To address this problem, we propose a novel multiview subspace learning algorithm, named multiview discriminative geometry preserving projection (MDGPP), for feature extraction and classification. MDGPP not only preserves the intraclass geometry and interclass discrimination information within each single view, but also exploits the complementary properties of different views to obtain a low-dimensional optimal consensus embedding through an alternating-optimization-based iterative algorithm. Experimental results on face recognition and facial expression recognition demonstrate the effectiveness of the proposed algorithm.
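The alternating-optimization idea can be sketched briefly. The code below is not the authors' MDGPP formulation; it is a minimal illustration that assumes each view has already been summarized by a symmetric d x d cost matrix (a Laplacian-style term that is small when intraclass geometry is preserved and interclass separation is respected), that all views share the same feature dimensionality so a single shared projection can be learned, and that the view weights carry an exponent gamma > 1 as in many multiview weighting schemes. The function name consensus_embedding and all parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

def consensus_embedding(cost_mats, dim, gamma=2.0, n_iter=20):
    """Hypothetical alternating-minimization sketch for a multiview
    consensus embedding.  Each entry of `cost_mats` is a symmetric
    d x d cost matrix for one view; we minimize
        sum_v alpha_v**gamma * trace(W.T @ L_v @ W)
    subject to W.T @ W = I and sum_v alpha_v = 1."""
    n_views = len(cost_mats)
    alphas = np.full(n_views, 1.0 / n_views)   # start with equal view weights

    for _ in range(n_iter):
        # Step 1: fix the view weights and solve for the shared projection W.
        # The minimizer is spanned by the eigenvectors of the weighted cost
        # matrix associated with the `dim` smallest eigenvalues.
        L = sum(a ** gamma * Lv for a, Lv in zip(alphas, cost_mats))
        _, eigvecs = np.linalg.eigh(L)
        W = eigvecs[:, :dim]

        # Step 2: fix W and update the view weights in closed form
        # (stationary point of the Lagrangian of the weighted objective).
        costs = np.array([np.trace(W.T @ Lv @ W) for Lv in cost_mats])
        inv = (1.0 / np.maximum(costs, 1e-12)) ** (1.0 / (gamma - 1.0))
        alphas = inv / inv.sum()

    return W, alphas
```

Each iteration first treats the view weights as fixed and recovers the embedding from an eigen-decomposition, then treats the embedding as fixed and re-weights the views so that views with lower cost receive larger weights; since each step exactly minimizes its own block, the weighted objective never increases across iterations.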


2020 ◽  
Vol E103.D (10) ◽  
pp. 2241-2245
Author(s):  
Dongliang CHEN ◽  
Peng SONG ◽  
Wenjing ZHANG ◽  
Weijian ZHANG ◽  
Bingui XU ◽  
...  

2018 ◽  
Vol 27 (08) ◽  
pp. 1850121 ◽  
Author(s):  
Zhe Sun ◽  
Zheng-Ping Hu ◽  
Raymond Chiong ◽  
Meng Wang ◽  
Wei He

Recent research has demonstrated the effectiveness of deep subspace learning networks, including the principal component analysis network (PCANet) and the linear discriminant analysis network (LDANet), since they can extract high-level features and better represent the abstract semantics of given data. However, their representations do not take the nonlinear relationships in the data into account, which limits the use of these features with nonlinear metrics. In this paper, we propose a novel architecture that combines kernel collaboration representation with deep subspace learning based on the PCANet and LDANet for facial expression recognition. First, the PCANet and LDANet are employed to learn abstract features. These features are then mapped to a kernel space to effectively capture their nonlinear similarities. Finally, we develop a simple yet effective classification method with squared ℓ2-regularization, which improves recognition accuracy and reduces time complexity. Comprehensive experimental results on the JAFFE, CK+, KDEF and CMU Multi-PIE datasets confirm that our proposed approach is not only superior in terms of accuracy, but also robust against block occlusion and varying parameter configurations.
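The classification stage can be illustrated with a short, hedged sketch. The code below is not the paper's exact pipeline: it uses generic feature vectors in place of PCANet/LDANet features, picks an RBF kernel, and hard-codes the regularization weight lam; all of these are assumptions for the example. It codes a test sample over all training samples in kernel space with a squared ℓ2 penalty and assigns the class whose training samples give the smallest kernel-space reconstruction residual.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1e-3):
    """RBF kernel matrix between the rows of A (m x d) and B (n x d)."""
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_crc_predict(X_train, y_train, x_test, lam=1e-2, gamma=1e-3):
    """Collaborative-representation-style classification in kernel space
    with squared L2 regularization: alpha = (K + lam*I)^-1 k_t, then the
    label is the class with the smallest reconstruction residual."""
    K = rbf_kernel(X_train, X_train, gamma)                     # n x n Gram matrix
    k_t = rbf_kernel(X_train, x_test[None, :], gamma).ravel()   # kernel values vs. the test sample

    n = K.shape[0]
    alpha = np.linalg.solve(K + lam * np.eye(n), k_t)           # ridge-style coding over all samples

    k_tt = 1.0   # k(x_test, x_test) = 1 for the RBF kernel
    best_cls, best_res = None, np.inf
    for c in np.unique(y_train):
        idx = np.where(y_train == c)[0]
        a_c = alpha[idx]
        # Residual in feature space, evaluated purely through kernel values:
        # ||phi(x) - Phi_c a_c||^2 = k(x,x) - 2 a_c^T k_c + a_c^T K_cc a_c
        res = k_tt - 2.0 * a_c @ k_t[idx] + a_c @ K[np.ix_(idx, idx)] @ a_c
        if res < best_res:
            best_cls, best_res = c, res
    return best_cls
```

Because the coding step reduces to a single n x n linear solve whose factorization can be reused for every test sample, this kind of squared ℓ2 classifier is far cheaper than sparse-coding alternatives, which is consistent with the reduced time complexity mentioned in the abstract.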


Author(s):  
Jianping Gou ◽  
Yongzhao Zhan ◽  
Xiangjun Shen ◽  
Qirong Mao ◽  
Liangjun Wang
