Biased subspace learning for misalignment-robust facial expression recognition

2016 ◽  
Vol 208 ◽  
pp. 202-209 ◽  
Author(s):  
Haibin Yan

2020 ◽  
Vol E103.D (10) ◽  
pp. 2241-2245 ◽  
Author(s):  
Dongliang CHEN ◽  
Peng SONG ◽  
Wenjing ZHANG ◽  
Weijian ZHANG ◽  
Bingui XU ◽  
...  

2018 ◽  
Vol 27 (08) ◽  
pp. 1850121 ◽  
Author(s):  
Zhe Sun ◽  
Zheng-Ping Hu ◽  
Raymond Chiong ◽  
Meng Wang ◽  
Wei He

Recent research has demonstrated the effectiveness of deep subspace learning networks such as the principal component analysis network (PCANet) and the linear discriminant analysis network (LDANet), which extract high-level features and better represent the abstract semantics of the given data. However, their representations do not consider the nonlinear relationships within the data, which limits the use of features with nonlinear metrics. In this paper, we propose a novel architecture that combines kernel collaborative representation with deep subspace learning based on the PCANet and LDANet for facial expression recognition. First, the PCANet and LDANet are employed to learn abstract features. These features are then mapped into a kernel space to effectively capture their nonlinear similarities. Finally, we develop a simple yet effective classification method with squared ℓ2-regularization, which improves recognition accuracy and reduces time complexity. Comprehensive experiments on the JAFFE, CK+, KDEF and CMU Multi-PIE datasets confirm that the proposed approach is not only superior in accuracy but also robust against block occlusion and varying parameter configurations.
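The classification step described above can be illustrated with a minimal NumPy sketch of kernel collaborative representation with squared ℓ2-regularization. This is a generic sketch, not the paper's implementation: the RBF kernel, the regularization weight `lam`, and all function names are illustrative assumptions. A query is coded collaboratively over all training samples in kernel space, and the class with the smallest kernel-space reconstruction residual wins.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel between the rows of A and the rows of B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def kernel_crc_fit(X_train, lam=0.01, gamma=1.0):
    # Squared-l2-regularized solution in kernel space: precompute (K + lam*I)^-1.
    K = rbf_kernel(X_train, X_train, gamma)
    return np.linalg.inv(K + lam * np.eye(len(X_train)))

def kernel_crc_predict(x, X_train, y_train, K_inv, gamma=1.0):
    # Collaborative code for the query over all training samples.
    k = rbf_kernel(X_train, x[None, :], gamma).ravel()
    alpha = K_inv @ k
    K = rbf_kernel(X_train, X_train, gamma)
    kxx = rbf_kernel(x[None, :], x[None, :], gamma)[0, 0]
    best, best_res = None, np.inf
    for c in np.unique(y_train):
        m = (y_train == c)
        # Kernel-space residual using only class-c coefficients:
        # ||phi(x) - Phi_c a_c||^2 = k(x,x) - 2 a_c.k_c + a_c^T K_cc a_c
        res = kxx - 2 * alpha[m] @ k[m] + alpha[m] @ K[np.ix_(m, m)] @ alpha[m]
        if res < best_res:
            best, best_res = c, res
    return best
```

Because the regularized inverse is computed once at fit time, prediction reduces to a matrix-vector product plus per-class residuals, which is the source of the time-complexity advantage the abstract mentions.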


Author(s):  
Cigdem Turan ◽  
Rui Zhao ◽  
Kin-Man Lam ◽  
Xiangjian He

For image recognition, numerous subspace-learning methods have been proposed to overcome the high dimensionality of the features being used. In this paper, we first give an overview of the most popular and state-of-the-art subspace-learning methods, and then present a novel manifold-learning method, named the soft locality preserving map (SLPM). SLPM aims to control the spread of the different classes, which is closely connected to the generalizability of the learned subspace. We also review the extension of manifold-learning methods to deep learning by formulating their loss functions for training, and further reformulate SLPM into a soft locality preserving (SLP) loss. These loss functions are applied as an additional regularization in the training of deep neural networks. We evaluate these subspace-learning methods, as well as their deep-learning extensions, on facial expression recognition. Experiments on four commonly used databases show that SLPM effectively reduces the dimensionality of the feature vectors and enhances the discriminative power of the extracted features. Moreover, the results demonstrate that deep features regularized by the SLP loss exhibit better discriminability and generalizability for facial expression recognition.
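The idea of a locality-preserving loss used as a regularizer can be sketched in a few lines of NumPy. This is a generic locality-preserving-style loss under stated assumptions, not the authors' SLP formulation: the neighbor count `k`, the hinge `margin`, and the attract/repel weighting are all illustrative. Each sample is pulled toward its nearest same-class neighbors and pushed away from different-class samples that fall inside the margin, which controls the spread of each class in the learned feature space.

```python
import numpy as np

def locality_preserving_loss(F, y, k=3, margin=1.0):
    # F: (n, d) feature matrix, y: (n,) integer class labels.
    # Attract: mean squared distance to the k nearest same-class neighbors.
    # Repel: hinge penalty on different-class samples closer than `margin`.
    n = len(F)
    d2 = np.sum((F[:, None, :] - F[None, :, :]) ** 2, axis=-1)
    attract, repel = 0.0, 0.0
    for i in range(n):
        same = np.where((y == y[i]) & (np.arange(n) != i))[0]
        diff = np.where(y != y[i])[0]
        if same.size:
            nn = same[np.argsort(d2[i, same])[:k]]
            attract += d2[i, nn].mean()
        if diff.size:
            repel += np.maximum(0.0, margin - d2[i, diff]).mean()
    return (attract + repel) / n
```

In a deep-learning setting, a differentiable version of this quantity would be computed on mini-batch features and added to the classification loss as a regularization term, as the abstract describes.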

