A comparison study of feature spaces and classification methods for facial expression recognition

Author(s): Chun Fui Liew, Takehisa Yairi
2020, Vol 2020, pp. 1-17
Author(s): Yan Wang, Ming Li, Xing Wan, Congxuan Zhang, Yue Wang

Obtaining an effective facial expression recognition (FER) method is still a research hotspot in the artificial intelligence field. In this paper, we propose a multiparameter fusion feature space and a decision voting-based classification scheme for facial expression recognition. First, the parameters of the fusion feature space are determined according to the cross-validation recognition accuracy of the Multiscale Block Local Binary Pattern Uniform Histogram (MB-LBPUH) descriptor filtered over the training samples. Using these parameters, we build several fusion feature spaces by employing multiclass linear discriminant analysis (LDA). In these spaces, fusion features composed of MB-LBPUH and Histogram of Oriented Gradients (HOG) features are used to represent the different facial expressions. Finally, to resolve the hard-to-classify patterns caused by similar expression classes, a nearest neighbor-based decision voting strategy is designed to predict the classification results. In experiments on the JAFFE, CK+, and TFEID datasets, the proposed model clearly outperformed existing algorithms.
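The following is a minimal sketch of the kind of fusion-feature pipeline this abstract describes, not the authors' implementation. It assumes scikit-image's plain uniform LBP at a few radii as a stand-in for the MB-LBPUH descriptor, scikit-learn's multiclass LDA for the fusion feature space, and a k-nearest-neighbors classifier as a stand-in for the nearest neighbor-based decision voting; all block, radius, and neighbor settings are illustrative only.

```python
# Hedged sketch: LBP + HOG fusion features, LDA projection, k-NN voting.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def lbp_uniform_histogram(gray, radius=2, points=8):
    """Uniform LBP histogram over the face image (stand-in for MB-LBPUH)."""
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one non-uniform bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def fusion_features(gray):
    """Concatenate multiscale LBP histograms with a HOG descriptor."""
    lbp_part = np.hstack([lbp_uniform_histogram(gray, radius=r) for r in (1, 2, 3)])
    hog_part = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)
    return np.hstack([lbp_part, hog_part])

def train_fer(train_images, train_labels):
    """Project fused features with multiclass LDA, then fit a k-NN voter."""
    X = np.vstack([fusion_features(img) for img in train_images])
    lda = LinearDiscriminantAnalysis()          # fusion feature space
    X_lda = lda.fit_transform(X, train_labels)
    knn = KNeighborsClassifier(n_neighbors=5)   # voting among nearest neighbors
    knn.fit(X_lda, train_labels)
    return lda, knn

def predict_fer(lda, knn, gray):
    """Classify one grayscale face image in the learned feature space."""
    x = fusion_features(gray).reshape(1, -1)
    return knn.predict(lda.transform(x))[0]
```

In this sketch the k-NN step approximates the voting idea: each test sample's label is decided by a majority vote among its nearest training samples in the LDA-projected fusion space.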


Facial expression recognition (FER) has become widely popular because it can predict expressions from previously unseen data with reasonable accuracy. Depending on the situation, an average person shows seven different expressions: anger, sadness, happiness, surprise, disgust, neutrality, and fear. Each individual expresses these emotions in a unique way, which is why incoming faces effectively form an unknown data set. To identify a person's current state of mind from facial expressions, many data sets are built around the displacement and elasticity of facial components such as the lips, cheeks, nose, eyes, and eyebrows. Many facial recognition systems likewise operate on muscle-movement analysis derived from the pixel parameters of the source image set. This paper presents image preprocessing, facial expression learning methods, classification methods, and an implementation of the FaceEx algorithm for facial expression analysis using the FER2013 CNN data set and the Viola-Jones principle.
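Since the FaceEx algorithm itself is not detailed in this abstract, the sketch below shows only the generic pipeline it alludes to: Viola-Jones face detection via OpenCV's Haar cascade followed by a small CNN over 48x48 FER2013-style crops. The cascade file, network layout, and class order are assumptions, and the CNN would need to be trained on FER2013 before use.

```python
# Hedged sketch: Viola-Jones detection + small CNN for FER2013-style crops.
import cv2
import numpy as np
from tensorflow import keras

EXPRESSIONS = ["anger", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Viola-Jones detector shipped with OpenCV (Haar cascade).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def build_fer_cnn():
    """Small illustrative CNN for 48x48 grayscale crops, seven classes."""
    return keras.Sequential([
        keras.layers.Input(shape=(48, 48, 1)),
        keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(len(EXPRESSIONS), activation="softmax"),
    ])

def detect_and_classify(bgr_image, cnn):
    """Detect faces with Viola-Jones, then classify each crop's expression."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = cnn.predict(crop.reshape(1, 48, 48, 1), verbose=0)[0]
        results.append(((x, y, w, h), EXPRESSIONS[int(np.argmax(probs))]))
    return results
```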

