A hybrid feature descriptor with Jaya optimised least squares SVM for facial expression recognition

2021 ◽  
Author(s):  
Nikunja Bihari Kar ◽  
Deepak Ranjan Nayak ◽  
Korra Sathya Babu ◽  
Yu‐Dong Zhang



Facial expression-based emotion recognition is one of the most popular research domains in computer vision. Many machine vision-based feature extraction methods are available to increase the accuracy of facial expression recognition (FER). In feature extraction, neighboring pixel values are manipulated in different ways to encode the texture information of muscle movements. However, defining a feature descriptor that is robust to external factors remains a challenging task. This paper introduces the Merged Local Neighborhood Difference Pattern (MLNDP), which encodes and merges two levels of representation. At the first level, each pixel is encoded with respect to the center pixel; at the second level, encoding is carried out based on the relationship with the closest neighboring pixel. Finally, the two levels of encoding are logically merged to retain only the texture that is positively encoded at both levels. The feature dimension is then reduced using a chi-square statistical test, and the final classification is carried out with a multiclass SVM on two datasets, CK+ and MMI. The proposed descriptor is compared against other local descriptors such as LDP, LTP, LDN, and LGP. Experimental results show that the proposed feature descriptor outperforms the other descriptors, achieving 97.86% on the CK+ dataset and 95.29% on the MMI dataset. The classifier comparison confirms that the combination of MLNDP with a multiclass SVM performs better than other descriptor-classifier combinations.
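The two-level encode-and-merge idea from the abstract can be sketched as follows. This is an illustrative assumption, not the authors' exact MLNDP formulation: the function name, the choice of the next clockwise ring pixel as the "closest neighbor", and the bitwise AND merge are all guesses at details the abstract does not specify.

```python
def mlndp_sketch(img):
    """Two-level local encoding sketch (hypothetical, not the published MLNDP).

    Level 1: a neighbor bit is positive if the neighbor >= the center pixel.
    Level 2: a neighbor bit is positive if the neighbor >= its closest ring
    neighbor (assumed here to be the next pixel clockwise).
    The two codes are merged with a logical AND, keeping only bits that are
    positively encoded at both levels, as the abstract describes.
    img is a 2D list of grayscale intensities; returns 8-bit codes for the
    interior pixels.
    """
    # clockwise ring of 8 offsets around the center pixel
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    out = []
    for i in range(1, h - 1):
        row = []
        for j in range(1, w - 1):
            c = img[i][j]
            code = 0
            for k, (di, dj) in enumerate(offs):
                nb = img[i + di][j + dj]
                di2, dj2 = offs[(k + 1) % 8]   # next clockwise ring pixel
                nxt = img[i + di2][j + dj2]
                # bit k survives only if positive at both levels (AND merge)
                if nb >= c and nb >= nxt:
                    code |= 1 << k
            row.append(code)
        out.append(row)
    return out
```

In a full FER pipeline, the per-pixel codes would be pooled into histograms, reduced with a chi-square test, and fed to the SVM, as the abstract outlines.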



2018 ◽  
Vol 28 (2) ◽  
pp. 399-409 ◽  
Author(s):  
Faisal Ahmed ◽  
Md. Hasanul Kabir

Abstract In recent years, research in automated facial expression recognition has attracted significant attention for its potential applicability in human-computer interaction, surveillance systems, animation, and consumer electronics. However, recognition in uncontrolled environments, in the presence of illumination and pose variations, low-resolution video, occlusion, and random noise, is still a challenging research problem. In this paper, we investigate recognition of facial expressions in difficult conditions by means of an effective facial feature descriptor, namely the directional ternary pattern (DTP). Given a face image, the DTP operator describes the facial features by quantizing the eight directional edge response values, capturing essential texture properties such as the presence of edges, corners, points, and lines. We also present an enhancement of the basic DTP encoding method, namely the compressed DTP (cDTP), which can describe the local texture more effectively with fewer features. The recognition performance of the proposed DTP and cDTP descriptors is evaluated using the Cohn-Kanade (CK) and the Japanese female facial expression (JAFFE) databases. In our experiments, we simulate difficult conditions using original database images with lighting variations, low-resolution images obtained by down-sampling the originals, and images corrupted with Gaussian noise. In all cases, the proposed method outperforms several well-known facial feature descriptors.
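The core DTP step, quantizing eight directional edge responses into a ternary code, can be sketched as below. This is a hedged illustration, not the paper's exact operator: the use of the standard Kirsch compass masks for the directional responses, the threshold value, and the quantization rule (+1 above a threshold t, -1 below -t, 0 otherwise) are assumptions patterned on related local ternary descriptors.

```python
def dtp_sketch(patch, t=10):
    """Ternary code for the center pixel of a 3x3 patch (illustrative sketch).

    Computes eight directional edge responses with the standard Kirsch
    compass masks, then ternary-quantizes each response: +1 if > t,
    -1 if < -t, 0 otherwise. The threshold t is an illustrative choice.
    """
    # Kirsch compass masks: 5s face the edge direction, ordered
    # E, NE, N, NW, W, SW, S, SE
    kirsch = [
        [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],    # E
        [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],    # NE
        [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],    # N
        [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],    # NW
        [[5, -3, -3], [5, 0, -3], [5, -3, -3]],    # W
        [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],    # SW
        [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],    # S
        [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],    # SE
    ]
    # directional edge response = correlation of the mask with the patch
    resp = [sum(m[r][c] * patch[r][c] for r in range(3) for c in range(3))
            for m in kirsch]
    # ternary quantization of each directional response
    return [1 if r > t else (-1 if r < -t else 0) for r in resp]
```

A flat patch yields the all-zero code, while a strong vertical edge produces +1 in the east-facing directions and -1 elsewhere; histograms of such codes over image blocks would then serve as the expression features.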



Information ◽  
2014 ◽  
Vol 5 (2) ◽  
pp. 305-318 ◽  
Author(s):  
Ying Chen ◽  
Shiqing Zhang ◽  
Xiaoming Zhao

