Lightweight Multi-Scale Network with Attention for Facial Expression Recognition

Author(s):  
Zhibin Hu ◽  
Chunman Yan
2014 ◽  
Vol 511-512 ◽  
pp. 437-440
Author(s):  
Xiao Xiao Xia ◽  
Zi Lu Ying ◽  
Wen Jin Chu

A new method based on Monogenic Binary Coding (MBC) is proposed for facial expression feature extraction and representation. Firstly, monogenic signal analysis is used to extract multi-scale magnitude, orientation, and phase components. Secondly, MBC is used to encode the monogenic local variation and intensity in local regions of each extracted component at each scale, and local histograms are built. Then, Blocked Fisher Linear Discrimination (BFLD) is used to reduce the dimensionality of the histogram features and to enhance their discrimination. Finally, the three complementary components are fused for more effective facial expression recognition (FER). Experimental results on the Japanese Female Facial Expression database (JAFFE) show that the fusion method performs even better than state-of-the-art local-feature-based FER methods such as Local Binary Pattern (LBP) + Sparse Representation (SRC) and Local Phase Quantization (LPQ) + SRC.
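The first stage of this pipeline, monogenic signal analysis, can be sketched as follows: a log-Gabor band-pass filter selects one scale, and the Riesz transform supplies the two extra components from which magnitude, orientation, and phase are derived. This is a minimal illustration only; the function name, the log-Gabor parameters, and the NumPy FFT formulation are assumptions rather than the paper's implementation, and calling it with several wavelengths yields the multi-scale components the abstract refers to.

import numpy as np

def monogenic_components(img, wavelength=8.0, sigma_on_f=0.55):
    """Magnitude, orientation and phase of img at one scale (one wavelength)."""
    rows, cols = img.shape
    # Frequency grids in FFT layout; keep a tiny value at DC to avoid 0/0.
    U, V = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    radius = np.sqrt(U ** 2 + V ** 2)
    radius[0, 0] = 1e-9

    # Log-Gabor band-pass filter selecting the scale given by `wavelength`.
    f0 = 1.0 / wavelength
    log_gabor = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
    log_gabor[0, 0] = 0.0

    # Riesz transform kernels: the two extra components of the monogenic signal.
    H1 = 1j * U / radius
    H2 = 1j * V / radius

    IMG = np.fft.fft2(img)
    f = np.real(np.fft.ifft2(IMG * log_gabor))        # band-passed image
    h1 = np.real(np.fft.ifft2(IMG * log_gabor * H1))  # Riesz x-component
    h2 = np.real(np.fft.ifft2(IMG * log_gabor * H2))  # Riesz y-component

    magnitude = np.sqrt(f ** 2 + h1 ** 2 + h2 ** 2)
    orientation = np.arctan2(h2, h1)
    phase = np.arctan2(np.sqrt(h1 ** 2 + h2 ** 2), f)
    return magnitude, orientation, phase

# Usage: components at three scales, mirroring the abstract's multi-scale analysis.
# scales = [monogenic_components(face, wavelength=w) for w in (4.0, 8.0, 16.0)]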


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5391
Author(s):  
Suraiya Yasmin ◽  
Refat Khan Pathan ◽  
Munmun Biswas ◽  
Mayeen Uddin Khandaker ◽  
Mohammad Rashed Iqbal Faruque

Facial expression recognition (FER) is used in highly successful fields such as computer vision, robotics, artificial intelligence, and dynamic texture recognition. However, a critical problem of FER with the traditional local binary pattern (LBP) is the loss of information from neighboring pixels at different scales, which affects the texture of facial images. To overcome this limitation, this study describes an extended LBP method for extracting feature vectors that identify the facial expression in each image. The proposed method is based on the bitwise AND operation of two rotational kernels applied to LBP(8,1) and LBP(8,2), and it is evaluated on two accessible datasets. Firstly, the face is detected and its essential components, such as the eyes, nose, and lips, are located. The facial region is then cropped to reduce the dimensions, and an unsharp masking kernel is applied to sharpen the image. The filtered images then pass through the feature extraction method before classification. Four machine learning classifiers were used to verify the proposed method. The study shows that the proposed multi-scale featured local binary pattern (MSFLBP), together with a Support Vector Machine (SVM), outperforms recent LBP-based state-of-the-art approaches, reaching an accuracy of 99.12% on the Extended Cohn–Kanade (CK+) dataset and 89.08% on the Karolinska Directed Emotional Faces (KDEF) dataset.
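The core feature-extraction step described above, combining LBP codes at radii 1 and 2 with a bitwise AND and summarising the result as a histogram for an SVM, can be sketched roughly as follows. The function name, the 256-bin histogram, and the RBF-kernel SVM are illustrative assumptions, not the authors' exact MSFLBP implementation.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def msflbp_histogram(gray_face):
    """Bitwise AND of LBP(8,1) and LBP(8,2) codes, summarised as a 256-bin histogram."""
    lbp_r1 = local_binary_pattern(gray_face, P=8, R=1, method="default").astype(np.uint8)
    lbp_r2 = local_binary_pattern(gray_face, P=8, R=2, method="default").astype(np.uint8)
    combined = np.bitwise_and(lbp_r1, lbp_r2)   # keep only bits set at both radii
    hist, _ = np.histogram(combined, bins=256, range=(0, 256), density=True)
    return hist

# Usage: faces are cropped, unsharp-masked grey-scale images; labels are expression classes.
# features = np.array([msflbp_histogram(face) for face in faces])
# clf = SVC(kernel="rbf").fit(features, labels)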


2021 ◽  
Vol 21 (4) ◽  
pp. 1-18
Author(s):  
Zhihan Lv ◽  
Liang Qiao ◽  
Qingjun Wang

Emotional cognitive ability is a key technical indicator of the friendliness of human–robot interaction. This research therefore explores robots with human-like emotional cognition. After discussing the prospects of 5G technology and cognitive robots, the study focuses on cognitive robots. Because the analysis logic of humans is difficult for emotional cognitive robots to imitate, the information processing of robots is divided into three levels in this study, by analogy with human information processing: cognitive algorithm, feature extraction, and information collection. In addition, a multi-scale rectangular histogram of oriented gradients and a robust principal component analysis algorithm are used for facial expression recognition. For pictures in which humans intuitively perceive a smile within a sad emotion, the emotion proportions obtained by the method in this study are as follows: calmness 0%, sadness 15.78%, fear 0%, happiness 76.53%, disgust 7.69%, anger 0%, and astonishment 0%. In the recognition of micro-expressions in which humans intuitively perceive negative emotions such as surprise and fear, the emotion proportions obtained by the method are as follows: calmness 32.34%, sadness 34.07%, fear 6.79%, happiness 0%, disgust 0%, anger 13.91%, and astonishment 15.89%. The algorithm explored in this study can therefore recognize emotions accurately. These results show that the method intuitively reflects the proportions of human expressions, and that the recognition methods based on facial expressions and micro-expressions achieve good recognition performance, in line with human intuitive experience.
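A rough sketch of the descriptor side of this method, a rectangular histogram of oriented gradients computed at several cell sizes and fed to a probabilistic classifier whose per-class outputs play the role of the reported emotion proportions, is given below. The cell sizes, the logistic-regression classifier, and the seven-label set are assumptions for illustration; the study's robust principal component analysis stage is not reproduced here.

import numpy as np
from skimage.feature import hog
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["calmness", "sadness", "fear", "happiness", "disgust", "anger", "astonishment"]

def multiscale_hog(gray_face, cell_sizes=(8, 16, 32)):
    """Concatenate rectangular-cell HOG descriptors computed at several scales.
    All faces are assumed to share one resolution so descriptor lengths match."""
    return np.concatenate([
        hog(gray_face, orientations=9, pixels_per_cell=(c, c),
            cells_per_block=(2, 2), block_norm="L2-Hys")
        for c in cell_sizes
    ])

# Usage: train on labelled faces, then read the class probabilities of a new
# face as its emotion proportions.
# X = np.array([multiscale_hog(face) for face in train_faces])
# clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
# proportions = dict(zip(EMOTIONS, clf.predict_proba(multiscale_hog(test_face)[None, :])[0]))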

