Image Appearance-Based Facial Expression Recognition

2018 ◽  
Vol 18 (02) ◽  
pp. 1850012 ◽  
Author(s):  
Zhaoqi Wu ◽  
Reziwanguli Xiamixiding ◽  
Atul Sajjanhar ◽  
Juan Chen ◽  
Quan Wen

We investigate facial expression recognition (FER) based on image appearance. FER is performed using state-of-the-art classification approaches, and different approaches to preprocessing face images are investigated. First, region-of-interest (ROI) images are obtained by extracting the facial ROI from raw images. FER on ROI images is used as the benchmark and compared with FER on difference images, which are obtained by computing the difference between the ROI images of neutral and peak facial expressions. FER is also evaluated for images obtained by applying the local binary pattern (LBP) operator to ROI images. Further, we investigate two contrast enhancement operators for preprocessing, namely histogram equalization (HE) and a brightness-preserving HE variant. The classification experiments are performed with a convolutional neural network (CNN) and a pre-trained deep learning model. All experiments are performed on three public face databases, namely Cohn–Kanade (CK+), JAFFE, and FACES.
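For concreteness, the sketch below illustrates the preprocessing variants named above (ROI cropping, difference images, the LBP operator, and histogram equalization) using OpenCV and scikit-image; the detector choice, image size, and LBP parameters are illustrative assumptions, not settings taken from the paper.

```python
# Hedged sketch of the four preprocessing variants; parameters are assumptions.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_roi(gray_img):
    """Crop the facial ROI from a grayscale image (assumes one face is found)."""
    x, y, w, h = detector.detectMultiScale(gray_img, 1.1, 5)[0]
    return cv2.resize(gray_img[y:y + h, x:x + w], (96, 96))

def difference_image(neutral_roi, peak_roi):
    """Difference between the neutral and peak-expression ROI images."""
    return cv2.absdiff(peak_roi, neutral_roi)

def lbp_image(roi):
    """Apply the LBP operator (8 neighbors, radius 1) to an ROI image."""
    return local_binary_pattern(roi, P=8, R=1, method="uniform")

def enhance(roi):
    """Plain histogram equalization; a brightness-preserving variant would
    equalize the two sub-histograms split around the mean intensity."""
    return cv2.equalizeHist(roi)
```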

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

Improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset, a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition under various real-world constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a practical facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces are detected in each frame of a video sequence, and the corresponding face region of interest (ROI) is cropped to obtain face images; the face images in each frame are then aligned using the positions of the detected facial feature points. Second, the aligned face images are fed to a residual neural network to extract the spatial features of facial expressions, and the spatial features are passed through the hybrid attention module to obtain fused expression features. Finally, the fused features are fed to a gated recurrent unit (GRU) to extract the temporal features of facial expressions, and the temporal features are passed to a fully connected layer to classify and recognize the expressions. Experiments on the CK+ (extended Cohn–Kanade), Oulu-CASIA, and AFEW datasets yielded recognition accuracies of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance competitive with state-of-the-art methods but also improves accuracy on AFEW by more than 2%, a significant gain for facial expression recognition in natural environments.
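A hedged PyTorch sketch of the described cascade (ResNet spatial features, hybrid attention, GRU temporal features, fully connected classifier) is given below; the backbone depth, attention design, and layer sizes are assumptions rather than the authors' implementation.

```python
# Minimal sketch of the cascade; architecture details are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FERCascade(nn.Module):
    def __init__(self, num_classes=7, hidden=256):
        super().__init__()
        backbone = resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # 512-d features
        self.attn = nn.MultiheadAttention(512, num_heads=4, batch_first=True)
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, frames):                     # frames: (B, T, 3, 224, 224)
        B, T = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1))     # (B*T, 512, 1, 1)
        feats = feats.view(B, T, 512)              # per-frame spatial features
        fused, _ = self.attn(feats, feats, feats)  # attention-fused features
        out, _ = self.gru(fused)                   # temporal features
        return self.fc(out[:, -1])                 # classify from last time step
```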


Optik ◽  
2016 ◽  
Vol 127 (15) ◽  
pp. 6195-6203 ◽  
Author(s):  
Sajid Ali Khan ◽  
Ayyaz Hussain ◽  
Muhammad Usman

2021 ◽  
Vol 12 ◽  
Author(s):  
Ma Ruihua ◽  
Guo Hua ◽  
Zhao Meng ◽  
Chen Nan ◽  
Liu Panqi ◽  
...  

Objective: Considerable evidence has shown that facial expression recognition ability and cognitive function are impaired in patients with depression. We aimed to investigate the relationship between facial expression recognition and cognitive function in patients with depression.

Methods: A total of 51 participants (31 patients with depression and 20 healthy control subjects) underwent facial expression recognition tests measuring anger, fear, disgust, sadness, happiness, and surprise. The Chinese version of the MATRICS Consensus Cognitive Battery (MCCB), which assesses seven cognitive domains, was used.

Results: Compared with the control group, the depression group showed differences in the recognition of sadness (p = 0.036), happiness (p = 0.041), and disgust (p = 0.030). In terms of cognitive function, patients with depression scored significantly lower than controls on the Trail Making Test (TMT; p < 0.001), symbol coding (p < 0.001), spatial span (p < 0.001), mazes (p = 0.007), the Brief Visuospatial Memory Test (BVMT; p = 0.001), category fluency (p = 0.029), and the continuous performance test (p = 0.001). The accuracy of sadness and disgust expression recognition in patients with depression was significantly positively correlated with cognitive function scores. Deficits in sadness expression recognition were significantly correlated with the TMT (p = 0.001, r = 0.561), symbol coding (p = 0.001, r = 0.596), mazes (p = 0.015, r = 0.439), and the BVMT (p = 0.044, r = 0.370). Deficits in disgust expression recognition were significantly correlated with impairments in the TMT (p = 0.005, r = 0.501) and symbol coding (p = 0.001, r = 0.560).

Conclusion: Because cognitive function is impaired in patients with depression, the ability to recognize negative facial expressions declines, mainly in processing speed, reasoning, problem solving, and memory.
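As an illustration of how correlations of this kind are computed, the sketch below uses scipy.stats.pearsonr on placeholder arrays; the data are random stand-ins, not the study's measurements.

```python
# Illustrative Pearson correlation (e.g., sadness recognition accuracy vs.
# TMT score); the arrays below are placeholders, NOT data from the study.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
sadness_accuracy = rng.uniform(0.4, 0.9, size=31)  # placeholder, one value per patient
tmt_score = rng.uniform(20, 60, size=31)           # placeholder, one value per patient

r, p = pearsonr(sadness_accuracy, tmt_score)
print(f"r = {r:.3f}, p = {p:.3f}")  # the paper reports r = 0.561, p = 0.001
```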


2017 ◽  
Vol 2 (2) ◽  
pp. 130-134
Author(s):  
Jarot Dwi Prasetyo ◽  
Zaehol Fatah ◽  
Taufik Saleh

In recent years, interest in human-computer interaction has grown. Facial expressions play a fundamental role in social interaction between humans: in human-to-human communication, only 7% of the message is conveyed by the linguistic content, 38% by paralanguage, and 55% by facial expressions. Therefore, to make the human-machine interfaces of multimedia products friendlier, facial expression recognition at the interface greatly improves the comfort of interaction. One of the steps that affects facial expression recognition is the accuracy of facial feature extraction. Several approaches to facial expression recognition do not consider the dimensionality of the feature data used as input to machine learning. This research proposes a wavelet algorithm to reduce the dimensionality of the feature data. The features are then classified using a multiclass SVM to distinguish six facial expressions (anger, disgust, fear, happiness, sadness, and surprise) from the JAFFE database. The classification obtained an accuracy of 81.42% on 208 data samples.
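A minimal sketch of this pipeline is shown below, assuming PyWavelets for the 2-D wavelet decomposition and scikit-learn's SVC as the multiclass SVM; the wavelet family, decomposition level, and kernel are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch: wavelet dimensionality reduction followed by a multiclass SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(img, wavelet="haar", level=2):
    """Keep only the low-frequency approximation coefficients as features."""
    approx = pywt.wavedec2(img, wavelet, level=level)[0]
    return approx.ravel()

def train_fer_svm(face_images, labels):
    """face_images: same-sized grayscale faces; labels: six expression classes."""
    X = np.stack([wavelet_features(im) for im in face_images])
    clf = SVC(kernel="rbf", decision_function_shape="ovr")  # multiclass SVM
    return clf.fit(X, labels)
```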


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Bin Jiang ◽  
Qiuwen Zhang ◽  
Zuhe Li ◽  
Qinggang Wu ◽  
Huanlong Zhang

Methods using salient facial patches (SFPs) play a significant role in research on facial expression recognition. However, most SFP methods use only frontal face images or videos for recognition, and they do not consider head position variations. We contend that SFP can be an effective approach for recognizing facial expressions under different head rotations. Accordingly, we propose an algorithm, called profile salient facial patches (PSFP), to achieve this objective. First, to detect facial landmarks and estimate head poses from profile face images, a tree-structured part model is used for pose-free landmark localization. Second, to obtain the salient facial patches from profile face images, patches are selected using the detected facial landmarks while avoiding overlap between patches or extension beyond the actual face region. To analyze the PSFP recognition performance, three classical approaches for local feature extraction, specifically the histogram of oriented gradients (HOG), local binary pattern (LBP), and Gabor features, were applied to extract profile facial expression features. Experimental results on the Radboud Faces Database show that PSFP with HOG features can achieve higher accuracies under most head rotations.
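To make the patch-level feature step concrete, here is a minimal sketch of HOG extraction over landmark-centered patches using scikit-image; the patch size and HOG parameters are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch of HOG features over salient facial patches.
import numpy as np
from skimage.feature import hog

def patch_hog_features(gray_face, landmarks, half=16):
    """Crop a square patch around each landmark and concatenate HOG descriptors.
    Assumes all patches lie fully inside the image."""
    descs = []
    for (x, y) in landmarks:  # landmarks, e.g., from a tree-structured part model
        patch = gray_face[y - half:y + half, x - half:x + half]
        descs.append(hog(patch, orientations=9,
                         pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
    return np.concatenate(descs)  # feature vector for the expression classifier
```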


2020 ◽  
Author(s):  
Bin Jiang ◽  
Qiuwen Zhang ◽  
Zuhe Li ◽  
Qinggang Wu ◽  
Huanlong Zhang

Methods using salient facial patches (SFP) play a significant role in research on facial expression recognition. However, most SFP methods use only frontal face images or videos for recognition, and do not consider variations of head position. In our view, SFP can also be a good choice for recognizing facial expressions under different head rotations, and we thus propose an algorithm for this purpose, called Profile Salient Facial Patches (PSFP). First, to detect facial landmarks from profile face images, the tree-structured part model is used for pose-free landmark localization; this approach excels at detecting facial landmarks and estimating head poses. Second, to obtain the salient facial patches from profile face images, the facial patches are selected using the detected facial landmarks, while avoiding overlap with each other or going beyond the range of the actual face. To analyze the recognition performance of PSFP, three classical approaches for local feature extraction (histogram of oriented gradients (HOG), local binary pattern (LBP), and Gabor features) were applied to extract profile facial expression features. Experimental results on the Radboud Faces Database show that PSFP with HOG features can achieve higher accuracies under most head rotations.


2020 ◽  
Vol 14 (7) ◽  
pp. 1373-1381 ◽  
Author(s):  
Saranya Rajan ◽  
Poongodi Chenniappan ◽  
Somasundaram Devaraj ◽  
Nirmala Madian
