Recognition of Emotional Facial Expressions: The Role of Facial and Contextual Information in the Accuracy of Recognition

2012 ◽  
Vol 110 (1) ◽  
pp. 338-350 ◽  
Author(s):  
Mariano Chóliz ◽  
Enrique G. Fernández-Abascal

Recognition of emotional facial expressions is a central topic in the psychology of emotion. This study presents two experiments. The first analyzed recognition accuracy for six basic emotions: happiness, anger, fear, sadness, surprise, and disgust. Thirty pictures (five per emotion) were shown to 96 participants to assess recognition accuracy. The results showed that recognition accuracy varied significantly across emotions. The second experiment analyzed the effects of contextual information on recognition accuracy: information either congruent or incongruent with a facial expression was displayed before each picture. The results showed that congruent information improved facial expression recognition, whereas incongruent information impaired it.

2019 ◽  
Vol 8 (2S11) ◽  
pp. 1076-1079

Automated facial expression recognition can greatly improve the human–machine interface. Many deep learning approaches have been applied in recent years because of their outstanding recognition accuracy when trained on large amounts of data. In this research, we enhanced a Convolutional Neural Network (CNN) method to recognize six basic emotions and compared several preprocessing methods to show their influence on CNN performance. The preprocessing methods were resizing, mean normalization, standard-deviation scaling, and edge detection. Face detection as a single preprocessing phase achieved the most significant result, with 100% accuracy, compared with the other preprocessing phases and raw data.
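The preprocessing chain mentioned above can be sketched in a few lines. This is a minimal NumPy-only illustration, assuming a grayscale face image and a nearest-neighbour resize; the target size of 48×48 and the actual resize/edge-detection implementations used in the paper are not specified and are assumptions here.

```python
import numpy as np

def preprocess(img, size=(48, 48)):
    """Illustrative preprocessing chain: resize (nearest-neighbour),
    mean normalization, then standard-deviation scaling."""
    # Nearest-neighbour resize so the sketch stays NumPy-only.
    rows = np.linspace(0, img.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, size[1]).astype(int)
    resized = img[np.ix_(rows, cols)].astype(np.float64)
    # Mean normalization followed by standard-deviation scaling.
    centered = resized - resized.mean()
    return centered / (resized.std() + 1e-8)

face = np.random.default_rng(0).integers(0, 256, (100, 100))
x = preprocess(face)
print(x.shape)  # (48, 48), zero-mean, unit-variance input for the CNN
```

After this step the image has zero mean and unit variance, which typically stabilizes CNN training.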


2007 ◽  
Vol 21 (2) ◽  
pp. 100-108 ◽  
Author(s):  
Michela Balconi ◽  
Claudio Lucchiari

Abstract. In this study we analyze whether facial expression recognition is marked by specific event-related potential (ERP) correlates and whether conscious and unconscious elaboration of emotional facial stimuli are qualitatively different processes. ERPs elicited by supraliminal and subliminal (10 ms) stimuli were recorded when subjects were viewing emotional facial expressions of four emotions or neutral stimuli. Two ERP effects (N2 and P3) were analyzed in terms of their peak amplitude and latency variations. An emotional specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). Unaware information processing proved to be quite similar to aware processing in terms of peak morphology but not of latency. A major result of this research was that unconscious stimulation produced a more delayed peak variation than conscious stimulation did. Also, a more posterior distribution of the ERP was found for N2 as a function of emotional content of the stimulus. On the contrary, cortical lateralization (right/left) was not correlated to conscious/unconscious stimulation. The functional significance of our results is underlined in terms of subliminal effect and emotion recognition.
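The N2/P3 measurements described above reduce to finding the most extreme deflection in a time window and reading off its amplitude and latency. A minimal sketch, assuming a single-channel averaged ERP sampled at 500 Hz (the synthetic waveform and window bounds are illustrative, not the study's parameters):

```python
import numpy as np

def peak_in_window(erp, times, lo, hi, polarity=-1):
    """Return (amplitude, latency) of the most extreme deflection in
    [lo, hi] ms; polarity=-1 targets negative peaks such as N2."""
    mask = (times >= lo) & (times <= hi)
    seg, seg_t = erp[mask], times[mask]
    idx = np.argmin(seg) if polarity < 0 else np.argmax(seg)
    return seg[idx], seg_t[idx]

# Synthetic waveform: a negative deflection peaking near 250 ms.
times = np.arange(0, 600, 2)  # ms, i.e. 500 Hz sampling
erp = -5.0 * np.exp(-((times - 250) ** 2) / (2 * 30 ** 2))
amp, lat = peak_in_window(erp, times, 150, 350, polarity=-1)
print(round(float(amp), 2), int(lat))
```

Comparing such latencies between supraliminal and subliminal conditions is how a delayed peak under unconscious stimulation would show up.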


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Gilles Vannuscorps ◽  
Michael Andres ◽  
Alfonso Caramazza

What mechanisms underlie facial expression recognition? A popular hypothesis holds that efficient facial expression recognition cannot be achieved by visual analysis alone but additionally requires a mechanism of motor simulation — an unconscious, covert imitation of the observed facial postures and movements. Here, we first discuss why this hypothesis does not necessarily follow from extant empirical evidence. Next, we report experimental evidence against the central premise of this view: we demonstrate that individuals can achieve normotypical efficient facial expression recognition despite a congenital absence of relevant facial motor representations and, therefore, unaided by motor simulation. This underscores the need to reconsider the role of motor simulation in facial expression recognition.


2021 ◽  
Vol 8 (11) ◽  
Author(s):  
Shota Uono ◽  
Wataru Sato ◽  
Reiko Sawada ◽  
Sayaka Kawakami ◽  
Sayaka Yoshimura ◽  
...  

People with schizophrenia or subclinical schizotypal traits exhibit impaired recognition of facial expressions. However, it remains unclear whether the detection of emotional facial expressions is impaired in people with schizophrenia or high levels of schizotypy. The present study examined whether the detection of emotional facial expressions would be associated with schizotypy in a non-clinical population after controlling for the effects of IQ, age, and sex. Participants were asked to respond to whether all faces were the same as quickly and as accurately as possible following the presentation of angry or happy faces or their anti-expressions among crowds of neutral faces. Anti-expressions contain a degree of visual change that is equivalent to that of normal emotional facial expressions relative to neutral facial expressions and are recognized as neutral expressions. Normal expressions of anger and happiness were detected more rapidly and accurately than their anti-expressions. Additionally, the degree of overall schizotypy was negatively correlated with the effectiveness of detecting normal expressions versus anti-expressions. An emotion–recognition task revealed that the degree of positive schizotypy was negatively correlated with the accuracy of facial expression recognition. These results suggest that people with high levels of schizotypy experienced difficulties detecting and recognizing emotional facial expressions.


2020 ◽  
Vol 10 (11) ◽  
pp. 4002
Author(s):  
Sathya Bursic ◽  
Giuseppe Boccignone ◽  
Alfio Ferrara ◽  
Alessandro D’Amelio ◽  
Raffaella Lanzarotti

When automatic facial expression recognition is applied to video sequences of speaking subjects, recognition accuracy has been noted to be lower than with video sequences of still subjects. This phenomenon, known as the speaking effect, arises during spontaneous conversations because the speech articulation process influences facial configurations alongside the affective expressions. In this work we ask whether, aside from facial features, other cues related to the articulation process increase emotion recognition accuracy when added as input to a deep neural network model. We develop two neural networks that classify facial expressions of speaking subjects from the RAVDESS dataset: a spatio-temporal CNN and a GRU-cell RNN. They are first trained on facial features only, and afterwards on both facial features and articulation-related cues extracted from a model trained for lip reading, while also varying the number of consecutive frames provided as input. We show that with these DNNs the addition of articulation-related features increases classification accuracy by up to 12%, the increase being greater when more consecutive frames are provided to the model.
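The core idea, concatenating per-frame facial features with articulation cues and unrolling a recurrent cell over consecutive frames, can be sketched as follows. This is a toy NumPy GRU variant with made-up feature dimensions, not the paper's architecture; the lip-reading features are stand-ins drawn at random.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def gru_cell(x, h, W, U, b):
    """Minimal GRU-style cell: gates stacked as [update; reset; candidate]."""
    d = h.shape[0]
    z = sigmoid(W[:d] @ x + U[:d] @ h + b[:d])            # update gate
    r = sigmoid(W[d:2*d] @ x + U[d:2*d] @ h + b[d:2*d])   # reset gate
    n = np.tanh(W[2*d:] @ x + U[2*d:] @ (r * h) + b[2*d:])
    return (1 - z) * h + z * n

T, face_dim, artic_dim, hid = 8, 16, 4, 10         # hypothetical sizes
faces = rng.standard_normal((T, face_dim))         # per-frame facial features
artic = rng.standard_normal((T, artic_dim))        # per-frame articulation cues
frames = np.concatenate([faces, artic], axis=1)    # fused input per frame

in_dim = face_dim + artic_dim
W = rng.standard_normal((3 * hid, in_dim)) * 0.1
U = rng.standard_normal((3 * hid, hid)) * 0.1
b = np.zeros(3 * hid)
h = np.zeros(hid)
for x in frames:            # unroll over consecutive frames
    h = gru_cell(x, h, W, U, b)
print(h.shape)              # final state would feed a softmax classifier
```

Varying `T` corresponds to varying the number of consecutive input frames, the second factor studied in the paper.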


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Qing Lin ◽  
Ruili He ◽  
Peihe Jiang

State-of-the-art facial expression recognition methods outperform human beings, thanks especially to the success of convolutional neural networks (CNNs). However, most existing work focuses on analyzing adult faces and ignores an important problem: how can we recognize facial expressions from a baby's face image, and how difficult is it? In this paper, we first introduce a new face image database, named BabyExp, which contains 12,000 images of babies younger than two years old, each labeled with one of three facial expressions (happy, sad, or normal). To the best of our knowledge, the proposed dataset is the first for analyzing baby face images; it complements existing adult face datasets and can shed light on baby face analysis. We also propose a feature-guided CNN method with a new loss function, called distance loss, that optimizes interclass distance. To facilitate further research, we provide an expression recognition benchmark on the BabyExp dataset. Experimental results show that the proposed network achieves a recognition accuracy of 87.90% on BabyExp.
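The abstract does not give the exact form of the distance loss, but one common way to "optimize interclass distance" is a margin penalty on pairs of class centers. The sketch below is a hypothetical illustration of that idea, not the paper's formulation; the margin value and 2-D feature space are arbitrary.

```python
import numpy as np

def interclass_distance_loss(centers, margin=10.0):
    """Hypothetical 'distance loss' sketch: penalize pairs of class
    centers closer than a margin, encouraging interclass spread."""
    loss, k = 0.0, centers.shape[0]
    for i in range(k):
        for j in range(i + 1, k):
            d = np.linalg.norm(centers[i] - centers[j])
            loss += max(0.0, margin - d) ** 2
    return loss / (k * (k - 1) / 2)   # average over all pairs

# Three class centers (happy, sad, normal) in a 2-D feature space.
tight = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
spread = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0]])
print(interclass_distance_loss(tight) > interclass_distance_loss(spread))
```

Minimizing such a term during training pushes the three expression classes apart in feature space.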


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on subtle facial movements. Traditional methods often overfit or lose information because of insufficient data and manual feature selection. Instead, our proposed network, the Multi-features Cooperative Deep Convolutional Network (MC-DCN), attends to both the overall features of the face and the trends of its key parts. The first stage processes the video data: an ensemble of regression trees (ERT) extracts the overall contour of the face, and an attention model selects the parts of the face most susceptible to expression changes. The combined effect of these two methods yields what we call a local feature map. The video data are then fed to MC-DCN, which contains parallel sub-networks: while the overall spatiotemporal characteristics of facial expressions are learned from the image sequence, the selected key parts better capture the expression changes caused by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. The experimental results show that MC-DCN achieves recognition rates of 95%, 78.6%, and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
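The final step, combining local and global features from the parallel sub-networks into one expression prediction, can be sketched as a late-fusion head. All dimensions, projection matrices, and the seven-class output below are illustrative assumptions; the paper's actual fusion layer is not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse(global_feat, local_feat, Wg, Wl):
    """Hypothetical late fusion: project each stream, then concatenate."""
    return np.concatenate([Wg @ global_feat, Wl @ local_feat])

n_classes, g_dim, l_dim, proj = 7, 32, 32, 8
Wg = rng.standard_normal((proj, g_dim)) * 0.1   # global-stream projection
Wl = rng.standard_normal((proj, l_dim)) * 0.1   # local-stream projection
Wc = rng.standard_normal((n_classes, 2 * proj)) * 0.1  # classifier head

g = rng.standard_normal(g_dim)   # global spatiotemporal features
l = rng.standard_normal(l_dim)   # attention-selected local features
probs = softmax(Wc @ fuse(g, l, Wg, Wl))
print(probs.shape)               # one probability per expression class
```

Because both streams contribute to the concatenated vector, the classifier can trade off global context against local detail.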


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

As a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset is a popular benchmark for emotion recognition under various real-world constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces are detected in each frame of a video sequence, and the corresponding face ROI (region of interest) is extracted to obtain the face images; the face images in each frame are then aligned using the positions of the facial feature points. Second, the aligned face images are fed to a residual neural network to extract the spatial features of the facial expressions, and these spatial features are passed through the hybrid attention module to obtain the fused expression features. Finally, the fused features are fed to a gated recurrent unit to extract the temporal features of the facial expressions, and a fully connected layer classifies the expressions from them. Experiments on the CK+ (Extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yielded recognition accuracies of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance comparable to state-of-the-art methods but also improves accuracy on AFEW by more than 2%, confirming its strength for facial expression recognition in natural environments.
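The hybrid attention step in the cascade can be illustrated as channel attention followed by spatial attention over a CNN feature map. This is a generic sketch of that pattern under assumed shapes (a 64×7×7 map, as a ResNet might produce), not the paper's exact module.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hybrid_attention(fmap):
    """Hypothetical hybrid attention over a (C, H, W) feature map:
    channel gating from pooled statistics, then spatial gating from
    the channel-wise mean. Not the paper's exact formulation."""
    # Channel attention: squeeze by global average pooling, gate channels.
    ch_gate = sigmoid(fmap.mean(axis=(1, 2)))   # shape (C,)
    out = fmap * ch_gate[:, None, None]
    # Spatial attention: gate each location by its channel-wise mean.
    sp_gate = sigmoid(out.mean(axis=0))         # shape (H, W)
    return out * sp_gate[None, :, :]

rng = np.random.default_rng(3)
feat = rng.standard_normal((64, 7, 7))  # e.g. a ResNet final feature map
att = hybrid_attention(feat)
print(att.shape)                        # same shape, re-weighted features
```

The re-weighted map keeps the spatial layout intact, so it can feed directly into the temporal (GRU) stage of the cascade.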

