MIFAD-Net: Multi-Layer Interactive Feature Fusion Network With Angular Distance Loss for Face Emotion Recognition

2021 ◽  
Vol 12 ◽  
Author(s):  
Weiwei Cai ◽  
Ming Gao ◽  
Runmin Liu ◽  
Jie Mao

Understanding human emotions and psychology is a critical step toward realizing artificial intelligence, and correct recognition of facial expressions is essential for judging emotions. However, the differences caused by changes in facial expression are very subtle, and different expression features are less distinguishable, making it difficult for computers to recognize human facial emotions accurately. Therefore, this paper proposes a novel multi-layer interactive feature fusion network model with angular distance loss. First, a multi-layer and multi-scale module is designed to extract global and local features of facial emotions and to capture part of the feature relationships between different scales, thereby improving the model's ability to discriminate subtle features of facial emotions. Second, a hierarchical interactive feature fusion module is designed to address the loss of useful feature information caused by the layer-by-layer convolution and pooling of convolutional neural networks. In addition, an attention mechanism is applied between convolutional layers at different levels, improving the network's discriminative ability by increasing the saliency of distinct feature information on each layer and suppressing irrelevant information. Finally, we use the angular distance loss function to improve the proposed model's inter-class feature separation and intra-class feature clustering capabilities, addressing the issues of large intra-class differences and high inter-class similarity in facial emotion recognition. We conducted comparison and ablation experiments on the FER2013 dataset. The results show that the proposed MIFAD-Net outperforms the compared methods by 1.02–4.53%, making it strongly competitive.
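The abstract does not give the exact form of the angular distance loss, but a common formulation with the stated goals (tighter intra-class clustering, wider inter-class separation) is the additive angular margin softmax in the style of ArcFace. A minimal NumPy sketch, assuming that formulation and illustrative margin/scale values:

```python
import numpy as np

def angular_margin_loss(features, weights, labels, margin=0.5, scale=30.0):
    """ArcFace-style additive angular margin loss (a sketch, not
    necessarily the paper's exact loss).
    features: (N, D) embeddings; weights: (C, D) class vectors; labels: (N,)."""
    # L2-normalize embeddings and class weights so logits become cosines
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T                                    # (N, C) cosine similarities
    theta = np.arccos(np.clip(cos, -1 + 1e-7, 1 - 1e-7))
    # add the angular margin only to each sample's true-class angle
    target = np.eye(w.shape[0])[labels].astype(bool)
    theta_m = np.where(target, theta + margin, theta)
    logits = scale * np.cos(theta_m)
    # standard cross-entropy over the margin-adjusted logits
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[target].mean()
```

Because the margin is added to the true-class angle before the cosine, the loss penalizes samples that are merely close to their class center, which is what drives the intra-class compaction described above.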

2014 ◽  
Vol 20 (10) ◽  
pp. 1004-1014 ◽  
Author(s):  
Cinzia Cecchetto ◽  
Marilena Aiello ◽  
Delia D’Amico ◽  
Daniela Cutuli ◽  
Daniela Cargnelutti ◽  
...  

Abstract
Multiple sclerosis (MS) may be associated with impaired perception of facial emotions. However, emotion recognition mediated by bodily postures has never been examined in these patients. Moreover, several studies have suggested a relation between emotion recognition impairments and alexithymia, in line with the idea that the ability to recognize emotions requires individuals to be able to understand their own emotions. Although a deficit in emotion recognition has been observed in MS patients, the association between impaired emotion recognition and alexithymia has received little attention. The aim of this study was, first, to investigate MS patients' abilities to recognize emotions mediated by both facial and bodily expressions and, second, to examine whether any observed deficits in emotion recognition could be explained by the presence of alexithymia. Thirty patients with MS and 30 healthy matched controls performed experimental tasks assessing emotion discrimination and recognition of facial expressions and bodily postures. They also completed questionnaires evaluating alexithymia, depression, and fatigue. First, facial emotion recognition and, to a lesser extent, bodily emotion recognition can be impaired in MS patients; in particular, patients with higher disability showed an impairment in emotion recognition compared with patients with lower disability and with controls. Second, the deficit in emotion recognition was not predicted by alexithymia; instead, the disease's characteristics and performance on some cognitive tasks correlated significantly with emotion recognition. Impaired facial emotion recognition is a cognitive signature of MS that is not dependent on alexithymia. (JINS, 2014, 19, 1–11)


2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the proposed stimulus. Indeed, affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and the studies that used facial expressions in profile view employed a between-subjects design or children's faces as stimuli. The present research aims to investigate differences in emotion recognition between faces presented in frontal and in profile view using a within-subjects experimental design.
Method: The sample comprised 132 Italian university students (88 female, Mage = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, one for frontal and one for profile presentation, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RT) were recorded.
Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than facial expressions of the same emotions in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.
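The emotion-specific accuracy scores described above reduce to a mean of correct responses per emotion label, computed separately for each presentation view. A minimal sketch of that computation (label names are illustrative, not the KDEF coding):

```python
import numpy as np

def emotion_accuracy(responses, truths, emotions):
    """Per-emotion recognition accuracy: for each emotion label, the mean
    proportion of trials of that emotion answered correctly."""
    responses, truths = np.asarray(responses), np.asarray(truths)
    return {e: float((responses[truths == e] == e).mean()) for e in emotions}
```

Running this once on frontal trials and once on profile trials yields the two emotion-specific score sets that the within-subjects comparison is based on.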


2021 ◽  
Vol 11 (22) ◽  
pp. 10540
Author(s):  
Navjot Rathour ◽  
Zeba Khanam ◽  
Anita Gehlot ◽  
Rajesh Singh ◽  
Mamoon Rashid ◽  
...  

There is significant interest in facial emotion recognition in the fields of human–computer interaction and the social sciences. With the advancements in artificial intelligence (AI), the field of human behavioral prediction and analysis, especially of human emotion, has evolved significantly. The most standard methods of emotion recognition currently run in models deployed on remote servers. We believe that reducing the distance between the input device and the server model can lead to better efficiency and effectiveness in real-life applications. For this purpose, computational methodologies such as edge computing can be beneficial; edge computing can also enable time-critical applications in sensitive fields. In this study, we propose a Raspberry Pi-based standalone edge device that can detect facial emotions in real time. Although this edge device can be used in a variety of applications where human facial emotions play an important role, this article is mainly crafted using a dataset of employees working in organizations. The device is implemented using the Mini-Xception deep network because of its computational efficiency and shorter inference time compared to other networks. It achieved 100% accuracy for detecting faces in real time and 68% accuracy for emotion recognition, higher than the accuracy reported in the state of the art on the FER2013 dataset. Future work will implement a deep network on the Raspberry Pi with an Intel Movidius neural compute stick to reduce processing time and achieve a fast real-time implementation of the facial emotion recognition system.
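The abstract does not detail the classification stage, but the final step of a Mini-Xception-style pipeline is typically a softmax over a 7-way output mapped to the FER2013 emotion labels. A hedged sketch of that step; the label order below follows the common FER2013 convention and is an assumption, and the logits would come from the deployed network:

```python
import numpy as np

# Common FER2013 label ordering (assumed, not stated in the article)
FER2013_LABELS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def classify_emotion(logits):
    """Map a 7-way network output vector to a FER2013 emotion label
    together with its softmax confidence."""
    z = np.asarray(logits, dtype=float)
    z -= z.max()                          # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()   # softmax over the 7 classes
    idx = int(np.argmax(probs))
    return FER2013_LABELS[idx], float(probs[idx])
```

On an edge device this function would run on each detected face crop per frame, keeping the whole loop local to the Raspberry Pi rather than a remote server.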


2014 ◽  
Vol 519-520 ◽  
pp. 644-650
Author(s):  
Mian Shui Yu ◽  
Yu Xie ◽  
Xiao Meng Xie

Age classification based on facial images is attracting wide attention for its broad applications in human-computer interaction (HCI). Since human senescence is a tremendously complex process, age classification remains a highly challenging problem. In our study, the Local Directional Pattern (LDP) and the Gabor wavelet transform were used to extract global and local facial features, respectively, which were fused based on information fusion theory. The Principal Component Analysis (PCA) method was then used for dimensionality reduction of the fused features to obtain a lower-dimensional age characteristic vector. A Support Vector Machine (SVM) multi-class classifier with Error-Correcting Output Codes (ECOC) was proposed in the paper to handle multi-class classification problems such as age classification. Experiments on the public FG-NET age database proved the efficiency of our method.
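The fusion-plus-reduction stage described above amounts to concatenating the two feature vectors and projecting onto the top principal components. A generic sketch using SVD-based PCA, under assumed feature shapes rather than the paper's exact configuration:

```python
import numpy as np

def fuse_and_reduce(ldp_feats, gabor_feats, n_components):
    """Concatenate LDP and Gabor feature vectors, then project the fused
    features onto the top principal components (SVD-based PCA sketch).
    ldp_feats: (N, D1); gabor_feats: (N, D2); returns (N, n_components)."""
    X = np.hstack([ldp_feats, gabor_feats])   # (N, D1 + D2) fused features
    Xc = X - X.mean(axis=0)                   # center before PCA
    # rows of Vt are principal axes, ordered by decreasing variance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T           # lower-dimensional age vector
```

The resulting low-dimensional vectors would then be fed to the ECOC-coded SVM classifier, which decomposes the multi-class age problem into a set of binary decisions.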

