Real-time facial expression recognition using smoothed deep neural network ensemble

2020 ◽  
Vol 28 (1) ◽  
pp. 97-111
Author(s):  
Nadir Kamel Benamara ◽  
Mikel Val-Calvo ◽  
Jose Ramón Álvarez-Sánchez ◽  
Alejandro Díaz-Morcillo ◽  
Jose Manuel Ferrández-Vicente ◽  
...  

Facial emotion recognition (FER) has been extensively researched over the past two decades due to its direct impact on the computer vision and affective robotics fields. However, the datasets available to train these models often include mislabelled data, a consequence of labeller bias that drives the models to learn incorrect features. In this paper, a facial emotion recognition system is proposed that addresses automatic face detection and facial expression recognition separately; the latter is performed by an ensemble of only four deep convolutional neural networks, while a label smoothing technique is applied to deal with the mislabelled training data. The proposed system takes only 13.48 ms on a dedicated graphics processing unit (GPU) and 141.97 ms on a CPU to recognize facial emotions, and it matches current state-of-the-art performance on the challenging FER2013, SFEW 2.0, and ExpW databases, with recognition accuracies of 72.72%, 51.97%, and 71.82%, respectively.
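As a hedged illustration of the two techniques this abstract names, the sketch below combines uniform label smoothing with softmax averaging across ensemble members. It is a minimal PyTorch sketch under assumed shapes and a placeholder smoothing factor; the paper's actual architectures and hyperparameters are not reproduced here.

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, targets, num_classes=7, epsilon=0.1):
    """Cross-entropy against smoothed targets: each one-hot label keeps
    1 - epsilon of its mass and spreads epsilon uniformly over all classes,
    limiting the damage done by mislabelled training samples."""
    log_probs = F.log_softmax(logits, dim=-1)
    with torch.no_grad():
        smooth = torch.full_like(log_probs, epsilon / num_classes)
        # place the remaining mass on the annotated class
        smooth.scatter_(-1, targets.unsqueeze(-1), 1.0 - epsilon + epsilon / num_classes)
    return torch.mean(torch.sum(-smooth * log_probs, dim=-1))

def ensemble_predict(models, images):
    """Average the softmax outputs of the ensemble members
    (four CNNs in the paper) and take the arg-max emotion."""
    probs = torch.stack([F.softmax(m(images), dim=-1) for m in models])
    return probs.mean(dim=0).argmax(dim=-1)
```

The smoothed target never assigns zero probability to any class, so a wrongly annotated sample contributes a bounded gradient instead of pushing the model hard toward the incorrect label.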

Author(s):  
Tegani Salem ◽  
Telli Abdelmoutia

Although image classification has become one of the most important challenges, neural networks have had the most success with this task; this has shifted the focus towards architecture engineering rather than feature engineering. However, the enormous success of the convolutional neural network (CNN) still falls far short of the human brain's performance. In this context, a new and promising algorithm, the capsule network, based on dynamic routing and activity vectors between capsules, has emerged as an efficient technique for exceeding the limitations of the artificial neural network (ANN), one of the most important existing classifiers. This paper presents a new method based on a capsule network with a light-gradient-boosting-machine (LightGBM) classifier for facial emotion recognition. Our technique proceeds in two steps. Initially, the capsule network is employed purely for feature extraction. Then, using the outputs computed by the capsule network, a LightGBM classifier detects the seven fundamental facial expressions. Experiments were carried out to evaluate the proposed facial-expression-recognition system's performance. The efficacy of our method, which achieved an accuracy of 91%, was demonstrated by testing on the CK+ dataset.

KEYWORDS: Image classification, LightGBM, machine learning, computer vision, CNN, deep learning
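A minimal sketch of the two-stage pipeline described above, using the scikit-learn-style LightGBM API. The capsule features here are random stand-ins with an assumed 7 × 16 capsule layout (borrowed from the common CapsNet design, not stated in the abstract); in the real system they would be the activity vectors produced by the trained capsule network.

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)

def extract_features(images):
    """Stand-in for stage 1: in the real pipeline these rows would be the
    flattened capsule activity vectors from the trained capsule network
    (assumed here: 7 capsules x 16-dimensional vectors per image)."""
    return rng.normal(size=(len(images), 7 * 16))

# dummy data so the sketch runs end to end
train_images, test_images = list(range(200)), list(range(50))
train_labels = rng.integers(0, 7, size=200)  # seven basic expressions

# stage 2: fit a LightGBM classifier on the capsule features
clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(extract_features(train_images), train_labels)
predictions = clf.predict(extract_features(test_images))
```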


Author(s):  
Shahana A. ◽  
Harish Binu K. P.

The system introduces intelligent facial emotion recognition using an artificial neural network (ANN). The approach first applies modified local binary patterns, which involve horizontal, vertical, and neighbourhood pixel comparisons, to produce an initial facial representation. Then, a microgenetic algorithm (mGA)-embedded particle swarm optimization (PSO) is proposed for feature optimization. It incorporates a non-replaceable memory, a small-population secondary swarm, a new velocity-updating strategy, a subdimension-based in-depth local facial feature search, and a cooperation of local exploitation and global exploration search mechanisms to mitigate the premature-convergence problem of conventional PSO. An artificial neural network is used as the classifier for recognizing seven facial emotions. Based on a comprehensive study using within- and cross-domain images from the extended Japanese database, the empirical results indicate that the proposed system outperforms other state-of-the-art PSO variants, conventional PSO, the classical GA, and related facial expression recognition models reported in the literature by a significant margin.
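For context, the sketch below computes plain 8-neighbour local binary pattern codes, the base representation the paper modifies; the additional horizontal/vertical comparisons and the mGA-embedded PSO stage are not shown, and the image size and histogram binning are illustrative assumptions.

```python
import numpy as np

def lbp_codes(gray):
    """Standard 8-neighbour local binary pattern: threshold each pixel's
    neighbours against the centre pixel and pack the results into an
    8-bit code. (The paper's modified variant adds horizontal/vertical
    comparisons, which are not reproduced here.)"""
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = gray[1:-1, 1:-1]
    # the 8 neighbour offsets, clockwise from the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes

# a histogram of the codes serves as the facial feature vector that the
# mGA-embedded PSO stage would then optimise (image size is illustrative)
image = np.random.default_rng(0).integers(0, 256, size=(48, 48))
features, _ = np.histogram(lbp_codes(image), bins=256, range=(0, 256))
```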


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yuta Takahashi ◽  
Shingo Murata ◽  
Hayato Idei ◽  
Hiroaki Tomita ◽  
Yuichi Yamashita

Abstract
The mechanism underlying the emergence of emotional categories from visual facial expression information during the developmental process is largely unknown. Therefore, this study proposes a system-level explanation for understanding the facial emotion recognition process and its alteration in autism spectrum disorder (ASD) from the perspective of predictive processing theory. Predictive processing for facial emotion recognition was implemented as a hierarchical recurrent neural network (RNN). The RNNs were trained to predict the dynamic changes of facial expression movies for six basic emotions without explicit emotion labels, as a developmental learning process, and were evaluated by their performance in recognizing unseen facial expressions in the test phase. In addition, the causal relationship between the network characteristics assumed in ASD and ASD-like cognition was investigated. After the developmental learning process, emotional clusters emerged in the natural course of self-organization in higher-level neurons, even though emotional labels were not explicitly instructed. In addition, the network successfully recognized unseen test facial sequences by adjusting higher-level activity through the process of minimizing precision-weighted prediction error. In contrast, the network simulating altered intrinsic neural excitability demonstrated reduced generalization capability and impaired emotional clustering in higher-level neurons. Consistent with previous findings from human behavioral studies, an excessive precision estimation of noisy details underlies this ASD-like cognition. These results support the idea that impaired facial emotion recognition in ASD can be explained by altered predictive processing, and provide possible insight for investigating the neurophysiological basis of affective contact.
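A toy illustration of the recognition step this abstract describes: inferring a higher-level state by gradient descent on a precision-weighted prediction error. The linear generative mapping below is a stand-in for the paper's trained hierarchical RNN, and the precision, learning rate, and dimensions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy linear generative mapping from an 8-d higher-level state to a 64-d
# "frame"; in the paper this role is played by the trained hierarchical RNN
W = 0.1 * rng.normal(size=(64, 8))

def recognize(observation, precision=4.0, lr=0.05, steps=100):
    """Infer the higher-level state z by gradient descent on the
    precision-weighted prediction error 0.5 * pi * ||o - W z||^2."""
    z = np.zeros(8)
    for _ in range(steps):
        error = observation - W @ z          # prediction error
        z += lr * precision * (W.T @ error)  # negative gradient step
    return z

frame = rng.normal(size=64)
latent = recognize(frame)
```

Raising `precision` in this sketch over-weights noisy details during inference, which is the mechanism the study links to reduced generalization in the network simulating altered intrinsic excitability.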


2021 ◽  
Author(s):  
Yuta Takahashi ◽  
Shingo Murata ◽  
Hayato Idei ◽  
Hiroaki Tomita ◽  
Yuichi Yamashita

Background
The mechanism underlying the emergence of emotional categories from visual facial expression information during the developmental process is largely unknown. Therefore, this study proposes a system-level explanation for understanding the facial emotion recognition process and its alteration in autism spectrum disorder (ASD) from the perspective of predictive processing theory.

Methods
Predictive processing for facial emotion recognition was implemented as a hierarchical recurrent neural network (RNN). The RNNs were trained to predict the dynamic changes of facial expression movies for six basic emotions without explicit emotion labels, as a developmental learning process, and were evaluated by their performance in recognizing unseen facial expressions in the test phase. In addition, the causal relationship between the network characteristics assumed in ASD and ASD-like cognition was investigated.

Results
After the developmental learning process, emotional categories emerged in the natural course of self-organization in higher-level neurons, even though emotional labels were not explicitly instructed. In addition, the network successfully recognized unseen test facial sequences by adjusting higher-level activity through the process of minimizing precision-weighted prediction error. In contrast, the network simulating altered intrinsic neural excitability demonstrated reduced generalization capability and impaired emotional categorization in higher-level neurons. Consistent with previous findings from human behavioral studies, an excessive precision estimation of noisy details underlies this ASD-like cognition.

Conclusions
These results support the idea that impaired facial emotion recognition in ASD can be explained by altered predictive processing, and provide possible insight for investigating the neurophysiological basis of affective contact.

