Traditional versus Neural Network Classification Methods for Facial Emotion Recognition

2021 ◽  
Vol 7 (2) ◽  
pp. 203-206
Author(s):  
Herag Arabian ◽  
Verena Wagner-Hartl ◽  
Knut Moeller

Abstract. Facial emotion recognition (FER) is a topic that has gained interest over the years for its role in bridging the gap between human and machine interaction. This study explores the potential of real-time FER modelling for integration into a closed-loop system to support the treatment of children with Autism Spectrum Disorder (ASD). The aim of this study is to show the differences between traditional machine learning and deep learning approaches to FER modelling. Two classification approaches were taken: the first was based on classic machine learning techniques, using Histogram of Oriented Gradients (HOG) features with a k-Nearest Neighbor (kNN) and a Support Vector Machine (SVM) model as classifiers; the second used transfer learning based on the popular AlexNet neural network architecture. Performance was measured as the accuracy on randomly selected validation sets after training on random training sets drawn from the Oulu-CASIA database. The analysis shows that traditional machine learning methods are as effective as deep neural network models and offer a good compromise between accuracy, extracted features, computational speed, and cost.
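The first approach can be sketched as follows with scikit-learn. The feature vectors here are synthetic stand-ins; in the study they would be HOG descriptors extracted from Oulu-CASIA face images (e.g. via skimage.feature.hog). This is an illustration of the technique, not the authors' code:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for HOG descriptors of face images:
# six emotion classes, each with a class-dependent signal injected.
rng = np.random.default_rng(0)
n, d = 300, 128
X = rng.normal(size=(n, d))
y = rng.integers(0, 6, size=n)          # six basic emotion labels
X[np.arange(n), y] += 3.0               # make classes separable

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

# The two classifiers compared in the study
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
svm = SVC(kernel="linear").fit(X_tr, y_tr)

print("kNN validation accuracy:", knn.score(X_va, y_va))
print("SVM validation accuracy:", svm.score(X_va, y_va))
```

A linear kernel is a common default for high-dimensional HOG features; the study's exact kernel and hyperparameters are not stated here.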

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yuta Takahashi ◽  
Shingo Murata ◽  
Hayato Idei ◽  
Hiroaki Tomita ◽  
Yuichi Yamashita

Abstract. The mechanism underlying the emergence of emotional categories from visual facial expression information during development is largely unknown. Therefore, this study proposes a system-level explanation for understanding the facial emotion recognition process and its alteration in autism spectrum disorder (ASD) from the perspective of predictive processing theory. Predictive processing for facial emotion recognition was implemented as a hierarchical recurrent neural network (RNN). The RNNs were trained to predict the dynamic changes of facial expression movies for six basic emotions, without explicit emotion labels, as a developmental learning process, and were evaluated by their performance in recognizing unseen facial expressions in the test phase. In addition, the causal relationship between the network characteristics assumed in ASD and ASD-like cognition was investigated. After the developmental learning process, emotional clusters emerged through self-organization in higher-level neurons, even though emotion labels were never explicitly instructed. In addition, the network successfully recognized unseen test facial sequences by adjusting higher-level activity to minimize precision-weighted prediction error. In contrast, a network simulating altered intrinsic neural excitability showed reduced generalization capability and impaired emotional clustering in higher-level neurons. Consistent with previous findings from human behavioral studies, an excessive precision estimation of noisy details underlies this ASD-like cognition. These results support the idea that impaired facial emotion recognition in ASD can be explained by altered predictive processing, and provide possible insight for investigating the neurophysiological basis of affective contact.
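The recognition step described above, in which higher-level activity is adjusted to minimize precision-weighted prediction error, can be sketched in a few lines of NumPy. The linear generative model W is a hypothetical stand-in for the paper's trained hierarchical RNN, and the single observation x stands in for a facial-feature frame:

```python
import numpy as np

rng = np.random.default_rng(1)
d_obs, d_lat = 20, 4

# Hypothetical linear generative model: higher-level state z
# predicts the observed frame as W @ z (the paper uses a trained RNN).
W = rng.normal(size=(d_obs, d_lat))
z_true = rng.normal(size=d_lat)
x = W @ z_true + 0.05 * rng.normal(size=d_obs)  # noisy "unseen" observation

precision = 1.0 / 0.05 ** 2   # inverse noise variance (precision weighting)
z = np.zeros(d_lat)           # higher-level activity, adapted at test time
lr = 5e-5

err_before = np.linalg.norm(x - W @ z)
for _ in range(200):
    pred_error = x - W @ z
    # Gradient descent on 0.5 * precision * ||x - W @ z||^2
    z += lr * precision * (W.T @ pred_error)
err_after = np.linalg.norm(x - W @ z)

print(f"prediction error: {err_before:.3f} -> {err_after:.3f}")
```

The paper's ASD simulation corresponds to mis-estimating the precision term: weighting noisy details too heavily drives the inference to overfit them rather than generalize.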


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Nima Farhoumandi ◽  
Sadegh Mollaey ◽  
Soomaayeh Heysieattalab ◽  
Mostafa Zarean ◽  
Reza Eyvazpour

Objective. Alexithymia, a fundamental notion in the diagnosis of psychiatric disorders, is characterized by deficits in emotional processing and, consequently, difficulties in emotion recognition. Traditional tools for assessing alexithymia, which include interviews and self-report measures, have produced inconsistent results due to limitations such as insufficient patient insight. Therefore, the purpose of the present study was to propose a new screening tool that utilizes machine learning models based on scores from a facial emotion recognition task. Method. In a cross-sectional study, 55 students of the University of Tabriz were selected based on the inclusion and exclusion criteria and their scores on the Toronto Alexithymia Scale (TAS-20). They then completed the somatization subscale of the Symptom Checklist-90-Revised (SCL-90-R), the Beck Anxiety Inventory (BAI), the Beck Depression Inventory-II (BDI-II), and the facial emotion recognition (FER) task. Afterwards, support vector machine (SVM) and feedforward neural network (FNN) classifiers were trained with K-fold cross-validation to predict alexithymia, and model performance was assessed with the area under the curve (AUC), accuracy, sensitivity, specificity, and F1-measure. Results. The models yielded accuracies of 72.7–81.8% after feature selection and optimization, accurately distinguishing alexithymia and identifying the most informative items for predicting it. Conclusion. Machine learning models using the FER task, SCL-90-R, BDI-II, and BAI could successfully detect alexithymia, reveal the most influential predictive factors, and serve as a clinical instrument to support clinicians in diagnosing the disorder and detecting it earlier.
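A minimal sketch of this classification setup in scikit-learn, assuming synthetic stand-in data (in the study, each row would hold a participant's FER-task scores plus SCL-90-R, BDI-II, and BAI items, with the label derived from the TAS-20 cutoff):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 55 participants, 12 features, binary label
# loosely driven by the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(55, 12))
y = (X[:, 0] + 0.5 * rng.normal(size=55) > 0).astype(int)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "FNN": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(16,),
                                       max_iter=2000, random_state=0)),
}

results = {}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    results[name] = (acc.mean(), auc.mean())
    print(f"{name}: accuracy={acc.mean():.3f}, AUC={auc.mean():.3f}")
```

The study's feature selection and hyperparameter optimization steps are omitted here; scaling inside the pipeline keeps the cross-validation free of leakage.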


2021 ◽  
Vol 1827 (1) ◽  
pp. 012130
Author(s):  
Qi Li ◽  
Yun Qing Liu ◽  
Yue Qi Peng ◽  
Cong Liu ◽  
Jun Shi ◽  
...  

2021 ◽  
Author(s):  
Naveen Kumari ◽  
Rekha Bhatia

Abstract. Facial emotion recognition extracts human emotions from images and videos. As such, it requires an algorithm to understand and model the relationships between faces and facial expressions and to recognize human emotions. Recently, deep learning models have been used extensively to improve the facial emotion recognition rate; however, they suffer from overfitting and perform poorly on images with poor visibility and noise. Therefore, in this paper, a novel deep-learning-based facial emotion recognition tool is proposed. Initially, a joint trilateral filter is applied to the dataset to remove noise. Thereafter, contrast-limited adaptive histogram equalization (CLAHE) is applied to the filtered images to improve their visibility. Finally, a deep convolutional neural network is trained, with the Nadam optimizer used to minimize its cost function. Experiments were carried out on a benchmark dataset against competitive human emotion recognition models. Comparative analysis demonstrates that the proposed facial emotion recognition model performs considerably better than the competitive models.
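The contrast-limiting idea behind CLAHE can be illustrated with a simplified, NumPy-only version that operates on the whole image rather than on tiles (real CLAHE, e.g. OpenCV's cv2.createCLAHE, clips each tile's histogram and blends the tile mappings bilinearly):

```python
import numpy as np

def clipped_hist_equalize(img, clip_limit=0.01):
    """Global contrast-limited histogram equalization for 8-bit grayscale.

    Simplified sketch: clip the histogram, redistribute the clipped
    mass, then remap intensities through the normalized CDF.
    """
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    limit = max(1, int(clip_limit * img.size))
    excess = np.sum(np.maximum(hist - limit, 0))
    hist = np.minimum(hist, limit) + excess // 256   # redistribute clipped mass
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

# Low-contrast synthetic face crop: values squeezed into [100, 140)
rng = np.random.default_rng(0)
img = rng.integers(100, 140, size=(48, 48), dtype=np.uint8)
out = clipped_hist_equalize(img)
print("input range:", img.min(), img.max(), "-> output range:", out.min(), out.max())
```

Clipping the histogram bounds the slope of the mapping, which is what keeps CLAHE from amplifying noise the way plain histogram equalization does.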


2021 ◽  
Vol 12 ◽  
Author(s):  
Paula J. Webster ◽  
Shuo Wang ◽  
Xin Li

Different styles of social interaction are one of the core characteristics of autism spectrum disorder (ASD). Social differences among individuals with ASD often include difficulty in discerning the emotions of neurotypical people based on their facial expressions. This review first covers the rich body of literature studying differences in facial emotion recognition (FER) in those with ASD, including behavioral studies and neurological findings. In particular, we highlight subtle emotion recognition and various factors related to inconsistent findings in behavioral studies of FER in ASD. Then, we discuss the dual problem of FER, namely facial emotion expression (FEE), the production of facial expressions of emotion. Although FEE is less studied, social interaction involves both the ability to recognize emotions and the ability to produce appropriate facial expressions, and how others perceive facial expressions of emotion in those with ASD has remained an under-researched area. Finally, we propose a method for teaching FER [FER teaching hierarchy (FERTH)] based on recent research investigating FER in ASD, considering the use of posed vs. genuine emotions and static vs. dynamic stimuli. We also propose two possible teaching approaches: (1) a standard method of teaching progressively from simple drawings and cartoon characters to more complex audio-visual video clips of genuine human expressions of emotion with context clues or (2) teaching in a field of images that includes posed and genuine emotions to improve generalizability before progressing to more complex audio-visual stimuli. Lastly, we advocate for autism interventionists to use FER stimuli developed primarily for research purposes to facilitate the incorporation of well-controlled stimuli to teach FER and bridge the gap between intervention and research in this area.

