Ensemble of Machine Learning Models for an Improved Facial Emotion Recognition

Author(s):  
Sergio Pulido-Castro ◽  
Nubia Palacios-Quecan ◽  
Michelle P. Ballen-Cardenas ◽  
Sandra Cancino-Suarez ◽  
Alejandra Rizo-Arevalo ◽  
...  
2021 ◽  
Vol 2021 ◽  
pp. 1-10

Author(s):  
Nima Farhoumandi ◽  
Sadegh Mollaey ◽  
Soomaayeh Heysieattalab ◽  
Mostafa Zarean ◽  
Reza Eyvazpour

Objective. Alexithymia, a fundamental notion in the diagnosis of psychiatric disorders, is characterized by deficits in emotional processing and, consequently, difficulties in emotion recognition. Traditional tools for assessing alexithymia, which include interviews and self-report measures, have led to inconsistent results due to limitations such as insufficient insight. Therefore, the purpose of the present study was to propose a new screening tool that utilizes machine learning models based on the scores of a facial emotion recognition task. Method. In a cross-sectional study, 55 students of the University of Tabriz were selected based on the inclusion and exclusion criteria and their scores on the Toronto Alexithymia Scale (TAS-20). They then completed the somatization subscale of the Symptom Checklist-90-Revised (SCL-90-R), the Beck Anxiety Inventory (BAI), the Beck Depression Inventory-II (BDI-II), and the facial emotion recognition (FER) task. Afterwards, support vector machine (SVM) and feedforward neural network (FNN) classifiers were implemented using K-fold cross-validation to predict alexithymia, and model performance was assessed with the area under the curve (AUC), accuracy, sensitivity, specificity, and F1-measure. Results. The models yielded an accuracy range of 72.7–81.8% after feature selection and optimization. Our results suggest that the ML models were able to accurately distinguish alexithymia and to determine the most informative items for predicting it. Conclusion. Machine learning models using the FER task, SCL-90-R, BDI-II, and BAI could successfully diagnose alexithymia and identify the most influential predictive factors, and they can be used as a clinical instrument to help clinicians in the diagnosis process and in earlier detection of the disorder.
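The classification setup described above (an SVM evaluated with K-fold cross-validation on questionnaire and FER-task scores) can be sketched as follows. This is an illustrative sketch only: the data here is synthetic, and the feature count and fold count are assumptions, not the paper's actual protocol.

```python
# Sketch: SVM classifier with stratified K-fold cross-validation,
# standing in for the paper's SVM/FNN evaluation pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(55, 10))      # 55 participants, 10 illustrative features
y = rng.integers(0, 2, size=55)    # alexithymia label (synthetic)

# scale features, then fit an RBF-kernel SVM inside each fold
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(scores.mean())
```

On real data one would additionally report AUC, sensitivity, specificity, and F1, as the abstract describes, e.g. via `cross_validate` with multiple scoring metrics.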


2021 ◽  
Author(s):  
Naveen Kumari ◽  
Rekha Bhatia

Abstract Facial emotion recognition extracts human emotions from images and videos. As such, it requires an algorithm to understand and model the relationships between faces and facial expressions, and to recognize human emotions. Recently, deep learning models have been extensively utilized to enhance the facial emotion recognition rate. However, deep learning models suffer from overfitting, and they perform poorly on images with poor visibility and noise. Therefore, in this paper, a novel deep-learning-based facial emotion recognition tool is proposed. Initially, a joint trilateral filter is applied to the obtained dataset to remove noise. Thereafter, contrast-limited adaptive histogram equalization (CLAHE) is applied to the filtered images to improve their visibility. Finally, a deep convolutional neural network is trained, with the Nadam optimizer used to optimize its cost function. Experiments are conducted using a benchmark dataset and competitive human emotion recognition models. Comparative analysis demonstrates that the proposed facial emotion recognition model performs considerably better than the competitive models.
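The preprocessing chain above (denoise, then contrast enhancement) can be sketched in miniature. Note the substitutions: a plain 3×3 median filter and a global clipped histogram equalization stand in for the paper's joint trilateral filter and CLAHE (which clips and equalizes per local tile), and the input image is a synthetic low-contrast array.

```python
# Simplified stand-ins for the paper's preprocessing steps.
import numpy as np

def median_filter3(img):
    # 3x3 median filter as a minimal denoising step
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stacked = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return np.median(stacked, axis=0).astype(img.dtype)

def clipped_hist_eq(img, clip_frac=0.01):
    # global histogram equalization with clipping; CLAHE would instead
    # do this per tile and interpolate between tile mappings
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    clip = max(1, int(clip_frac * img.size))
    excess = np.maximum(hist - clip, 0).sum()
    hist = np.minimum(hist, clip) + excess // 256   # redistribute excess
    cdf = hist.cumsum()
    lut = np.round(255 * (cdf - cdf.min())
                   / max(cdf.max() - cdf.min(), 1)).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(0)
img = rng.integers(40, 120, size=(48, 48)).astype(np.uint8)  # low-contrast input
enhanced = clipped_hist_eq(median_filter3(img))
print(enhanced.min(), enhanced.max())
```

The output of such a chain (denoised, contrast-stretched faces) is what would then be fed to the convolutional network for training.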


2021 ◽  
Vol 7 (2) ◽  
pp. 203-206
Author(s):  
Herag Arabian ◽  
Verena Wagner-Hartl ◽  
Knut Moeller

Abstract Facial emotion recognition (FER) is a topic that has gained interest over the years for its role in bridging the gap between human and machine interactions. This study explores the potential of real-time FER modelling, to be integrated in a closed-loop system, to help in the treatment of children suffering from Autism Spectrum Disorder (ASD). The aim of this study is to show the differences between implementing traditional machine learning and deep learning approaches for FER modelling. Two classification approaches were taken: the first was based on classic machine learning techniques using the Histogram of Oriented Gradients (HOG) for feature extraction, with a k-Nearest Neighbor and a Support Vector Machine model as classifiers; the second uses transfer learning based on the popular "AlexNet" neural network architecture. The performance of the approaches was evaluated by the accuracy on randomly selected validation sets after training on random training sets of the Oulu-CASIA database. The data analyzed show that traditional machine learning methods are as effective as deep neural network models and are a good compromise between accuracy, extracted features, computational speed, and cost.
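The first (classic) approach can be illustrated with a toy version of its pipeline: gradient-orientation histogram features fed to a k-NN classifier. This is a whole-image orientation histogram, not a full block-normalized HOG descriptor, and the "faces" below are synthetic arrays rather than Oulu-CASIA images.

```python
# Toy HOG-like features + k-Nearest Neighbors, sketching the classic pipeline.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def orientation_histogram(img, bins=9):
    # gradient magnitude-weighted histogram of unsigned orientations
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # fold into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)         # L1-normalize

rng = np.random.default_rng(1)
X = np.array([orientation_histogram(rng.normal(size=(32, 32)))
              for _ in range(40)])
y = rng.integers(0, 6, size=40)               # 6 emotion classes (synthetic)

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
pred = knn.predict(X[:5])
```

A full HOG implementation would compute such histograms per cell and normalize over overlapping blocks before concatenation; libraries such as scikit-image provide this directly.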


Author(s):  
Jingying Wang ◽  
Baobin Li ◽  
Changye Zhu ◽  
Shun Li ◽  
Tingshao Zhu

Automatic emotion recognition is of great value in many applications; however, to fully realize that value, more portable, non-intrusive, and inexpensive technologies need to be developed. Besides facial expressions and voice, human gait can also reflect the walker's emotional state. Using gait data with emotion labels from 59 participants, the authors train machine learning models that are able to "sense" individual emotion. Experimental results show these models work very well and prove that gait features are effective in characterizing and recognizing emotions.
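The abstract gives no feature details, so the sketch below is purely illustrative of the general idea: derive simple statistics (mean, variability, dominant stride frequency) from synthetic one-dimensional "gait" acceleration signals and fit a small classifier on the labeled features.

```python
# Illustrative gait-feature extraction + classification (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gait_features(signal):
    # mean level, variability, and index of the dominant frequency bin
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    return [float(signal.mean()), float(signal.std()),
            float(np.argmax(spectrum))]

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 500)
# two synthetic "emotions" differing in stride frequency, plus sensor noise
signals = [np.sin(2 * np.pi * (1.5 + 0.5 * label) * t)
           + rng.normal(0, 0.3, t.size)
           for label in (0, 1) for _ in range(20)]
labels = np.repeat([0, 1], 20)
X = np.array([gait_features(s) for s in signals])

model = RandomForestClassifier(random_state=0).fit(X, labels)
```

Real gait-based systems typically use richer inputs (accelerometer axes or skeleton joint trajectories), but the feature-extraction-then-classify structure is the same.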


2007 ◽  
Vol 16 (06) ◽  
pp. 1001-1014 ◽  
Author(s):  
PANAGIOTIS ZERVAS ◽  
IOSIF MPORAS ◽  
NIKOS FAKOTAKIS ◽  
GEORGE KOKKINAKIS

This paper presents and discusses the problem of emotion recognition from speech signals using features bearing intonational information. In particular, parameters extracted from Fujisaki's model of intonation are presented and evaluated. Machine learning models were built using the C4.5 decision tree inducer, an instance-based learner, and Bayesian learning. The datasets used to train the machine learning models were extracted from two emotional databases of acted speech. Experimental results showed the effectiveness of Fujisaki's model attributes: they enhanced the recognition process for most of the emotion categories and learning approaches, helping to separate the emotion categories.
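The evaluation setup can be sketched as below. Stand-ins are used throughout: a CART decision tree and Gaussian naive Bayes replace the paper's C4.5 and Bayesian learners, and the feature vectors are synthetic placeholders for Fujisaki-model intonation parameters (phrase/accent command amplitudes and timings); none of the names or sizes come from the paper.

```python
# Decision tree and naive Bayes classifiers evaluated with cross-validation,
# standing in for the paper's C4.5 / instance-based / Bayesian comparison.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 6))      # 6 illustrative intonation features
y = rng.integers(0, 4, size=120)   # 4 emotion categories (synthetic)

for model in (DecisionTreeClassifier(random_state=0), GaussianNB()):
    scores = cross_val_score(model, X, y, cv=5)   # stratified 5-fold CV
    print(type(model).__name__, round(float(scores.mean()), 3))
```

An instance-based learner (the third model family the paper mentions) would slot in the same way, e.g. scikit-learn's `KNeighborsClassifier`.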


2016 ◽  
Vol 139 (11) ◽  
pp. 16-19 ◽  
Author(s):  
Rituparna Halder ◽  
Sushmit Sengupta ◽  
Arnab Pal ◽  
Sudipta Ghosh ◽  
Debashish Kundu
