Facial Emotion Recognition Based on Biorthogonal Wavelet Entropy, Fuzzy Support Vector Machine, and Stratified Cross Validation

IEEE Access ◽  
2016 ◽  
Vol 4 ◽  
pp. 8375-8385 ◽  
Author(s):  
Yu-Dong Zhang ◽  
Zhang-Jing Yang ◽  
Hui-Min Lu ◽  
Xing-Xing Zhou ◽  
Preetha Phillips ◽  
...  
2021 ◽  
Vol 7 (2) ◽  
pp. 203-206
Author(s):  
Herag Arabian ◽  
Verena Wagner-Hartl ◽  
Knut Moeller

Abstract Facial emotion recognition (FER) is a topic that has gained interest over the years for its role in bridging the gap between human and machine interaction. This study explores the potential of real-time FER modelling, to be integrated into a closed-loop system to help treat children with Autism Spectrum Disorder (ASD). The aim of this study is to show the differences between traditional machine learning and deep learning approaches to FER modelling. Two classification approaches were taken: the first was based on classic machine learning techniques, using the Histogram of Oriented Gradients (HOG) for feature extraction with a k-Nearest Neighbor and a Support Vector Machine model as classifiers; the second used transfer learning based on the popular AlexNet neural network architecture. Performance was measured by the accuracy on randomly selected validation sets after training on random training sets from the Oulu-CASIA database. The analyzed data show that traditional machine learning methods are as effective as deep neural network models and offer a good compromise among accuracy, extracted features, computational speed, and cost.
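The HOG-with-kNN pipeline of the first approach can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the authors' implementation: the cell size, bin count, absence of block normalization, and the toy kNN classifier are all assumptions.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Minimal HOG-style descriptor: per-cell histograms of gradient
    orientations, weighted by gradient magnitude (no block normalization)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientations
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    v = np.concatenate(feats).astype(float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbor majority vote on Euclidean distance."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

In practice each face crop would be converted to grayscale and resized to a fixed shape before `hog_features` is applied, so that all descriptors have the same length.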


Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1897 ◽  
Author(s):  
Dhwani Mehta ◽  
Mohammad Faridul Haque Siddiqui ◽  
Ahmad Y. Javaid

Over the past two decades, automatic facial emotion recognition has received enormous attention. This is due to the growing need for behavioral biometric systems and human–machine interaction, where facial emotion recognition and the intensity of emotion play vital roles. Existing works usually do not encode the intensity of the observed facial emotion, and even fewer model multi-class facial behavior data jointly. Our work recognizes emotions along with their respective intensities. The algorithms used in this comparative study are Gabor filters, the Histogram of Oriented Gradients (HOG), and Local Binary Patterns (LBP) for feature extraction; for classification, we used the Support Vector Machine (SVM), Random Forest (RF), and k-Nearest Neighbor (kNN) algorithms. This yields both emotion recognition and an intensity estimate for each recognized emotion. The study compares these classifiers for facial emotion recognition, along with intensity estimation, across databases. The results verify that the compared pipelines can be further applied to real-time recognition of facial emotion and emotion intensity.
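Of the three feature extractors compared, LBP is the simplest to sketch. Below is a minimal 3×3 LBP in NumPy; the neighbor ordering and histogram normalization are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel becomes an 8-bit
    code from thresholding its 8 neighbors against the center value."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]                       # interior (center) pixels
    # neighbor offsets, clockwise from the top-left corner
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes, usable as a feature vector."""
    codes = lbp_image(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)
```

The resulting histogram (often computed per image region and concatenated) is what would be fed to the SVM, RF, or kNN classifier.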


2018 ◽  
Vol 272 ◽  
pp. 668-676 ◽  
Author(s):  
Shui-Hua Wang ◽  
Preetha Phillips ◽  
Zheng-Chao Dong ◽  
Yu-Dong Zhang

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Nima Farhoumandi ◽  
Sadegh Mollaey ◽  
Soomaayeh Heysieattalab ◽  
Mostafa Zarean ◽  
Reza Eyvazpour

Objective. Alexithymia, a fundamental notion in the diagnosis of psychiatric disorders, is characterized by deficits in emotional processing and, consequently, difficulties in emotion recognition. Traditional tools for assessing alexithymia, including interviews and self-report measures, have produced inconsistent results due to limitations such as insufficient insight. The purpose of the present study was therefore to propose a new screening tool that uses machine learning models based on scores from a facial emotion recognition task. Method. In a cross-sectional study, 55 students of the University of Tabriz were selected based on inclusion and exclusion criteria and their scores on the Toronto Alexithymia Scale (TAS-20). They then completed the somatization subscale of the Symptom Checklist-90 Revised (SCL-90-R), the Beck Anxiety Inventory (BAI), the Beck Depression Inventory-II (BDI-II), and the facial emotion recognition (FER) task. Afterwards, support vector machine (SVM) and feedforward neural network (FNN) classifiers were trained with K-fold cross validation to predict alexithymia, and model performance was assessed with the area under the curve (AUC), accuracy, sensitivity, specificity, and F1-measure. Results. The models yielded an accuracy range of 72.7–81.8% after feature selection and optimization. Our results suggest that machine learning models can accurately distinguish alexithymia and determine the most informative items for predicting it. Conclusion. Machine learning models using the FER task, SCL-90-R, BDI-II, and BAI successfully diagnosed alexithymia and identified the most influential predictive factors; such models can serve as a clinical instrument to help clinicians in the diagnostic process and in earlier detection of the disorder.
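The K-fold evaluation protocol described above can be sketched as follows, with a nearest-centroid classifier standing in for the SVM/FNN models (the substitute classifier and fold count are assumptions; only the cross-validation loop and the metric computation mirror the text).

```python
import numpy as np

def kfold_metrics(X, y, k=5, seed=0):
    """K-fold cross-validation of a nearest-centroid classifier, pooling
    confusion counts across folds to report accuracy, sensitivity,
    specificity, and F1 for binary labels {0, 1}."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    tp = tn = fp = fn = 0
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # one centroid per class, fit on the training fold only
        cents = {c: X[train][y[train] == c].mean(axis=0) for c in (0, 1)}
        for t in test:
            pred = min(cents, key=lambda c: np.linalg.norm(X[t] - cents[c]))
            tp += pred == 1 and y[t] == 1
            tn += pred == 0 and y[t] == 0
            fp += pred == 1 and y[t] == 0
            fn += pred == 0 and y[t] == 1
    acc = (tp + tn) / len(y)
    sens = tp / max(tp + fn, 1)
    spec = tn / max(tn + fp, 1)
    f1 = 2 * tp / max(2 * tp + fp + fn, 1)
    return acc, sens, spec, f1
```

AUC is omitted here since the toy classifier produces hard labels rather than scores; the study's SVM/FNN models output decision values from which an ROC curve can be drawn.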


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Yan Ding ◽  
Xuemei Chen ◽  
Shan Zhong ◽  
Li Liu

With the rapid development of society, the number of college students in our country is rising. College students face pressure from society, school, and family but often cannot find a suitable outlet, so their psychological problems have become diverse and complicated. Mental health problems among college students are increasingly serious and require urgent attention. This article monitors college students' mental health by recognizing and analyzing their emotions, using EEG to determine emotional state. First, features are extracted from the different rhythm bands of the EEG; then a fuzzy support vector machine (FSVM) classifies each feature; finally, a decision fusion mechanism based on Dempster–Shafer (D-S) evidence combination theory fuses the classification results and outputs the final emotion recognition result. The contribution of this research is threefold. First, the use of multiple features improves the efficiency of data use. Second, the fuzzy support vector machine classifier is more resistant to noise, which improves the model's recognition rate. Third, the D-S evidence-based decision fusion mechanism takes the classification result of each feature into account, so the per-feature results complement one another and integrate organically. The experiments compare emotion recognition based on a single rhythm, multirhythm combination, and multirhythm fusion, and the results show that the proposed method effectively improves recognition performance. It has good practical value for emotion recognition in college students.
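The decision fusion step rests on Dempster's rule of combination. A minimal sketch follows, for mass functions whose focal elements are emotion singletons plus the whole frame (written 'Theta'); the specific mass values in the test are hypothetical, not the paper's data.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are singleton labels plus the full frame 'Theta'.
    Conflicting (disjoint) intersections are discarded and the remaining
    mass is renormalized by 1 - K, where K is the total conflict."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            if a == b:
                inter = a           # same focal element
            elif a == 'Theta':
                inter = b           # Theta intersected with X is X
            elif b == 'Theta':
                inter = a
            else:
                inter = None        # disjoint singletons: pure conflict
            if inter is None:
                conflict += wa * wb
            else:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}
```

In the paper's setting, each EEG rhythm's FSVM output would be converted to one mass function, and the rule applied pairwise across rhythms before taking the label with the largest fused mass.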


2009 ◽  
Vol 2009 ◽  
pp. 1-16 ◽  
Author(s):  
Yong Yang ◽  
Guoyin Wang ◽  
Hao Kong

Emotion recognition is very important for human–computer intelligent interaction. It is generally performed on facial or audio information with artificial neural networks, fuzzy sets, support vector machines, hidden Markov models, and so forth. Although some progress has been made in emotion recognition, several issues remain unsolved. For example, which features are most important for emotion recognition is still an open problem, one seldom studied in computer science, although related research has been conducted in cognitive psychology. In this paper, feature selection for facial emotion recognition is studied based on rough set theory. A self-learning attribute reduction algorithm is proposed based on rough sets and domain-oriented data-driven data mining theory. Experimental results show that the proposed method identifies important and useful features for emotion recognition with a high recognition rate. Among geometrical features, those concerning the mouth are found to be the most important for facial emotion recognition.
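Rough-set attribute reduction can be sketched with a dependency-degree criterion. The greedy strategy below is a generic illustration of the idea, not the paper's self-learning algorithm, and the attribute names in the test (mouth, brow, eye) are hypothetical geometrical features.

```python
def dependency(table, attrs, decision):
    """Dependency degree: the fraction of objects in the positive region,
    i.e. objects whose equivalence class under 'attrs' maps to exactly one
    decision value."""
    groups = {}
    for row in table:
        key = tuple(row[a] for a in attrs)
        groups.setdefault(key, set()).add(row[decision])
    pos = sum(1 for row in table
              if len(groups[tuple(row[a] for a in attrs)]) == 1)
    return pos / len(table)

def greedy_reduct(table, attrs, decision):
    """Greedy attribute reduction: repeatedly add the attribute that raises
    the dependency degree most, until it matches that of the full set."""
    target = dependency(table, attrs, decision)
    reduct = []
    while dependency(table, reduct, decision) < target:
        best = max((a for a in attrs if a not in reduct),
                   key=lambda a: dependency(table, reduct + [a], decision))
        reduct.append(best)
    return reduct
```

On a decision table of geometrical facial features, such a reduction would surface the mouth-related attributes the paper reports as most informative.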

