Emotion Recognition With Facial Expression Using Machine Learning for Social Network and Healthcare

Author(s):  
Anju Yadav ◽  
Venkatesh Gauri Shankar ◽  
Vivek Kumar Verma

In this chapter, a machine learning application for facial expression recognition (FER) is studied for seven emotional states (disgust, joy, surprise, anger, sadness, contempt, and fear) based on FER-describing coefficients. FER has practical importance in many areas, such as social networks, robotics, and healthcare. A literature review of existing machine learning approaches for FER is given, and a novel approach for FER on both static and dynamic images is proposed; its results are compared with those of existing approaches. The chapter also covers related applications, open challenges, and future opportunities for FER, for example security-oriented face detection systems that can identify an individual regardless of the expression with which he or she presents, and clinical systems that help doctors estimate the intensity of illness or pain in patients who are unable to speak. The proposed model is a machine learning application with three types of prototypes, namely a pre-trained model, a single-layer augmented model, and a multi-layered augmented model, with a combined accuracy of approximately 99%.
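The chapter's three prototypes are not specified in code here; as a toy illustration only, one hedged way to read "combined accuracy" is an ensemble that averages the class-probability outputs of the three models over the seven emotional states. All probability values below are made up for the example.

```python
import numpy as np

# Illustrative sketch (not the chapter's actual code): combine three
# FER prototypes (pre-trained, single-layer augmented, multi-layered
# augmented) by averaging their per-class probability outputs.
EMOTIONS = ["disgust", "joy", "surprise", "anger", "sadness", "contempt", "fear"]

def ensemble_predict(prob_sets):
    """Average per-model probability vectors and return the top emotion."""
    avg = np.mean(prob_sets, axis=0)
    return EMOTIONS[int(np.argmax(avg))], avg

# Three models' (made-up) probability outputs for one face image
p1 = np.array([0.05, 0.60, 0.10, 0.05, 0.05, 0.05, 0.10])
p2 = np.array([0.10, 0.50, 0.15, 0.05, 0.05, 0.05, 0.10])
p3 = np.array([0.05, 0.70, 0.05, 0.05, 0.05, 0.05, 0.05])

label, avg = ensemble_predict([p1, p2, p3])
print(label)  # joy
```

Averaging probabilities is only one of several ways such prototypes could be combined; the chapter itself does not state the mechanism.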

2021 ◽  
Vol 25 (3) ◽  
pp. 1717-1730
Author(s):  
Esma Mansouri-Benssassi ◽  
Juan Ye

Abstract: Emotion recognition through facial expressions and non-verbal speech is an important area of affective computing. It has been extensively studied, from classical feature extraction techniques to more recent deep learning approaches. However, most of these approaches face two major challenges: (1) robustness: in the face of degradation such as noise, can a model still make correct predictions? and (2) cross-dataset generalisation: when a model is trained on one dataset, can it be used to make inferences on another dataset? To address these challenges directly, we first propose the application of a spiking neural network (SNN) to predicting emotional states from facial expression and speech data, then investigate and compare its accuracy when facing data degradation or unseen new input. We evaluate our approach on third-party, publicly available datasets and compare it to state-of-the-art techniques. Our approach demonstrates robustness to noise: it achieves an accuracy of 56.2% for facial expression recognition (FER), compared to 22.64% for a CNN and 14.10% for an SVM, when input images are degraded with a noise intensity of 0.5, and the highest accuracy of 74.3% for speech emotion recognition (SER), compared to 21.95% for a CNN and 14.75% for an SVM, when audio white noise is applied. For generalisation, our approach achieves consistently high accuracies of 89% for FER and 70% for SER in cross-dataset evaluation, suggesting that it learns more effective feature representations, which lead to good generalisation of facial features and vocal characteristics across subjects.
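The robustness protocol described here (degrading the input with noise of a given intensity and re-measuring accuracy) can be sketched in a few lines. This is a hypothetical harness with a toy stand-in classifier, not the paper's SNN; the Gaussian-noise model and the 0.5 intensity value are assumptions based on the abstract's wording.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(images, intensity):
    """Add zero-mean Gaussian noise of the given intensity and clip to [0, 1]."""
    noise = rng.normal(0.0, intensity, size=images.shape)
    return np.clip(images + noise, 0.0, 1.0)

def accuracy(predict, images, labels):
    """Fraction of images the classifier labels correctly."""
    return float(np.mean(predict(images) == labels))

# Toy stand-in classifier: brighter-than-average image -> class 1
predict = lambda imgs: (imgs.mean(axis=(1, 2)) > 0.5).astype(int)

clean = np.stack([np.full((8, 8), 0.9), np.full((8, 8), 0.1)])
labels = np.array([1, 0])
print(accuracy(predict, clean, labels))                # 1.0 on clean input
print(accuracy(predict, degrade(clean, 0.5), labels))  # typically lower
```

The paper's comparison would run the same loop over an SNN, a CNN, and an SVM at increasing noise intensities.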


2021 ◽  
Author(s):  
Theresa Reiker ◽  
Monica Golumbeanu ◽  
Andrew Shattock ◽  
Lydia Burgert ◽  
Thomas A. Smith ◽  
...  

Abstract: Individual-based models have become important tools in the global battle against infectious diseases, yet model complexity can make calibration to biological and epidemiological data challenging. We propose a novel approach to calibrating disease transmission models via a Bayesian optimization framework that employs machine learning emulator functions to guide a global search over a multi-objective landscape. We demonstrate our approach by applying it to an established individual-based model of malaria, optimizing over a high-dimensional parameter space with respect to a portfolio of multiple fitting objectives built from datasets capturing the natural history of malaria transmission and disease progression. Outperforming other calibration methodologies, the new approach quickly reaches an improved final goodness of fit. Per-objective parameter importance and sensitivity diagnostics provided by our approach offer epidemiological insights and enhance trust in predictions through greater interpretability.

One Sentence Summary: We propose a novel, fast, machine learning-based approach to calibrating disease transmission models that outperforms other methodologies.
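The core idea of emulator-guided calibration (fit a cheap surrogate to the simulator evaluations made so far, then query the expensive simulator where the surrogate looks best) can be shown in miniature. This sketch is purely illustrative: the "simulator" is a stand-in one-parameter loss, and the emulator is a quadratic fit rather than the Gaussian-process emulators and acquisition functions a real Bayesian optimization framework would use.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an expensive simulator's goodness-of-fit loss
def loss(theta):
    return (theta - 0.3) ** 2 + 0.01 * np.sin(20 * theta)

# Emulator-guided search: fit a cheap quadratic surrogate to the points
# evaluated so far, then run the simulator at the surrogate's minimiser.
thetas = list(rng.uniform(0, 1, size=5))
losses = [loss(t) for t in thetas]
grid = np.linspace(0, 1, 201)

for _ in range(10):
    coeffs = np.polyfit(thetas, losses, deg=2)          # cheap emulator
    candidate = grid[np.argmin(np.polyval(coeffs, grid))]
    thetas.append(candidate)                            # expensive evaluation
    losses.append(loss(candidate))

best = thetas[int(np.argmin(losses))]
print(best)  # typically close to 0.3
```

The paper's setting differs in every practical respect (high-dimensional parameters, a portfolio of objectives, GP emulators), but the evaluate-fit-propose loop is the same shape.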


2020 ◽  
pp. 57-63
Author(s):  
admin admin

Human facial emotion recognition has attracted interest in the field of Artificial Intelligence. The emotions on a human face depict what is going on inside the mind. Facial expression recognition is the part of facial recognition that is gaining importance, and the need for it is increasing tremendously. While there are machine learning and Artificial Intelligence techniques for identifying expressions, this work uses convolutional neural networks to recognize expressions and classify them into six emotion categories. The various datasets investigated and explored for training expression recognition models are described in this paper; the models used are VGG-19 and ResNet-18. We also include gender identification alongside facial emotion recognition. In this project we used the FER2013 and CK+ datasets and ultimately achieved accuracies of around 73% and 94%, respectively.
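The abstract's pairing of emotion classification with gender identification suggests a shared backbone with two classification heads. The sketch below is a hypothetical numpy mock-up of that multi-head idea: the 128-dimensional feature vector stands in for a VGG-19/ResNet-18 backbone output, and all weights are random, so the printed labels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]
GENDERS = ["female", "male"]

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Shared backbone features (stand-in for a VGG-19 / ResNet-18 output)
features = rng.normal(size=128)

# Two linear heads on the same features: emotion (6-way) and gender (2-way)
W_emotion = rng.normal(size=(6, 128)) * 0.1
W_gender = rng.normal(size=(2, 128)) * 0.1

emotion = EMOTIONS[int(np.argmax(softmax(W_emotion @ features)))]
gender = GENDERS[int(np.argmax(softmax(W_gender @ features)))]
print(emotion, gender)
```

In a real implementation both heads would be trained jointly on labelled faces; the paper does not specify whether its gender branch shares weights with the emotion branch.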


Author(s):  
Yi Ji ◽  
Khalid Idrissi

This paper proposes an automatic facial expression recognition system that uses new methods for both face detection and feature extraction. Considering that facial expressions involve a small set of muscles and limited ranges of motion, the system recognizes expressions from these changes in video sequences. First, the differences between neutral and emotional states are detected, and faces are automatically located from the changing facial organs. Then, LBP features are extracted and AdaBoost is used to find the most important features for each expression on essential facial parts. Finally, an SVM with a polynomial kernel is used to classify expressions. The method is evaluated on the JAFFE and MMI databases, and its performance is better than that of other automatic or manually annotated systems.
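The LBP features mentioned here have a simple core: each pixel's 3x3 neighbourhood is thresholded against the centre pixel and the resulting bits are packed into one byte. A minimal sketch (the bit ordering is one common convention; the paper does not specify its variant):

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours
    against the centre pixel and pack the bits into one byte."""
    center = patch[1, 1]
    # Neighbours in clockwise order starting at the top-left corner
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, c] >= center else 0 for r, c in order]
    return sum(b << i for i, b in enumerate(bits))

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 8, 3]])
print(lbp_code(patch))  # 42
```

A full pipeline would histogram these codes over facial regions, let AdaBoost select the most discriminative histogram bins per expression, and feed the selected features to the polynomial-kernel SVM.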


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1892
Author(s):  
Simone Porcu ◽  
Alessandro Floris ◽  
Luigi Atzori

Most Facial Expression Recognition (FER) systems rely on machine learning approaches that require large databases for effective training. As these are not easily available, a good solution is to augment the databases with appropriate data augmentation (DA) techniques, which are typically based on either geometric transformations or oversampling augmentations (e.g., generative adversarial networks (GANs)). However, it is not always easy to understand which DA technique may be more suitable for FER systems, because most state-of-the-art experiments use different settings, which makes the impact of DA techniques incomparable. To advance in this respect, in this paper we evaluate and compare the impact of well-established DA techniques on the emotion recognition accuracy of a FER system based on the well-known VGG16 convolutional neural network (CNN). In particular, we consider both geometric transformations and a GAN to increase the number of training images. We performed cross-database evaluations: training with the "augmented" KDEF database and testing with two different databases (CK+ and ExpW). The best results were obtained by combining horizontal reflection, translation, and GAN, yielding an accuracy increase of approximately 30%. This outperforms the alternative approaches, except for one technique that could, however, rely on a considerably larger database.
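The two geometric transformations named in the winning combination, horizontal reflection and translation, are straightforward array operations. A minimal sketch on a toy single-channel image (padding choices for translation vary; zero-padding is assumed here):

```python
import numpy as np

def hflip(img):
    """Horizontal reflection (mirror along the vertical axis)."""
    return img[:, ::-1]

def translate(img, dx, dy):
    """Shift the image by (dx, dy) pixels, zero-padding vacated regions."""
    out = np.zeros_like(img)
    h, w = img.shape
    src = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    out[max(0, dy):max(0, dy) + src.shape[0],
        max(0, dx):max(0, dx) + src.shape[1]] = src
    return out

img = np.arange(9).reshape(3, 3)
print(hflip(img))            # columns reversed
print(translate(img, 1, 0))  # shifted right by one pixel
```

Each transformation yields a new labelled training image at essentially no cost, which is why such DA techniques are attractive when FER databases are small.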

