A hybrid deep learning neural approach for emotion recognition from facial expressions for socially assistive robots

2018 ◽  
Vol 29 (7) ◽  
pp. 359-373 ◽  
Author(s):  
Ariel Ruiz-Garcia ◽  
Mark Elshaw ◽  
Abdulrahman Altahhan ◽  
Vasile Palade

Facial emotions are changes in facial expressions that reflect a person's inner emotional states, intentions, or social exchanges; they are scrutinized with the aid of computer systems that attempt to inspect and identify variations in facial features and movement from visual data. Facial emotion recognition (FER) is a noteworthy area in computer vision and artificial intelligence due to its significant commercial and academic potential. FER has become a widespread application of deep learning and offers ever more uses in day-to-day life. It has gathered widespread attention recently, as facial expressions are considered the fastest medium for communicating any sort of information. Recognizing facial expressions provides an improved understanding of a person's thoughts or views. With the latest improvements in computer vision and machine learning, it is plausible to identify emotions from images, and analyzing them with the currently emerging deep learning methods enhances the accuracy rate tremendously compared to traditional systems. This paper emphasizes a review of some of the machine learning, deep learning, and transfer learning techniques used by several researchers that have flagged ways to advance the classification accuracy of FER.


Author(s):  
Ritvik Tiwari ◽  
Rudra Thorat ◽  
Vatsal Abhani ◽  
Shakti Mahapatro

Emotion recognition based on facial expression is an intriguing research field, which has been presented and applied in various spheres such as safety, health, and human-machine interfaces. Researchers in this field are keen on developing techniques that can serve as an aid to interpret and decode facial expressions, and then extract these features in order to achieve a better prediction by the computer. With advancements in deep learning, the different prospects of this technique are exploited to achieve better performance. We spotlight these contributions, the architectures and the databases used, and present the progress made by comparing the proposed methods and the results obtained. The interest of this paper is to guide technology enthusiasts by reviewing recent works and providing insights to make improvements in this field.


Classroom teaching assessments are intended to give valuable feedback on the teaching-learning process as it happens. The finest classroom assessments also serve as substantial sources of information for teachers, helping them to recognize what they imparted fittingly and how they can improve their lecture content to keep students attentive. In this paper, we have surveyed some of the recent work done on facial emotion recognition of students in a classroom arrangement and have proposed our deep learning approach to analyze emotions, with improved emotion classification results and optimized feedback for the instructor. A deep learning-based convolutional neural network algorithm will be used in this paper to train on the FER2013 facial emotion image database, and a transfer learning technique will be used to pre-train a VGG16 architecture-based model with the Cohn-Kanade (CK+) facial image database, with its own weights and biases. The trained model will capture live streaming of students using a high-resolution digital video camera facing the students, capturing their live emotions through facial expressions and classifying them as sad, happy, neutral, angry, disgust, surprise, or fear. This can offer insight into the class group emotion that is reflective of the mood among the students in the classroom. This experimental approach can also be used for video conferences, online classes, etc. This proposition can improve the accuracy of emotion recognition and facilitate faster learning. We have presented the research methodologies and the achieved results on student emotions in a classroom atmosphere and have proposed an improved CNN model based on transfer learning that can significantly improve emotion classification accuracy.
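The transfer-learning step described above — reusing a pretrained network's feature extractor and retraining only a new classifier head on the target emotion dataset — can be sketched as follows. This is a minimal toy illustration, not the paper's actual pipeline: the frozen "pretrained" extractor here is a fixed random ReLU projection standing in for VGG16's convolutional layers, and the data is synthetic rather than FER2013 or CK+.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained, FROZEN feature extractor; in the paper
# this role is played by VGG16's convolutional layers.
W_frozen = rng.standard_normal((64, 16))

def extract_features(x):
    f = np.maximum(x @ W_frozen, 0.0)              # frozen layer, never updated
    return (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-8)

# Synthetic stand-in for the new emotion dataset (7 emotion classes)
n, n_classes = 700, 7
X = rng.standard_normal((n, 64))
F = extract_features(X)
W_true = rng.standard_normal((16, n_classes))      # hidden labeling rule
y = np.argmax(F @ W_true, axis=1)

# Transfer learning: train ONLY the new softmax classifier head
W_head = np.zeros((16, n_classes))
b_head = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
for _ in range(300):
    p = softmax(F @ W_head + b_head)
    p[np.arange(n), y] -= 1.0                      # gradient of cross-entropy wrt logits
    W_head -= lr * F.T @ p / n                     # W_frozen stays untouched
    b_head -= lr * p.mean(axis=0)

acc = float((np.argmax(F @ W_head + b_head, axis=1) == y).mean())
```

The design point is that only the small head is optimized, which is why fine-tuning a pretrained model on a modest dataset like CK+ is feasible.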


MENDEL ◽  
2018 ◽  
Vol 24 (1) ◽  
pp. 113-120 ◽  
Author(s):  
Luis Antonio Beltrán Prieto ◽  
Zuzana Kominkova Oplatkova

Emotions demonstrate people's reactions to certain stimuli. Facial expression analysis is often used to identify the emotion expressed. Machine learning algorithms combined with artificial intelligence techniques have been developed in order to detect expressions found in multimedia elements, including videos and pictures. Advanced methods to achieve this include the use of deep learning algorithms. The aim of this paper is to analyze the performance of a convolutional neural network that uses autoencoder units for emotion recognition in human faces. The combination of the two deep learning techniques boosts the performance of the classification system. 8000 facial expressions from the Radboud Faces Database were used during this research for both training and testing. The outcome showed that five of the eight analyzed emotions presented accuracy rates higher than 90%.
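The autoencoder half of that combination learns a compact representation by training the network to reconstruct its own input. A minimal sketch of the idea, assuming a purely linear encoder/decoder trained by gradient descent on synthetic data (the paper's actual model couples autoencoder units with a convolutional network, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data lying near a 3-dimensional subspace of a 20-dimensional
# space, standing in for high-dimensional facial-expression features.
basis = rng.standard_normal((3, 20))
X = rng.standard_normal((500, 3)) @ basis + 0.01 * rng.standard_normal((500, 20))

# Linear autoencoder: encoder compresses 20 -> 3, decoder maps 3 -> 20
W_enc = 0.3 * rng.standard_normal((20, 3))
W_dec = 0.3 * rng.standard_normal((3, 20))

lr = 0.01
for _ in range(1500):
    H = X @ W_enc                    # bottleneck (encoded) representation
    E = H @ W_dec - X                # reconstruction error
    gW_dec = H.T @ E / len(X)        # gradients of mean squared error
    gW_enc = X.T @ (E @ W_dec.T) / len(X)
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
baseline = float(np.mean(X ** 2))    # error of predicting all zeros
```

After training, `mse` falls well below `baseline`, i.e. the 3-dimensional bottleneck has captured the structure of the data; in an emotion classifier those bottleneck features would feed the classification layers.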


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5328
Author(s):  
Clarence Tan ◽  
Gerardo Ceballos ◽  
Nikola Kasabov ◽  
Narayan Puthanmadam Subramaniyam

Using multimodal signals to solve the problem of emotion recognition is one of the emerging trends in affective computing. Several studies have utilized state-of-the-art deep learning methods and combined physiological signals, such as the electrocardiogram (ECG), electroencephalogram (EEG), and skin temperature, along with facial expressions, voice, and posture, to name a few, in order to classify emotions. Spiking neural networks (SNNs) represent the third generation of neural networks and employ biologically plausible models of neurons. SNNs have been shown to handle spatio-temporal data, which is essentially the nature of the data encountered in the emotion recognition problem, in an efficient manner. In this work, for the first time, we propose the application of SNNs to solve the emotion recognition problem with a multimodal dataset. Specifically, we use the NeuCube framework, which employs an evolving SNN architecture, to classify emotional valence, and evaluate the performance of our approach on the MAHNOB-HCI dataset. The multimodal data used in our work consist of facial expressions along with physiological signals such as ECG, skin temperature, skin conductance, respiration signal, mouth length, and pupil size. We perform classification under the Leave-One-Subject-Out (LOSO) cross-validation mode. Our results show that the proposed approach achieves an accuracy of 73.15% for classifying binary valence when applying feature-level fusion, which is comparable to other deep learning methods. We achieve this accuracy even without using EEG, which other deep learning methods have relied on to achieve this level of accuracy. In conclusion, we have demonstrated that SNNs can be successfully used for solving the emotion recognition problem with multimodal data, and we also provide directions for future research utilizing SNNs for affective computing.
In addition to its good accuracy, the SNN recognition system is incrementally trainable on new data in an adaptive way. It requires only one-pass training, which makes it suitable for practical and online applications. These features are not manifested in other methods for this problem.
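The evaluation protocol described above — concatenating per-trial features from several modalities (feature-level fusion) and scoring under Leave-One-Subject-Out cross-validation — can be sketched as follows. Everything here is a stand-in: the data is synthetic rather than MAHNOB-HCI, and a nearest-centroid classifier replaces the NeuCube SNN; only the fusion and LOSO mechanics are illustrated.

```python
import numpy as np

rng = np.random.default_rng(2)

n_subjects, trials = 8, 20
# Synthetic per-trial features for two modalities (stand-ins for, e.g.,
# ECG-derived and facial-expression features); binary valence labels.
labels = rng.integers(0, 2, size=(n_subjects, trials))
ecg_feat = labels[..., None] * 1.0 + 0.5 * rng.standard_normal((n_subjects, trials, 4))
face_feat = labels[..., None] * 1.0 + 0.5 * rng.standard_normal((n_subjects, trials, 6))

# Feature-level fusion: concatenate modality features per trial
fused = np.concatenate([ecg_feat, face_feat], axis=-1)

def nearest_centroid(train_X, train_y, test_X):
    # Toy classifier standing in for the evolving SNN
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(test_X[:, None, :] - centroids[None], axis=-1)
    return d.argmin(axis=1)

# Leave-One-Subject-Out cross-validation: each subject is held out in turn,
# so the test subject's trials never appear in training.
accs = []
for s in range(n_subjects):
    train_mask = np.arange(n_subjects) != s
    train_X = fused[train_mask].reshape(-1, fused.shape[-1])
    train_y = labels[train_mask].reshape(-1)
    preds = nearest_centroid(train_X, train_y, fused[s])
    accs.append((preds == labels[s]).mean())

loso_acc = float(np.mean(accs))
```

LOSO is the stricter protocol for physiological data because subject-specific signal characteristics cannot leak from training into testing.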


Author(s):  
Bandana M. Pal ◽  
Khushbu S. Tikhe ◽  
Akshay Pagaonkar ◽  
Pooja Jadhav

Emotion recognition is an important area of work for improving the interaction between human and machine. The complexity of emotion makes the acquisition task more difficult. Earlier works proposed capturing emotion through unimodal mechanisms such as only facial expressions or only vocal input. More recently, the idea of multimodal emotion recognition has increased the accuracy rate of machine detection. Moreover, deep learning techniques with neural networks have extended the success ratio of machines in respect of emotion recognition. Recent works with deep learning techniques have been performed with different kinds of inputs of human behavior, such as audio-visual inputs, facial expressions, body gestures, EEG signals, and related brainwaves. There are still many aspects in this area to work on to improve and build a robust system that will detect and classify emotions more accurately. In this paper, we try to explore the relevant significant works, their techniques, the effectiveness of the methods, and the scope for improvement of the results.


Author(s):  
Mircea Zloteanu ◽  
Eva G. Krumhuber ◽  
Daniel C. Richardson

People are accurate at classifying emotions from facial expressions but much poorer at determining if such expressions are spontaneously felt or deliberately posed. We explored if the method used by senders to produce an expression influences the decoder’s ability to discriminate authenticity, drawing inspiration from two well-known acting techniques: the Stanislavski (internal) and Mimic method (external). We compared spontaneous surprise expressions in response to a jack-in-the-box (genuine condition), to posed displays of senders who either focused on their past affective state (internal condition) or the outward expression (external condition). Although decoders performed better than chance at discriminating the authenticity of all expressions, their accuracy was lower in classifying external surprise compared to internal surprise. Decoders also found it harder to discriminate external surprise from spontaneous surprise and were less confident in their decisions, perceiving these to be similarly intense but less genuine-looking. The findings suggest that senders are capable of voluntarily producing genuine-looking expressions of emotions with minimal effort, especially by mimicking a genuine expression. Implications for research on emotion recognition are discussed.

