Deep Learning based Student Emotion Recognition from Facial Expressions in Classrooms

Classroom teaching assessments are intended to give valuable feedback on the teaching-learning process as it happens. The best classroom assessments also serve as meaningful sources of information for teachers, helping them recognize what they taught well and how they can improve their lecture content to keep students attentive. In this paper, we survey recent work on facial emotion recognition of students in a classroom setting and propose a deep learning approach that improves emotion classification results and offers optimized feedback to the instructor. A deep learning-based convolutional neural network is trained on the FER2013 facial emotion image database, and a transfer learning technique is used to fine-tune a VGG16 architecture-based model, with its pre-trained weights and biases, on the Cohn-Kanade (CK+) facial image database. The trained model captures a live stream of students through a high-resolution digital video camera facing the students, recognizes their emotions from facial expressions, and classifies each as sad, happy, neutral, angry, disgust, surprise, or fear, offering insight into the class group emotion that reflects the mood among the students in the classroom. This experimental approach can also be applied to video conferences, online classes, etc., and can improve the accuracy of emotion recognition and facilitate faster learning. We present the research methodology and the results achieved on student emotions in a classroom atmosphere, and propose an improved CNN model based on transfer learning that can significantly improve emotion classification accuracy.
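The transfer-learning setup described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the pre-trained convolutional base (VGG16 in the paper) is stood in for by a fixed random projection, and the FER2013 images by random arrays, purely to show which parameters stay frozen and which are trained for the seven emotion classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained feature extractor (e.g. a VGG16 conv
# base): a fixed random projection from flattened 48x48 pixels to 64 features.
W_frozen = rng.normal(size=(48 * 48, 64)) / np.sqrt(48 * 48)

def extract_features(images):
    """Frozen 'backbone': its weights are never updated during fine-tuning."""
    return np.maximum(images.reshape(len(images), -1) @ W_frozen, 0.0)  # ReLU

# New trainable head: 7 emotion classes (sad, happy, neutral, angry,
# disgust, surprise, fear), softmax regression trained by gradient descent.
n_classes = 7
W_head = np.zeros((64, n_classes))
b_head = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy stand-in for FER2013: 200 random 48x48 grayscale "images" with labels.
X = rng.normal(size=(200, 48, 48))
y = rng.integers(0, n_classes, size=200)
Y = np.eye(n_classes)[y]

feats = extract_features(X)          # computed once; the backbone is frozen
for step in range(300):              # only the head's parameters are updated
    P = softmax(feats @ W_head + b_head)
    W_head -= 0.01 * feats.T @ (P - Y) / len(X)
    b_head -= 0.01 * (P - Y).mean(axis=0)

train_acc = (softmax(feats @ W_head + b_head).argmax(axis=1) == y).mean()
```

In the actual system, the frozen base would be a pre-trained VGG16 fine-tuned on CK+, with the head trained on FER2013; the random stand-ins here only illustrate the division between frozen and trainable parameters.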

2021 ◽  
pp. 1-12
Author(s):  
Mukul Kumar ◽  
Nipun Katyal ◽  
Nersisson Ruban ◽  
Elena Lyakso ◽  
A. Mary Mekala ◽  
...  

Over the years, the need to differentiate various emotions in oral communication has played an important role in emotion-based studies, and different algorithms have been proposed to classify kinds of emotion. However, there is no measure of the fidelity of the emotion under consideration, primarily because most readily available annotated datasets are produced by actors rather than recorded in real-world scenarios. The predicted emotion therefore lacks an important aspect, authenticity: whether an emotion is actual or simulated. In this research work, we have developed a hybrid convolutional neural network algorithm, based on transfer learning and style transfer, that classifies both the emotion and the fidelity of the emotion. The model is trained on features extracted from a dataset that contains simulated as well as actual utterances. We have compared the developed algorithm with conventional machine learning and deep learning techniques on metrics such as accuracy, precision, recall, and F1 score; the developed model performs much better than the conventional models. The research aims to dive deeper into human emotion and build a model that understands it as humans do, with precision, recall, and F1 score values of 0.994, 0.996, and 0.995 for speech authenticity and 0.992, 0.989, and 0.99 for speech emotion classification, respectively.
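The precision, recall, and F1 metrics reported above are computed per class from the confusion counts. A minimal stdlib sketch (the labels and predictions below are illustrative, not the paper's data):

```python
def precision_recall_f1(y_true, y_pred, label):
    """Per-class precision, recall, and F1 from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative authenticity labels: 'actual' vs. 'simulated' utterances.
y_true = ["actual", "actual", "simulated", "simulated", "actual"]
y_pred = ["actual", "simulated", "simulated", "simulated", "actual"]
p, r, f1 = precision_recall_f1(y_true, y_pred, "actual")
# p = 1.0 (no false positives), r = 2/3 (one missed 'actual'), f1 = 0.8
```

For the multi-class emotion task, the same per-class values would typically be averaged (macro or weighted) across the emotion labels.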


Facial emotions are changes in facial expression that reflect a person's inner emotional state, intentions, or social communication, and they can be scrutinized with the aid of computer systems that inspect and identify facial features and movement variations from visual data. Facial emotion recognition (FER) is a noteworthy area in the arena of computer vision and artificial intelligence due to its significant commercial and academic potential. FER has become a widespread application of deep learning and offers growing scope in our day-to-day life. It has gathered widespread consideration recently because facial expressions are thought of as the fastest medium for communicating information of any sort, and recognizing facial expressions provides an improved understanding of a person's thoughts or views. With the latest improvements in computer vision and machine learning, it is plausible to identify emotions from images, and analyzing them with the currently emerging deep learning methods enhances the accuracy rate tremendously compared to traditional systems. This paper emphasizes a review of some of the machine learning, deep learning, and transfer learning techniques used by several researchers that have paved the way to advancing the classification accuracy of FER.


2021 ◽  
pp. 1-10
Author(s):  
Daniel T. Burley ◽  
Christopher W. Hobson ◽  
Dolapo Adegboye ◽  
Katherine H. Shelton ◽  
Stephanie H.M. van Goozen

Abstract Impaired facial emotion recognition is a transdiagnostic risk factor for a range of psychiatric disorders. Childhood behavioral difficulties and parental emotional environment have been independently associated with impaired emotion recognition; however, no study has examined the contribution of these factors in conjunction. We measured recognition of negative (sad, fear, anger), neutral, and happy facial expressions in 135 children aged 5–7 years referred by their teachers for behavioral problems. Parental emotional environment was assessed for parental expressed emotion (EE) – characterized by negative comments, reduced positive comments, low warmth, and negativity towards their child – using the 5-minute speech sample. Child behavioral problems were measured using the teacher-informant Strengths and Difficulties Questionnaire (SDQ). Child behavioral problems and parental EE were independently associated with impaired recognition of negative facial expressions specifically. An interactive effect revealed that the combination of both factors was associated with the greatest risk for impaired recognition of negative faces, and in particular sad facial expressions. No relationships emerged for the identification of happy facial expressions. This study furthers our understanding of multidimensional processes associated with the development of facial emotion recognition and supports the importance of early interventions that target this domain.


i-Perception ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 204166952110095
Author(s):  
Elmeri Syrjänen ◽  
Håkan Fischer ◽  
Marco Tullio Liuzza ◽  
Torun Lindholm ◽  
Jonas K. Olofsson

How do valenced odors affect the perception and evaluation of facial expressions? We reviewed 25 studies published from 1989 to 2020 on cross-modal behavioral effects of odors on the perception of faces. The results indicate that odors may influence facial evaluations and classifications in several ways. Faces are rated as more arousing during simultaneous odor exposure, and the rated valence of faces is affected in the direction of the odor valence. For facial classification tasks, in general, valenced odors, whether pleasant or unpleasant, decrease facial emotion classification speed. The evidence for valence congruency effects was inconsistent. Some studies found that exposure to a valenced odor facilitates the processing of a similarly valenced facial expression. The results for facial evaluation were mirrored in classical conditioning studies, as faces conditioned with valenced odors were rated in the direction of the odor valence. However, the evidence of odor effects was inconsistent when the task was to classify faces. Furthermore, using a z-curve analysis, we found clear evidence for publication bias. Our recommendations for future research include greater consideration of individual differences in sensation and cognition (e.g., differences in odor sensitivity related to age, gender, or culture), establishing standardized experimental assessments and stimuli, larger study samples, and embracing open research practices.


2017 ◽  
Vol 29 (5) ◽  
pp. 1749-1761 ◽  
Author(s):  
Johanna Bick ◽  
Rhiannon Luyster ◽  
Nathan A. Fox ◽  
Charles H. Zeanah ◽  
Charles A. Nelson

Abstract We examined facial emotion recognition in 12-year-olds in a longitudinally followed sample of children with and without exposure to early life psychosocial deprivation (institutional care). Half of the institutionally reared children were randomized into foster care homes during the first years of life. Facial emotion recognition was examined in a behavioral task using morphed images. This same task had been administered when children were 8 years old. Neutral facial expressions were morphed with happy, sad, angry, and fearful emotional facial expressions, and children were asked to identify the emotion of each face, which varied in intensity. Consistent with our previous report, we show that some areas of emotion processing, involving the recognition of happy and fearful faces, are affected by early deprivation, whereas other areas, involving the recognition of sad and angry faces, appear to be unaffected. We also show that early intervention can have a lasting positive impact, normalizing developmental trajectories of processing negative emotions (fear) into the late childhood/preadolescent period.


Author(s):  
Ajeet Ram Pathak ◽  
Somesh Bhalsing ◽  
Shivani Desai ◽  
Monica Gandhi ◽  
Pranathi Patwardhan

2021 ◽  
Author(s):  
Naveen Kumari ◽  
Rekha Bhatia

Abstract Facial emotion recognition extracts human emotions from images and videos. As such, it requires an algorithm that understands and models the relationships between faces and facial expressions and recognizes human emotions. Recently, deep learning models have been extensively utilized to enhance the facial emotion recognition rate. However, deep learning models suffer from overfitting and perform poorly on images with poor visibility and noise. Therefore, in this paper, a novel deep learning based facial emotion recognition tool is proposed. Initially, a joint trilateral filter is applied to the obtained dataset to remove the noise. Thereafter, contrast-limited adaptive histogram equalization (CLAHE) is applied to the filtered images to improve their visibility. Finally, a deep convolutional neural network is trained, with the Nadam optimizer utilized to optimize its cost function. Experiments are performed on a benchmark dataset against competitive human emotion recognition models. Comparative analysis demonstrates that the proposed facial emotion recognition model performs considerably better than the competitive models.
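The Nadam optimizer mentioned above combines Adam's adaptive per-parameter learning rates with Nesterov-style momentum. A minimal NumPy sketch of the update rule, shown minimizing a toy quadratic (the hyperparameter values are common defaults, not necessarily those used in the paper):

```python
import numpy as np

def nadam_step(theta, grad, m, v, t,
               lr=0.002, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Nadam update: Adam with a Nesterov lookahead on the momentum."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment
    m_hat = m / (1 - beta1 ** t)                # bias corrections
    v_hat = v / (1 - beta2 ** t)
    # Nesterov lookahead: mix the corrected momentum with the current gradient.
    m_bar = beta1 * m_hat + (1 - beta1) * grad / (1 - beta1 ** t)
    theta = theta - lr * m_bar / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy cost: f(theta) = ||theta - target||^2, minimized at `target`.
target = np.array([1.0, -2.0])
theta = np.zeros(2)
m = np.zeros(2)
v = np.zeros(2)
for t in range(1, 3001):
    grad = 2 * (theta - target)
    theta, m, v = nadam_step(theta, grad, m, v, t)
# theta is now close to target
```

In the paper's setting, `grad` would be the backpropagated gradient of the CNN's cost function rather than this toy quadratic's gradient.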


2021 ◽  
Vol 12 ◽  
Author(s):  
Paula J. Webster ◽  
Shuo Wang ◽  
Xin Li

Different styles of social interaction are one of the core characteristics of autism spectrum disorder (ASD). Social differences among individuals with ASD often include difficulty in discerning the emotions of neurotypical people based on their facial expressions. This review first covers the rich body of literature studying differences in facial emotion recognition (FER) in those with ASD, including behavioral studies and neurological findings. In particular, we highlight subtle emotion recognition and various factors related to inconsistent findings in behavioral studies of FER in ASD. Then, we discuss the dual problem of FER, namely facial emotion expression (FEE), the production of facial expressions of emotion. Although FEE is less studied, social interaction involves both the ability to recognize emotions and the ability to produce appropriate facial expressions. How others perceive facial expressions of emotion in those with ASD has remained an under-researched area. Finally, we propose a method for teaching FER [FER teaching hierarchy (FERTH)] based on recent research investigating FER in ASD, considering the use of posed vs. genuine emotions and static vs. dynamic stimuli. We also propose two possible teaching approaches: (1) a standard method of teaching progressively from simple drawings and cartoon characters to more complex audio-visual video clips of genuine human expressions of emotion with context clues, or (2) teaching with a field of images that includes posed and genuine emotions to improve generalizability before progressing to more complex audio-visual stimuli. Lastly, we advocate that autism interventionists use FER stimuli developed primarily for research purposes, facilitating the incorporation of well-controlled stimuli to teach FER and bridging the gap between intervention and research in this area.

