A Deep Learning Facial Emotion Classification system: a VGGNet-19 based approach

Author(s):  
Nessrine Abbassi ◽  
Rabie Helaly ◽  
Mohamed Ali Hajjaji ◽  
Abdellatif Mtibaa

Classroom teaching assessments are intended to give valuable feedback on the teaching-learning process as it happens. The best classroom assessments also serve as substantial sources of information for teachers, helping them to recognize what they taught well and how they can improve their lecture content to keep students attentive. In this paper, we survey recent work on facial emotion recognition of students in a classroom setting and propose a deep learning approach that improves emotion classification results and offers optimized feedback to the instructor. A deep learning-based convolutional neural network is trained on the FER2013 facial emotion image database, and a transfer learning technique is used to pre-train a VGG16-based model, with its own weights and biases, on the Cohn-Kanade (CK+) facial image database. The trained model captures a live stream of students via a high-resolution digital video camera facing them, recognizes their emotions through facial expressions, and classifies each as sad, happy, neutral, angry, disgust, surprise, or fear. This offers insight into the class group emotion, which is reflective of the mood among the students in the classroom. The approach can also be applied to video conferences, online classes, and similar settings, and can improve the accuracy of emotion recognition and facilitate faster learning. We present the research methodology and the results achieved on student emotions in a classroom atmosphere, and propose an improved CNN model based on transfer learning that can significantly improve emotion classification accuracy.
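The transfer-learning recipe above (keep a pre-trained backbone's features, train only a 7-way emotion head) can be sketched minimally in NumPy. The 4096-dimensional feature size, the label ordering, and the random stand-in features are assumptions for illustration, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Stand-in for the frozen pre-trained feature extractor: in the real pipeline
# this vector would come from the last pooling/fully-connected layer of VGG16.
features = rng.standard_normal(4096)

# The only trainable part under transfer learning: a small 7-way softmax head.
W = rng.standard_normal((4096, len(EMOTIONS))) * 0.01
b = np.zeros(len(EMOTIONS))

probs = softmax(features @ W + b)       # class probabilities, sums to 1
predicted = EMOTIONS[int(np.argmax(probs))]
```

In practice the head would be trained with cross-entropy on FER2013/CK+ labels while the backbone weights stay fixed; only the shapes and the softmax decision step are shown here.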


Agronomy ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1551
Author(s):  
Tamoor Khan ◽  
Jiangtao Qiu ◽  
Hafiz Husnain Raza Sherazi ◽  
Mubashir Ali ◽  
Sukumar Letchmunan ◽  
...  

Agricultural advancements have significantly impacted people’s lives and their surroundings in recent years. Insufficient knowledge of the whole agricultural production system and conventional ways of irrigation have limited agricultural yields in the past. The remote sensing innovations recently introduced in agriculture have dramatically revolutionized production efficiency by offering unparalleled opportunities for convenient, versatile, and quick collection of land images that capture critical details on crop conditions. These innovations have enabled automated data collection, simulation, and interpretation based on crop analytics facilitated by deep learning techniques. This paper aims to reveal the transformative patterns of Chinese agrarian development and fruit production by focusing on major crop production (from 1980 to 2050), taking into account various forms of fruit production data (e.g., apples, bananas, citrus fruits, pears, and grapes). In this study, we used production data for different fruits grown in China to predict their future production. The study employs deep neural networks to project future fruit production based on the statistics issued by China’s National Bureau of Statistics on total fruit growth output for this period. The proposed method exhibits encouraging results, with an accuracy of 95.56% calculated using an accuracy formula based on fruit production variation. The authors further recommend the AGR-DL (agricultural deep learning) method as helpful for developing countries. The results suggest that agricultural development in China is acceptable but demands further improvement, and that the government needs to prioritize expanding fruit production by establishing new strategies for cultivators to boost their performance.
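The abstract reports accuracy "calculated using an accuracy formula based on fruit production variation" without spelling the formula out. A common reading is 100 · (1 − mean relative deviation between predicted and actual production); the sketch below uses that assumed definition with purely illustrative numbers, not the paper's data:

```python
import numpy as np

def variation_accuracy(actual, predicted):
    """Accuracy as 100 * (1 - mean relative deviation).

    This is an *assumed* interpretation of the paper's 'accuracy formula
    based on fruit production variation'; the exact formula is not given
    in the abstract.
    """
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * (1.0 - np.mean(np.abs(predicted - actual) / actual))

# Illustrative production figures only (e.g., million tonnes), not real data:
# apples, bananas, citrus fruits, pears, grapes.
actual = [41.4, 11.2, 38.2, 16.0, 13.7]
predicted = [40.1, 11.6, 36.9, 15.5, 14.2]

acc = variation_accuracy(actual, predicted)
```

Under this definition, a model whose predictions deviate from the actual series by a few percent on average scores in the mid-to-high nineties, consistent with the reported 95.56%.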


2021 ◽  
pp. 1-12
Author(s):  
Mukul Kumar ◽  
Nipun Katyal ◽  
Nersisson Ruban ◽  
Elena Lyakso ◽  
A. Mary Mekala ◽  
...  

Over the years, differentiating emotions in oral communication has played an important role in emotion-based studies, and different algorithms have been proposed to classify kinds of emotion. However, there is no measure of the fidelity of the emotion under consideration, primarily because most readily available annotated datasets are produced by actors rather than generated in real-world scenarios. The predicted emotion therefore lacks an important aspect called authenticity: whether an emotion is actual or stimulated. In this research work, we have developed a transfer learning and style transfer based hybrid convolutional neural network algorithm to classify both the emotion and the fidelity of the emotion. The model is trained on features extracted from a dataset that contains stimulated as well as actual utterances. We have compared the developed algorithm with conventional machine learning and deep learning techniques using metrics such as accuracy, precision, recall, and F1 score, and it performs much better than the conventional models. The research aims to dive deeper into human emotion and build a model that understands it as humans do, with precision, recall, and F1 score values of 0.994, 0.996, and 0.995 for speech authenticity and 0.992, 0.989, and 0.99 for speech emotion classification, respectively.
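The two-target setup described above (one prediction for the emotion label, one for authenticity) is naturally modeled as two task-specific heads on a shared representation. The sketch below shows only that output structure in NumPy; the embedding size, head shapes, and label indices are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared acoustic embedding: a stand-in for the hybrid CNN's penultimate layer.
shared = rng.standard_normal(256)

# Two task-specific heads: one for the 7-way emotion label, one for
# authenticity (actual vs. stimulated). Sizes are illustrative only.
W_emotion, b_emotion = rng.standard_normal((256, 7)) * 0.05, np.zeros(7)
W_auth, b_auth = rng.standard_normal((256, 2)) * 0.05, np.zeros(2)

emotion_probs = softmax(shared @ W_emotion + b_emotion)
auth_probs = softmax(shared @ W_auth + b_auth)
is_actual = bool(np.argmax(auth_probs) == 0)  # assume index 0 = "actual"
```

Training such a model would typically sum a cross-entropy loss per head, so the shared layers learn features useful for both the emotion and the authenticity decision.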


2019 ◽  
Vol 9 (11) ◽  
pp. 326 ◽  
Author(s):  
Hong Zeng ◽  
Zhenhua Wu ◽  
Jiaming Zhang ◽  
Chen Yang ◽  
Hua Zhang ◽  
...  

Deep learning (DL) methods have been used increasingly widely, for example in speech and image recognition. However, designing an appropriate DL model to accurately and efficiently classify electroencephalogram (EEG) signals remains a challenge, mainly because EEG signals differ significantly between subjects and vary over time within a single subject, and are characterized by non-stationarity, strong randomness, and a low signal-to-noise ratio. SincNet is an efficient classifier for speaker recognition, but it has some drawbacks when applied to EEG signal classification. In this paper, we improve on it and propose a SincNet-based classifier, SincNet-R, which consists of three convolutional layers and three deep neural network (DNN) layers. We then use SincNet-R to evaluate classification accuracy and robustness on emotional EEG signals. Comparisons with the original SincNet model and traditional classifiers such as CNN, LSTM, and SVM show that our proposed SincNet-R model achieves higher classification accuracy and better algorithmic robustness.
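The defining idea of SincNet-style first layers, which SincNet-R builds on, is that each convolutional filter is a parametrized sinc band-pass kernel: only the two cutoff frequencies are learned instead of every coefficient. A minimal NumPy sketch of one such kernel (tap count, cutoffs, and the Hamming window choice are illustrative assumptions):

```python
import numpy as np

def sinc_bandpass(f1, f2, taps=101):
    """Band-pass FIR kernel parametrized by two cutoffs, as in SincNet-style
    layers: f1 and f2 are low/high cutoffs as fractions of the sampling rate.
    Only these two scalars would be learned per filter.
    """
    assert 0 < f1 < f2 < 0.5
    n = np.arange(taps) - (taps - 1) / 2              # symmetric time axis
    # Difference of two low-pass sinc kernels = ideal band-pass response.
    h = 2 * f2 * np.sinc(2 * f2 * n) - 2 * f1 * np.sinc(2 * f1 * n)
    return h * np.hamming(taps)                       # window to reduce ripple

# Example: a band-pass filter keeping roughly 5%-15% of the sampling rate.
kernel = sinc_bandpass(0.05, 0.15)
```

Because the kernel is symmetric and band-pass, its DC response is near zero; in a full model, a bank of such kernels replaces the first convolutional layer before the remaining convolutional and DNN layers.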


i-Perception ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 204166952110095
Author(s):  
Elmeri Syrjänen ◽  
Håkan Fischer ◽  
Marco Tullio Liuzza ◽  
Torun Lindholm ◽  
Jonas K. Olofsson

How do valenced odors affect the perception and evaluation of facial expressions? We reviewed 25 studies published from 1989 to 2020 on cross-modal behavioral effects of odors on the perception of faces. The results indicate that odors may influence facial evaluations and classifications in several ways. Faces are rated as more arousing during simultaneous odor exposure, and the rated valence of faces is affected in the direction of the odor valence. For facial classification tasks, valenced odors, whether pleasant or unpleasant, generally decrease facial emotion classification speed. The evidence for valence congruency effects was inconsistent: some studies found that exposure to a valenced odor facilitates the processing of a similarly valenced facial expression. The results for facial evaluation were mirrored in classical conditioning studies, as faces conditioned with valenced odors were rated in the direction of the odor valence. However, the evidence of odor effects was inconsistent when the task was to classify faces. Furthermore, using a z-curve analysis, we found clear evidence for publication bias. Our recommendations for future research include greater consideration of individual differences in sensation and cognition (e.g., differences in odor sensitivity related to age, gender, or culture), establishing standardized experimental assessments and stimuli, larger study samples, and embracing open research practices.

