Ensemble of Multi Feature Layers in CNN for Facial Expression Recognition using Deep Learning

2019
Vol 8 (4)
pp. 9782-9787

Facial expression recognition is an important task that enables machines to recognize expressive changes in individuals. Emotions have a strong relationship with our behavior. Human emotions are discrete reactions to internal or external events that carry meaning. Automatic emotion detection is the process of understanding an individual's affective state in order to infer intentions from facial expressions, which are also a significant part of non-verbal communication. In this paper we propose a framework that combines discriminative features learned by a Convolutional Neural Network (CNN) to enhance the performance and accuracy of facial expression recognition. We use the pre-trained Inception V3 CNN architecture and concatenate the features of an intermediate layer with those of the final layer; the combined features are then passed through a fully connected layer for classification. We use the JAFFE (Japanese Female Facial Expression) dataset, and experimental results show that the proposed method performs better and improves recognition accuracy.
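As an illustration of this idea, the sketch below (not the paper's released code) concatenates globally pooled features from an intermediate Inception V3 layer with the final convolutional features before a fully connected classifier, using Keras. The specific layer names ("mixed7", "mixed10"), input size, and head width are assumptions made for the example, not values taken from the paper.

```python
# Minimal sketch of a multi-feature-layer ensemble on Inception V3:
# pool an intermediate block and the final block, concatenate, then classify.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

NUM_CLASSES = 7  # JAFFE covers seven basic expressions

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))

# Final convolutional features (output of "mixed10") and an intermediate block ("mixed7").
final_feat = layers.GlobalAveragePooling2D()(base.output)
mid_feat = layers.GlobalAveragePooling2D()(base.get_layer("mixed7").output)

# Concatenate intermediate and final features, then pass through a fully connected head.
merged = layers.Concatenate()([mid_feat, final_feat])
x = layers.Dense(256, activation="relu")(merged)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```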

Facial expression recognition is an important task that enables machines to recognize expressive changes in individuals. Emotions have a strong relationship with our behavior. Human emotions are discrete reactions to internal or external events that carry meaning. Automatic emotion detection is the process of understanding an individual's affective state in order to infer intentions from facial expressions, which are also a significant part of non-verbal communication. There are seven basic emotions: happiness, sadness, anger, fear, surprise, disgust, and neutral. In the present era of Human-Computer Interaction (HCI), making machines analyze and recognize emotions is a difficult task. Recent FER systems lack sufficient training data and face other problems, such as illumination variation and head pose, when identifying emotions. In this article, we provide a comprehensive study of facial expression recognition with deep learning methods, covering the different neural network algorithms used with different datasets and their reported accuracy. We also discuss current challenges and opportunities in this field for developing robust FER systems using deep learning.


2021
Vol 14 (2)
pp. 127-135
Author(s):
Fadhil Yusuf Rahadika
Novanto Yudistira
Yuita Arum Sari

During the COVID-19 pandemic, many offline activities have been turned into online activities via video meetings to prevent the spread of the COVID-19 virus. In online video meetings, some micro-interactions are missing compared to direct social interaction. The use of machines to assist facial expression recognition in online video meetings is expected to increase understanding of the interactions among users. Many studies have shown that CNN-based neural networks are quite effective and accurate for image classification. In this study, several open facial expression datasets were used to train CNN-based neural networks, with a total of 342,497 training images. The best results were obtained using a ResNet-50 architecture with the Mish activation function and the Accuracy Booster Plus block. This architecture was trained using the Ranger optimizer with Gradient Centralization for 60,000 steps with a batch size of 256. The best model achieves an accuracy of 0.5972 on the AffectNet validation data, 0.8636 on the FERPlus validation data, 0.8488 on the FERPlus test data, and 0.8879 on the RAF-DB test data. The proposed method outperformed plain ResNet in all test scenarios without transfer learning, and there is potential for better performance with a pre-trained model. The code is available at https://github.com/yusufrahadika-facial-expressions-essay.
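For readers who want a starting point, the sketch below (not the authors' released code, which is linked above) shows how the ReLU activations of a torchvision ResNet-50 can be swapped for Mish, as described in the abstract. The Accuracy Booster Plus block and the Ranger optimizer with Gradient Centralization come from third-party implementations and are omitted here; AdamW stands in as a placeholder optimizer, and the 8-class output is only an example (e.g. FERPlus uses eight expression labels).

```python
# Minimal sketch: ResNet-50 with Mish activations for facial expression classification.
import torch
import torch.nn as nn
from torchvision.models import resnet50

def replace_relu_with_mish(module: nn.Module) -> None:
    """Recursively replace every ReLU module in the model with Mish."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.Mish(inplace=True))
        else:
            replace_relu_with_mish(child)

model = resnet50(weights=None, num_classes=8)  # 8 classes, e.g. FERPlus labels
replace_relu_with_mish(model)

# Placeholder optimizer; the study trains with Ranger + Gradient Centralization,
# 60,000 steps, batch size 256.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# One dummy training step (small batch here to keep the example lightweight).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 8, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
optimizer.step()
```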


2021
Vol 9 (5)
pp. 1141-1152
Author(s):
Muazu Abdulwakil Auma
Eric Manzi
Jibril Aminu

Facial recognition is integral and essential in today's society, and the recognition of emotions from facial expressions is becoming increasingly common. This paper provides an analytical overview of databases of facial expression video data and of several approaches to recognizing emotions from facial expressions, covering the three main image analysis stages: pre-processing, feature extraction, and classification. The paper presents both approaches based on deep neural networks and traditional methods for recognizing human emotions from visual facial features, and reports current results of some existing algorithms. In reviewing the scientific and technical literature, the focus was mainly on sources containing theoretical and experimental information on the methods under consideration and on comparing traditional techniques with methods based on deep neural networks. This analysis of the literature on methods and algorithms for analyzing and recognizing facial expressions, together with published research results, shows that traditional methods of classifying facial expressions are inferior in speed and accuracy to artificial neural networks. The review's main contributions provide a general understanding of modern approaches to facial expression recognition, which will allow new researchers to understand the field's main components and trends. A comparison of published results shows that combining traditional approaches with approaches based on deep neural networks yields better classification accuracy; however, the best classification methods remain artificial neural networks.
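To make the three stages concrete, the sketch below illustrates a traditional pipeline of the kind the review compares against deep networks: face cropping with OpenCV for pre-processing, a HOG descriptor for feature extraction, and a linear SVM for classification. All parameters, image sizes, and the commented-out training lines are illustrative assumptions, not taken from the reviewed papers.

```python
# Illustrative three-stage FER pipeline: pre-processing, feature extraction, classification.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(image_bgr: np.ndarray) -> np.ndarray:
    """Stage 1: detect, crop, and normalize the face region."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    x, y, w, h = faces[0] if len(faces) else (0, 0, gray.shape[1], gray.shape[0])
    return cv2.resize(gray[y:y + h, x:x + w], (64, 64))

def extract_features(face: np.ndarray) -> np.ndarray:
    """Stage 2: hand-crafted HOG descriptor."""
    return hog(face, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Stage 3: classification with a linear SVM (placeholder training data).
# X_train = np.stack([extract_features(preprocess(img)) for img in train_images])
# clf = LinearSVC().fit(X_train, train_labels)
```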

