A Customized Convolutional Neural Network Design Using Improved Softmax Layer for Real-time Human Emotion Recognition

Author(s):  
Kai-Yen Wang ◽  
Yu-De Huang ◽  
Yun-Lung Ho ◽  
Wai-Chi Fang

Recognition of facial emotion has been a challenging task for many years. This work applies machine learning algorithms to facial emotion recognition for both real-time images and stored database images, underscoring the importance of deep learning technology for human-computer interaction (HCI) applications. The proposed system has two parts: a real-time facial emotion recognition system and an image-based facial emotion recognition system. A convolutional neural network (CNN) model is trained and tested on different facial emotion images. The work was implemented on the Python 3.7.6 platform. The input face image of a person was captured from a webcam video stream or taken from a standard research database. Five facial emotions are considered: happy, surprise, angry, sad, and neutral. The best recognition accuracy of the proposed system is 91.2% for the webcam video stream and 90.08% for database images.
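The title refers to an improved softmax layer, but the abstract does not describe the modification itself. As a point of reference, the standard softmax that such a layer builds on can be sketched in a few lines; this is a minimal NumPy version using the five emotion labels from the abstract, with hypothetical logit values for illustration:

```python
import numpy as np

EMOTIONS = ["happy", "surprise", "angry", "sad", "neutral"]

def softmax(logits):
    # Numerically stable softmax: subtract the max logit before exponentiating
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=-1, keepdims=True)

# Hypothetical final-layer logits for one face image
logits = np.array([2.0, 1.0, 0.1, -1.0, 0.5])
probs = softmax(logits)                      # probabilities summing to 1
predicted = EMOTIONS[int(np.argmax(probs))]  # -> "happy"
```

Subtracting the maximum logit before exponentiating leaves the result unchanged mathematically but prevents overflow for large logits, which matters when the layer runs on every frame of a video stream.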


2021 ◽  
Vol 3 ◽  
Author(s):  
James Ren Lee ◽  
Linda Wang ◽  
Alexander Wong

While recent advances in deep learning have led to significant improvements in facial expression classification (FEC), a major challenge that remains a bottleneck for the widespread deployment of such systems is their high architectural and computational complexity. This is especially challenging given the operational requirements of various FEC applications, such as safety, marketing, learning, and assistive living, where real-time operation on low-cost embedded devices is desired. Motivated by this need for a compact, low-latency, yet accurate system capable of performing FEC in real time on low-cost embedded devices, this study proposes EmotionNet Nano, an efficient deep convolutional neural network created through a human-machine collaborative design strategy, where human experience is combined with machine meticulousness and speed to craft a deep neural network design catered toward real-time embedded usage. To the best of the authors' knowledge, this is the first deep neural network architecture for facial expression recognition to leverage machine-driven design exploration in its design process, and it exhibits unique architectural characteristics, such as high architectural heterogeneity and selective long-range connectivity, not seen in previous FEC network architectures. Two variants of EmotionNet Nano are presented, each with a different trade-off between architectural and computational complexity and accuracy. Experimental results on the CK+ facial expression benchmark dataset demonstrate that the proposed EmotionNet Nano networks achieve accuracy comparable to state-of-the-art FEC networks while requiring significantly fewer parameters.
Furthermore, we demonstrate that the proposed EmotionNet Nano networks achieve real-time inference speeds (e.g., >25 FPS and >70 FPS at 15 and 30 W, respectively) and high energy efficiency (e.g., >1.7 images/sec/watt at 15 W) on an ARM embedded processor, further illustrating the efficacy of EmotionNet Nano for deployment on embedded devices.


2019 ◽  
Vol 8 (4) ◽  
pp. 12940-12944

Human life is a complex social structure, and humans cannot navigate it without reading other people, which they do by recognizing faces. An appropriate response can be chosen based on the other person's mood, and a person's mood can be inferred from their emotion (facial gesture). The aim of this project is to build a real-time facial emotion recognition model using a deep convolutional neural network (DCNN), chosen because DCNNs have been shown to achieve higher accuracy than shallower CNNs. Human facial expressions are highly dynamic, changing in a split second among happy, sad, angry, fear, surprise, disgust, neutral, and so on. This project predicts a person's emotion in real time. The human brain contains neural networks responsible for all kinds of thinking (decision making, understanding); this model develops analogous decision-making and classification skills by training a machine. It can classify and predict multiple faces and different emotions at the very same time. To obtain higher accuracy, models trained over thousands of samples are used.
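The abstract's claim of classifying multiple faces in the same frame amounts to running the classifier over a batch of detected face crops and taking a per-face argmax. A minimal sketch of that final step, assuming the seven emotion classes listed above and using hypothetical softmax outputs in place of a real DCNN:

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "fear", "surprise", "disgust", "neutral"]

def predict_emotions(prob_batch):
    """Map a batch of per-face class-probability vectors to emotion labels.

    prob_batch: array of shape (num_faces, num_classes), e.g. the softmax
    output of a DCNN for every face detected in one video frame.
    """
    indices = np.argmax(prob_batch, axis=1)  # best class per face
    return [EMOTIONS[i] for i in indices]

# Hypothetical softmax outputs for two faces detected in the same frame
probs = np.array([
    [0.70, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05],  # face 1
    [0.05, 0.60, 0.10, 0.05, 0.05, 0.05, 0.10],  # face 2
])
labels = predict_emotions(probs)  # one label per detected face
```

In a real pipeline, the face crops would first come from a detector run on each video frame; batching them through the network once per frame is what makes simultaneous multi-face prediction practical in real time.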


One of the unsolved difficulties in computer vision is perceiving and understanding other people's feelings and sentiments. Although recent methods achieve near-human accuracy in controlled conditions, recognition of emotions in the wild remains a hard problem. In this paper we propose a MAM pooling (mean of average and maximum) strategy with a CNN to recognize human emotions. We focus on automatic identification of six emotions: happiness, anger, sadness, surprise, fear, and disgust. A convolutional neural network (CNN) is a biologically inspired trainable architecture that can learn invariant features for various applications. In general, CNNs consist of alternating convolutional layers, non-linearity layers, and feature pooling layers. In this work, a novel feature pooling approach, named MAM pooling, is proposed to regularize CNNs; it replaces the standard deterministic pooling operations by taking the average of the max pooling and average pooling outputs. The advantage of the proposed MAM pooling technique lies in its ability to address the overfitting problem experienced by CNNs.
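Taking the abstract's description at face value, MAM pooling computes, for each pooling window, the mean of the window's max and the window's average. A minimal NumPy sketch for a single 2-D feature map, assuming a 2x2 window with stride 2 (the abstract does not specify window sizes):

```python
import numpy as np

def mam_pool2d(x, size=2, stride=2):
    """MAM pooling: for each window, average the max-pool and avg-pool results."""
    h, w = x.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = x[i * stride:i * stride + size,
                       j * stride:j * stride + size]
            # mean of the max response and the average response
            out[i, j] = (window.max() + window.mean()) / 2.0
    return out

x = np.array([[ 1.,  2.,  3.,  4.],
              [ 5.,  6.,  7.,  8.],
              [ 9., 10., 11., 12.],
              [13., 14., 15., 16.]])
pooled = mam_pool2d(x)  # 2x2 output; e.g. top-left window -> (6 + 3.5) / 2 = 4.75
```

Intuitively, the result sits between the sharp, overfitting-prone max response and the smoothing average response, which is consistent with the regularizing effect the abstract claims.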

