Behaviour Based Data Dispatcher

2019 ◽  
Vol 8 (4) ◽  
pp. 12940-12944

Human life is built on a complex social structure, and humans cannot navigate it without reading other people, which they do by identifying faces. An appropriate response can be chosen based on the other person's mood, and a person's mood can be inferred by observing their emotion (facial gesture). The aim of this project is to construct a facial emotion recognition model using a deep convolutional neural network (DCNN) in real time. A DCNN is used because it has been shown to achieve greater accuracy than a plain convolutional neural network (CNN). Human facial expressions are highly dynamic and can change in a split second, whether between Happy, Sad, Angry, Fear, Surprise, Disgust, or Neutral. This project predicts a person's emotion in real time. Our brains contain neural networks responsible for all kinds of thinking (decision making, understanding); this model tries to develop similar decision-making and classification skills by training the machine. It can classify and predict multiple faces and different emotions at the same time. To obtain higher accuracy, we use models trained over thousands of data samples.
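
The abstract above describes real-time, multi-face emotion classification with a DCNN but gives no implementation details. The following is a minimal sketch of how such a loop could be wired up with OpenCV face detection and a pre-trained Keras classifier; the weights file `emotion_dcnn.h5`, the 48x48 grayscale input size, and the seven-class label order are assumptions, not the authors' setup.

```python
# Minimal sketch: real-time multi-face emotion recognition with a DCNN.
# Assumes a pre-trained Keras model ("emotion_dcnn.h5", hypothetical) that takes
# 48x48 grayscale face crops and outputs 7 emotion probabilities.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]
model = load_model("emotion_dcnn.h5")  # hypothetical weights file
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # webcam stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Classify every detected face so multiple people are handled in one frame.
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("Facial Emotion Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```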

Recognition of facial emotion has been a challenging task for many years. This work applies machine learning algorithms to facial emotion recognition for both real-time images and stored database images. Deep learning has therefore become important for human-computer interaction (HCI) applications. The proposed system has two parts: a real-time facial emotion recognition system and an image-based facial emotion recognition system. A convolutional neural network (CNN) model is used to train and test different facial emotion images in this research work. The work was implemented in Python 3.7.6. The input face image of a person was captured from a webcam video stream or taken from a standard database available for research. The five facial emotions considered in this work are happy, surprise, angry, sad, and neutral. The best recognition accuracy of the proposed system is 91.2% for the webcam video stream and 90.08% for the input database images.
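
This entry states that a CNN was trained in Python on five emotion classes but does not spell out the architecture. The sketch below is a generic five-class Keras CNN under assumed layer sizes and a 48x48 grayscale input; it illustrates the kind of model described, not the authors' exact network.

```python
# Minimal sketch of a 5-class facial emotion CNN in Keras.
# Layer sizes and the 48x48 grayscale input are assumptions.
from tensorflow.keras import layers, models

def build_emotion_cnn(num_classes: int = 5) -> models.Sequential:
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        # Output classes: happy, surprise, angry, sad, neutral.
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_emotion_cnn()
model.summary()
```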


2021 ◽  
Vol 1827 (1) ◽  
pp. 012130
Author(s):  
Qi Li ◽  
Yun Qing Liu ◽  
Yue Qi Peng ◽  
Cong Liu ◽  
Jun Shi ◽  
...  

2021 ◽  
Author(s):  
Naveen Kumari ◽  
Rekha Bhatia

Abstract Facial emotion recognition extracts human emotions from images and videos. As such, it requires an algorithm to understand and model the relationships between faces and facial expressions, and to recognize human emotions. Recently, deep learning models have been extensively utilized to enhance the facial emotion recognition rate. However, deep learning models suffer from overfitting and perform poorly on images with poor visibility and noise. Therefore, in this paper, a novel deep-learning-based facial emotion recognition tool is proposed. Initially, a joint trilateral filter is applied to the obtained dataset to remove noise. Thereafter, contrast-limited adaptive histogram equalization (CLAHE) is applied to the filtered images to improve their visibility. Finally, a deep convolutional neural network is trained, with the Nadam optimizer used to optimize its cost function. Experiments are conducted on a benchmark dataset and compared against competitive human emotion recognition models. Comparative analysis demonstrates that the proposed facial emotion recognition model performs considerably better than the competitive models.
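
The pipeline described in this abstract (denoising, CLAHE, then a DCNN trained with Nadam) can be sketched as follows. Note that a joint trilateral filter is not available in stock OpenCV, so a standard bilateral filter is used here purely as a stand-in; the filter parameters and learning rate are assumptions.

```python
# Sketch of the preprocessing pipeline described above: edge-preserving
# denoising (bilateral filter as a stand-in for the paper's joint trilateral
# filter), CLAHE for visibility, and a CNN compiled with the Nadam optimizer.
import cv2
import tensorflow as tf

def preprocess(gray_image):
    # Edge-preserving denoising (stand-in for the joint trilateral filter).
    denoised = cv2.bilateralFilter(gray_image, d=9, sigmaColor=75, sigmaSpace=75)
    # Contrast-limited adaptive histogram equalization to improve visibility.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)

def compile_with_nadam(model: tf.keras.Model) -> tf.keras.Model:
    # Nadam optimizes the cross-entropy cost function of the DCNN.
    model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-3),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```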


2017 ◽  
Vol 10 (27) ◽  
pp. 1329-1342 ◽  
Author(s):  
Javier O. Pinzon Arenas ◽  
Robinson Jimenez Moreno ◽  
Paula C. Useche Murillo

This paper presents the implementation of a Region-based Convolutional Neural Network focused on the recognition and localization of hand gestures, in this case two types of gestures, open and closed hand, in order to recognize such gestures against dynamic backgrounds. The neural network is trained and validated, achieving 99.4% validation accuracy in gesture recognition and 25% average accuracy in RoI localization. It is then tested in real time, where its operation is verified through the times taken for recognition, its behaviour on trained and untrained gestures, and complex backgrounds.
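
As a rough illustration of a region-based detector for two gesture classes, the sketch below adapts torchvision's Faster R-CNN to "open hand" and "closed hand" categories. This is a stand-in, not the authors' R-CNN implementation; the backbone, weights, and class count are assumptions.

```python
# Sketch: region-based detector for two hand-gesture classes
# (open hand, closed hand), using torchvision's Faster R-CNN as a stand-in.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + open hand + closed hand

def build_gesture_detector():
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Replace the classification head so the detector predicts the two gestures.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model

model = build_gesture_detector()
```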


2020 ◽  
Vol 28 (1) ◽  
pp. 97-111
Author(s):  
Nadir Kamel Benamara ◽  
Mikel Val-Calvo ◽  
Jose Ramón Álvarez-Sánchez ◽  
Alejandro Díaz-Morcillo ◽  
Jose Manuel Ferrández-Vicente ◽  
...  

Facial emotion recognition (FER) has been extensively researched over the past two decades due to its direct impact on the computer vision and affective robotics fields. However, the datasets available to train these models often include mislabelled data, caused by labeller bias, which drives the model to learn incorrect features. In this paper, a facial emotion recognition system is proposed that addresses automatic face detection and facial expression recognition separately; the latter is performed by an ensemble of only four deep convolutional neural networks, while a label smoothing technique is applied to deal with the mislabelled training data. The proposed system takes only 13.48 ms using a dedicated graphics processing unit (GPU) and 141.97 ms using a CPU to recognize facial emotions, and it reaches current state-of-the-art performance on the challenging FER2013, SFEW 2.0, and ExpW databases, with recognition accuracies of 72.72%, 51.97%, and 71.82%, respectively.
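
The two ingredients highlighted in this abstract, label smoothing against mislabelled targets and a small ensemble of DCNNs, can be sketched in Keras as below. The smoothing factor, optimizer, and the assumption that member models are built or loaded elsewhere are all illustrative choices, not the authors' configuration.

```python
# Sketch: train each member network with label smoothing, then average the
# softmax outputs of a small ensemble (four models here) at inference time.
import numpy as np
import tensorflow as tf

# Softened targets reduce the impact of mislabelled training examples.
smoothed_loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)

def compile_member(model: tf.keras.Model) -> tf.keras.Model:
    model.compile(optimizer="adam", loss=smoothed_loss, metrics=["accuracy"])
    return model

def ensemble_predict(models, face_batch: np.ndarray) -> np.ndarray:
    # Average the per-class probabilities of the member networks,
    # then take the most likely emotion per face.
    probs = np.stack([m.predict(face_batch, verbose=0) for m in models])
    return probs.mean(axis=0).argmax(axis=1)
```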

