Robust hybrid framework for automatic facial expression recognition

2018 ◽  
Vol 7 (2) ◽  
pp. 568
Author(s):  
Gunavathi H S ◽  
Siddappa M

Over the last few years, facial expression recognition has been an active research field, with an extensive range of applications in social interaction, social intelligence, autism detection, and human-computer interaction. In this paper, a robust hybrid framework is presented to recognize facial expressions, which enhances the efficiency and speed of the recognition system by extracting significant features of a face. In the proposed framework, feature representation and extraction are performed using Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG). The dimensionality of the obtained features is then reduced using a Compressive Sensing (CS) algorithm, and the reduced features are classified using a multiclass SVM classifier. We investigated the performance of the proposed hybrid framework on two public databases, the CK+ and JAFFE data sets. The experimental results show that the proposed hybrid framework is promising for recognizing and identifying facial expressions with varying illuminations and poses in real time.
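The abstract names LBP as one of its feature extractors but gives no implementation details. As an illustrative sketch only (a basic 8-neighbour LBP with a 256-bin histogram feature vector; function names and the bit ordering are assumptions, not the authors' code), the idea can be written in plain NumPy:

```python
import numpy as np

def lbp_8neighbor(img):
    """Basic 8-neighbour LBP: each interior pixel is encoded by
    thresholding its 8 neighbours against the centre value."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]                      # centre pixels
    # neighbour offsets, clockwise from top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.int32) << bit)
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes, usable as a feature vector."""
    codes = lbp_8neighbor(img)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist
```

In a full pipeline of the kind the abstract describes, such a histogram would be concatenated with HOG features before dimensionality reduction and SVM classification.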

Author(s):  
Yi Ji ◽  
Khalid Idrissi

This paper proposes an automatic facial expression recognition system that uses new methods in both face detection and feature extraction. Considering that facial expressions involve a small set of muscles and limited ranges of motion, the system recognizes expressions from these changes in video sequences. First, the differences between neutral and emotional states are detected, and faces are automatically located from the changing facial organs. Then, LBP features are applied, and AdaBoost is used to find the most important features for each expression on the essential facial parts. Finally, an SVM with a polynomial kernel is used to classify the expressions. The method is evaluated on the JAFFE and MMI databases, and its performance is better than that of other automatic or manually annotated systems.
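The first step described, detecting the differences between neutral and emotional states to locate the changing facial organs, can be sketched as simple frame differencing. This is a minimal illustration under assumed names and threshold, not the paper's actual detector:

```python
import numpy as np

def changed_regions(neutral, emotional, thresh=20):
    """Mask of pixels whose intensity changed by more than `thresh`
    between the neutral and emotional frames; the moving facial
    organs (mouth, eyebrows, ...) light up in this mask."""
    diff = np.abs(neutral.astype(np.int32) - emotional.astype(np.int32))
    return diff > thresh

def bounding_box(mask):
    """Smallest box (ymin, ymax, xmin, xmax) containing every changed
    pixel, or None if nothing changed."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), ys.max(), xs.min(), xs.max()
```

The bounding boxes of changed regions would then delimit the facial parts on which LBP features are computed.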


2008 ◽  
Vol 381-382 ◽  
pp. 375-378
Author(s):  
K.T. Song ◽  
M.J. Han ◽  
F.Y. Chang ◽  
S.H. Chang

The capability of recognizing human facial expressions plays an important role in the development of advanced human-robot interaction. Through recognizing facial expressions, a robot can interact with a user in a more natural and friendly manner. In this paper, we propose a facial expression recognition system based on an embedded image processing platform that classifies different facial expressions on-line in real time. A low-cost embedded vision system has been designed and realized for robotic applications using a CMOS image sensor and a digital signal processor (DSP). The current design acquires thirty 640x480 image frames per second (30 fps). The proposed emotion recognition algorithm has been successfully implemented on this real-time vision system. Experimental results on a pet robot show that the robot can interact with a person in a responsive manner. The developed image processing platform accelerates the recognition speed to 25 recognitions per second, with an average on-line recognition rate of 74.4% for five facial expressions.


Author(s):  
M. Sultan Zia ◽  
Majid Hussain ◽  
M. Arfan Jaffar

Facial expression recognition is a crucial task in pattern recognition, and it becomes even more crucial when cross-cultural emotions are encountered. Various studies in the past have shown that not all facial expressions are innate and universal; many of them are learned and culture-dependent. Existing facial expression recognition methods employ different datasets for training and testing and demonstrate high recognition accuracy, but their performance degrades drastically when the expression images come from different cultures. Moreover, many facial expression patterns cannot be generated and used as training data in a single training session. A facial expression recognition system can maintain high accuracy and robustness globally, and over a longer period, if it possesses the ability to learn incrementally. We propose a facial expression recognition system that can learn incrementally, along with a novel classification algorithm for multinomial classification problems; it is an efficient classifier and a good choice as the base classifier in real-time applications. We use Local Binary Pattern (LBP) features to represent the expression space. The performance of the system is tested on static images from six different databases containing expressions from various cultures. The experiments using the incremental learning classification demonstrate promising results.
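The abstract does not describe its novel incremental classifier, so as a generic stand-in only, incremental multinomial learning can be illustrated with a nearest-centroid classifier whose per-class means are updated one sample at a time. Every name and design choice below is an assumption for the sketch, not the authors' algorithm:

```python
import numpy as np

class IncrementalCentroidClassifier:
    """Toy incremental multinomial classifier: one running mean
    (centroid) per expression class, updated sample by sample, so new
    cultures or expressions can be folded in without full retraining."""

    def __init__(self):
        self.centroids = {}   # label -> running mean feature vector
        self.counts = {}      # label -> number of samples seen

    def partial_fit(self, x, label):
        x = np.asarray(x, dtype=float)
        if label not in self.centroids:
            self.centroids[label] = x.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            # incremental mean update: m += (x - m) / n
            self.centroids[label] += (x - self.centroids[label]) / self.counts[label]

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return min(self.centroids,
                   key=lambda lab: np.linalg.norm(x - self.centroids[lab]))
```

Feature vectors here would be LBP histograms of the expression images; new culture-specific samples are absorbed by further `partial_fit` calls.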


Webology ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 804-816
Author(s):  
Elaf J. Al Taee ◽  
Qasim Mohammed Jasim

A facial expression is a visual impression of a person's situation, emotions, cognitive activity, personality, intention, and psychopathology, and it plays an active and vital role in the exchange of information and communication between people. In machines and robots dedicated to communicating with humans, facial expression recognition plays an important and vital role in communication and in reading what a person implies, especially in the field of health; research in this area therefore advances communication with robots. This topic has been discussed extensively, and the progress of deep learning and the proven efficiency of Convolutional Neural Networks (CNN) in image processing have led to the use of CNNs for facial expression recognition. An automatic system for Facial Expression Recognition (FER) needs to perform detection and location of faces in a cluttered scene, feature extraction, and classification. In this research, a CNN is used to perform FER. The goal is to label each facial image with one of the seven facial emotion categories considered in the JAFFE database; the JAFFE facial expression database, with the seven expression labels sad, happy, fear, surprise, anger, disgust, and neutral, is used in this research. We trained CNNs of different depths using gray-scale images from the JAFFE database. The accuracy of the proposed system was 100%.
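The abstract does not specify the network architecture. As an illustrative sketch of the basic building blocks only (a valid-mode convolution and a softmax over the seven JAFFE classes, written in plain NumPy; all names here are assumptions, not the authors' code):

```python
import numpy as np

EMOTIONS = ["sad", "happy", "fear", "surprise", "anger", "disgust", "neutral"]

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation, as in
    most deep-learning libraries)."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    """Normalized class probabilities for the final 7-way decision."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()
```

A real FER network stacks many such convolution layers with nonlinearities and pooling before the softmax output over `EMOTIONS`.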



Author(s):  
Shubhrata Gupta ◽  
Keshri Verma ◽  
Nazil Perveen

Facial expression is one of the most powerful, natural, and immediate means for human beings to communicate emotion and regulate interpersonal behaviour. In this paper we present a novel approach to facial expression detection using a decision tree. Facial expression information is mostly concentrated in a few facial regions, so the mouth, eye, and eyebrow regions are first segmented from the facial expression images. Using these templates we calculate 30 facial characteristic points (FCPs). These FCPs describe the position and shape of the three organs above and yield various parameters, which are input to the decision tree to recognize the different facial expressions.
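The abstract does not list the parameters derived from the 30 FCPs or the learned tree. Purely as an illustration of the idea, a hand-built decision tree over three invented geometric parameters might look like this (every parameter name and threshold below is hypothetical):

```python
def classify_expression(mouth_openness, eyebrow_raise, mouth_corner_lift):
    """Illustrative hand-built decision tree over three geometric
    parameters that could be derived from mouth/eye/eyebrow FCPs.
    Thresholds and parameter names are invented for this sketch."""
    if mouth_openness > 0.6:
        # wide-open mouth: raised brows suggest surprise, else a laugh
        return "surprise" if eyebrow_raise > 0.5 else "happy"
    if mouth_corner_lift > 0.3:
        return "happy"            # smiling mouth corners
    if eyebrow_raise < -0.3:
        return "anger"            # lowered, knitted brows
    return "neutral"
```

In the paper's setting such a tree would be induced from training data rather than written by hand, with the FCP-derived parameters as split features.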


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Zhi Yao ◽  
Hailing Sun ◽  
Guofu Zhou

Facial video big sensor data (BSD) is core data for wireless sensor network industry applications and technology research. It plays an important role in many industries, such as urban safety management, unmanned driving, senseless attendance, and venue management. The construction of video big sensor data security applications and intelligent algorithm models based on facial expression recognition has become a hot and difficult topic in related fields. This paper focuses on an experimental analysis of the extended Cohn–Kanade (CK+) dataset, which has frontal poses and great clarity. First, face alignment and the selection of peak images were used to preprocess the expression sequences. Then, the output vectors from convolution network 1 and the β-VAE were concatenated proportionally and input to a support vector machine (SVM) classifier to complete facial expression recognition. The testing accuracy of the proposed model on the CK+ dataset reaches 99.615%. The number of expression sequences involved in training was 2417, and the number of expression sequences in testing was 519.


2020 ◽  
Vol 24 (6) ◽  
pp. 1455-1476
Author(s):  
Xuejian Wang ◽  
Michael C. Fairhurst ◽  
Anne M.P. Canuto

Although several automatic computer systems have been proposed to address facial expression recognition problems, the majority of them still fail to cope with the requirements of many practical application scenarios. In this paper, one of the most influential and common issues raised when applying automatic facial expression recognition in practical scenarios, head pose variation, is comprehensively explored and investigated. To this end, two novel texture feature representations are proposed for implementing multi-view facial expression recognition systems in practical environments. These representations combine block-based techniques with Local Ternary Pattern-based features, providing a more informative and efficient feature representation of the facial images. In addition, an in-house multi-view facial expression database has been designed and collected to allow a detailed study of the effect of out-of-plane pose angles on the performance of a multi-view facial expression recognition system. Along with the proposed in-house dataset, the proposed system is tested on two well-known facial expression databases, the CK+ and BU-3DFE datasets. The obtained results show that the proposed system outperforms current state-of-the-art 2D facial expression systems in the presence of pose variations.
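The Local Ternary Pattern underlying these representations extends LBP with a tolerance band: neighbours within ±t of the centre map to 0, brighter ones to +1, darker ones to -1, and the ternary code is conventionally split into an "upper" and a "lower" binary pattern. A minimal NumPy sketch of that standard operator (not the paper's block-based variant; names and bit order are assumptions):

```python
import numpy as np

def ltp_codes(img, t=5):
    """Local Ternary Pattern: returns the 'upper' (+1) and 'lower' (-1)
    binary patterns for each interior pixel."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]                      # centre pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        upper |= ((nb > c + t).astype(np.int32) << bit)   # clearly brighter
        lower |= ((nb < c - t).astype(np.int32) << bit)   # clearly darker
    return upper, lower
```

The tolerance band makes LTP less sensitive to small illumination noise than plain LBP, which is one reason it suits the uncontrolled conditions the paper targets; in a block-based scheme, histograms of both patterns are computed per image block and concatenated.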


Emotion recognition is a prominent and difficult problem in machine vision systems. The most significant way humans show emotions is through facial expressions. In this paper we use a 2D image processing method to recognize facial expressions by extracting features. The proposed algorithm first passes through a few preprocessing steps, and then the preprocessed image is partitioned into two main parts, the eyes and the mouth. To identify the emotions, Bezier curves are drawn for these main parts. The experimental results show that the proposed technique is 80% to 85% accurate.
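Fitting Bezier curves to the eye and mouth contours means evaluating a curve from a handful of control points, which the classic De Casteljau algorithm does by repeated linear interpolation. A minimal sketch (the function name and point format are assumptions for illustration):

```python
def bezier_point(control_points, t):
    """De Casteljau evaluation of a Bezier curve at parameter t in [0, 1]:
    repeatedly interpolate between consecutive control points until one
    point remains."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]
```

Sampling such curves fitted to the mouth and eye outlines yields shape descriptors (curvature, openness) that can be compared across expressions.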

