Automatic Facial Expression Recognition System Using Shape-Information-Matrix (SIM)

2020 ◽  
Vol 9 (4) ◽  
pp. 34-51
Author(s):  
Avishek Nandi ◽  
Paramartha Dutta ◽  
Md Nasir

Automatic recognition and modeling of facial expressions are essential in the field of affective computing. The authors introduce a novel geometric and texture-based method that extracts shapio-geometric features from an image by landmarking the geometric locations of facial components with the active appearance model (AAM). An expression-specific analysis of the facial landmark points is carried out to select, for each expression, the set of landmark points that best identifies it. The shape information matrix (SIM) is then constructed from the set of salient landmark points assigned to an expression. Finally, histogram of oriented gradients (HOG) features are computed from the SIM and classified with a multi-layer perceptron (MLP). The proposed method is tested and validated on four well-known benchmark databases, CK+, JAFFE, MMI, and MUG, on which it achieves 98.5%, 97.6%, 96.4%, and 97.0% accuracy, respectively.
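
A minimal sketch of the SIM-to-HOG-to-MLP stage described above, assuming the AAM landmarking and the expression-specific landmark selection have already been performed. The rasterization used here as a stand-in for the shape information matrix, the 64x64 grid size, and the MLP settings are illustrative assumptions rather than the authors' exact construction.

import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPClassifier

def shape_information_matrix(landmarks, size=64):
    # Rasterize the selected landmark points into a 2-D matrix
    # (an assumed stand-in for the paper's SIM); `landmarks` is an
    # (N, 2) array of salient landmark coordinates for one face.
    sim = np.zeros((size, size), dtype=float)
    pts = landmarks - landmarks.min(axis=0)
    pts = (pts / (pts.max() + 1e-8) * (size - 1)).astype(int)
    sim[pts[:, 1], pts[:, 0]] = 1.0
    return sim

def sim_hog_features(landmarks):
    # HOG descriptor of the rasterized shape matrix.
    sim = shape_information_matrix(landmarks)
    return hog(sim, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Usage sketch: X_train is a list of landmark arrays, y_train the labels.
# clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)
# clf.fit([sim_hog_features(lm) for lm in X_train], y_train)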

Author(s):  
Mahima Agrawal ◽  
Shubangi. D. Giripunje ◽  
P. R. Bajaj

This paper presents an efficient method for recognizing facial expressions in video. The work proposes a highly efficient facial expression recognition system using PCA optimized by a genetic algorithm. Reduced computational time and comparable recognition accuracy are the benchmarks of this work. Video sequences contain more information than still images and capture much more activity during expression actions, which makes them an active research subject. We use PCA, a statistical method, to reduce dimensionality and extract features: covariance analysis generates the eigen-components of the images. The eigen-components used as feature input are then optimized by the genetic algorithm to reduce the computational cost.
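
A minimal sketch of the PCA-plus-genetic-algorithm idea described above: PCA produces eigen-components of the frames, and a simple GA searches for a subset of those components that preserves classification accuracy while cutting dimensionality. The fitness function (cross-validated k-NN accuracy), population size, crossover, and mutation rate are illustrative assumptions, not the paper's settings.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def ga_select_components(Z, y, n_generations=20, pop_size=12, rng=np.random.default_rng(0)):
    # Each individual is a boolean mask over the eigen-components.
    n = Z.shape[1]
    pop = rng.random((pop_size, n)) < 0.5

    def fitness(mask):
        if not mask.any():
            return 0.0
        clf = KNeighborsClassifier(n_neighbors=3)
        return cross_val_score(clf, Z[:, mask], y, cv=3).mean()

    for _ in range(n_generations):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]      # selection: keep top half
        children = parents[rng.permutation(len(parents))].copy()
        cut = rng.integers(1, n)
        children[:, cut:] = parents[:, cut:]                    # one-point crossover
        flip = rng.random(children.shape) < 0.05                # mutation
        children ^= flip
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(m) for m in pop])]

# Usage sketch: X holds flattened face frames, y the expression labels.
# Z = PCA(n_components=50).fit_transform(X)
# mask = ga_select_components(Z, y)   # reduced eigen-feature subset for the classifier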


Webology ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 804-816
Author(s):  
Elaf J. Al Taee ◽  
Qasim Mohammed Jasim

A facial expression is a visual impression of a person's situation, emotions, cognitive activity, personality, intention, and psychopathology; it plays an active and vital role in the exchange of information and communication between people. In machines and robots dedicated to communicating with humans, facial expression recognition plays an important role in communication and in reading what a person implies, especially in the field of health, so research in this area advances human-robot communication. The topic has been discussed extensively, and the progress of deep learning together with the proven efficiency of convolutional neural networks (CNNs) in image processing has led to the use of CNNs for recognizing facial expressions. An automatic facial expression recognition (FER) system must perform face detection and localization in a cluttered scene, feature extraction, and classification. In this research, a CNN is used to perform FER: the goal is to label each facial image with one of the seven facial emotion categories considered in the JAFFE database (sad, happy, fear, surprise, anger, disgust, and neutral). We trained CNNs of different depths using gray-scale images from the JAFFE database. The accuracy of the proposed system was 100%.
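
A hedged sketch of a small CNN for seven-class grayscale expression classification in the spirit of the approach above; the 48x48 input resolution, layer sizes, and training settings are assumptions, not the authors' exact architecture.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_fer_cnn(input_shape=(48, 48, 1), n_classes=7):
    # Three convolutional blocks followed by a small dense head.
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",  # integer class labels
                  metrics=["accuracy"])
    return model

# Usage sketch: x_train has shape (N, 48, 48, 1), y_train integer labels 0..6.
# model = build_fer_cnn()
# model.fit(x_train, y_train, epochs=50, validation_split=0.1)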


2021 ◽  
Vol 2 (1) ◽  
pp. 26-32
Author(s):  
Moe Moe Htay

Facial expression plays a significant role in affective computing and is one of the non-verbal channels of human-computer interaction. Automatic recognition of human affect has become a more challenging and interesting problem in recent years. Facial expressions are significant features for recognizing human emotion in daily life. A facial expression recognition system (FERS) can be developed for applications such as human affect analysis, health care assessment, distance learning, driver fatigue detection, and human-computer interaction. Basically, there are three main components in recognizing a human facial expression: detection of the face or its components, feature extraction from the face image, and classification of the expression. The study proposes methods of feature extraction and classification for FER.
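
A minimal end-to-end sketch of the three components listed above (face detection, feature extraction, expression classification); the specific choices here (Haar cascade, HOG descriptor, SVM) are illustrative stand-ins, not the methods the study itself proposes.

import cv2
from skimage.feature import hog
from sklearn.svm import SVC

# Stage 1: face detection with OpenCV's pre-trained frontal-face Haar cascade.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_features(gray_image):
    # Detect the first face, crop it, and describe it with HOG features (stage 2).
    faces = detector.detectMultiScale(gray_image, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    crop = cv2.resize(gray_image[y:y + h, x:x + w], (64, 64))
    return hog(crop, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Stage 3: classification, e.g. an SVM fitted on features from labelled images.
# clf = SVC(kernel="rbf")
# clf.fit(train_features, train_labels)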


Facial expression recognition (FER) has gained significant importance in the research field of affective computing in several respects. Aiming to improve recognition accuracy while reducing the computational load, a region-based FER approach is proposed in this paper. The system identifies the basic emotions through subject-independent template matching based on gradient directions. The model is tested on the Extended Cohn-Kanade (CK+) dataset. Another important contribution of the work is the use of only the eye region (including the eyebrows and the portion of the nose near the eyes) and the mouth region for emotion recognition. The emotion classification accuracy is 94.3% on the CK+ dataset for 6-class FER.
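
An illustrative sketch of subject-independent template matching on gradient directions restricted to eye and mouth regions, in the spirit of the method above; the template construction and the angular distance measure are assumptions, not the paper's exact procedure.

import numpy as np
import cv2

def gradient_direction_map(region):
    # Per-pixel gradient direction (radians) of a grayscale region.
    gx = cv2.Sobel(region, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(region, cv2.CV_32F, 0, 1, ksize=3)
    return np.arctan2(gy, gx)

def match_emotion(eye_region, mouth_region, templates):
    # Pick the emotion whose gradient-direction template is closest.
    # `templates` maps emotion -> (eye_template, mouth_template) direction maps
    # of the same sizes as the probe regions.
    probe = (gradient_direction_map(eye_region), gradient_direction_map(mouth_region))
    best, best_dist = None, np.inf
    for emotion, (eye_t, mouth_t) in templates.items():
        # mean angular difference, wrapped to [-pi, pi]
        d = sum(np.abs(np.angle(np.exp(1j * (p - t)))).mean()
                for p, t in zip(probe, (eye_t, mouth_t)))
        if d < best_dist:
            best, best_dist = emotion, d
    return best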


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Hong-Min Zhu ◽  
Chi-Man Pun

We propose an adaptive and robust superpixel-based hand gesture tracking system in which hand gestures drawn in free air are recognized from their motion trajectories. First, we employ superpixel motion detection and unsupervised image segmentation to detect the moving target hand from the first few frames of the input video sequence. The hand appearance model is then constructed from its surrounding superpixels. By incorporating failure recovery and template matching into the tracking process, the target hand is tracked by an adaptive superpixel-based tracking algorithm in which hand deformation, view-dependent appearance variation, fast motion, and background confusion are handled well enough to extract the correct hand motion trajectory. Finally, the hand gesture is recognized from the extracted motion trajectory with a trained SVM classifier. Experimental results show that our proposed system achieves better performance than existing state-of-the-art methods, with recognition accuracy of 99.17% on the easy set and 98.57% on the hard set.
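
A rough sketch of two pieces of the pipeline above: superpixel segmentation of a frame (skimage's SLIC used here as a stand-in) and SVM classification of a resampled motion trajectory. The appearance model, failure recovery, and template-matching steps of the tracker are omitted, and the trajectory feature encoding is an assumption.

import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def frame_superpixels(frame_rgb, n_segments=200):
    # Label map of superpixels for one RGB video frame.
    return slic(frame_rgb, n_segments=n_segments, compactness=10)

def trajectory_features(points, n_samples=32):
    # Resample a 2-D hand trajectory to a fixed length and normalize it,
    # giving a fixed-size feature vector for the SVM.
    pts = np.asarray(points, dtype=float)
    t = np.linspace(0, 1, len(pts))
    ts = np.linspace(0, 1, n_samples)
    resampled = np.column_stack([np.interp(ts, t, pts[:, d]) for d in range(2)])
    resampled -= resampled.mean(axis=0)
    resampled /= (np.abs(resampled).max() + 1e-8)
    return resampled.ravel()

# Usage sketch: train_trajectories is a list of (x, y) point sequences.
# clf = SVC(kernel="rbf")
# clf.fit([trajectory_features(t) for t in train_trajectories], train_labels)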

