Facial Expression Recognition with Appearance Based Features of Facial Landmarks

Author(s):  
Kanaparthi Snehitha

Artificial intelligence technology has been trying to bridge the gap between humans and machines, and facial recognition is one of its latest developments. Facial recognition technology identifies faces by correlating and verifying the patterns of facial contours; face detection is performed with the Viola-Jones object detection framework. Facial expression is one of the most important cues for recognizing human emotions, and it also helps determine interpersonal relations between humans. Automatic facial recognition is now used widely in almost every field, such as marketing, health care, behavioral analysis, and human-machine interaction. Facial expression recognition helps a lot more than facial recognition alone: it helps retailers understand their customers, doctors understand their patients, and organizations understand their clients. For expression recognition we use facial landmarks, which provide appearance-based features. The landmarks are located with an active shape model, and LBP (Local Binary Patterns) features are then derived around them. Because the operation works directly on pixel values, it improves the expression recognition rate. In an experiment using these methods with 10-fold cross-validation, an accuracy of 89.71% was achieved on the CK+ database.
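
As a rough illustration of the pipeline sketched above, the snippet below computes uniform-LBP histograms in small patches centred on facial landmarks and concatenates them into one feature vector per face. It is a minimal sketch, not the authors' implementation: the landmark coordinates are assumed to come from an existing detector (e.g., an active shape model fit after Viola-Jones face detection), and the patch size, LBP radius, and number of sampling points are illustrative choices.

```python
# Minimal sketch (not the authors' code): appearance-based LBP features
# computed in patches around facial landmarks. Landmark detection itself
# (e.g., an active shape model or a similar fitter) is assumed to have
# produced the (x, y) coordinates already.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_landmark_features(gray, landmarks, patch=16, P=8, R=1):
    """Concatenate uniform-LBP histograms from square patches
    centred on each facial landmark of a grayscale face image."""
    half = patch // 2
    n_bins = P + 2                      # 'uniform' LBP yields P + 2 labels
    feats = []
    for (x, y) in landmarks:
        x, y = int(x), int(y)
        roi = gray[max(0, y - half):y + half, max(0, x - half):x + half]
        if roi.size == 0:
            feats.append(np.zeros(n_bins))
            continue
        codes = local_binary_pattern(roi, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins),
                               density=True)
        feats.append(hist)
    return np.concatenate(feats)        # one feature vector per face image

# The resulting vectors would then be fed to a classifier and evaluated
# with 10-fold cross-validation, as described in the abstract.
```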

Author(s):  
Fadi Dornaika ◽  
Bogdan Raducanu

Facial expression plays an important role in the cognition of human emotions (Fasel, 2003 & Yeasin, 2006). The recognition of facial expressions in image sequences with significant head movement is a challenging problem, and it is required by many applications such as human-computer interaction and computer graphics animation (Cañamero, 2005 & Picard, 2001). To classify expressions in still images, many techniques have been proposed, such as Neural Nets (Tian, 2001), Gabor wavelets (Bartlett, 2004), and active appearance models (Sung, 2006). Recently, more attention has been given to modeling facial deformation in dynamic scenarios. Still-image classifiers use feature vectors related to a single frame to perform classification, whereas temporal classifiers try to capture the temporal pattern in the sequence of feature vectors related to each frame, such as the Hidden Markov Model based methods (Cohen, 2003, Black, 1997 & Rabiner, 1989) and Dynamic Bayesian Networks (Zhang, 2005). The main contributions of the paper are as follows. First, we propose an efficient recognition scheme based on the detection of keyframes in videos, where the recognition is performed using a temporal classifier. Second, we use the proposed method to extend the human-machine interaction functionality of a robot whose response is generated according to the user's recognized facial expression. Our proposed approach has several advantages. First, unlike most expression recognition systems that require a frontal view of the face, our system is view- and texture-independent. Second, its learning phase is simple compared to other techniques (e.g., Hidden Markov Models and Active Appearance Models): we only need to fit second-order Auto-Regressive models to sequences of facial actions. As a result, even when the imaging conditions change, the learned Auto-Regressive models need not be recomputed. The rest of the paper is organized as follows. Section 2 summarizes our appearance-based 3D face tracker, which we use to track the 3D head pose as well as the facial actions. Section 3 describes the proposed facial expression recognition based on the detection of keyframes. Section 4 provides some experimental results. Section 5 describes the proposed human-machine interaction application that is based on the developed facial expression recognition scheme.
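
To make the temporal-modelling idea concrete, the sketch below fits a second-order Auto-Regressive model to a sequence of facial-action vectors and classifies a test sequence by the model with the smallest prediction residual. This is a hedged illustration of the general technique named in the abstract, not the authors' exact formulation; the least-squares fit, the residual-based decision rule, and all variable names are assumptions for the example.

```python
# Hedged sketch: second-order auto-regressive (AR) modelling of
# facial-action sequences, with classification by minimum residual.
import numpy as np

def fit_ar2(seq):
    """seq: (T, d) facial-action vectors. Returns (A1, A2) such that
    x_t ≈ A1 @ x_{t-1} + A2 @ x_{t-2}, fitted by least squares."""
    X = np.hstack([seq[1:-1], seq[:-2]])        # (T-2, 2d) regressors
    Y = seq[2:]                                  # (T-2, d) targets
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    d = seq.shape[1]
    return coef[:d].T, coef[d:].T                # A1, A2 (each d x d)

def residual(seq, A1, A2):
    """Mean squared one-step prediction error of an AR(2) model."""
    pred = seq[1:-1] @ A1.T + seq[:-2] @ A2.T
    return np.mean(np.sum((seq[2:] - pred) ** 2, axis=1))

def classify(seq, models):
    """models: {expression: (A1, A2)} fitted on training sequences."""
    return min(models, key=lambda e: residual(seq, *models[e]))
```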


Author(s):  
Padmapriya K.C. ◽  
Leelavathy V. ◽  
Angelin Gladston

Human facial expressions convey a lot of information visually, and facial expression recognition plays a crucial role in human-machine interaction. Automatic facial expression recognition systems have many applications in human behavior understanding, detection of mental disorders, and synthetic human expressions. Recognizing facial expressions by computer with a high recognition rate is still a challenging task. Most of the methods used in the literature for automatic facial expression recognition are based on geometry and appearance. Facial expression recognition is usually performed in four stages: pre-processing, face detection, feature extraction, and expression classification. In this paper we apply various deep learning methods to classify the seven key human emotions: anger, disgust, fear, happiness, sadness, surprise, and neutrality. The facial expression recognition system developed is experimentally evaluated on the FER dataset and achieves good accuracy.
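
As a concrete but deliberately small example of the kind of deep learning pipeline described above, the sketch below builds a plain convolutional network for the seven-class problem. It assumes 48x48 greyscale inputs, as in the public FER images; the layer sizes and training settings are illustrative and are not taken from the paper.

```python
# Minimal CNN sketch for seven-class facial expression classification.
# Architecture and hyperparameters are assumptions, not the paper's model.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fer_cnn(input_shape=(48, 48, 1), n_classes=7):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (x_train/y_train assumed to be prepared FER images and labels):
# model = build_fer_cnn()
# model.fit(x_train, y_train, validation_split=0.1, epochs=30)
```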


Author(s):  
Hai-Duong Nguyen ◽  
Soonja Yeom ◽  
Guee-Sang Lee ◽  
Hyung-Jeong Yang ◽  
In-Seop Na ◽  
...  

Emotion recognition plays an indispensable role in human–machine interaction systems. The process involves finding interesting facial regions in images and classifying them into one of seven classes: angry, disgust, fear, happy, neutral, sad, and surprise. Although many breakthroughs have been made in image classification, and especially in facial expression recognition, this research area is still challenging when samples are captured in the wild. In this paper, we used multi-level features in a convolutional neural network for facial expression recognition. Based on our observations, we introduced various network connections to improve the classification task. By combining the proposed network connections, our method achieved competitive results compared to state-of-the-art methods on the FER2013 dataset.
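
The following sketch illustrates the general idea of multi-level features in a convolutional network: feature maps taken at several depths are pooled and concatenated before classification, so the classifier sees low-level texture as well as high-level semantic cues. The specific network connections introduced in the paper are not reproduced here; the architecture below is only an assumed, minimal illustration of the pattern.

```python
# Hedged illustration of multi-level feature fusion in a CNN:
# features from shallow, middle, and deep blocks are pooled and
# concatenated before the softmax classifier.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_multilevel_fer(input_shape=(48, 48, 1), n_classes=7):
    x_in = layers.Input(shape=input_shape)
    x1 = layers.Conv2D(32, 3, activation="relu", padding="same")(x_in)
    x1 = layers.MaxPooling2D()(x1)                     # low-level block
    x2 = layers.Conv2D(64, 3, activation="relu", padding="same")(x1)
    x2 = layers.MaxPooling2D()(x2)                     # mid-level block
    x3 = layers.Conv2D(128, 3, activation="relu", padding="same")(x2)  # deep block

    # Pool each level to a vector and fuse them for classification.
    fused = layers.Concatenate()([
        layers.GlobalAveragePooling2D()(x1),
        layers.GlobalAveragePooling2D()(x2),
        layers.GlobalAveragePooling2D()(x3),
    ])
    out = layers.Dense(n_classes, activation="softmax")(fused)
    return Model(x_in, out)
```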


2014 ◽  
Vol 548-549 ◽  
pp. 1110-1117 ◽  
Author(s):  
Wei Hao Zheng ◽  
Wei Wang ◽  
Yi De Ma

Facial expression recognition is a key ingredient in both emotion analysis and pattern recognition, and it is also an important component of human-machine interaction. In facial expression analysis, one of the well-known methods for obtaining the texture of expressions is local binary patterns (LBP), which compares pixels within a local region and encodes the comparison results in the form of a histogram. However, we argue that the resulting expression textures are not accurate and still contain irrelevant information, especially in the region between the eyes and the mouth. In this paper, we propose a compound method that recognizes expressions by applying local binary patterns to global and local images processed by bidirectional principal component analysis (BDPCA) reconstruction and morphological preprocessing, respectively. Experiments show that our method can recognize expressions using texture features of the global principal components and local boundaries, and achieves considerably high accuracy.
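
The BDPCA reconstruction step used for the global branch can be pictured with the short sketch below: row and column projection matrices are learned from training images, and each image is projected to a small coefficient matrix and then reconstructed, suppressing detail unrelated to the principal facial structure. This is a generic BDPCA sketch under assumed dimensions (k_row, k_col), not the paper's implementation, and the subsequent LBP and morphological steps are omitted.

```python
# Hedged sketch of bidirectional PCA (BDPCA) reconstruction.
import numpy as np

def fit_bdpca(images, k_row=20, k_col=20):
    """images: (N, H, W) array. Returns mean image plus column- and
    row-projection matrices built from the top eigenvectors."""
    mean = images.mean(axis=0)
    centred = images - mean
    # column (H x H) and row (W x W) scatter matrices
    S_col = sum(x @ x.T for x in centred) / len(images)
    S_row = sum(x.T @ x for x in centred) / len(images)
    _, vec_col = np.linalg.eigh(S_col)
    _, vec_row = np.linalg.eigh(S_row)
    W_col = vec_col[:, -k_col:]          # largest eigenvalues come last
    W_row = vec_row[:, -k_row:]
    return mean, W_col, W_row

def bdpca_reconstruct(img, mean, W_col, W_row):
    Y = W_col.T @ (img - mean) @ W_row   # (k_col, k_row) coefficients
    return W_col @ Y @ W_row.T + mean    # reconstructed global image
```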


2021 ◽  
Vol 8 (5) ◽  
pp. 949
Author(s):  
Fitra A. Bachtiar ◽  
Muhammad Wafi

Human-machine interaction, particularly facial behavior, has received growing attention as a means of user personalization. A combination of feature extraction and a classification method allows a machine to recognize facial expressions, but it is not yet known which base classification method is best suited to the task. This study compares three classification methods for facial expression recognition. The JAFFE dataset is used, with a total of 213 facial images covering seven facial expressions: anger, disgust, fear, happy, neutral, sadness, and surprised. Facial landmarks are used as the facial features. The classification models compared are ELM, SVM, and k-NN. The best hyperparameters for each model are searched using 80% of the data with 5-fold cross-validation; testing is done on the remaining 20% and evaluated with accuracy, F1 score, and computation time. The best hyperparameters are 40 hidden neurons for ELM, a parameter value of 10⁵ with 200 iterations for SVM, and k = 3 neighbors for k-NN. With these settings, ELM outperforms the other two classifiers, achieving an accuracy of 0.76 and an F1 score of 0.76, with a computation time of 6.97 × 10⁻³ seconds.
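
The evaluation protocol described above (80/20 split, 5-fold cross-validation for hyperparameters, accuracy and F1 on the held-out data) can be sketched as follows. scikit-learn has no ELM implementation, so only the SVM and k-NN branches are shown; X and y stand for the JAFFE landmark features and expression labels prepared elsewhere, and the parameter grids are assumptions for the example.

```python
# Hedged sketch of the comparison protocol: split, grid-search with
# 5-fold CV on the training part, then evaluate on the held-out 20%.
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, f1_score

def evaluate(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    searches = {
        "SVM": GridSearchCV(SVC(), {"C": [1e3, 1e4, 1e5]}, cv=5),
        "k-NN": GridSearchCV(KNeighborsClassifier(),
                             {"n_neighbors": [3, 5, 7]}, cv=5),
    }
    for name, search in searches.items():
        search.fit(X_tr, y_tr)
        pred = search.best_estimator_.predict(X_te)
        print(name, search.best_params_,
              "acc=%.2f" % accuracy_score(y_te, pred),
              "f1=%.2f" % f1_score(y_te, pred, average="macro"))
```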


2019 ◽  
Vol 9 (21) ◽  
pp. 4678 ◽  
Author(s):  
Daniel Canedo ◽  
António J. R. Neves

Emotion recognition has attracted major attention in numerous fields because of its relevant applications in the contemporary world: marketing, psychology, surveillance, and entertainment are some examples. It is possible to recognize an emotion in several ways; however, this paper focuses on facial expressions, presenting a systematic review on the matter. To that end, 112 papers on this topic published in ACM, IEEE, BASE, and Springer between January 2006 and April 2019 were extensively reviewed. Their most used methods and algorithms, such as face detection, smoothing, Principal Component Analysis (PCA), Local Binary Patterns (LBP), Optical Flow (OF), and Gabor filters, are first introduced and summarized for a better understanding. This review identified a clear difficulty in translating the high facial expression recognition (FER) accuracy obtained in controlled environments to uncontrolled and pose-variant environments. Future efforts in the FER field should be put into multimodal systems that are robust enough to face the adversities of real-world scenarios. A thorough analysis of the research done on FER in computer vision, based on the selected papers, is presented. This review aims not only to become a reference for future research on emotion recognition, but also to provide an overview of the work done on this topic for potential readers.

