Facial Expression Recognition Based on the Texture Features of Global Principal Component and Local Boundary

2014 ◽  
Vol 548-549 ◽  
pp. 1110-1117 ◽  
Author(s):  
Wei Hao Zheng ◽  
Wei Wang ◽  
Yi De Ma

Facial expression recognition is a key ingredient of both emotion analysis and pattern recognition, and an important component of human-machine interaction. In facial expression analysis, one of the best-known methods for obtaining the texture of expressions is local binary patterns (LBP), which compares pixels within a local region and encodes the comparison results as a histogram. However, we argue that the resulting expression textures are imprecise and still contain irrelevant information, especially in the region between the eyes and mouth. In this paper, we propose a compound method that recognizes expressions by applying local binary patterns to global and local images processed by bidirectional principal component analysis (BDPCA) reconstruction and morphological preprocessing, respectively. Experiments show that our method can recognize expressions using the texture features of the global principal component and local boundary, achieving considerably high accuracy.
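As a rough illustration of the basic LBP operator these abstracts build on, the sketch below (plain NumPy, not any paper's actual implementation) compares the eight neighbours of each 3x3 patch against its centre pixel, packs the comparison bits into a code, and histograms the codes over the image:

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 LBP: compare the 8 neighbours against the centre pixel
    and pack the comparison bits into a single byte."""
    center = patch[1, 1]
    # neighbour order: clockwise from the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """Slide the 3x3 operator over the interior of the image and
    histogram the resulting codes."""
    h, w = image.shape
    codes = [lbp_code(image[i - 1:i + 2, j - 1:j + 2])
             for i in range(1, h - 1) for j in range(1, w - 1)]
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalise so regions of different size compare
```

On a perfectly flat patch every neighbour ties with the centre, so all bits are set and the whole mass of the histogram lands on code 255; real expression textures spread the mass across many codes.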

2019 ◽  
Vol 9 (21) ◽  
pp. 4678 ◽  
Author(s):  
Daniel Canedo ◽  
António J. R. Neves

Emotion recognition has attracted major attention in numerous fields because of its relevant applications in the contemporary world: marketing, psychology, surveillance, and entertainment are some examples. Emotions can be recognized in several ways; this paper focuses on facial expressions, presenting a systematic review on the matter. A total of 112 papers published in ACM, IEEE, BASE and Springer between January 2006 and April 2019 regarding this topic were extensively reviewed. Their most frequently used methods and algorithms are first introduced and summarized for better understanding, such as face detection, smoothing, Principal Component Analysis (PCA), Local Binary Patterns (LBP), Optical Flow (OF), and Gabor filters, among others. This review identified a clear difficulty in translating the high facial expression recognition (FER) accuracy achieved in controlled environments to uncontrolled and pose-variant environments. Future efforts in the FER field should be put into multimodal systems that are robust enough to face the adversities of real-world scenarios. A thorough analysis of the research done on FER in Computer Vision, based on the selected papers, is presented. This review aims not only to become a reference for future research on emotion recognition, but also to provide an overview of the work done on this topic for potential readers.


Author(s):  
David Freire-Obregón ◽  
Modesto Castrillón-Santana

Facial expression recognition is one of the most challenging research areas in the image recognition field and has been actively studied since the 1970s. For instance, smile recognition has been studied because the smile is considered an important facial expression in human communication and is therefore likely useful for human-machine interaction. Moreover, if a smile can be detected and its intensity estimated, new applications become possible in the future. The goal is to quantify the emotion at low computational cost and with high accuracy. To this end, we use a new support vector machine (SVM)-based approach that integrates a weighted combination of local binary pattern (LBP)- and principal component analysis (PCA)-based approaches. Furthermore, we construct this smile detector considering the evolution of the emotion along its natural life cycle. As a consequence, we achieve both low computational cost and high performance on video sequences.
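A weighted combination of two detector branches, as described above, can be sketched in a few lines. The function names and the weight `alpha` below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fuse_scores(lbp_scores, pca_scores, alpha=0.7):
    """Hypothetical weighted fusion of per-frame scores from an LBP-based
    and a PCA-based smile detector. alpha is an assumed weight favouring
    the LBP branch, not a value reported in the paper."""
    lbp = np.asarray(lbp_scores, dtype=float)
    pca = np.asarray(pca_scores, dtype=float)
    return alpha * lbp + (1 - alpha) * pca

def apex_frame(fused):
    """Index of the highest fused score: a crude stand-in for locating the
    apex within the smile's onset-apex-offset life cycle."""
    return int(np.argmax(fused))
```

The fused score per frame could then serve as a rough smile-intensity estimate over the sequence, with the apex frame marking peak intensity.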


Author(s):  
Kanaparthi Snehitha

Artificial intelligence technology has been trying to bridge the gap between humans and machines, and facial recognition is one of its latest developments. Facial recognition technology identifies faces by correlating and verifying the patterns of facial contours; face detection is performed using the Viola-Jones object detection framework. Facial expression is one of the most important cues for recognizing human emotions, and it also helps to determine interpersonal relations between humans. Automatic facial recognition is now used very widely in almost every field, such as marketing, health care, behavioral analysis, and human-machine interaction. Facial expression recognition offers much more than facial recognition alone: it helps retailers understand their customers, doctors understand their patients, and organizations understand their clients. For expression recognition, we use facial landmarks, which are appearance-based features. Using an active shape model, Local Binary Pattern (LBP) features are derived from these facial landmarks. The operation takes pixel values into account, which improves the expression recognition rate. In an experiment using previous methods and 10-fold cross-validation on the CK+ database, an accuracy of 89.71% was achieved.


Author(s):  
Gopal Krishan Prajapat ◽  
Rakesh Kumar

Facial feature extraction and recognition play a prominent role in human non-verbal interaction and are among the crucial factors, along with pose, speech, facial expression, behaviour and actions, used to convey information about the intentions and emotions of a human being. In this article, an extended local binary pattern is used for feature extraction and principal component analysis (PCA) is used for dimensionality reduction. The projections of the sample and model images are calculated and compared using the Euclidean distance. The combination of extended local binary patterns and PCA (ELBP+PCA) improves the recognition rate and also reduces the evaluation complexity. The evaluation of the proposed facial expression recognition approach focuses on the performance of the recognition rate. A series of tests is performed to validate the algorithms and to compare the accuracy of the methods on the JAFFE and Extended Cohn-Kanade image databases.
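The PCA projection and Euclidean-distance matching step described in this abstract can be sketched as follows. This is a minimal NumPy illustration under assumed names; the ELBP feature extraction itself is omitted, and the inputs are taken to be already-computed feature vectors:

```python
import numpy as np

def pca_fit(features, n_components):
    """Fit PCA on a (samples x dims) feature matrix; return the mean
    vector and the top principal axes as rows of a basis matrix."""
    mean = features.mean(axis=0)
    centered = features - mean
    # SVD of the centred data: rows of vt are the principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(feature, mean, basis):
    """Project one feature vector into the reduced PCA subspace."""
    return basis @ (feature - mean)

def nearest_model(sample, models, mean, basis):
    """Compare the projection of a sample against the projections of the
    model images using the Euclidean distance, as in the abstract."""
    s = project(sample, mean, basis)
    dists = [np.linalg.norm(s - project(m, mean, basis)) for m in models]
    return int(np.argmin(dists))
```

The predicted expression label would be that of the model image whose projection lies closest to the sample's projection.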


JOUTICA ◽  
2021 ◽  
Vol 6 (2) ◽  
pp. 484
Author(s):  
Resty Wulanningrum ◽  
Anggi Nur Fadzila ◽  
Danar Putra Pamungkas

Humans naturally use facial expressions to communicate and show their emotions in social interaction. Facial expressions are a form of non-verbal communication that can convey a person's emotional state to an observer. This study uses the Principal Component Analysis (PCA) method for feature extraction from expression images and the Convolutional Neural Network (CNN) method for emotion classification; training and testing are carried out on the Facial Expression Recognition-2013 (FER-2013) dataset to obtain accuracy values for facial emotion recognition. The final tests yielded an accuracy of 59.375% for the PCA method and 59.386% for the CNN method.
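A CNN classifier of the kind used in this study is built from convolution, non-linearity, and pooling stages. The following minimal NumPy sketch of those building blocks (not the study's actual network, which is not specified in the abstract) shows what one forward stage computes:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode sliding-window cross-correlation, the core operation
    of a CNN convolutional layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear unit."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling, discarding any ragged border."""
    h, w = x.shape
    h2, w2 = h // size, w // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))
```

A real network stacks several such stages, learns the kernels by backpropagation, and ends in a fully connected layer that outputs one score per emotion class.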


2014 ◽  
Vol 34 (5) ◽  
pp. 0515001
Author(s):  
李雅倩 Li Yaqian ◽  
李颖杰 Li Yingjie ◽  
李海滨 Li Haibin ◽  
张强 Zhang Qiang ◽  
张文明 Zhang Wenming


Author(s):  
Fadi Dornaika ◽  
Bogdan Raducanu

Facial expression plays an important role in the cognition of human emotions (Fasel, 2003 & Yeasin, 2006). The recognition of facial expressions in image sequences with significant head movement is a challenging problem, required by many applications such as human-computer interaction and computer graphics animation (Cañamero, 2005 & Picard, 2001). To classify expressions in still images, many techniques have been proposed, such as Neural Nets (Tian, 2001), Gabor wavelets (Bartlett, 2004), and active appearance models (Sung, 2006). Recently, more attention has been given to modeling facial deformation in dynamic scenarios. Still-image classifiers use feature vectors related to a single frame to perform classification; temporal classifiers try to capture the temporal pattern in the sequence of feature vectors related to each frame, such as Hidden Markov Model based methods (Cohen, 2003, Black, 1997 & Rabiner, 1989) and Dynamic Bayesian Networks (Zhang, 2005).

The main contributions of the paper are as follows. First, we propose an efficient recognition scheme based on the detection of keyframes in videos, where the recognition is performed using a temporal classifier. Second, we use the proposed method to extend the human-machine interaction functionality of a robot whose response is generated according to the user's recognized facial expression.

Our proposed approach has several advantages. First, unlike most expression recognition systems that require a frontal view of the face, our system is view- and texture-independent. Second, its learning phase is simple compared to other techniques (e.g., Hidden Markov Models and Active Appearance Models): we only need to fit second-order Auto-Regressive models to sequences of facial actions. As a result, even when the imaging conditions change, the learned Auto-Regressive models need not be recomputed.

The rest of the paper is organized as follows. Section 2 summarizes our appearance-based 3D face tracker, which we use to track the 3D head pose as well as the facial actions. Section 3 describes the proposed facial expression recognition based on the detection of keyframes. Section 4 provides some experimental results. Section 5 describes the proposed human-machine interaction application based on the developed facial expression recognition scheme.
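Fitting a second-order Auto-Regressive model to a sequence of facial-action values, as the learning phase above requires, reduces to an ordinary least-squares problem. A minimal sketch, with hypothetical function names and a single 1-D action signal assumed for simplicity:

```python
import numpy as np

def fit_ar2(x):
    """Fit a second-order auto-regressive model
    x[t] = a1 * x[t-1] + a2 * x[t-2] to a 1-D sequence by least squares."""
    x = np.asarray(x, dtype=float)
    # design matrix of lagged values; prediction targets start at t = 2
    X = np.column_stack([x[1:-1], x[:-2]])
    y = x[2:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # (a1, a2)

def predict_next(x, coeffs):
    """One-step-ahead prediction from the last two observations."""
    return coeffs[0] * x[-1] + coeffs[1] * x[-2]
```

On a noiseless sequence generated by a known AR(2) recursion, the least-squares fit recovers the generating coefficients exactly, which is what makes the learning phase cheap compared to training an HMM.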

