An Evolutive Approach for Smile Recognition in Video Sequences

Author(s):  
David Freire-Obregón ◽  
Modesto Castrillón-Santana

Facial expression recognition is one of the most challenging research areas in the image recognition field and has been actively studied since the 1970s. Smile recognition in particular has received attention because the smile is an important facial expression in human communication, which makes it useful for human–machine interaction. Moreover, if a smile can be detected and its intensity estimated, new applications become possible: quantifying the emotion at low computational cost and with high accuracy. To this end, we use a new support vector machine (SVM)-based approach that integrates a weighted combination of local binary patterns (LBP)- and principal component analysis (PCA)-based features. Furthermore, we construct the smile detector taking into account the evolution of the emotion along its natural life cycle. As a consequence, we achieve both low computational cost and high performance on video sequences.
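
A minimal sketch of the general idea described above, not the authors' implementation: an SVM smile classifier fed by a weighted concatenation of LBP-histogram and PCA features. The face crops, labels, weights, and the 64x64 resolution are illustrative assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def lbp_histogram(face, points=8, radius=1):
    """Uniform LBP codes pooled into a normalized histogram."""
    codes = local_binary_pattern(face, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2))
    return hist / (hist.sum() + 1e-8)

def combined_features(faces, pca, w_lbp=0.6, w_pca=0.4):
    """Weighted concatenation of the two descriptors (weights are illustrative)."""
    lbp = np.array([lbp_histogram(f) for f in faces])
    eig = pca.transform(faces.reshape(len(faces), -1))
    return np.hstack([w_lbp * lbp, w_pca * eig])

# Placeholder training data: N grayscale face crops and smile/no-smile labels.
train_faces = (np.random.rand(200, 64, 64) * 255).astype(np.uint8)
train_labels = np.random.randint(0, 2, 200)

pca = PCA(n_components=30).fit(train_faces.reshape(len(train_faces), -1))
clf = SVC(kernel="rbf").fit(combined_features(train_faces, pca), train_labels)
```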

2014 ◽  
Vol 548-549 ◽  
pp. 1110-1117 ◽  
Author(s):  
Wei Hao Zheng ◽  
Wei Wang ◽  
Yi De Ma

Facial expression recognition is a key ingredient of both emotion analysis and pattern recognition, and an important component of human-machine interaction. In facial expression analysis, one of the best-known methods for capturing the texture of expressions is local binary patterns (LBP), which compares pixels within a local region and encodes the comparison results as a histogram. However, we argue that the resulting expression textures are not accurate and still contain irrelevant information, especially in the region between the eyes and the mouth. In this paper, we propose a compound method that recognizes expressions by applying local binary patterns to global and local images processed by bidirectional principal component analysis (BDPCA) reconstruction and morphological preprocessing, respectively. The experiments show that our method can recognize expressions using texture features of the global principal component and the local boundary, and achieves considerably high accuracy.
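
A minimal sketch of the global/local descriptor idea, not the paper's exact method: standard PCA reconstruction stands in for BDPCA, and the "local" region is a hypothetical mouth crop with morphological opening applied before LBP pooling. Face crops and region coordinates are placeholders.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.morphology import opening
from sklearn.decomposition import PCA

def lbp_hist(img, P=8, R=1):
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / (hist.sum() + 1e-8)

def global_local_descriptor(face, pca):
    # Global component: PCA reconstruction keeps only the principal texture.
    flat = face.reshape(1, -1)
    recon = pca.inverse_transform(pca.transform(flat)).reshape(face.shape)
    recon = np.clip(recon, 0, 255).astype(np.uint8)
    # Local component: morphological opening on a hypothetical mouth region.
    mouth = opening(face[40:, 16:48], np.ones((3, 3)))
    return np.concatenate([lbp_hist(recon), lbp_hist(mouth)])

faces = (np.random.rand(100, 64, 64) * 255).astype(np.uint8)  # placeholder crops
pca = PCA(n_components=20).fit(faces.reshape(len(faces), -1))
feat = global_local_descriptor(faces[0], pca)
```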


2019 ◽  
Vol 9 (21) ◽  
pp. 4678 ◽  
Author(s):  
Daniel Canedo ◽  
António J. R. Neves

Emotion recognition has attracted major attention in numerous fields because of its relevant applications in the contemporary world: marketing, psychology, surveillance, and entertainment are some examples. An emotion can be recognized in several ways; this paper focuses on facial expressions and presents a systematic review on the matter. To that end, 112 papers published in ACM, IEEE, BASE and Springer between January 2006 and April 2019 on this topic were extensively reviewed. Their most commonly used methods and algorithms are first introduced and summarized for a better understanding, including face detection, smoothing, Principal Component Analysis (PCA), Local Binary Patterns (LBP), Optical Flow (OF), and Gabor filters, among others. This review identified a clear difficulty in translating the high facial expression recognition (FER) accuracy achieved in controlled environments to uncontrolled and pose-variant environments. Future efforts in the FER field should be put into multimodal systems that are robust enough to face the adversities of real-world scenarios. A thorough analysis of the research done on FER in computer vision, based on the selected papers, is presented. This review aims not only to become a reference for future research on emotion recognition, but also to provide an overview of the work done on this topic for potential readers.


2014 ◽  
Vol 905 ◽  
pp. 537-542 ◽  
Author(s):  
Chun Han Wang ◽  
Hong Wang ◽  
Zhi Na Li

A Gabor filter bank can effectively extract facial expression characteristics, but the resulting feature dimension is too high and requires dimensionality reduction. Applying supervised locally linear embedding (SLLE) directly to reduce the dimension demands a large amount of memory and long computation times. To solve this problem, this article first uses fast principal component analysis (FastPCA) to reduce the dimension while keeping the essential information of the expression. SLLE is then applied for further dimensionality reduction, making the differences between expressions more apparent in the samples. Finally, a support vector machine is used for classification. Experiments performed on the JAFFE database indicate the efficiency of the proposed method.
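
A minimal sketch of the pipeline shape (Gabor bank, then PCA, then a manifold embedding, then an SVM), under stated substitutions: scikit-learn's unsupervised LocallyLinearEmbedding stands in for SLLE, ordinary PCA stands in for FastPCA, and the images, labels, and filter-bank parameters are placeholders rather than the JAFFE data.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def gabor_bank_features(img, freqs=(0.1, 0.2, 0.3),
                        thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Concatenate Gabor magnitude responses over a small filter bank."""
    feats = []
    for f in freqs:
        for t in thetas:
            real, imag = gabor(img, frequency=f, theta=t)
            feats.append(np.hypot(real, imag).ravel())
    return np.concatenate(feats)          # very high-dimensional, hence PCA

faces = (np.random.rand(60, 32, 32) * 255).astype(np.uint8)  # placeholder crops
labels = np.random.randint(0, 7, 60)                          # 7 expression classes

X = np.array([gabor_bank_features(f) for f in faces])
model = make_pipeline(PCA(n_components=40),
                      LocallyLinearEmbedding(n_components=10, n_neighbors=12),
                      SVC(kernel="rbf"))
model.fit(X, labels)
```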


Author(s):  
Kanaparthi Snehitha

Artificial intelligence technology has been trying to bridge the gap between humans and machines, and facial recognition is among its latest developments. Facial recognition technology identifies faces by correlating and verifying the patterns of facial contours; here, face detection is performed with the Viola-Jones object detection framework. Facial expression is one of the important cues for recognizing human emotions and also helps to determine interpersonal relations between humans. Automatic facial recognition is now widely used in almost every field, such as marketing, health care, behavioral analysis, and human-machine interaction. Facial expression recognition goes a step further than facial recognition: it helps retailers understand their customers, doctors understand their patients, and organizations understand their clients. For expression recognition, we use facial landmarks, which are appearance-based features. With the use of an active shape model, LBP (Local Binary Patterns) derives its features from the face landmarks. The operation is carried out on pixel values, which improves the expression recognition rate. In an experiment using previous methods and 10-fold cross-validation on the CK+ database, the accuracy achieved is 89.71%.
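
A minimal sketch of the detection plus texture-description step, not the author's exact pipeline: Viola-Jones face detection via OpenCV's bundled Haar cascade, followed by a uniform-LBP histogram over the detected face crop. The active-shape-model landmark step is omitted here, and the crop size is an assumption.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

# OpenCV ships pretrained Viola-Jones cascades with its package data.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_lbp_descriptor(gray_image, P=8, R=1):
    """Detect the largest face and return a normalized uniform-LBP histogram."""
    faces = cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])   # keep largest detection
    crop = cv2.resize(gray_image[y:y + h, x:x + w], (64, 64))
    codes = local_binary_pattern(crop, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / (hist.sum() + 1e-8)

# Usage: img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
#        descriptor = face_lbp_descriptor(img)
```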


2017 ◽  
Vol 2017 ◽  
pp. 1-12 ◽  
Author(s):  
Lei Zhao ◽  
Zengcai Wang ◽  
Guoxin Zhang

This paper proposes a novel framework for facial expression analysis that uses both dynamic and static information in video sequences. First, based on an incremental formulation, a discriminative deformable face alignment method is adapted to locate facial points, correct in-plane head rotation, and separate the facial region from the background. Then, a spatio-temporal motion local binary pattern (LBP) feature is extracted and integrated with a Gabor multiorientation fusion histogram to obtain descriptors that reflect the static and dynamic texture information of facial expressions. Finally, a multiclass support vector machine (SVM) classifier with a one-versus-one strategy is applied to classify the facial expressions. Experiments on the Extended Cohn-Kanade (CK+) facial expression dataset show that the integrated framework outperforms methods using single descriptors. Compared with other state-of-the-art methods on the CK+, MMI, and Oulu-CASIA VIS datasets, our proposed framework performs better.
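
A minimal sketch of the classification stage only, under loose substitutions: the static descriptor is a mean per-frame LBP histogram and the dynamic descriptor is a crude LBP of absolute frame differences (a stand-in for the spatio-temporal motion LBP and Gabor fusion histogram of the paper). scikit-learn's SVC performs one-versus-one multiclass classification by construction. Sequences and labels are placeholders.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_hist(img, P=8, R=1):
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / (hist.sum() + 1e-8)

def sequence_descriptor(frames):
    """Static part: mean LBP histogram; dynamic part: LBP of |frame differences|."""
    static = np.mean([lbp_hist(f) for f in frames], axis=0)
    diffs = [np.abs(frames[i + 1].astype(int) - frames[i].astype(int)).astype(np.uint8)
             for i in range(len(frames) - 1)]
    dynamic = np.mean([lbp_hist(d) for d in diffs], axis=0)
    return np.concatenate([static, dynamic])

# Placeholder data: 40 sequences of 10 frames each, 7 expression labels.
sequences = (np.random.rand(40, 10, 48, 48) * 255).astype(np.uint8)
labels = np.random.randint(0, 7, 40)
X = np.array([sequence_descriptor(seq) for seq in sequences])
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, labels)
```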


Author(s):  
Li Yao ◽  
Yan Wan ◽  
Hongjie Ni ◽  
Bugao Xu

Automatic facial expression analysis remains challenging due to low recognition accuracy and poor robustness. In this study, we utilized active learning and support vector machine (SVM) algorithms to classify facial action units (AUs) for human facial expression recognition. Active learning was used to detect the targeted facial expression AUs, while an SVM was utilized to classify the different AUs and ultimately map them to their corresponding facial expressions. Active learning reduces the number of non-support vectors in the training sample set and shortens the labeling and training times without affecting the performance of the classifier, thereby reducing the cost of labeling samples and improving the training speed. Experimental results show that the proposed algorithm can effectively suppress correlated noise and achieve higher recognition rates than principal component analysis and a human observer on seven different facial expressions.
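
A minimal sketch of the training idea, not the paper's exact algorithm: pool-based active learning with an SVM for a single binary AU, repeatedly labeling the unlabeled samples closest to the current decision boundary (the ones most likely to become support vectors). The feature vectors, labels, round counts, and query sizes are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 20))                        # placeholder AU features
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)     # placeholder AU labels

# Seed the labeled set with a few samples from each class.
labeled = list(np.concatenate([np.where(y_pool == 1)[0][:5],
                               np.where(y_pool == 0)[0][:5]]))
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

for _ in range(10):                                        # active-learning rounds
    clf = SVC(kernel="linear").fit(X_pool[labeled], y_pool[labeled])
    # Query the samples with the smallest |decision function| value,
    # i.e. those closest to the margin.
    margins = np.abs(clf.decision_function(X_pool[unlabeled]))
    queries = np.argsort(margins)[:5]
    for q in sorted(queries, reverse=True):
        labeled.append(unlabeled.pop(q))

final_clf = SVC(kernel="linear").fit(X_pool[labeled], y_pool[labeled])
```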


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Ben Niu ◽  
Zhenxing Gao ◽  
Bingbing Guo

Emotion plays an important role in communication, and facial expression recognition has become an indispensable part of human-computer interaction. Recently, deep neural networks (DNNs) have been widely used in this field and overcome the limitations of conventional approaches. However, the application of DNNs is limited by their excessive hardware requirements. Considering the low hardware specifications common in real-life conditions, and to obtain good results without DNNs, in this paper we propose an algorithm that combines oriented FAST and rotated BRIEF (ORB) features with Local Binary Patterns (LBP) features extracted from facial expressions. First, every image is passed through a face detection algorithm so that more effective features can be extracted. Second, to increase computational speed, the ORB and LBP features are extracted from the face region; specifically, region division is employed in the traditional ORB to avoid the concentration of features, and the features are invariant to scale, grayscale, and rotation changes. Finally, the combined features are classified by a Support Vector Machine (SVM). The proposed method is evaluated on several challenging databases, such as the Extended Cohn-Kanade database (CK+), the Japanese Female Facial Expressions database (JAFFE), and the MMI database; experimental results on seven emotion states (neutral, joy, sadness, surprise, anger, fear, and disgust) show that the proposed framework is effective and accurate.
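
A minimal sketch of the feature combination, not the authors' exact implementation: ORB descriptors are computed per grid cell (a simple stand-in for the paper's region division, so features do not concentrate in one area), pooled per cell, concatenated with a global LBP histogram, and classified with an SVM. The grid size, crop resolution, and data are illustrative assumptions.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

orb = cv2.ORB_create(nfeatures=50)

def orb_lbp_descriptor(gray, grid=2, P=8, R=1):
    h, w = gray.shape
    orb_parts = []
    for i in range(grid):
        for j in range(grid):
            cell = gray[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            _, desc = orb.detectAndCompute(cell, None)
            # Pool this cell's binary ORB descriptors into one mean vector.
            orb_parts.append(desc.mean(axis=0) if desc is not None else np.zeros(32))
    codes = local_binary_pattern(gray, P, R, method="uniform")
    lbp_hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    lbp_hist = lbp_hist / (lbp_hist.sum() + 1e-8)
    return np.concatenate(orb_parts + [lbp_hist])

faces = (np.random.rand(80, 96, 96) * 255).astype(np.uint8)  # placeholder face crops
labels = np.random.randint(0, 7, 80)                          # 7 emotion states
X = np.array([orb_lbp_descriptor(f) for f in faces])
clf = SVC(kernel="rbf").fit(X, labels)
```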

