Facial Expression Recognition Using Directional Local Binary Pattern

2014 ◽  
Vol 701-702 ◽  
pp. 395-399
Author(s):  
Ying Tong ◽  
Kun Wang ◽  
Liang Bao Jiao

The local binary pattern (LBP) descriptor cannot efficiently describe gray-level changes along different directions in the characteristic regions of facial expressions. To address this, the directional local binary pattern (DLBP) is proposed to represent facial geometric characteristics. DLBP encodes the directional information of facial textures in three directions: horizontal, vertical, and diagonal, which effectively describes facial muscles, wrinkles, and other local deformations. Experimental results on the JAFFE database demonstrate the algorithm's effectiveness, with a recognition-rate improvement of nearly 5 percentage points over traditional LBP. Additional experiments verify the robustness and reliability of the proposed DLBP operator under Gaussian white noise and salt-and-pepper noise.
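The DLBP operator itself is the authors' extension, but the standard 3×3 LBP encoding it builds on is well defined: each of the eight neighbours is thresholded against the centre pixel and the resulting bits are packed into one byte. A minimal sketch (pure Python, toy patch):

```python
def lbp_code(patch):
    """Standard 3x3 LBP: threshold the 8 neighbours against the
    centre pixel and pack the bits clockwise from the top-left."""
    center = patch[1][1]
    # neighbour coordinates, clockwise from the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r][c] >= center:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
# bits set for neighbours >= 6: positions 0, 4, 5, 6, 7 -> 1+16+32+64+128 = 241
```

DLBP would additionally restrict or weight the comparisons along the horizontal, vertical, and diagonal directions; that per-direction encoding is the paper's contribution and is not reproduced here.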

Author(s):  
Gopal Krishan Prajapat ◽  
Rakesh Kumar

Facial feature extraction and recognition play a prominent role in human non-verbal interaction: among pose, speech, facial expression, behaviour, and actions, the face is one of the crucial channels for conveying information about a person's intentions and emotions. In this article an extended local binary pattern (ELBP) is used for feature extraction and principal component analysis (PCA) is used for dimensionality reduction. The projections of the sample and model images are computed and compared using the Euclidean distance. The combination of extended local binary pattern and PCA (ELBP+PCA) improves the recognition rate and also reduces the evaluation complexity. The evaluation of the proposed facial expression recognition approach focuses on recognition-rate performance. A series of tests is performed to validate the algorithms and compare their accuracy on the JAFFE and Extended Cohn-Kanade image databases.
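The PCA projection and Euclidean-distance matching steps described here can be sketched generically; the snippet below uses synthetic vectors as stand-ins for ELBP histograms (the ELBP extraction itself is the authors' method and is not reproduced), fitting PCA via SVD and matching a probe to its nearest gallery sample:

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on the rows of X; return the mean and the top-k components."""
    mean = X.mean(axis=0)
    # right singular vectors of the centred data are the principal axes
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(X, mean, components):
    return (X - mean) @ components.T

def nearest(query, gallery):
    """Index of the gallery row closest to the query in Euclidean distance."""
    return int(np.argmin(np.linalg.norm(gallery - query, axis=1)))

rng = np.random.default_rng(0)
gallery_feats = rng.normal(size=(10, 64))     # stand-in for ELBP feature vectors
mean, comps = pca_fit(gallery_feats, k=5)
gallery_proj = project(gallery_feats, mean, comps)

probe = gallery_feats[3] + 0.01 * rng.normal(size=64)  # noisy copy of sample 3
probe_proj = project(probe[None, :], mean, comps)[0]
match = nearest(probe_proj, gallery_proj)     # recovers index 3
```

The dimensions (64-d features, 5 components) are illustrative only.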


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Huma Qayyum ◽  
Muhammad Majid ◽  
Syed Muhammad Anwar ◽  
Bilal Khan

Humans use facial expressions to convey personal feelings, and these expressions need to be recognized automatically to design control and interactive applications. Accurate feature extraction is one of the key steps in an automatic facial expression recognition system. Current frequency-domain facial expression recognition systems have not fully exploited facial elements and muscle movements for recognition. In this paper, the stationary wavelet transform (SWT) is used to extract features for facial expression recognition because of its good localization characteristics in both the spectral and spatial domains. More specifically, a combination of the horizontal and vertical subbands of the stationary wavelet transform is used, as these subbands contain the muscle-movement information for the majority of facial expressions. Feature dimensionality is further reduced by applying the discrete cosine transform to these subbands. The selected features are then passed to a feed-forward neural network trained with the backpropagation algorithm. Average recognition rates of 98.83% and 96.61% are achieved on the JAFFE and CK+ datasets, respectively, and an accuracy of 94.28% is achieved on a locally recorded MS-Kinect dataset. The proposed technique proves very promising for facial expression recognition when compared with other state-of-the-art techniques.
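The pipeline of detail subbands followed by DCT reduction can be illustrated with a single-level undecimated Haar transform; this is a simplified stand-in (the paper's wavelet choice and decomposition depth are not specified here), using circular boundary handling and a naive DCT-II:

```python
import numpy as np

def haar_swt_details(img):
    """Single-level undecimated Haar transform: return the two
    detail subbands (no downsampling, circular boundaries)."""
    lo = lambda a, ax: (a + np.roll(a, -1, axis=ax)) / 2.0
    hi = lambda a, ax: (a - np.roll(a, -1, axis=ax)) / 2.0
    lh = hi(lo(img, 1), 0)   # smooth horizontally, differentiate vertically
    hl = lo(hi(img, 1), 0)   # differentiate horizontally, smooth vertically
    return lh, hl

def dct2_lowfreq(x, k):
    """Naive 2-D DCT-II on a square array; keep the k x k
    low-frequency corner as the reduced feature."""
    n = x.shape[0]
    m = np.cos(np.pi * np.outer(np.arange(n), np.arange(n) + 0.5) / n)
    coeffs = m @ x @ m.T
    return coeffs[:k, :k].ravel()

img = np.outer(np.arange(8.0), np.ones(8))   # horizontal stripes (toy image)
lh, hl = haar_swt_details(img)
feats = np.concatenate([dct2_lowfreq(lh, 4), dct2_lowfreq(hl, 4)])
```

For the horizontal-stripe toy image, all the detail energy lands in `lh` and `hl` is identically zero, showing how the two subbands separate directional structure.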


Information ◽  
2019 ◽  
Vol 10 (12) ◽  
pp. 375 ◽  
Author(s):  
Yingying Wang ◽  
Yibin Li ◽  
Yong Song ◽  
Xuewen Rong

As an important part of emotion research, facial expression recognition is a necessary component of human–machine interfaces. A facial expression recognition system generally comprises face detection, feature extraction, and feature classification. Although traditional machine learning methods have achieved great success, most of them are computationally complex and lack the ability to extract comprehensive, abstract features. Deep learning-based methods can achieve higher recognition rates for facial expressions, but they require large numbers of training samples and tuning parameters as well as demanding hardware. To address these problems, this paper proposes a method that combines features extracted by a convolutional neural network (CNN) with a C4.5 classifier to recognize facial expressions, which both addresses the incompleteness of handcrafted features and avoids the high hardware requirements of deep learning models. To counter the overfitting and weak generalization of a single classifier, a random forest is also applied. Meanwhile, this paper makes several improvements to the C4.5 classifier and the traditional random forest in the course of the experiments. Extensive experiments demonstrate the effectiveness and feasibility of the proposed method.
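The criterion that distinguishes C4.5 from its predecessor ID3 is the gain ratio: information gain normalised by the split's own entropy. The paper's specific improvements to C4.5 are not reproduced here; this is a sketch of the baseline criterion on a toy categorical attribute:

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels):
    """C4.5 split criterion: information gain divided by the
    split information, so many-valued attributes are not favoured."""
    n = len(labels)
    subsets = {}
    for v, y in zip(values, labels):
        subsets.setdefault(v, []).append(y)
    gain = entropy(labels) - sum(
        len(s) / n * entropy(s) for s in subsets.values())
    split_info = entropy(values)
    return gain / split_info if split_info > 0 else 0.0

# a perfectly predictive attribute scores 1.0; an uninformative one scores 0.0
good = gain_ratio([0, 0, 1, 1], ['a', 'a', 'b', 'b'])
bad = gain_ratio([0, 1, 0, 1], ['a', 'a', 'b', 'b'])
```

In a random forest built over CNN features, each tree would apply this criterion to a random subset of feature dimensions at every split.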


Author(s):  
NHAN THI CAO ◽  
AN HOA TON-THAT ◽  
HYUNG IL CHOI

Facial expression recognition has been researched extensively in recent years because of its applications in intelligent communication systems. Many methods have been developed that extract Local Binary Pattern (LBP) features and combine them with different classification techniques to improve facial expression recognition. In this work, we propose a novel method for recognizing facial expressions based on Local Binary Pattern features and a Support Vector Machine, with two effective improvements: the first is the preprocessing step, and the second is the method of dividing face images into non-overlapping square regions for extracting LBP features. The method was evaluated on three databases of different sizes: small (213 images), medium (2040 images), and large (5130 images). Experimental results show the effectiveness of our method, which obtains a markedly better recognition rate than other methods.
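Region-based LBP representations of this kind split the image into a grid, histogram the LBP codes per region, and concatenate the histograms so that spatial layout is preserved. A sketch of that step, operating on a precomputed map of LBP codes (the grid size here is an arbitrary illustration, not the paper's configuration):

```python
import numpy as np

def regional_histograms(code_map, grid=3, bins=256):
    """Split an LBP code map into a grid of non-overlapping square
    regions and concatenate the per-region normalised histograms
    into one feature vector."""
    h, w = code_map.shape
    rh, rw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            region = code_map[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            hist, _ = np.histogram(region, bins=bins, range=(0, bins))
            feats.append(hist / region.size)   # each region sums to 1
    return np.concatenate(feats)

rng = np.random.default_rng(0)
code_map = rng.integers(0, 256, size=(48, 48))   # stand-in for LBP codes
features = regional_histograms(code_map, grid=3)  # 9 regions x 256 bins
```

The resulting vector would then be fed to the SVM classifier.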


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on subtle facial movements. Traditional methods often suffer from overfitting or incomplete information owing to insufficient data and manual feature selection. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), attends both to the overall features of the face and to the trends of its key parts. Video data are processed first: the ensemble of regression trees (ERT) method is used to obtain the overall contour of the face, and an attention model is then used to pick out the parts of the face that are most responsive to expressions. The combined effect of these two methods yields what can be called a local feature map. The video data are then fed to the MC-DCN, which contains parallel sub-networks: while the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selection of key parts better captures the changes in facial expressions brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. Experimental results show that MC-DCN achieves recognition rates of 95%, 78.6%, and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

Improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset, a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition tasks under various real-world constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces are detected in each frame of a video sequence, and the corresponding face ROI (region of interest) is extracted to obtain the face images; the face images in each frame are then aligned using the positions of the facial feature points. Second, the aligned face images are fed into a residual neural network to extract the spatial features of the corresponding facial expressions, and these spatial features are passed to the hybrid attention module to obtain fused facial expression features. Finally, the fused features are fed into a gated recurrent unit (GRU) to extract the temporal features of facial expressions, and the temporal features are passed to a fully connected layer to classify and recognize the expressions. Experiments on the CK+ (Extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yielded recognition accuracies of 98.46%, 87.31%, and 53.44%, respectively. The proposed method thus achieves not only performance competitive with state-of-the-art methods but also a greater than 2% improvement on the AFEW dataset, demonstrating strong facial expression recognition in natural environments.
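The temporal stage described here is a standard GRU. As a sketch of what one GRU step computes over a sequence of per-frame fused features, the following uses tiny dimensions and random weights (purely illustrative, not the authors' trained model; biases are omitted):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    z = sigmoid(Wz @ x + Uz @ h)            # how much of the new candidate to take
    r = sigmoid(Wr @ x + Ur @ h)            # how much history feeds the candidate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde        # blend old state and candidate

rng = np.random.default_rng(1)
d_in, d_h = 4, 3                            # toy feature and state sizes
Wz, Wr, Wh = (0.1 * rng.normal(size=(d_h, d_in)) for _ in range(3))
Uz, Ur, Uh = (0.1 * rng.normal(size=(d_h, d_h)) for _ in range(3))

h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):        # a 5-frame feature sequence
    h = gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
```

The final state `h` is what a fully connected layer would classify; because the candidate is tanh-bounded and the update is a convex blend, every component of `h` stays in (-1, 1).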


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Yusra Khalid Bhatti ◽  
Afshan Jamil ◽  
Nudrat Nida ◽  
Muhammad Haroon Yousaf ◽  
Serestina Viriri ◽  
...  

Classroom communication involves teachers' behavior and students' responses. Extensive research has been done on the analysis of students' facial expressions, but the impact of instructors' facial expressions is still an unexplored area of research. Facial expression recognition has the potential to predict the impact of a teacher's emotions in a classroom environment. Intelligent assessment of instructor behavior during lecture delivery not only might improve the learning environment but could also save the time and resources consumed by manual assessment strategies. To address the issue of manual assessment, we propose an instructor facial expression recognition approach for the classroom using a feedforward learning model. First, faces are detected in the acquired lecture videos and key frames are selected, discarding all redundant frames for effective high-level feature extraction. Then, deep features are extracted using multiple convolutional neural networks, along with parameter tuning, and are fed to a classifier. For fast learning and good generalization, a regularized extreme learning machine (RELM) classifier is employed, which classifies five different instructor expressions in the classroom. Experiments are conducted on a newly created instructor facial expression dataset recorded in classroom environments, plus three benchmark facial datasets: Cohn–Kanade, the Japanese Female Facial Expression (JAFFE) dataset, and the Facial Expression Recognition 2013 (FER2013) dataset. Furthermore, the proposed method is compared with state-of-the-art techniques, traditional classifiers, and convolutional neural models. Experimental results indicate significant performance gains in accuracy, F1-score, and recall.
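A regularized ELM has a simple closed form: the hidden layer is random and fixed, and only the output weights are solved, via ridge regression, with no backpropagation. A minimal sketch on synthetic two-class data (dimensions, regularization constant, and data are illustrative, not the paper's configuration):

```python
import numpy as np

def relm_train(X, Y, n_hidden=50, lam=1e-2, seed=0):
    """Regularized ELM: random fixed hidden layer, then a ridge
    solve for the output weights beta = (H'H + lam*I)^-1 H'Y."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def relm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy two-class problem with one-hot targets
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.3, size=(30, 5)),
               rng.normal(+2, 0.3, size=(30, 5))])
Y = np.vstack([np.tile([1.0, 0.0], (30, 1)),
               np.tile([0.0, 1.0], (30, 1))])
W, b, beta = relm_train(X, Y)
pred = relm_predict(X, W, b, beta).argmax(axis=1)
```

Because training reduces to one linear solve, the "fast learning" claim of ELM-style classifiers follows directly; the regularizer `lam` supplies the generalization control.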


2021 ◽  
Vol 9 (5) ◽  
pp. 1141-1152
Author(s):  
Muazu Abdulwakil Auma ◽  
Eric Manzi ◽  
Jibril Aminu

Facial recognition is integral and essential in today's society, and recognizing emotions from facial expressions is becoming increasingly common. This paper provides an analytical overview of databases of facial expression video data and of several approaches to recognizing emotions from facial expressions, covering the three main image analysis stages: pre-processing, feature extraction, and classification. The paper presents both deep learning approaches using deep neural networks and traditional methods for recognizing human emotions from visual facial features, along with the current results of some existing algorithms. In reviewing the scientific and technical literature, the focus was mainly on sources containing theoretical and research information on the methods under consideration and on comparisons of traditional techniques with deep-neural-network-based methods supported by experimental research. This analysis of methods and algorithms for analyzing and recognizing facial expressions, and of worldwide research results, shows that traditional methods of classifying facial expressions are inferior to artificial neural networks in both speed and accuracy. The review's main contributions provide a general understanding of modern approaches to facial expression recognition, allowing new researchers to grasp its main components and trends. A comparison of research results shows that combining traditional approaches with deep-neural-network-based approaches yields better classification accuracy; nevertheless, the best-performing classification methods are artificial neural networks.


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6438
Author(s):  
Chiara Filippini ◽  
David Perpetuini ◽  
Daniela Cardone ◽  
Arcangelo Merla

An intriguing challenge in the human–robot interaction field is the prospect of endowing robots with emotional intelligence to make interaction more genuine, intuitive, and natural. A crucial aspect of achieving this goal is the robot's capability to infer and interpret human emotions. Thanks to its design and open programming platform, the NAO humanoid robot is one of the most widely used agents for human interaction. As in person-to-person communication, facial expressions are the privileged channel for recognizing the interlocutor's emotional state. Although NAO is equipped with a facial expression recognition module, specific use cases may require additional features and affective computing capabilities that are not currently available. This study proposes a highly accurate convolutional-neural-network-based facial expression recognition model that further enhances the NAO robot's awareness of human facial expressions and provides the robot with the capability to detect an interlocutor's arousal level. The model, tested during human–robot interactions, was 91% and 90% accurate in recognizing happy and sad facial expressions, respectively; 75% accurate in recognizing surprised and scared expressions; and less accurate in recognizing neutral and angry expressions. Finally, the model was successfully integrated into the NAO SDK, allowing high-performing facial expression classification with an inference time of 0.34 ± 0.04 s.

