An Effective Combination of Textures and Wavelet Features for Facial Expression Recognition

2021 ◽  
Vol 11 (3) ◽  
pp. 7172-7176
Author(s):  
S. M. Hassan ◽  
A. Alghamdi ◽  
A. Hafeez ◽  
M. Hamdi ◽  
I. Hussain ◽  
...  

Accurate facial expression recognition requires an appropriate combination of a classifier and adequate feature extraction; if inadequate features are used, even the best classifier can fail to recognize expressions accurately. In this paper, a new fusion technique for human facial expression recognition is proposed. A combination of Discrete Wavelet Transform (DWT), Local Binary Pattern (LBP), and Histogram of Oriented Gradients (HoG) feature extraction techniques was used to investigate six human emotions. K-Nearest Neighbors (KNN), Decision Tree (DT), Multi-Layer Perceptron (MLP), and Random Forest (RF) were chosen for classification. These algorithms were implemented and tested on the Static Facial Expressions in the Wild (SFEW) dataset. The proposed approach achieved 87% accuracy, which is higher than that of the individual algorithms.
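
A minimal sketch of this kind of feature-fusion pipeline is shown below, wiring DWT, LBP, and HoG features into the four classifiers with scikit-image, PyWavelets, and scikit-learn. It is not the authors' implementation; the inputs `images` (grayscale face crops) and `labels`, and all parameter values, are assumptions.

```python
# Sketch of DWT + LBP + HoG feature fusion evaluated with several classifiers.
# Assumes `images` is a list of grayscale face crops and `labels` their emotion classes.
import numpy as np
import pywt
from skimage.feature import local_binary_pattern, hog
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

def extract_features(img):
    # DWT: keep the low-frequency approximation coefficients.
    approx, _ = pywt.dwt2(img, "haar")
    # LBP: histogram of uniform local binary patterns.
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # HoG: gradient-orientation histograms over cells.
    hog_vec = hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([approx.ravel(), lbp_hist, hog_vec])

X = np.array([extract_features(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=0)

for clf in (KNeighborsClassifier(), DecisionTreeClassifier(),
            MLPClassifier(max_iter=500), RandomForestClassifier()):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, clf.score(X_test, y_test))
```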

Author(s):  
Yi Ji ◽  
Khalid Idrissi

This paper proposes an automatic facial expression recognition system that uses new methods for both face detection and feature extraction. Considering that facial expressions involve a small set of muscles and limited ranges of motion, the system recognizes expressions from these changes in video sequences. First, the differences between neutral and emotional states are detected, and faces are automatically located from the changing facial regions. Then, LBP features are computed and AdaBoost is used to find the most important features for each expression on the essential facial parts. Finally, an SVM with a polynomial kernel is used to classify the expressions. The method is evaluated on the JAFFE and MMI databases, and its performance is better than that of other automatically or manually annotated systems.
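
The LBP-plus-AdaBoost feature selection step followed by a polynomial-kernel SVM could look roughly like the sketch below. It is only an illustration under stated assumptions: `faces` and `y` are assumed inputs, AdaBoost's impurity-based feature importances stand in for the paper's boosting-based selection, and the grid and feature counts are arbitrary.

```python
# Sketch: regional LBP histograms, AdaBoost-based feature ranking, polynomial-kernel SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

def lbp_region_histograms(img, grid=(7, 7), P=8, R=1):
    # Divide the face into a grid and concatenate per-region LBP histograms.
    lbp = local_binary_pattern(img, P, R, method="uniform")
    h, w = lbp.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                        j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(patch, bins=P + 2, range=(0, P + 2), density=True)
            feats.append(hist)
    return np.concatenate(feats)

X = np.array([lbp_region_histograms(f) for f in faces])   # `faces`: grayscale crops

# AdaBoost on decision stumps ranks the most discriminative LBP bins.
booster = AdaBoostClassifier(n_estimators=200).fit(X, y)
selected = np.argsort(booster.feature_importances_)[-100:]

# Polynomial-kernel SVM trained on the selected features.
svm = SVC(kernel="poly", degree=3).fit(X[:, selected], y)
```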


2014 ◽  
Vol 1049-1050 ◽  
pp. 1522-1525
Author(s):  
Wang Ju ◽  
Ding Rui ◽  
Chun Yan Nie

In today's highly developed age of information exchange, communication is an essential part of interpersonal interaction. As a carrier of information, facial expression is rich in human behavioral information. Facial expression recognition draws on many fields and is a relatively new topic in pattern recognition. This paper mainly studies facial feature extraction based on MATLAB: expression features are extracted from a large number of facial images, so that different facial expressions can be classified more accurately.


2020 ◽  
pp. 59-69
Author(s):  
Walid Mahmod ◽  
Jane Stephan ◽  
Anmar Razzak

Automatic analysis of facial expressions is rapidly becoming an area of intense interest in the computer vision and artificial intelligence research communities. In this paper, an approach is presented for facial expression recognition of the six basic prototype expressions (i.e., joy, surprise, anger, sadness, fear, and disgust) based on the Facial Action Coding System (FACS). The approach utilizes a combination of different transforms (the Walidlet hybrid transform), consisting of the Fast Fourier Transform, the Radon transform, and the multiwavelet transform, for feature extraction. A Kohonen Self-Organizing Feature Map (SOFM) is then used to cluster the patterns based on the features obtained from this hybrid transform. The results show that the method has very good accuracy in facial expression recognition, and it has several other promising features. The approach provides a new method of feature extraction that overcomes problems caused by illumination and by faces that vary considerably from one individual to another due to age, ethnicity, gender, and cosmetics; it also does not require precise normalization or lighting equalization. An average clustering accuracy of 94.8% is achieved for the six basic expressions, with several different databases used to test the method.
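
A rough sketch of a hybrid-transform feature vector clustered with a Kohonen map is given below. Several substitutions are assumptions: a standard discrete wavelet transform stands in for the multiwavelet transform, and the SOM comes from the third-party MiniSom package rather than the authors' code; `images` is an assumed list of grayscale face crops.

```python
# Sketch: FFT + Radon + wavelet features clustered with a Kohonen self-organizing map.
import numpy as np
import pywt
from minisom import MiniSom
from skimage.transform import radon

def hybrid_features(img):
    fft_mag = np.abs(np.fft.fft2(img))                             # Fourier magnitude spectrum
    sinogram = radon(img, theta=np.arange(0, 180, 10), circle=False)  # Radon projections
    approx, _ = pywt.dwt2(img, "db2")                              # wavelet approximation (stand-in)
    return np.concatenate([fft_mag.ravel(), sinogram.ravel(), approx.ravel()])

X = np.array([hybrid_features(img) for img in images])

# 6x6 Kohonen map; each unit ends up representing a cluster of similar expressions.
som = MiniSom(6, 6, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, num_iteration=1000)
clusters = [som.winner(x) for x in X]   # winning map coordinates per sample
```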


2021 ◽  
Vol 2021 ◽  
pp. 1-20
Author(s):  
Ebenezer Owusu ◽  
Jacqueline Asor Kumi ◽  
Justice Kwame Appati

Facial expression is an important form of nonverbal communication; it has been noted that 55% of what humans communicate is expressed through facial expressions. Facial expressions have applications in diverse fields including medicine, security, gaming, and even business enterprises. Automatic facial expression recognition is therefore currently a very active research area that attracts substantial funding, which makes it important to understand its trends well. This study aims to review selected published works in the domain and to conduct an analysis to determine the most common and useful algorithms employed. We selected published works from 2010 to 2021 and extracted, analyzed, and summarized the findings based on the most used techniques for feature extraction, feature selection, validation, databases, and classification. The results indicate strongly that the local binary pattern (LBP), principal component analysis (PCA), the support vector machine (SVM), CK+, and 10-fold cross-validation are the most widely used feature extraction method, feature selection method, classifier, database, and validation method, respectively. In line with these findings, the study provides recommendations, specifically for new researchers with little or no background, as to which methods they can employ and strive to improve.
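
The most-used combination identified by the review (LBP features, PCA, an SVM, and 10-fold cross-validation) can be assembled as a simple baseline with scikit-learn, as in the sketch below. The inputs `faces` and `labels` and the parameter choices are assumptions, not taken from any of the reviewed papers.

```python
# Baseline combining the review's most-used components: LBP, PCA, SVM, 10-fold CV.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def lbp_histogram(img, P=8, R=1):
    lbp = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

X = np.array([lbp_histogram(f) for f in faces])            # `faces`: grayscale crops
pipeline = make_pipeline(PCA(n_components=0.95), SVC(kernel="rbf"))
scores = cross_val_score(pipeline, X, labels, cv=10)       # 10-fold cross-validation
print("mean accuracy:", scores.mean())
```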


2021 ◽  
Vol 03 (02) ◽  
pp. 204-208
Author(s):  
Ielaf O. Abdul-Majjed DAHL

In the past decade, the field of facial expression recognition has attracted the attention of scientists, as it plays an important role in enhancing interaction between humans and computers. Facial expression recognition is not a simple machine learning problem, because expressions differ from one person to another and depend on context, background, and lighting. The objective of the current work was to attain the highest classification rate with computer vision algorithms for two facial expressions ("happy" and "sad"). This was accomplished through several phases, starting from image pre-processing, through Gabor filter feature extraction, and then selection of the most important characteristics using mutual information. The expression was finally recognized by a support vector classifier. The system was trained and tested on the Cohn-Kanade and JAFFE databases, achieving recognition rates of 81.09% and 92.85%, respectively.
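
A minimal sketch of a Gabor-filter, mutual-information, SVM pipeline of this kind follows. The filter-bank parameters, the number of selected features, and the inputs `faces` and `y` are assumptions; this is not the paper's implementation.

```python
# Sketch: Gabor filter bank -> mutual-information feature selection -> SVM classifier.
import cv2
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC

def gabor_features(img):
    # Bank of Gabor filters at several orientations; downsampled response maps.
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 8):
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        response = cv2.filter2D(img, cv2.CV_32F, kernel)
        feats.append(cv2.resize(response, (16, 16)).ravel())
    return np.concatenate(feats)

X = np.array([gabor_features(f) for f in faces])               # `faces`: grayscale crops
selector = SelectKBest(mutual_info_classif, k=500).fit(X, y)   # mutual-information selection
svm = SVC(kernel="rbf").fit(selector.transform(X), y)          # support vector classifier
```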


Electronics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 324 ◽  
Author(s):  
Ridha Bendjillali ◽  
Mohammed Beladgham ◽  
Khaled Merit ◽  
Abdelmalik Taleb-Ahmed

Facial expression recognition (FER) has become one of the most important fields of research in pattern recognition. In this paper, we propose a method for identifying people's emotions through their facial expressions. Being robust against illumination changes, this method combines four steps: the Viola–Jones face detection algorithm, facial image enhancement using the contrast limited adaptive histogram equalization (CLAHE) algorithm, the discrete wavelet transform (DWT), and a deep convolutional neural network (CNN). Viola–Jones is used to locate the face and facial parts; the facial image is enhanced using CLAHE; facial feature extraction is then done using the DWT; and finally, the extracted features are used directly to train the CNN to classify the facial expressions. Our experimental work was performed on the CK+ database and the JAFFE face database. The accuracies obtained with this network were 96.46% and 98.43%, respectively.
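
The four-step pipeline could be prototyped roughly as below with OpenCV, PyWavelets, and PyTorch. The small CNN here is purely illustrative and not the network described in the paper; image sizes and detector parameters are assumptions.

```python
# Sketch: Viola-Jones detection -> CLAHE enhancement -> DWT features -> small CNN.
import cv2
import numpy as np
import pywt
import torch
import torch.nn as nn

detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def preprocess(gray_frame):
    x, y, w, h = detector.detectMultiScale(gray_frame, 1.1, 5)[0]   # Viola-Jones face box
    face = cv2.resize(gray_frame[y:y + h, x:x + w], (96, 96))
    face = clahe.apply(face)                                        # contrast enhancement
    approx, _ = pywt.dwt2(face.astype(np.float32) / 255.0, "haar")  # DWT approximation (48x48)
    return torch.from_numpy(approx.astype(np.float32)).unsqueeze(0) # 1x48x48 tensor

class SmallCNN(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Linear(32 * 12 * 12, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
logits = model(preprocess(gray_frame).unsqueeze(0))   # add batch dimension
```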


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on subtle facial movements. Traditional methods often suffer from overfitting or incomplete information due to insufficient data and manual feature selection. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), maintains focus on the overall features of the face and the behavior of its key parts. Processing of the video data is the first stage: the method of ensembles of regression trees (ERT) is used to obtain the overall contour of the face, and an attention model is used to pick out the parts of the face that are most affected by expressions. The combination of these two methods yields an image that can be regarded as a local feature map. The video data are then fed to MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selection of key parts allows the network to better learn the changes brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. The experimental results show that MC-DCN achieves recognition rates of 95%, 78.6%, and 78.3% on the three datasets SAVEE, MMI, and edited GEMEP, respectively.
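
The first stage, locating the facial contour with an ensemble-of-regression-trees (ERT) landmark detector and cropping expression-sensitive key parts, could be sketched as below using dlib's ERT implementation. The landmark model file path, patch sizes, and choice of regions are assumptions, and MC-DCN itself is not reproduced here.

```python
# Sketch: ERT facial landmarks (dlib) plus crops of expression-sensitive key parts.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # ERT model (assumed path)

def key_part_patches(gray_frame, size=32):
    rect = detector(gray_frame, 1)[0]
    shape = predictor(gray_frame, rect)
    points = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)])
    patches = []
    # Left eye, right eye, and mouth regions of the 68-point landmark layout.
    for center in (points[36:42].mean(0), points[42:48].mean(0), points[48:68].mean(0)):
        cx, cy = center.astype(int)
        patches.append(gray_frame[cy - size:cy + size, cx - size:cx + size])
    return points, patches   # overall facial contour plus local key-part patches
```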


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

The AFEW (Acted Facial Expressions in the Wild) dataset, a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition under various constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, the faces in each frame of a video sequence are detected, and the corresponding face ROI (region of interest) is extracted to obtain the face images; the face images in each frame are then aligned based on the positions of the facial feature points. Second, the aligned face images are input to a residual neural network to extract the spatial features of the facial expressions, and the spatial features are input to the hybrid attention module to obtain fused facial expression features. Finally, the fused features are input to a gated recurrent unit to extract the temporal features of the facial expressions, and the temporal features are input to a fully connected layer to classify and recognize the expressions. Experiments using the CK+ (Extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets achieved recognition accuracies of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance comparable to state-of-the-art methods but also improves on them by more than 2% on the AFEW dataset, confirming its effectiveness for facial expression recognition in natural environments.
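
The spatial-then-temporal cascade can be outlined in PyTorch as below: per-frame spatial features from a residual network, temporal modelling with a gated recurrent unit, and a fully connected classifier. The hybrid attention module is omitted, and the backbone choice and layer sizes are assumptions rather than the paper's configuration.

```python
# Sketch: ResNet spatial features per frame -> GRU temporal features -> FC classifier.
import torch
import torch.nn as nn
from torchvision import models

class SpatialTemporalFER(nn.Module):
    def __init__(self, num_classes=7, hidden=256):
        super().__init__()
        backbone = models.resnet18()
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop final FC -> 512-d features
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clips):                              # clips: (batch, time, 3, 224, 224)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).flatten(1)   # (b*t, 512) spatial features
        _, h = self.gru(feats.view(b, t, -1))              # temporal features over the clip
        return self.fc(h[-1])                              # expression logits

model = SpatialTemporalFER()
logits = model(torch.randn(2, 16, 3, 224, 224))            # two clips of 16 aligned face frames
```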


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Yusra Khalid Bhatti ◽  
Afshan Jamil ◽  
Nudrat Nida ◽  
Muhammad Haroon Yousaf ◽  
Serestina Viriri ◽  
...  

Classroom communication involves teachers' behavior and students' responses. Extensive research has been done on the analysis of students' facial expressions, but the impact of instructors' facial expressions is still a largely unexplored area of research. Facial expression recognition has the potential to predict the impact of a teacher's emotions in a classroom environment. Intelligent assessment of instructor behavior during lecture delivery not only might improve the learning environment but also could save the time and resources utilized in manual assessment strategies. To address the issue of manual assessment, we propose an instructor facial expression recognition approach for the classroom using a feedforward learning model. First, the face is detected from the acquired lecture videos and key frames are selected, discarding all redundant frames for effective high-level feature extraction. Then, deep features are extracted using multiple convolutional neural networks, along with parameter tuning, and are fed to a classifier. For fast learning and good generalization, a regularized extreme learning machine (RELM) classifier is employed, which classifies five different expressions of the instructor within the classroom. Experiments are conducted on a newly created instructor facial expression dataset in classroom environments plus three benchmark facial datasets, i.e., Cohn–Kanade, the Japanese Female Facial Expression (JAFFE) dataset, and the Facial Expression Recognition 2013 (FER2013) dataset. Furthermore, the proposed method is compared with state-of-the-art techniques, traditional classifiers, and convolutional neural models. The experimental results indicate significant performance gains on metrics such as accuracy, F1-score, and recall.
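
A regularized extreme learning machine is essentially a random nonlinear hidden layer followed by a ridge-regularized least-squares readout. The sketch below shows that idea in plain NumPy; the hidden-layer size, penalty C, and the precomputed deep-feature inputs `X_train`/`X_test` are assumptions, not the paper's settings.

```python
# Minimal regularized extreme learning machine (RELM) over precomputed deep features.
import numpy as np

class RELM:
    def __init__(self, n_hidden=1000, C=1.0, seed=0):
        self.n_hidden, self.C = n_hidden, C
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)              # random nonlinear projection

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        Y = np.eye(y.max() + 1)[y]                        # one-hot targets
        # Regularized least squares: beta = (H^T H + I / C)^(-1) H^T Y
        self.beta = np.linalg.solve(H.T @ H + np.eye(self.n_hidden) / self.C, H.T @ Y)
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

clf = RELM().fit(X_train, y_train)                        # X_*: deep CNN feature vectors (assumed)
accuracy = (clf.predict(X_test) == y_test).mean()
```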


2021 ◽  
Vol 9 (5) ◽  
pp. 1141-1152
Author(s):  
Muazu Abdulwakil Auma ◽  
Eric Manzi ◽  
Jibril Aminu

Facial recognition is integral and essential in today's society, and the recognition of emotions based on facial expressions is becoming more common. This paper provides an analytical overview of databases of facial expression video data and of several approaches to recognizing emotions from facial expressions, covering the three main image analysis stages: pre-processing, feature extraction, and classification. The paper presents both approaches based on deep learning using deep neural networks and traditional approaches to recognizing human emotions from visual facial features, and reports the current results of some existing algorithms. When reviewing the scientific and technical literature, the focus was mainly on sources containing theoretical and research information on the methods under consideration and on comparing traditional techniques with methods based on deep neural networks supported by experimental research. An analysis of the scientific and technical literature describing methods and algorithms for analyzing and recognizing facial expressions, and of worldwide research results, has shown that traditional methods of classifying facial expressions are inferior in speed and accuracy to artificial neural networks. This review's main contributions provide a general understanding of modern approaches to facial expression recognition, which will allow new researchers to understand the main components and trends in the field. A comparison of research results has shown that combining traditional approaches with approaches based on deep neural networks yields better classification accuracy; however, the best classification methods remain artificial neural networks.

