Automatic Multiface Expression Recognition Using Convolutional Neural Network

Author(s):  
Padmapriya K.C. ◽  
Leelavathy V. ◽  
Angelin Gladston

Human facial expressions convey a great deal of information visually. Facial expression recognition plays a crucial role in human-machine interaction. Automatic facial expression recognition systems have many applications in human behavior understanding, detection of mental disorders, and synthesis of human expressions. Recognizing facial expressions by computer with a high recognition rate is still a challenging task. Most methods in the literature for automatic facial expression recognition are based on geometry and appearance. Facial expression recognition is usually performed in four stages: pre-processing, face detection, feature extraction, and expression classification. In this paper we apply various deep learning methods to classify the seven key human emotions: anger, disgust, fear, happiness, sadness, surprise, and neutrality. The facial expression recognition system developed is experimentally evaluated on the FER dataset and achieves good accuracy.
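FER-style datasets such as FER2013 store each 48×48 grayscale face as a space-separated pixel string with an integer emotion label. A minimal sketch of the pre-processing stage described above, assuming that storage format (the `preprocess` helper and the `EMOTIONS` ordering are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutrality"]

def preprocess(pixel_string, label, size=48):
    """Turn a FER-style space-separated pixel string into a
    normalized float image and a one-hot label vector."""
    pixels = np.array([int(p) for p in pixel_string.split()],
                      dtype=np.float32)
    img = pixels.reshape(size, size) / 255.0       # scale to [0, 1]
    one_hot = np.zeros(len(EMOTIONS), dtype=np.float32)
    one_hot[label] = 1.0
    return img, one_hot
```

The normalized image and one-hot label would then feed the CNN classification stage.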

Author(s):  
Fadi Dornaika ◽  
Bogdan Raducanu

Facial expression plays an important role in the cognition of human emotions (Fasel, 2003 & Yeasin, 2006). The recognition of facial expressions in image sequences with significant head movement is a challenging problem. It is required by many applications such as human-computer interaction and computer graphics animation (Cañamero, 2005 & Picard, 2001). To classify expressions in still images, many techniques have been proposed, such as neural nets (Tian, 2001), Gabor wavelets (Bartlett, 2004), and active appearance models (Sung, 2006). Recently, more attention has been given to modeling facial deformation in dynamic scenarios. Still-image classifiers use feature vectors related to a single frame to perform classification, whereas temporal classifiers try to capture the temporal pattern in the sequence of feature vectors related to each frame, such as Hidden Markov Model based methods (Cohen, 2003, Black, 1997 & Rabiner, 1989) and Dynamic Bayesian Networks (Zhang, 2005). The main contributions of the paper are as follows. First, we propose an efficient recognition scheme based on the detection of keyframes in videos, where the recognition is performed using a temporal classifier. Second, we use the proposed method to extend the human-machine interaction functionality of a robot whose response is generated according to the user's recognized facial expression. Our proposed approach has several advantages. First, unlike most expression recognition systems that require a frontal view of the face, our system is view- and texture-independent. Second, its learning phase is simple compared to other techniques (e.g., Hidden Markov Models and Active Appearance Models): we only need to fit second-order auto-regressive models to sequences of facial actions. As a result, even when the imaging conditions change, the learned auto-regressive models need not be recomputed. The rest of the paper is organized as follows.
Section 2 summarizes our appearance-based 3D face tracker, used to track the 3D head pose as well as the facial actions. Section 3 describes the proposed facial expression recognition based on the detection of keyframes. Section 4 provides some experimental results. Section 5 describes the proposed human-machine interaction application based on the developed facial expression recognition scheme.
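The learning phase described above reduces to fitting second-order auto-regressive (AR) models to facial-action time series. A hedged sketch of such a fit by ordinary least squares (the `fit_ar2` helper is illustrative, not the paper's code):

```python
import numpy as np

def fit_ar2(x):
    """Fit a second-order autoregressive model
    x[t] = a1 * x[t-1] + a2 * x[t-2] + e[t] by least squares."""
    X = np.column_stack([x[1:-1], x[:-2]])  # lagged regressors
    y = x[2:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # (a1, a2)
```

On a sequence generated from known AR(2) dynamics, the estimated coefficients recover the true ones up to noise, which is what makes the learning phase cheap compared to HMM training.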


Author(s):  
YU-YI LIAO ◽  
JZAU-SHENG LIN ◽  
SHEN-CHUAN TAI

In this paper, a facial expression recognition system based on a cerebellar model articulation controller with clustering memory (CMAC-CM) is presented. Firstly, the facial expression features were automatically preprocessed and extracted from still images in the JAFFE database, which contains frontal views of faces. Next, a block of lower-frequency DCT coefficients was obtained by subtracting a neutral image from a given expression image and rearranged into input vectors fed into the CMAC-CM, which can rapidly produce output using nonlinear mapping with a look-up table in the training and recognition phases. Finally, the experimental results demonstrate recognition rates for various block sizes of lower-frequency coefficients and cluster sizes of weight memory. A mean recognition rate of 92.86% is achieved on the testing images, and the CMAC-CM takes 0.028 seconds per test image in the testing phase.
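The DCT feature step can be sketched as follows: subtract the neutral image, apply a separable 2D DCT, and keep the top-left (low-frequency) block. This is an illustrative reconstruction with an orthonormal DCT-II basis; the `low_freq_features` helper and block size are assumptions, not the authors' implementation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def low_freq_features(expr_img, neutral_img, block=8):
    """Subtract the neutral face, take the 2D DCT, and keep the
    top-left low-frequency block as the feature vector."""
    diff = expr_img.astype(float) - neutral_img.astype(float)
    C = dct_matrix(diff.shape[0])
    coeffs = C @ diff @ C.T          # separable 2D DCT-II
    return coeffs[:block, :block].ravel()
```

For a constant difference image, all energy lands in the DC coefficient, which is why the low-frequency block captures the coarse shape of the expression change.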


2011 ◽  
Vol 121-126 ◽  
pp. 617-621 ◽  
Author(s):  
Chang Yi Kao ◽  
Chin Shyurng Fahn

During the development of the facial expression classification procedure, we evaluated three machine learning methods. We combine adaptive boosting algorithms (ABAs) with classification and regression trees (CARTs), which select weak classifiers and integrate them into a strong classifier automatically. We present a highly automatic facial expression recognition system in which a face detection procedure first detects and locates human faces in image sequences acquired in real environments; we need not label or choose characteristic blocks in advance. In the face detection procedure, geometrical properties are applied to eliminate skin-color regions that do not belong to human faces. In the facial feature extraction procedure, we perform binarization and edge detection only on the proper ranges of the eyes, mouth, and eyebrows to obtain 16 landmarks of a human face, from which 16 characteristic distances are produced to represent an expression. We realize the facial expression classification procedure by employing an ABA to recognize six kinds of expressions. The performance of the system is very satisfactory; its recognition rate exceeds 90%.
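Boosting weak CART classifiers into a strong one, as described above, can be sketched with depth-1 stumps over characteristic-distance features. This is a generic AdaBoost sketch under the assumption that "ABA" denotes adaptive boosting, not the authors' implementation:

```python
import numpy as np

def train_adaboost(X, y, rounds=10):
    """AdaBoost with depth-1 CART stumps; labels y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(d):           # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
    # reweight samples toward the ones the stump got wrong
        err, j, thr, sign = best
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1)
                for a, j, t, s in ensemble)
    return np.sign(score)
```

A multi-class expression classifier would run one such binary ensemble per expression (one-vs-rest) over the 16 characteristic distances.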


Author(s):  
Hai-Duong Nguyen ◽  
Soonja Yeom ◽  
Guee-Sang Lee ◽  
Hyung-Jeong Yang ◽  
In-Seop Na ◽  
...  

Emotion recognition plays an indispensable role in human–machine interaction systems. The process involves finding the interesting facial regions in images and classifying them into one of seven classes: angry, disgust, fear, happy, neutral, sad, and surprise. Although many breakthroughs have been made in image classification, and especially in facial expression recognition, this research area remains challenging in wild sampling environments. In this paper, we use multi-level features in a convolutional neural network for facial expression recognition. Based on our observations, we introduce various network connections to improve the classification task. By combining the proposed network connections, our method achieves competitive results compared to state-of-the-art methods on the FER2013 dataset.
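One common way to combine multi-level features, sketched here in plain NumPy as an illustration, is to pool feature maps from a shallow and a deep layer and concatenate them before the classifier. The pooling-and-concatenation scheme is an assumption for illustration, not necessarily the network connections this paper proposes:

```python
import numpy as np

def gap(feature_map):
    """Global average pooling over the spatial dims of (C, H, W)."""
    return feature_map.mean(axis=(1, 2))

def multi_level_features(shallow, deep):
    """Concatenate pooled features from a shallow and a deep layer,
    so the classifier sees both low-level and high-level cues."""
    return np.concatenate([gap(shallow), gap(deep)])
```

The classifier then receives a single vector whose length is the sum of the two layers' channel counts.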


2018 ◽  
Vol 7 (2.24) ◽  
pp. 348
Author(s):  
Neha . ◽  
Pratistha Mathur

The area of computer vision and machine learning for pattern recognition has witnessed the need for algorithms for different applications such as human-computer interaction, automated access control, and surveillance. Within computer vision, facial expression recognition has attracted researchers' interest. This paper presents a novel feature extraction technique, Gabor-Average-DWT-DCT, for automatic facial expression recognition from a face image, invariant to illumination. Facial emotions exhibit different edge and texture patterns. The Gabor filter can extract the edges and texture patterns of faces, but at the cost of huge dimensionality and high redundancy. The proposed Average-DWT-DCT feature reduction technique reduces this dimensionality and redundancy in order to increase the accuracy of the system. The proposed Gabor-Average-DWT-DCT provides a compact feature vector that reduces the response time of the system compared to existing Gabor-based expression classification. A detailed quantitative analysis shows that the average recognition rate of the proposed technique is better than state-of-the-art results.
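The Gabor stage of such a pipeline convolves the face image with oriented sinusoids under a Gaussian envelope. A minimal kernel sketch (parameter names and defaults are illustrative, not the paper's settings):

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, lam=8.0, sigma=4.0,
                 gamma=0.5, psi=0.0):
    """Real part of a 2D Gabor filter: a sinusoid of wavelength lam
    at orientation theta, under an elliptical Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coords
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lam + psi)
```

A filter bank over several orientations and wavelengths is what produces the huge, redundant feature dimension that the Average-DWT-DCT step then compresses.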


Author(s):  
Kanaparthi Snehitha

Artificial intelligence technology has been trying to bridge the gap between humans and machines. One development in this area is facial recognition, which identifies faces by correlating and verifying the patterns of facial contours; here, face detection is done using the Viola-Jones object detection framework. Facial expression is one of the important aspects of recognizing human emotions, and it also helps to determine interpersonal relations between humans. Automatic facial recognition is now used widely in almost every field, such as marketing, health care, behavioral analysis, and human-machine interaction. Facial expression recognition offers even more than facial recognition: it helps retailers understand their customers, doctors understand their patients, and organizations understand their clients. For expression recognition, we use face landmarks, which are appearance-based features. With the use of an active shape model, LBP (Local Binary Pattern) features are derived from the face landmarks. The operation is carried out by taking pixel values into account, which improves the expression recognition rate. In an experiment using previous methods with 10-fold cross validation on the CK+ database, the achieved accuracy is 89.71%.
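The basic LBP operation referred to above thresholds each 3×3 neighborhood against its center pixel to form an 8-bit code; a minimal sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code of a 3x3 patch: each neighbor, read clockwise
    from the top-left, contributes a bit if >= the center pixel."""
    center = patch[1, 1]
    # clockwise neighbor order starting at the top-left corner
    idx = [(0, 0), (0, 1), (0, 2), (1, 2),
           (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(idx):
        if patch[r, c] >= center:
            code |= 1 << bit
    return code
```

Histograms of these codes over regions around the landmarks then form the texture descriptor fed to the classifier.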

