Performance Improving of ANN with Preprocessing Stage in Human Face Expression Recognition System

In many face recognition systems, face detection is the critical first step. Detecting faces is complex because of the variability across human faces in color, pose, expression, position, and orientation, so various modeling techniques are needed to recognize facial expressions reliably. The proposed system consists of three phases: a facial expression database, pre-processing, and classification. To simulate and assess recognition performance under different variables (network composition, learning patterns, and pre-processing), both the Japanese Female Facial Expression Database (JAFFE) and the Extended Cohn-Kanade Dataset (CK+) are used. The pre-processing approaches compared include face detection, translation, global contrast normalization, and histogram equalization. Significant results were obtained, with 85.52 percent accuracy, particularly in comparison with individual pre-processing phases and with raw data. The results indicate that the ANN classifier produces a satisfactory result with higher accuracy.
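Two of the pre-processing steps named above, global contrast normalization and histogram equalization, can be sketched in plain NumPy. This is an illustrative sketch under standard definitions of these operations, not the paper's actual implementation; the function names are our own:

```python
import numpy as np

def global_contrast_normalize(img, eps=1e-8):
    """Zero-center the image and scale it to unit standard deviation."""
    img = img.astype(np.float64)
    img -= img.mean()
    return img / (img.std() + eps)

def histogram_equalize(img, levels=256):
    """Spread an 8-bit grayscale histogram across the full intensity range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    span = cdf[-1] - cdf_min
    if span == 0:  # constant image: nothing to equalize
        return img.copy()
    lut = np.round((cdf - cdf_min) / span * (levels - 1)).astype(np.uint8)
    return lut[img]
```

Applied to a low-contrast face crop, equalization stretches the used intensity range to [0, 255], while global contrast normalization gives every image the same mean and variance before it reaches the network.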

2019, Vol 8 (2S11), pp. 4047-4051

The automatic detection of facial expressions is an active research topic, given its wide range of applications in human-computer interaction, games, security, and education. However, most recent studies have been conducted in controlled laboratory environments, which do not reflect real-world scenarios. For that reason, a real-time Facial Expression Recognition System (FERS) is proposed in this paper, in which a deep learning approach is applied to enhance the detection of six basic emotions (happiness, sadness, anger, disgust, fear, and surprise) in real-time video streaming. The system is composed of three main components: face detection, face preparation, and facial expression classification. The proposed FERS achieves 65% accuracy, trained on 35,558 face images.
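The three-stage structure described above (detect, prepare, classify) can be expressed as a simple pluggable pipeline. This is a generic skeleton under our own assumptions, not the authors' code; the class and parameter names are hypothetical:

```python
import numpy as np

# The six basic emotions the paper's classifier distinguishes.
EMOTIONS = ["happiness", "sadness", "anger", "disgust", "fear", "surprise"]

class FERPipeline:
    """Chains the three stages of a real-time FERS: detect -> prepare -> classify."""

    def __init__(self, detector, preparer, classifier):
        self.detector = detector      # frame -> list of (x, y, w, h) boxes
        self.preparer = preparer      # face crop -> normalized model input
        self.classifier = classifier  # prepared face -> emotion index

    def process_frame(self, frame):
        """Return a list of (bounding_box, emotion_label) for one video frame."""
        results = []
        for box in self.detector(frame):
            x, y, w, h = box
            face = self.preparer(frame[y:y + h, x:x + w])
            results.append((box, EMOTIONS[self.classifier(face)]))
        return results
```

In a live system the detector would run per frame of the video stream and the classifier would be the trained deep network; here each stage is just a callable so the pipeline can be tested with stubs.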


2013, Vol 2013, pp. 1-8
Author(s): Faisal Ahmed, Emam Hossain

Recognition of human expression from facial images is an interesting research area, which has received increasing attention in recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that performs consistently under changing environments is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (GLTP), a discriminative local texture feature for representing facial expression. The proposed GLTP operator encodes the local texture of an image by computing the gradient magnitudes of the local neighborhood and quantizing those values into three discrimination levels. The location and occurrence information of the resulting micropatterns is then used as the face feature descriptor. The performance of the proposed method has been evaluated on the person-independent face expression recognition task. Experiments with prototypic expression images from the Cohn-Kanade (CK) face expression database validate that the GLTP feature descriptor can effectively encode the facial texture and thus achieves better recognition performance than some well-known appearance-based facial features.
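One plausible reading of the GLTP idea (a sketch under our assumptions, not the paper's exact operator) is: compute a per-pixel gradient magnitude, then ternary-code each 8-neighborhood against the center with a threshold t, splitting the code into upper and lower binary patterns as in local ternary patterns:

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """Approximate the per-pixel gradient magnitude with 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    pad = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def ternary_patterns(mag, t):
    """Three-level quantization of each 8-neighborhood of the gradient map,
    split into the upper (+1) and lower (-1) binary micropatterns."""
    h, w = mag.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros((h - 2, w - 2), dtype=np.int32)
    lower = np.zeros((h - 2, w - 2), dtype=np.int32)
    center = mag[1:h - 1, 1:w - 1]
    for bit, (di, dj) in enumerate(offsets):
        nb = mag[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
        upper |= (nb >= center + t).astype(np.int32) << bit
        lower |= (nb <= center - t).astype(np.int32) << bit
    return upper, lower
```

The descriptor would then be built from histograms of the `upper` and `lower` codes over image regions, capturing the "location and occurrence" information the abstract mentions.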


2013, pp. 1434-1460
Author(s): Ong Chin Ann, Marlene Valerie Lu, Lau Bee Theng

The main purpose of this research is to enhance the communication of the disabled community. The authors of this chapter propose an enhanced interpersonal-human interaction for people with special needs, especially those with physical and communication disabilities. The proposed model comprises automated real-time behaviour monitoring, designed and implemented with ubiquity and affordability in mind to suit the underprivileged. In this chapter, the authors present the prototype, which encapsulates an automated facial expression recognition system for monitoring the disabled, equipped with a feature to send Short Message Service (SMS) notifications. The authors adapted the Viola-Jones face detection algorithm for the face detection stage and implemented a template-matching technique for the expression classification and recognition stage. They tested their model with a few users and achieved satisfactory results. The enhanced real-time behaviour monitoring system is an assistive tool to improve the quality of life of the disabled by assisting them anytime and anywhere when needed. They can do their own tasks more independently without constantly being monitored physically or accompanied by their caretakers, teachers, or even parents. The rest of this chapter is organized as follows. The background of the facial expression recognition system is reviewed in Section 2. Section 3 describes and explains the conceptual model of facial expression recognition. Evaluation of the proposed system is in Section 4. Results and findings from the testing are laid out in Section 5, and the final section concludes the chapter.
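The template-matching classification stage mentioned above could be sketched with normalized cross-correlation: the face crop is compared against one stored template per expression and the best-correlating label wins. This is a generic sketch of template matching, not the authors' implementation; the function names are our own:

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalized cross-correlation between two equally sized patches (max 1.0)."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return (a * b).sum() / denom

def classify_by_template(face, templates):
    """Pick the expression whose stored template correlates best with the face crop."""
    return max(templates, key=lambda label: ncc(face, templates[label]))
```

In the chapter's pipeline, the face crop would first come from the Viola-Jones detector; normalized correlation makes the match insensitive to uniform brightness and contrast changes between the live face and the stored template.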



2019, Vol 16 (9), pp. 3778-3782
Author(s): Mamta Santosh, Avinash Sharma

Facial expression recognition has become a prominent research area due to its importance in human-computer interaction. Facial expressions convey a major part of information, so expression recognition has vast applications in various fields. Many techniques have been developed in the literature, but there is still a need to make current expression recognition methods more efficient. This paper presents a framework for face detection and for recognizing the six universal facial expressions (happiness, anger, disgust, fear, surprise, and sadness) along with the neutral face. The Viola-Jones method and a face landmark detection method are used for face detection. The histogram of oriented gradients (HOG) is used for feature extraction due to its superiority over other methods. Principal Component Analysis (PCA) is used to reduce the dimensionality of the features so that the maximum variation is preserved. A Canberra distance classifier assigns the expressions to the different emotions. The proposed method is applied to the Japanese Female Facial Expression Database, and evaluation shows that it outperforms many state-of-the-art techniques.
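The PCA reduction and Canberra-distance classification described above can be sketched in NumPy as a nearest-neighbour classifier over projected features. This is an illustrative sketch (the HOG extraction step is omitted, and the toy features below stand in for real HOG vectors), not the paper's implementation:

```python
import numpy as np

def pca_fit(X, k):
    """Return the feature mean and the top-k principal directions of X (rows = samples)."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def pca_transform(X, mean, components):
    """Project samples onto the principal directions, preserving maximum variation."""
    return (X - mean) @ components.T

def canberra(a, b, eps=1e-12):
    """Canberra distance: sum over dimensions of |a_i - b_i| / (|a_i| + |b_i|)."""
    return float(np.sum(np.abs(a - b) / (np.abs(a) + np.abs(b) + eps)))

def classify(x, train_X, train_y):
    """Label a projected feature vector by its nearest training sample under Canberra distance."""
    d = [canberra(x, row) for row in train_X]
    return train_y[int(np.argmin(d))]
```

The Canberra distance weights each dimension by its magnitude, which makes the classifier sensitive to relative rather than absolute differences between feature vectors.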

