3D Approaches and Challenges in Facial Expression Recognition Algorithms—A Literature Review

2019 ◽  
Vol 9 (18) ◽  
pp. 3904 ◽  
Author(s):  
Francesca Nonis ◽  
Nicole Dagnes ◽  
Federica Marcolin ◽  
Enrico Vezzetti

In recent years, facial expression analysis and recognition (FER) has emerged as an active research topic with applications in several areas, including the human-computer interaction domain. Solutions based on 2D models are not entirely satisfactory for real-world applications, as they suffer from pose-variation and illumination problems inherent to the nature of the data. Thanks to technological developments, 3D facial data, both still images and video sequences, have been increasingly used to improve the accuracy of FER systems. Despite the advances in 3D algorithms, these solutions still have drawbacks that make purely three-dimensional techniques convenient only for a set of specific applications; a viable way to overcome such limitations is to adopt a multimodal 2D+3D analysis. In this paper, we analyze the limits and strengths of traditional and deep-learning FER techniques, aiming to provide the research community with an overview of the results obtained and of what to expect in the near future. Furthermore, we describe in detail the databases most commonly used to address the problem of facial expressions and emotions, highlighting the results obtained by the various authors. The different techniques used are compared, and some conclusions are drawn concerning the best recognition rates achieved.

Author(s):  
Jacey-Lynn Minoi ◽  
Duncan Gillies

The aim of this chapter is to identify the face areas containing high facial expression information, which may be useful for facial expression analysis, face and facial expression recognition, and synthesis. In studies of facial expression analysis, landmarks are usually placed on well-defined craniofacial features. In this experiment, the authors have selected a set of landmarks based on craniofacial anthropometry and associated each landmark with facial muscles and the Facial Action Coding System (FACS) framework; that is, landmarks are also located on less palpable areas that exhibit high facial expression mobility. The selected landmarks are statistically analysed in terms of facial muscle motion based on FACS. Human faces channel verbal and non-verbal communication, including speech, facial expressions of emotion, gestures, and other communicative actions; these cues may therefore be significant in the identification of expressions such as pain, agony, anger, and happiness. Here, the authors describe the potential of computer-based models of three-dimensional (3D) facial expression analysis and non-verbal communication recognition to assist in biometric recognition and clinical diagnosis.


2019 ◽  
Vol 8 (2S11) ◽  
pp. 4047-4051

The automatic detection of facial expressions is an active research topic, given its wide range of applications in human-computer interaction, games, security, and education. However, most studies to date have been conducted in controlled laboratory environments, which do not reflect real-world scenarios. For that reason, a real-time Facial Expression Recognition System (FERS) is proposed in this paper, in which a deep-learning approach is applied to enhance the detection of six basic emotions: happiness, sadness, anger, disgust, fear, and surprise in real-time video streaming. The system is composed of three main components: face detection, face preparation, and facial expression classification. The proposed FERS achieves 65% accuracy, trained over 35,558 face images.
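The three-stage pipeline described above (face detection, face preparation, expression classification) can be sketched in a few lines of plain NumPy. This is a minimal, dependency-free illustration, not the paper's implementation: the fixed-box "detector", the 48×48 input size, and the linear softmax classifier are all placeholder assumptions.

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "disgust", "fear", "surprise"]

def detect_face(frame):
    # Toy stand-in for a real detector (e.g. a Haar cascade or a CNN
    # detector): return a fixed central bounding box (x, y, w, h).
    h, w = frame.shape
    return (w // 4, h // 4, w // 2, h // 2)

def prepare_face(frame, box, size=48):
    # Crop the detected region and resize it to the classifier input size
    # using nearest-neighbour sampling (no external dependencies).
    x, y, w, h = box
    crop = frame[y:y + h, x:x + w].astype(np.float32)
    rows = np.arange(size) * crop.shape[0] // size
    cols = np.arange(size) * crop.shape[1] // size
    face = crop[np.ix_(rows, cols)]
    return face / 255.0  # normalise pixel values to [0, 1]

def classify_expression(face, weights, bias):
    # Linear scores followed by a softmax over the six basic emotions.
    scores = face.reshape(-1) @ weights + bias
    exp = np.exp(scores - scores.max())
    probs = exp / exp.sum()
    return EMOTIONS[int(np.argmax(probs))], probs

# Demo on a synthetic frame with randomly initialised classifier weights.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(240, 320))
weights = rng.normal(scale=0.01, size=(48 * 48, 6))
bias = np.zeros(6)

box = detect_face(frame)
face = prepare_face(frame, box)
label, probs = classify_expression(face, weights, bias)
```

In a real-time system, the same three calls would simply run once per frame of the video stream.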


Author(s):  
Abdolhossein Sarrafzadeh ◽  
Samuel T.V. Alexander ◽  
Jamshid Shanbehzadeh

Intelligent tutoring systems (ITS) are still not as effective as one-on-one human tutoring. The next generation of intelligent tutors is expected to be able to take into account the emotional state of students. This paper presents research on the development of an Affective Tutoring System (ATS). The system, called “Easy with Eve”, adapts to students via a lifelike animated agent who is able to detect student emotion through facial expression analysis and can display emotion herself. Eve’s adaptations are guided by a case-based method for adapting to student states; this method uses data generated by an observational study of human tutors. This paper presents an analysis of the facial expressions of students engaged in learning with human tutors, and shows how a facial expression recognition system, a lifelike agent, and a case-based system built on this analysis have been integrated to develop an ATS for mathematics.


2019 ◽  
Vol 9 (11) ◽  
pp. 2218 ◽  
Author(s):  
Maria Grazia Violante ◽  
Federica Marcolin ◽  
Enrico Vezzetti ◽  
Luca Ulrich ◽  
Gianluca Billia ◽  
...  

This study proposes a novel quality function deployment (QFD) design methodology based on customers’ emotions conveyed by facial expressions. Current advances in pattern recognition related to face recognition techniques have fostered cross-fertilization between this context and other fields, such as product design and human-computer interaction. In particular, current technologies for monitoring human emotions have supported the birth of advanced emotional design techniques, whose main focus is to convey users’ emotional feedback into the design of novel products. As quality function deployment aims at transforming the voice of customers into engineering features of a product, it appears to be an appropriate and promising nest in which to embed users’ emotional feedback via new emotional design methodologies such as facial expression recognition. The present methodology thus consists of interviewing the user and acquiring his/her face with a depth camera (providing three-dimensional (3D) data), clustering the face information into different emotions with a support vector machine classifier, and assigning weights to customers’ needs based on the detected facial expressions. The proposed method has been applied to a case study in the context of agriculture and validated by a consortium. The approach appears sound and capable of collecting the unconscious feedback of the interviewee.
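The two method steps named above, clustering face information into emotions with a support vector machine and then turning detected emotions into need weights, might look roughly as follows. The feature vectors, emotion labels, and weight table are all synthetic assumptions for illustration, not the study's actual data or mapping.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for depth-camera face descriptors: each sample is a
# small feature vector, each label one of three illustrative emotion groups.
rng = np.random.default_rng(42)
centers = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}
X = np.vstack([rng.normal(c, 0.2, size=(30, 8)) for c in centers.values()])
y = np.repeat(list(centers.keys()), 30)

# Support vector machine classifier, as named in the abstract.
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X, y)

# Hypothetical mapping from detected emotion to a QFD need weight.
weight_table = {"positive": 1.5, "neutral": 1.0, "negative": 0.5}
pred = clf.predict(rng.normal(1.0, 0.2, size=(1, 8)))[0]
need_weight = weight_table[pred]
```

The real pipeline would replace the random vectors with descriptors computed from the 3D face acquisition, and the weight table with values calibrated for the QFD matrix.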


Author(s):  
YU-YI LIAO ◽  
JZAU-SHENG LIN ◽  
SHEN-CHUAN TAI

In this paper, a facial expression recognition system based on a cerebellar model articulation controller with clustering memory (CMAC-CM) is presented. First, the facial expression features were automatically preprocessed and extracted from still images in the JAFFE database, which contains frontal views of faces. Next, a block of lower-frequency DCT coefficients was obtained by subtracting a neutral image from a given expression image, and was rearranged into input vectors to be fed into the CMAC-CM, which can rapidly produce output using nonlinear mapping with a look-up table in the training and recognition phases. Finally, the experimental results demonstrate the recognition rates obtained with various block sizes of lower-frequency coefficients and cluster sizes of weight memory. A mean recognition rate of 92.86% is achieved for the testing images, and the CMAC-CM takes 0.028 seconds per test image in the testing phase.
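The feature-extraction step, subtracting a neutral image and keeping a block of low-frequency DCT coefficients of the difference, can be sketched as below. The 64×64 image size and 8×8 block are illustrative choices, not necessarily those of the paper, and the images are random placeholders rather than JAFFE data.

```python
import numpy as np

def dct2(x):
    # Orthonormal 2-D DCT-II built from the 1-D transform matrix
    # (so no SciPy dependency is needed for this sketch).
    n = x.shape[0]
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0, :] /= np.sqrt(2.0)
    return c @ x @ c.T

def expression_features(expr_img, neutral_img, block=8):
    # Subtract the neutral face, transform the difference image, and keep
    # only the low-frequency block as the classifier input vector.
    diff = expr_img.astype(np.float64) - neutral_img.astype(np.float64)
    coeffs = dct2(diff)
    return coeffs[:block, :block].reshape(-1)

rng = np.random.default_rng(1)
neutral = rng.integers(0, 256, size=(64, 64))
expression = np.clip(neutral + rng.integers(-30, 30, size=(64, 64)), 0, 255)
features = expression_features(expression, neutral)
```

In the paper's system, vectors like `features` would then be fed into the CMAC-CM look-up-table network rather than inspected directly.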


Author(s):  
J. F. COHN ◽  
K. L. SCHMIDT

Almost all work in automatic facial expression analysis has focused on recognition of prototypic expressions rather than dynamic changes in appearance over time. To investigate the relative contribution of dynamic features to expression recognition, we used automatic feature tracking to measure the relation between amplitude and duration of smile onsets in spontaneous and deliberate smiles of 81 young adults of Euro- and African-American background. Spontaneous smiles were of smaller amplitude and had a larger and more consistent relation between amplitude and duration than deliberate smiles. A linear discriminant classifier using timing and amplitude measures of smile onsets achieved a 93% recognition rate. Using timing measures alone, recognition rate declined only marginally to 89%. These findings suggest that by extracting and representing dynamic as well as morphological features, automatic facial expression analysis can begin to discriminate among the message values of morphologically similar expressions.
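As a hedged illustration of the classification step described above (not the authors' implementation), a linear discriminant can be fitted to amplitude and duration measurements of smile onsets. The data below is synthetic, generated so that spontaneous smiles have smaller amplitudes and a tighter amplitude-duration relation than deliberate smiles, as the abstract reports; all numeric ranges are invented for demonstration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic smile-onset measurements: spontaneous smiles get smaller
# amplitudes tightly coupled to duration; deliberate smiles get larger,
# more loosely coupled amplitudes.
rng = np.random.default_rng(7)
n = 80
dur_s = rng.uniform(0.3, 0.8, n)              # onset duration (s)
amp_s = 0.5 * dur_s + rng.normal(0, 0.02, n)  # small, consistent amplitude
dur_d = rng.uniform(0.3, 0.8, n)
amp_d = 0.8 + rng.normal(0, 0.15, n)          # larger, looser amplitude

X = np.vstack([np.column_stack([amp_s, dur_s]),
               np.column_stack([amp_d, dur_d])])
y = np.array([0] * n + [1] * n)  # 0 = spontaneous, 1 = deliberate

# Linear discriminant classifier over timing and amplitude features.
clf = LinearDiscriminantAnalysis().fit(X, y)
accuracy = clf.score(X, y)
```

With well-separated synthetic classes the discriminant separates them almost perfectly; the 93% figure in the abstract refers to the authors' real tracking data, not to this toy setup.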


Human feelings are mental states that arise spontaneously rather than through cognitive effort. Some of the basic feelings are happiness, anger, neutrality, sadness, and surprise. These internal feelings are reflected on the face as facial expressions. This paper presents a novel methodology for facial expression analysis that will aid in developing a facial expression recognition system. The system can be used in real time to classify five basic emotions. The recognition of facial expressions is important because of its applications in many domains, such as artificial intelligence, security, and robotics. Many different approaches can be used to tackle the problems of Facial Expression Recognition (FER), but the technique best suited to automated FER is the Convolutional Neural Network (CNN). Thus, a novel CNN architecture is proposed, and a combination of multiple datasets, including FER2013, FER+, JAFFE, and CK+, is used for training and testing. This helps to improve the accuracy and develop a robust real-time system. The proposed methodology yields quite good results, and the obtained accuracy may encourage and support researchers in building better models for automated facial expression recognition systems.
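A minimal forward pass through a CNN-style stack (convolution, ReLU, max-pooling, a dense layer, and a softmax over five emotions) can be written in plain NumPy. The architecture below is a toy sketch with random weights, assumed layer sizes, and an assumed emotion list; it is not the CNN architecture the paper proposes.

```python
import numpy as np

def conv2d(img, kernel):
    # Valid 2-D convolution (strictly, cross-correlation, as in most CNNs).
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    # Non-overlapping max-pooling, halving each spatial dimension.
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(img, kernel, dense_w, dense_b):
    # conv -> ReLU -> max-pool -> flatten -> dense -> softmax
    feat = np.maximum(conv2d(img, kernel), 0.0)
    pooled = max_pool(feat)
    return softmax(pooled.reshape(-1) @ dense_w + dense_b)

EMOTIONS = ["happy", "angry", "neutral", "sad", "surprise"]
rng = np.random.default_rng(3)
img = rng.random((48, 48))                          # one grey-scale face
kernel = rng.normal(scale=0.1, size=(3, 3))         # single conv filter
dense_w = rng.normal(scale=0.01, size=(23 * 23, 5)) # 46x46 -> pool -> 23x23
dense_b = np.zeros(5)
probs = forward(img, kernel, dense_w, dense_b)
```

A practical model would stack several such conv/pool blocks with many filters each and train the weights on the combined FER2013/FER+/JAFFE/CK+ data rather than using random values.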


Author(s):  
Mahima Agrawal ◽  
Shubangi. D. Giripunje ◽  
P. R. Bajaj

This paper presents an efficient method for recognizing facial expressions in video. The work proposes a highly efficient facial expression recognition system using PCA optimized by a genetic algorithm. Reduced computational time and comparable efficiency in terms of correct recognition are the benchmarks of this work. Video sequences contain more information than still images and exhibit much more activity during expression actions, which is why they are a current research subject. We use PCA, a statistical method, to reduce dimensionality and extract features, applying covariance analysis to generate the eigen-components of the images. The eigen-component feature input is then optimized by the genetic algorithm to reduce the computational cost.


Author(s):  
Christoph Bartneck ◽  
Michael J. Lyons

The human face plays a central role in most forms of natural human interaction, so we may expect that computational methods for the analysis of facial information, the modeling of internal emotional states, and the graphical synthesis of faces and facial expressions will play a growing role in human-computer and human-robot interaction. However, certain areas of face-based HCI, such as facial expression recognition and robotic facial display, have lagged behind others, such as eye-gaze tracking, facial recognition, and conversational characters. Our goal in this paper is to review the situation in HCI with regard to the human face, and to discuss strategies that could bring the more slowly developing areas up to speed. In particular, we propose “The Art of the Soluble” as a strategy forward and provide examples in which this strategy has been successfully applied.


2013 ◽  
Vol 380-384 ◽  
pp. 4057-4060
Author(s):  
Lang Guo ◽  
Jian Wang

After analyzing the shortcomings of two-dimensional facial expression recognition algorithms, this paper proposes a new three-dimensional facial expression recognition algorithm. The algorithm is tested on the JAFFE facial expression database. The results show that the proposed algorithm dynamically determines the size of the local neighborhood according to the manifold structure, effectively addresses the problem of facial expression recognition, and achieves a good recognition rate.
