Shape-invariant perceptual encoding of dynamic facial expressions across species

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Nick Taubert ◽  
Michael Stettler ◽  
Ramona Siebert ◽  
Silvia Spadacenta ◽  
Louisa Sting ◽  
...  

Dynamic facial expressions are crucial for communication in primates. Due to the difficulty of controlling the shape and dynamics of facial expressions across species, it is unknown how species-specific facial expressions are perceptually encoded and how they interact with the representation of facial shape. While popular neural network models predict a joint encoding of facial shape and dynamics, the neuromuscular control of faces evolved more slowly than facial shape, suggesting a separate encoding. To investigate these alternative hypotheses, we developed photo-realistic human and monkey heads that were animated with motion capture data from monkeys and humans. Exact control of expression dynamics was accomplished by a Bayesian machine-learning technique. Consistent with our hypothesis, we found that human observers learned cross-species expressions very quickly, with face dynamics represented largely independently of facial shape. This result supports the co-evolution of the visual processing and motor control of facial expressions, and it challenges appearance-based neural network theories of dynamic expression recognition.
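
The abstract does not name the specific Bayesian technique used to control expression dynamics. As a minimal sketch of the general idea, assuming a Gaussian-process-style model fitted to motion-capture trajectories so that dynamics can be resampled and blended across species (all names and data below are illustrative):

```python
# Illustrative only: Gaussian-process interpolation of two toy motion trajectories,
# standing in for the paper's (unspecified) Bayesian dynamics model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Placeholder motion-capture channels (time -> one facial keypoint coordinate).
t = np.linspace(0.0, 1.0, 50)[:, None]
human_traj = np.sin(2 * np.pi * t).ravel()          # toy human expression channel
monkey_traj = np.sin(2 * np.pi * t + 0.8).ravel()   # toy monkey expression channel

# One GP per species lets dynamics be resampled at arbitrary time points.
gp_h = GaussianProcessRegressor(kernel=RBF(length_scale=0.1)).fit(t, human_traj)
gp_m = GaussianProcessRegressor(kernel=RBF(length_scale=0.1)).fit(t, monkey_traj)

# Blend posterior means with a style weight to morph between species' dynamics.
alpha = 0.5
t_new = np.linspace(0.0, 1.0, 200)[:, None]
blended = alpha * gp_h.predict(t_new) + (1 - alpha) * gp_m.predict(t_new)
```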


2021 ◽  
Vol 10 (2) ◽  
pp. 182-188
Author(s):  
Ajeng Restu Kusumastuti ◽  
Yosi Kristian ◽  
Endang Setyati

The Covid-19 pandemic transformed the offline education system into an online one. To keep the learning process effective, teachers, including kindergarten teachers, were forced to adapt with presentations that hold students' attention. This is a major challenge because the attention span of children at an early age varies widely and their communication skills are limited. There is therefore a need to identify and classify students' learning interest from their facial expressions and gestures during online sessions. In this research, students' learning interest was classified into three classes validated by the teacher: Interested, Moderately Interested, and Not Interested. Classification was carried out by training and testing on cropped regions of the center of the face (eyes, mouth, whole face) for facial expression recognition, supported by a gesture region for gesture recognition. The experiments covered scenarios with four cropped regions and with two cropped regions, applied to the interest classes using the pretrained weights of transfer-learning architectures such as VGG16, ResNet50, and Xception. The classification tests reached the minimum target validation accuracy of 70%: with three interest classes, the four-region scenario using VGG16 achieved 75%, while the two-region scenario using ResNet50 achieved 71%. These results show that the proposed method can be used to inform the duration and themes of online kindergarten classes.
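
As a concrete illustration of the transfer-learning setup above, here is a minimal Keras sketch: a frozen VGG16 backbone with a small three-class head for the interest classes. The input size, head layers, and hyperparameters are assumptions, not the paper's exact configuration.

```python
# Hedged sketch: three-class interest classifier on face/gesture crops
# using pretrained VGG16 weights (ResNet50 or Xception would swap in the same way).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # reuse ImageNet weights; train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),  # Interested / Moderately Interested / Not Interested
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```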


Webology ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 804-816
Author(s):  
Elaf J. Al Taee ◽  
Qasim Mohammed Jasim

A facial expression is a visual impression of a person's situation, emotions, cognitive activity, personality, intention, and psychopathology; it plays an active and vital role in the exchange of information and communication between people. In machines and robots dedicated to communicating with humans, facial expression recognition plays an important role in reading what a person implies, especially in the field of health, so research in this area advances human-robot communication. The topic has been discussed extensively, and the progress of deep learning, together with the proven efficiency of convolutional neural networks (CNNs) in image processing, has led to the use of CNNs for recognizing facial expressions. An automatic facial expression recognition (FER) system must perform face detection and localization in a cluttered scene, feature extraction, and classification. In this research, a CNN is used to perform FER. The goal is to label each face image with one of the seven facial emotion categories of the JAFFE database: sad, happy, fear, surprise, anger, disgust, and neutral. We trained CNNs of different depths on gray-scale images from the JAFFE database. The accuracy of the proposed system was 100%.
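
The paper does not state the exact network depth, so the following is only a hedged Keras sketch of a small CNN of the kind described, with an assumed 48x48 grayscale input and a seven-way softmax over the JAFFE categories.

```python
# Assumed architecture for illustration; the paper trained CNNs of several depths.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),                     # grayscale face crop (size assumed)
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),  # sad, happy, fear, surprise, anger, disgust, neutral
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```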


2002 ◽  
Vol 14 (8) ◽  
pp. 1158-1173 ◽  
Author(s):  
Matthew N. Dailey ◽  
Garrison W. Cottrell ◽  
Curtis Padgett ◽  
Ralph Adolphs

There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of “categorical perception.” In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, “surprise” expressions lie between “happiness” and “fear” expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the tasks' implementations in the brain.
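
The categorical-versus-continuous contrast can be made concrete with a toy model: a linear classifier trained on two synthetic expression clusters responds sigmoidally, not linearly, along a linear morph continuum between the prototypes, which is the signature of a category boundary. Everything below is synthetic and illustrates the idea only, not the authors' model.

```python
# Toy demonstration of a category boundary along a morph continuum.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
happy = rng.normal(loc=+1.0, scale=0.5, size=(200, 10))  # synthetic "happy" features
fear = rng.normal(loc=-1.0, scale=0.5, size=(200, 10))   # synthetic "fear" features
X = np.vstack([happy, fear])
y = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression().fit(X, y)

# Linear morph from the happy prototype to the fear prototype.
alphas = np.linspace(0, 1, 11)
morphs = np.array([(1 - a) * happy.mean(0) + a * fear.mean(0) for a in alphas])
print(clf.predict_proba(morphs)[:, 1].round(2))  # sharp, sigmoidal transition near 0.5
```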


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6716
Author(s):  
Silvia Ramis ◽  
Jose Maria Buades ◽  
Francisco J. Perales

In this work an affective computing approach is used to study human-robot interaction, using a social robot to validate facial expressions in the wild. Our overall goal is to show that a social robot can interact convincingly with human users and recognize their potential emotions through facial expressions, contextual cues, and bio-signals. In particular, this work focuses on analyzing facial expressions. A social robot is used to validate a pre-trained convolutional neural network (CNN) that recognizes facial expressions. Facial expression recognition plays an important role in enabling robots to recognize and understand human emotion, and robots equipped with expression recognition capabilities can also gather feedback from users. The designed experiment evaluates a trained neural network on facial expressions elicited with a social robot in a real environment. The paper compares the CNN's accuracy against human experts, and also analyzes the interaction, attention, and difficulty of performing a particular expression for 29 non-expert users. In the experiment, the robot leads the users to perform different facial expressions in a motivating and entertaining way, and at the end the users are quizzed about their experience with the robot. Finally, a set of experts and the CNN classify the expressions. The results support the claim that a social robot provides an adequate interaction paradigm for evaluating facial expression recognition.
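
The CNN-versus-expert comparison described above amounts to scoring both raters against the expressions the users were asked to perform and measuring their agreement. A minimal sketch with placeholder labels (not the paper's data):

```python
# Placeholder labels only; in the study, 29 users performed prompted expressions.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

true_labels  = np.array([0, 1, 2, 3, 4, 5, 6, 0, 1, 2])  # prompted expressions
cnn_preds    = np.array([0, 1, 2, 3, 4, 5, 0, 0, 1, 2])  # CNN classifications
expert_preds = np.array([0, 1, 2, 3, 4, 6, 6, 0, 1, 2])  # human expert labels

print("CNN accuracy:   ", accuracy_score(true_labels, cnn_preds))
print("Expert accuracy:", accuracy_score(true_labels, expert_preds))
print("CNN-expert agreement (Cohen's kappa):", cohen_kappa_score(cnn_preds, expert_preds))
print(confusion_matrix(true_labels, cnn_preds))  # where the CNN confuses expressions
```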


Author(s):  
Chang Liu ◽  
◽  
Kaoru Hirota ◽  
Bo Wang ◽  
Yaping Dai ◽  
...  

An emotion recognition framework based on a two-channel convolutional neural network (CNN) is proposed to detect the affective state of humans through facial expressions. The framework consists of three parts: a frontal face detection module, a feature extraction module, and a classification module. The feature extraction module contains two channels: one for raw face images and the other for texture feature images. Local binary pattern (LBP) images are used for texture feature extraction to enrich facial features and improve network performance. An attention mechanism is adopted in both CNN feature extraction channels to highlight the features related to facial expressions. Moreover, the ArcFace loss function is integrated into the proposed network to increase the inter-class distance and decrease the intra-class distance of facial features. Experiments conducted on two public databases, FER2013 and CK+, demonstrate that the proposed method outperforms previous methods, with accuracies of 72.56% and 94.24%, respectively. The improvement in emotion recognition accuracy makes the approach applicable to service robots.
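
The texture channel can be illustrated with scikit-image's local_binary_pattern; the neighborhood size, radius, and normalization below are assumptions rather than the paper's settings.

```python
# Sketch of the LBP texture channel feeding the second CNN branch.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_image(gray_face: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    """Return a normalized uniform-LBP texture image for one grayscale face crop."""
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    return (lbp / lbp.max()).astype(np.float32)  # scale to [0, 1] for the network

face = (np.random.rand(48, 48) * 255).astype(np.uint8)  # placeholder face crop
texture = lbp_image(face)
print(texture.shape, texture.min(), texture.max())
```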


Author(s):  
Zaenal Abidin ◽  
Agus Harjoko

In daily life, especially in interpersonal communication, the face is often used for expression. Facial expressions convey information about a person's emotional state and are one of the behavioral characteristics. The components of a basic facial expression analysis system are face detection, face data extraction, and facial expression recognition. The Fisherface method with a backpropagation artificial neural network can be used for facial expression recognition. This method consists of a two-stage process, namely PCA and LDA: PCA is used to reduce dimensionality, while LDA is used to extract facial expression features. The system was tested on two databases, the JAFFE database and the MUG database. It correctly classified expressions with an accuracy of 86.85% and 25 false positives for JAFFE image type I, 89.20% and 15 false positives for JAFFE image type II, and 87.79% and 16 false positives for JAFFE type III; on the MUG images the accuracy was 98.09% with 5 false positives.

Keywords: facial expression, Fisherface method, PCA, LDA, backpropagation neural network.
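
A minimal scikit-learn sketch of the two-stage pipeline described above, with an MLPClassifier standing in for the backpropagation network; the component counts are assumptions (LDA allows at most n_classes - 1 components, i.e., six for seven expression classes).

```python
# Hedged sketch: PCA for dimensionality reduction, LDA for discriminative
# expression features, then a backpropagation-trained network.
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

fisherface_net = make_pipeline(
    PCA(n_components=50),                        # reduce raw pixel dimensionality
    LinearDiscriminantAnalysis(n_components=6),  # at most n_classes - 1 components
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
)
# fisherface_net.fit(X_train, y_train)   # X: flattened face images, y: expression labels
# preds = fisherface_net.predict(X_test)
```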


2020 ◽  
Vol 8 (2) ◽  
pp. 68-84
Author(s):  
Naoki Imamura ◽  
Hiroki Nomiya ◽  
Teruhisa Hochin

Facial expression intensity has been proposed to quantify the degree of facial expression in order to retrieve impressive scenes from lifelog videos. The intensity is calculated from the correlation of facial features with each facial expression. However, this correlation is not determined objectively; it should be determined statistically, based on the contribution scores of the facial features necessary for expression recognition. The proposed method therefore recognizes facial expressions using a neural network and calculates the contribution score of each input toward the output. First, the authors improve some facial features. They then verify the scores by comparing how accuracy changes as useful and useless features are removed, and process the scores statistically. As a result, they extract useful facial features from the neural network.
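
One common way to obtain such a contribution score is the gradient of the network's output with respect to its inputs: features whose small changes move the winning expression score the most contribute the most. The sketch below uses PyTorch autograd on a toy network and illustrates the general idea, not the authors' exact scoring method.

```python
# Gradient-based contribution scores for input facial features (toy network).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 7))  # 7 expressions
x = torch.randn(1, 20, requires_grad=True)   # one vector of facial features

out = net(x)
out[0, out.argmax()].backward()              # gradient of the winning class score
contribution = x.grad.abs().squeeze()        # per-feature contribution score
print(contribution.topk(5).indices)          # the five most influential features
```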


Information ◽  
2019 ◽  
Vol 10 (12) ◽  
pp. 375 ◽  
Author(s):  
Yingying Wang ◽  
Yibin Li ◽  
Yong Song ◽  
Xuewen Rong

As an important part of emotion research, facial expression recognition is a necessary component of human-machine interfaces. A facial expression recognition system generally includes face detection, feature extraction, and feature classification. Although traditional machine learning methods have achieved great success, most of them are computationally complex and lack the ability to extract comprehensive, abstract features. Deep learning-based methods can achieve higher recognition rates for facial expressions, but they require large numbers of training samples and tuning parameters, and their hardware requirements are very high. To address these problems, this paper proposes a method that combines features extracted by a convolutional neural network (CNN) with a C4.5 classifier to recognize facial expressions, which both addresses the incompleteness of handcrafted features and avoids the high hardware demands of deep learning models. To counter the overfitting and weak generalization of a single classifier, a random forest is applied. The paper also improves the C4.5 classifier and the traditional random forest in the course of the experiments. A large number of experiments prove the effectiveness and feasibility of the proposed method.
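
A hedged sketch of the hybrid idea: deep features from a small CNN fed to a tree ensemble. Note that scikit-learn implements CART rather than C4.5, so RandomForestClassifier stands in for the paper's improved C4.5 forest; all shapes and data below are placeholders.

```python
# Placeholder pipeline: CNN as feature extractor, forest as classifier.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.ensemble import RandomForestClassifier

cnn = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.GlobalAveragePooling2D(),
])  # in practice the CNN would be trained first; left untrained here for brevity

X_img = np.random.rand(100, 48, 48, 1).astype("float32")  # placeholder face images
y = np.random.randint(0, 7, size=100)                     # placeholder labels
features = cnn.predict(X_img)                             # deep feature vectors
forest = RandomForestClassifier(n_estimators=100).fit(features, y)
```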

