Adaptive 3D Model-Based Facial Expression Synthesis and Pose Frontalization

Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2578
Author(s):  
Yu-Jin Hong ◽  
Sung Eun Choi ◽  
Gi Pyo Nam ◽  
Heeseung Choi ◽  
Junghyun Cho ◽  
...  

Facial expressions are one of the important non-verbal channels for understanding human emotions during communication. Thus, acquiring and reproducing facial expressions is helpful in analyzing human emotional states. However, owing to complex and subtle facial muscle movements, modeling facial expressions from images with non-frontal face poses is difficult. To handle this issue, we present a method for acquiring facial expressions from a single non-frontal photograph using a 3D-aided approach. In addition, we propose a contour-fitting method that improves modeling accuracy by automatically rearranging 3D contour landmarks corresponding to fixed 2D image landmarks. The acquired facial expression input can be parametrically manipulated to create various facial expressions through blendshapes or expression transfer based on the FACS (Facial Action Coding System). To achieve realistic facial expression synthesis, we propose an exemplar-texture wrinkle synthesis method that extracts and synthesizes appropriate expression wrinkles according to the target expression. To do so, we constructed a wrinkle table of various facial expressions from 400 people. As one application, we show quantitatively that the expression-pose synthesis method is suitable for expression-invariant face recognition, and we demonstrate its effectiveness through a qualitative evaluation. We expect our system to benefit various fields such as face recognition, HCI, and data augmentation for deep learning.
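The blendshape manipulation the abstract mentions can be sketched in a few lines: a new expression is the neutral 3D face mesh plus a weighted sum of per-expression displacement bases. The array shapes, weights, and function below are illustrative assumptions, not the authors' actual model.

```python
# Minimal sketch of blendshape-based expression synthesis (assumed setup).
import numpy as np

def synthesize_expression(neutral, blendshapes, weights):
    """neutral: (V, 3) vertex positions; blendshapes: (K, V, 3) displacement
    bases (e.g. FACS-inspired expression targets); weights: (K,) coefficients."""
    weights = np.clip(weights, 0.0, 1.0)                   # keep coefficients in a valid range
    offsets = np.tensordot(weights, blendshapes, axes=1)   # (V, 3) weighted sum of displacements
    return neutral + offsets

# Toy usage: 3 hypothetical expression bases over a 1000-vertex mesh.
V, K = 1000, 3
neutral = np.random.rand(V, 3)
bases = np.random.randn(K, V, 3) * 0.01
smile = synthesize_expression(neutral, bases, np.array([0.8, 0.0, 0.2]))
```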

Author(s):  
Yi Ji ◽  
Khalid Idrissi

This paper proposes an automatic facial expression recognition system that uses new methods in both face detection and feature extraction. Considering that facial expressions involve a small set of muscles and limited ranges of motion, the system recognizes expressions from these changes in video sequences. First, the differences between neutral and emotional states are detected, and faces are automatically located from the changing facial organs. Then, LBP features are extracted, and AdaBoost is used to find the most important features for each expression on essential facial parts. Finally, an SVM with a polynomial kernel classifies the expressions. The method is evaluated on the JAFFE and MMI databases, and its performance is better than that of other automatic or manually annotated systems.
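The pipeline this abstract describes (LBP features, AdaBoost feature selection, polynomial-kernel SVM) can be sketched with standard libraries. The placeholder arrays and parameter choices below are assumptions for illustration, not the paper's actual configuration or data handling.

```python
# Hedged sketch: LBP histograms -> AdaBoost feature ranking -> poly-kernel SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

def lbp_histogram(gray_face, P=8, R=1):
    """Uniform LBP histogram of a cropped grayscale face region."""
    codes = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Placeholder training data: N face crops with integer expression labels
# (real experiments would load JAFFE/MMI images and crop facial parts).
N = 200
faces = (np.random.rand(N, 64, 64) * 255).astype(np.uint8)
labels = np.random.randint(0, 6, size=N)
X = np.array([lbp_histogram(f) for f in faces])

# AdaBoost ranks features; keep the most informative ones for the SVM.
ada = AdaBoostClassifier(n_estimators=50).fit(X, labels)
top = np.argsort(ada.feature_importances_)[::-1][:8]
svm = SVC(kernel="poly", degree=3).fit(X[:, top], labels)
```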


2011 ◽  
pp. 255-317 ◽  
Author(s):  
Daijin Kim ◽  
Jaewon Sung

Facial expression has long been of interest to psychology, since Darwin published The Expression of the Emotions in Man and Animals (Darwin, C., 1899). Psychologists have studied it to reveal the role and mechanism of facial expression. One of Darwin's great discoveries is that prototypical facial expressions exist across cultures, which provided the theoretical background for vision researchers who tried to classify the prototypical facial expression categories from images. The six representative facial expressions are fear, happiness, sadness, surprise, anger, and disgust (Mase, 1991; Yacoob and Davis, 1994). On the other hand, the real facial expressions that we encounter in daily life consist of many distinct, subtly different signals. Further research on facial expressions required an objective method to describe and measure the distinct activity of facial muscles. The Facial Action Coding System (FACS), proposed by Ekman and Friesen (1978), defines 46 distinct action units (AUs), each of which describes the activity of a distinct muscle or muscle group. The development of this objective description method also influenced vision researchers, who tried to detect the emergence of each AU (Tian et al., 2001).


2016 ◽  
Vol 84 ◽  
pp. 94-98 ◽  
Author(s):  
Priya Saha ◽  
Debotosh Bhattacharjee ◽  
Barin Kumar De ◽  
Mita Nasipuri

2018 ◽  
Vol 15 (4) ◽  
pp. 172988141878315 ◽  
Author(s):  
Nicole Lazzeri ◽  
Daniele Mazzei ◽  
Maher Ben Moussa ◽  
Nadia Magnenat-Thalmann ◽  
Danilo De Rossi

Human communication relies mostly on nonverbal signals expressed through body language. Facial expressions, in particular, convey emotional information that allows people involved in social interactions to judge each other's emotional states and to adjust their behavior appropriately. The first studies investigating the recognition of facial expressions were based on static stimuli. However, facial expressions are rarely static, especially in everyday social interactions. Therefore, it has been hypothesized that the dynamics inherent in a facial expression could be fundamental to understanding its meaning. In addition, it has been demonstrated that nonlinguistic and linguistic information can help reinforce the meaning of a facial expression, making it easier to recognize. Nevertheless, few such studies have been performed on realistic humanoid robots. This experimental work aimed at demonstrating the human-like expressive capability of a humanoid robot by examining whether motion and vocal content influenced the perception of its facial expressions. The first part of the experiment studied the recognition of two kinds of stimuli related to the six basic expressions (i.e., anger, disgust, fear, happiness, sadness, and surprise): static stimuli, that is, photographs, and dynamic stimuli, that is, video recordings. The second and third parts compared the same six basic expressions performed by a virtual avatar and by a physical robot under three different conditions: (1) muted facial expressions, (2) facial expressions with nonlinguistic vocalizations, and (3) facial expressions with an emotionally neutral verbal sentence. The results show that static stimuli performed by a human being and by the robot were more ambiguous than the corresponding dynamic stimuli, in which motion and vocalization were combined. This hypothesis was also investigated with a 3-dimensional replica of the physical robot, demonstrating that even for a virtual avatar, motion and vocalization improve the ability to convey emotion.


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Mohammed Hazim Alkawaz ◽  
Ahmad Hoirul Basori ◽  
Dzulkifli Mohamad ◽  
Farhan Mohamed

Generating extreme appearances, such as sweating when scared, tears when crying, and blushing with anger or happiness, is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and color changes are integrated into a single animation model to create realistic facial expressions for a 3D avatar. The physical properties of muscles and emotions, and the fluid properties of the sweating and tears initiators, are incorporated. The action units (AUs) of the Facial Action Coding System are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using particle-system and smoothed-particle hydrodynamics (SPH) methods, which are combined with facial animation techniques to produce complex facial expressions. The effects of oxygenation on the appearance of facial skin color are measured using a pulse oximeter system and a 3D skin analyzer. The results show that virtual human facial expression is enhanced by mimicking actual sweating and tears for all extreme expressions. The proposed method contributes to the facial animation and game industries as well as to computer graphics.
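A minimal particle-system update in the spirit of the droplet effects described above might look as follows. A real SPH solver also computes density and pressure forces between neighboring particles; this sketch applies only gravity and damping, and all positions and constants are illustrative assumptions.

```python
# Toy droplet particles (sweat/tears) advanced frame by frame; not SPH proper.
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])

def step_particles(pos, vel, dt=0.016, damping=0.98):
    """Advance droplet particles one frame; positions in face-space units."""
    vel = (vel + GRAVITY * dt) * damping   # accelerate downward, dissipate energy
    pos = pos + vel * dt                   # integrate positions
    return pos, vel

# Emit 100 droplets near a hypothetical tear-duct location, run 60 frames.
pos = np.tile(np.array([0.03, 0.12, 0.05]), (100, 1)) + np.random.randn(100, 3) * 1e-3
vel = np.zeros_like(pos)
for _ in range(60):
    pos, vel = step_particles(pos, vel)
```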


2020 ◽  
pp. 59-69
Author(s):  
Walid Mahmod ◽  
Jane Stephan ◽  
Anmar Razzak

Automatic analysis of facial expressions is rapidly becoming an area of intense interest in the computer vision and artificial intelligence research communities. In this paper, an approach is presented for facial expression recognition of the six basic prototype expressions (i.e., joy, surprise, anger, sadness, fear, and disgust) based on the Facial Action Coding System (FACS). The approach utilizes a combination of different transforms (the Walidlet hybrid transform), consisting of the fast Fourier transform, the Radon transform, and the multiwavelet transform, for feature extraction. A Kohonen self-organizing feature map (SOFM) is then used to cluster patterns based on the features obtained from the hybrid transform. The results show that the method achieves very good accuracy in facial expression recognition. Moreover, the proposed method has many promising features that make it interesting: it provides a new feature-extraction approach that overcomes problems with illumination and with faces that vary considerably across individuals due to differences in age, ethnicity, gender, and cosmetics, and it does not require precise normalization or lighting equalization. An average clustering accuracy of 94.8% is achieved for the six basic expressions, with different databases used to test the method.
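The transform-then-cluster pipeline can be sketched as follows: FFT-magnitude and Radon-transform features concatenated, then clustered with a small Kohonen-style map. The multiwavelet stage is omitted, neighborhood updates are simplified to winner-only, and all sizes and rates are illustrative assumptions rather than the paper's settings.

```python
# Rough sketch of hybrid-transform features plus a tiny Kohonen-style SOM.
import numpy as np
from skimage.transform import radon

def hybrid_features(gray_face):
    fft_mag = np.abs(np.fft.fft2(gray_face))[:8, :8].ravel()            # low-frequency FFT magnitudes
    sinogram = radon(gray_face, theta=np.arange(0, 180, 45), circle=False)  # 4 projection angles
    return np.concatenate([fft_mag, sinogram.mean(axis=0)])

def train_som(X, n_units=6, epochs=200, lr=0.5):
    """Winner-only Kohonen update: pull the best-matching unit toward each
    sample (a full SOM also updates neighboring units; omitted for brevity)."""
    rng = np.random.default_rng(0)
    W = rng.normal(size=(n_units, X.shape[1]))
    for t in range(epochs):
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))
            W[bmu] += lr * (1 - t / epochs) * (x - W[bmu])
    return W

faces = np.random.rand(30, 64, 64)                  # placeholder face crops
X = np.array([hybrid_features(f) for f in faces])
W = train_som(X)                                    # one unit per expression cluster
clusters = np.argmin(np.linalg.norm(X[:, None] - W[None], axis=2), axis=1)
```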




2016 ◽  
Vol 33 (S1) ◽  
pp. S596-S596 ◽  
Author(s):  
F. Amico ◽  
G. Healy ◽  
M. Arvaneh ◽  
D. Kearney ◽  
E. Mohedano ◽  
...  

Facial expression is an independent and objective marker of affect. The basic emotions (fear, sadness, joy, anger, disgust, and surprise) have been shown to be universal across human cultures. Techniques such as the Facial Action Coding System can capture emotion with good reliability by visually processing the changes in the different assemblies of facial muscles that produce the facial expression of affect. Recent groundbreaking advances in computing and facial expression analysis software now allow real-time and objective measurement of emotional states. In particular, a recently developed software package and equipment, the Imotion Attention Tool™, allows capturing information on discrete emotional states based on facial expressions while a subject is participating in a behavioural task. Extending preliminary work through further experimentation and analysis, the present findings suggest a link between facial affect data and established peripheral arousal measures such as event-related potentials (ERP), heart rate variability (HRV), and galvanic skin response (GSR), using disruptively innovative, noninvasive, and clinically applicable technology in patients reporting suicidal ideation and intent compared to controls. Our results hold promise for the establishment of a computerized diagnostic battery that clinicians can use to improve the evaluation of suicide risk.

Disclosure of interest: The authors have not supplied their declaration of competing interest.


Author(s):  
Michel Valstar ◽  
Stefanos Zafeiriou ◽  
Maja Pantic

Automatic Facial Expression Analysis systems have come a long way since the earliest approaches in the early 1970s. We are now at a point where the first systems are commercially applied, most notably the smile detectors included in digital cameras. As one of the most comprehensive and objective ways to describe facial expressions, the Facial Action Coding System (FACS) has received significant and sustained attention within the field. Over the past 30 years, psychologists and neuroscientists have conducted extensive research on various aspects of human behaviour using facial expression analysis coded in terms of FACS. Automating FACS coding would make this research faster and more widely applicable, opening up new avenues to understanding how we communicate through facial expressions. Mainly owing to the cost-effectiveness of existing recording equipment, until recently almost all work in this area involved 2D imagery, despite its inherent problems with pose and illumination variations. To deal with these problems, 3D recordings are increasingly used in expression analysis research. In this chapter, the authors give an overview of 2D and 3D FACS recognition and summarise current challenges and opportunities.


2015 ◽  
Vol 18 ◽  
Author(s):  
María Verónica Romero-Ferreiro ◽  
Luis Aguado ◽  
Javier Rodriguez-Torresano ◽  
Tomás Palomo ◽  
Roberto Rodriguez-Jimenez

Deficits in facial affect recognition have been repeatedly reported in schizophrenia patients. The hypothesis that this deficit is caused by poorly differentiated cognitive representation of facial expressions was tested in this study. To this end, performance of patients with schizophrenia and controls was compared in a new emotion-rating task. This novel approach allowed the participants to rate each facial expression at different times in terms of different emotion labels. Results revealed that patients tended to give higher ratings to emotion labels that did not correspond to the portrayed emotion, especially in the case of negative facial expressions (p < .001, η2 = .131). Although patients and controls gave similar ratings when the emotion label matched the facial expression, patients gave higher ratings on trials with "incorrect" emotion labels (ps < .05). Comparison of patients and controls on a summary index of expressive ambiguity showed that patients perceived angry, fearful, and happy faces as more emotionally ambiguous than did the controls (p < .001, η2 = .135). These results are consistent with the idea that the cognitive representation of emotional expressions in schizophrenia is characterized by less clear boundaries and a less close correspondence between facial configurations and emotional states.

