3D facial expression modeling based on facial landmarks in single image

2019 ◽ Vol 355 ◽ pp. 155-167
Author(s): Chenlei Lv, Zhongke Wu, Xingce Wang, Mingquan Zhou

2021 ◽ Vol 11 (16) ◽ pp. 7217
Author(s): Cristina Luna-Jiménez, Jorge Cristóbal-Martín, Ricardo Kleinlein, Manuel Gil-Martín, José M. Moya, et al.

Spatial Transformer Networks are considered powerful algorithms for learning the most relevant areas of an image, but they could be more efficient if they received images with embedded expert knowledge. This paper aims to improve the performance of conventional Spatial Transformers when applied to Facial Expression Recognition. Based on the Spatial Transformers’ capacity for spatial manipulation within networks, we propose different extensions to these models in which effective attentional regions are captured using facial landmarks or facial visual saliency maps. This specific attentional information is then hardcoded to guide the Spatial Transformers to learn the spatial transformations that best fit the proposed regions, yielding better recognition results. For this study, we use two datasets: AffectNet and FER-2013. For AffectNet, we achieve a 0.35 percentage-point absolute improvement over the traditional Spatial Transformer, whereas for FER-2013 our solution achieves a 1.49% increase when the models are fine-tuned with the AffectNet pre-trained weights.
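To make the guidance mechanism sketched in the abstract concrete, here is a minimal, hedged example of a landmark-guided Spatial Transformer. This is not the authors' implementation: the PyTorch layer sizes, the 64x64 grayscale input, and the simple 50/50 blend between the learned affine transform and a landmark-derived crop are illustrative assumptions.

```python
# Minimal, illustrative sketch in PyTorch (an assumption: the paper does not
# specify this code).  Layer sizes, the 64x64 grayscale input, and the 50/50
# blend of learned and landmark-derived affine transforms are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LandmarkGuidedSTN(nn.Module):
    """Spatial Transformer whose learned crop is biased toward an attentional
    region derived from facial landmarks (or a visual saliency map)."""

    def __init__(self, num_classes: int = 7):
        super().__init__()
        # Localization network: predicts the 6 parameters of a 2x3 affine matrix.
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(10 * 12 * 12, 32), nn.ReLU(),
            nn.Linear(32, 6),
        )
        # Initialize the transform to the identity so training starts stably.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
        # Toy classifier head on the transformed (attended) image.
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, num_classes)
        )

    def forward(self, x: torch.Tensor, landmark_theta: torch.Tensor | None = None):
        theta = self.loc(x).view(-1, 2, 3)           # learned affine transform
        if landmark_theta is not None:
            # Guidance step: pull the learned transform toward the affine crop
            # computed from landmarks / saliency (hardcoded expert knowledge).
            theta = 0.5 * theta + 0.5 * landmark_theta
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        x = F.grid_sample(x, grid, align_corners=False)
        return self.classifier(x)


# Toy usage: a batch of 64x64 grayscale faces and a hypothetical landmark-derived crop.
model = LandmarkGuidedSTN()
faces = torch.randn(4, 1, 64, 64)
crop = torch.tensor([[[0.6, 0.0, 0.0],
                      [0.0, 0.6, -0.1]]]).repeat(4, 1, 1)   # zoom toward eyes/mouth
logits = model(faces, landmark_theta=crop)                  # shape: (4, 7)
```

In practice, `landmark_theta` would be the affine transform of a bounding box computed from detected facial landmarks or a saliency map, which is the kind of expert knowledge the proposed extensions hardcode into the network.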


2016 ◽ Vol 84 ◽ pp. 94-98
Author(s): Priya Saha, Debotosh Bhattacharjee, Barin Kumar De, Mita Nasipuri

Sensors ◽ 2020 ◽ Vol 20 (9) ◽ pp. 2578
Author(s): Yu-Jin Hong, Sung Eun Choi, Gi Pyo Nam, Heeseung Choi, Junghyun Cho, et al.

Facial expressions are one of the important non-verbal ways used to understand human emotions during communication. Thus, acquiring and reproducing facial expressions is helpful in analyzing human emotional states. However, owing to complex and subtle facial muscle movements, modeling facial expressions from images with non-frontal face poses is difficult. To address this issue, we present a method for acquiring facial expressions from a non-frontal single photograph using a 3D-aided approach. In addition, we propose a contour-fitting method that improves the modeling accuracy by automatically rearranging 3D contour landmarks corresponding to fixed 2D image landmarks. The acquired facial expression can then be parametrically manipulated to create various expressions through blendshapes or expression transfer based on the FACS (Facial Action Coding System). To achieve realistic facial expression synthesis, we propose an exemplar-texture wrinkle synthesis method that extracts and synthesizes appropriate expression wrinkles according to the target expression. To do so, we constructed a wrinkle table of various facial expressions from 400 people. As one application, we demonstrated through a quantitative evaluation that the expression-pose synthesis method is suitable for expression-invariant face recognition, and showed its effectiveness through a qualitative evaluation. We expect our system to benefit various fields such as face recognition, HCI, and data augmentation for deep learning.
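The blendshape/FACS manipulation mentioned above can be illustrated with a short, hedged sketch. The following NumPy example applies a standard linear blendshape model, V = V0 + Σ_k w_k (B_k − V0); the geometry, the number of action units, and the weights are placeholders, and the paper's contour fitting and exemplar-texture wrinkle synthesis are not reproduced.

```python
# Minimal NumPy sketch of linear blendshape manipulation (an illustration, not
# the paper's pipeline: geometry, number of action units, and weights are
# placeholders; contour fitting and wrinkle-table synthesis are omitted).
import numpy as np


def synthesize_expression(neutral: np.ndarray,
                          blendshapes: np.ndarray,
                          weights: np.ndarray) -> np.ndarray:
    """Linear blendshape model: V = V0 + sum_k w_k * (B_k - V0).

    neutral     : (N, 3) vertices of the fitted neutral 3D face
    blendshapes : (K, N, 3) per action-unit target shapes (FACS-style basis)
    weights     : (K,) activation of each action unit, typically in [0, 1]
    """
    deltas = blendshapes - neutral[None, :, :]            # per-AU displacements
    return neutral + np.tensordot(weights, deltas, axes=1)


# Toy usage with random geometry standing in for a face fitted from one photo.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(5000, 3))
blendshapes = neutral[None] + 0.01 * rng.normal(size=(3, 5000, 3))
smile = synthesize_expression(neutral, blendshapes, np.array([0.8, 0.0, 0.2]))
print(smile.shape)  # (5000, 3)
```

The wrinkle synthesis step described in the abstract would then add expression-specific texture detail, drawn from the wrinkle table, on top of this coarse geometric deformation.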

