Dynamic 3D facial expression modeling using Laplacian smooth and multi-scale mesh matching

2014 ◽ Vol 30 (6-8) ◽ pp. 649-659 ◽ Author(s): Jing Chi, Changhe Tu, Caiming Zhang

2018 ◽ Vol 21 (4) ◽ pp. 287 ◽ Author(s): Xiaofeng Liu, Bin Hu, Xiangwei Zheng, Xiaowei Li

2016 ◽ Vol 84 ◽ pp. 94-98 ◽ Author(s): Priya Saha, Debotosh Bhattacharjee, Barin Kumar De, Mita Nasipuri

Sensors ◽ 2020 ◽ Vol 20 (9) ◽ pp. 2578 ◽ Author(s): Yu-Jin Hong, Sung Eun Choi, Gi Pyo Nam, Heeseung Choi, Junghyun Cho, ...

Facial expressions are one of the important non-verbal cues used to understand human emotions during communication. Thus, acquiring and reproducing facial expressions is helpful for analyzing human emotional states. However, owing to complex and subtle facial muscle movements, modeling facial expressions from images taken under non-frontal face poses is difficult. To handle this issue, we present a method for acquiring facial expressions from a single non-frontal photograph using a 3D-aided approach. In addition, we propose a contour-fitting method that improves modeling accuracy by automatically rearranging the 3D contour landmarks corresponding to fixed 2D image landmarks. The acquired facial expression can then be parametrically manipulated to create various facial expressions through blendshapes or expression transfer based on the FACS (Facial Action Coding System). To achieve realistic facial expression synthesis, we propose an exemplar-texture wrinkle synthesis method that extracts and synthesizes expression wrinkles appropriate to the target expression; to this end, we constructed a wrinkle table of various facial expressions from 400 people. As one application, we showed through a quantitative evaluation that the expression-pose synthesis method is suitable for expression-invariant face recognition, and demonstrated its effectiveness in a qualitative evaluation. We expect our system to benefit various fields such as face recognition, HCI, and data augmentation for deep learning.
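As a rough illustration of the blendshape manipulation mentioned in the abstract, the Python/NumPy sketch below forms a new expression as the neutral mesh plus a weighted sum of per-target displacement fields. The FACS-style target names, the toy mesh, and the weight values are hypothetical and do not reflect the authors' actual rig.

```python
import numpy as np

def apply_blendshapes(neutral, deltas, weights):
    """neutral : (V, 3) array of neutral-face vertex positions.
    deltas  : dict mapping target name -> (V, 3) displacement from neutral.
    weights : dict mapping target name -> activation in [0, 1].
    Returns the deformed (V, 3) mesh as neutral plus weighted displacements."""
    mesh = neutral.astype(float).copy()
    for name, w in weights.items():
        mesh += w * deltas[name]
    return mesh

# Hypothetical usage: combine a half-activated lip-corner puller (AU12)
# with a slight inner-brow raiser (AU1) on a toy 1000-vertex mesh.
V = 1000
rng = np.random.default_rng(0)
neutral = rng.standard_normal((V, 3))
deltas = {
    "AU12_lip_corner_puller": rng.standard_normal((V, 3)) * 0.01,
    "AU1_inner_brow_raiser":  rng.standard_normal((V, 3)) * 0.01,
}
expression = apply_blendshapes(neutral, deltas,
                               {"AU12_lip_corner_puller": 0.5,
                                "AU1_inner_brow_raiser": 0.2})
```

In a real system each displacement target would come from a registered 3D face model rather than random noise, and the weights would be driven by the estimated FACS action-unit activations.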


2014 ◽ Vol 511-512 ◽ pp. 437-440 ◽ Author(s): Xiao Xiao Xia, Zi Lu Ying, Wen Jin Chu

A new method based on Monogenic Binary Coding (MBC) is proposed for facial expression feature extraction and representation. First, monogenic signal analysis is used to extract multi-scale magnitude, orientation, and phase components. Second, MBC is used to encode the monogenic local variation and intensity in local regions of each extracted component at each scale, and local histograms are built. Blocked Fisher Linear Discrimination (BFLD) is then used to reduce the dimensionality of the histogram features and to enhance discrimination. Finally, the three complementary components are fused for more effective facial expression recognition (FER). Experimental results on the Japanese Female Facial Expression database (JAFFE) show that the fusion method outperforms state-of-the-art local-feature-based FER methods such as Local Binary Pattern (LBP) + Sparse Representation (SRC) and Local Phase Quantization (LPQ) + SRC.
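The pipeline in this abstract can be illustrated with a rough Python/NumPy sketch. The version below makes several simplifying assumptions: the monogenic signal is computed with a log-Gabor filter plus a frequency-domain Riesz transform, a plain 8-neighbour LBP stands in for the actual MBC encoder, and the BFLD projection and component fusion are only indicated in comments. All function names, scales, and block sizes are illustrative, not the authors' implementation.

```python
import numpy as np

def log_gabor_riesz(img, wavelength, sigma_on_f=0.55):
    """Band-pass the image with a log-Gabor filter and apply the Riesz
    transform in the frequency domain; returns the even (band-passed)
    part and the two odd (Riesz) parts of the monogenic signal."""
    rows, cols = img.shape
    u, v = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    radius = np.sqrt(u ** 2 + v ** 2)
    radius[0, 0] = 1.0                        # avoid log/divide-by-zero at DC
    log_gabor = np.exp(-(np.log(radius * wavelength)) ** 2
                       / (2 * np.log(sigma_on_f) ** 2))
    log_gabor[0, 0] = 0.0                     # remove the DC component
    h1, h2 = 1j * u / radius, 1j * v / radius # Riesz transform kernels
    f = np.fft.fft2(img)
    even = np.real(np.fft.ifft2(f * log_gabor))
    odd1 = np.real(np.fft.ifft2(f * log_gabor * h1))
    odd2 = np.real(np.fft.ifft2(f * log_gabor * h2))
    return even, odd1, odd2

def monogenic_components(img, wavelengths=(4, 8, 16)):
    """Multi-scale magnitude, orientation and phase of the monogenic signal."""
    comps = []
    for wl in wavelengths:
        even, odd1, odd2 = log_gabor_riesz(img, wl)
        magnitude = np.sqrt(even ** 2 + odd1 ** 2 + odd2 ** 2)
        orientation = np.arctan2(odd2, odd1)
        phase = np.arctan2(np.sqrt(odd1 ** 2 + odd2 ** 2), even)
        comps.append((magnitude, orientation, phase))
    return comps

def binary_code(channel):
    """Plain 8-neighbour LBP, used here as a stand-in for the MBC encoder."""
    h, w = channel.shape
    centre = channel[1:-1, 1:-1]
    code = np.zeros(centre.shape, dtype=np.int32)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = channel[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code += (neighbour >= centre).astype(np.int32) * (1 << bit)
    return code

def block_histograms(code_map, n_blocks=4, n_bins=256):
    """Normalised per-block histograms of an 8-bit code map, concatenated."""
    feats = []
    for rows in np.array_split(np.arange(code_map.shape[0]), n_blocks):
        for cols in np.array_split(np.arange(code_map.shape[1]), n_blocks):
            hist, _ = np.histogram(code_map[np.ix_(rows, cols)],
                                   bins=n_bins, range=(0, n_bins))
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

def fused_descriptor(face):
    """Concatenate block histograms of the coded magnitude, orientation and
    phase maps over all scales; BFLD projection and the weighted fusion of
    the three components are omitted for brevity."""
    feats = []
    for magnitude, orientation, phase in monogenic_components(face):
        for component in (magnitude, orientation, phase):
            feats.append(block_histograms(binary_code(component)))
    return np.concatenate(feats)

# Hypothetical usage on a grayscale face crop (2-D float array):
# descriptor = fused_descriptor(np.asarray(face_img, dtype=float))
```

In the actual method, each block histogram would be projected with BFLD before the magnitude, orientation, and phase features are fused; a standard LDA implementation (for example, scikit-learn's LinearDiscriminantAnalysis) could stand in for that step in an experiment.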

