A 3D face animation system for mobile devices

2014 ◽ Vol 26 (1) ◽ pp. 11-18 ◽ Author(s): Engin Mendi

2011 ◽ pp. 317-340 ◽ Author(s): Zhen Wen, Pengyu Hong, Jilin Tu, Thomas S. Huang

This chapter presents a unified framework for machine-learning-based facial deformation modeling, analysis, and synthesis. It enables flexible, robust face motion analysis and natural synthesis based on a compact face motion model learned from motion capture data. This model, called Motion Units (MUs), captures the characteristics of real facial motion. The MU space can be used to constrain noisy low-level motion estimation for robust facial motion analysis. For synthesis, a face model can be deformed by adjusting the weights of the MUs. The weights can also serve as visual features for learning audio-to-visual mappings with neural networks, enabling real-time, speech-driven 3D face animation. Moreover, the framework includes parts-based MUs to account for the locality of facial motion, as well as an interpolation scheme that adapts MUs to arbitrary face geometry and mesh topology. Experiments show that the framework achieves natural face animation and robust non-rigid face tracking.
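
The core synthesis idea described above, deforming a face mesh as a weighted combination of learned Motion Units, can be sketched as a simple linear blend. The snippet below is an illustrative sketch only, not the authors' implementation; the array names, shapes, and toy data are assumptions for the example.

```python
# Sketch: deform a neutral face mesh with a weighted sum of Motion Units (MUs).
# All names/shapes are illustrative assumptions, not from the chapter.
import numpy as np

def deform_face(neutral_vertices, motion_units, weights):
    """Deform a neutral face mesh by a weighted combination of MUs.

    neutral_vertices : (N, 3) neutral mesh vertex positions.
    motion_units     : (K, N, 3) per-vertex displacement bases learned
                       from motion capture data.
    weights          : (K,) MU weights, e.g. produced by an audio-to-visual
                       mapping network or fitted during tracking.
    """
    displacement = np.tensordot(weights, motion_units, axes=1)  # -> (N, 3)
    return neutral_vertices + displacement

# Toy usage with random data standing in for a real mesh and learned MUs.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(500, 3))              # 500-vertex mesh
mus = rng.normal(scale=0.01, size=(7, 500, 3))   # 7 Motion Units
w = rng.normal(size=7)                           # weights from analysis/mapping
deformed = deform_face(neutral, mus, w)
```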


2014 ◽ Vol 2014 ◽ pp. 1-6 ◽ Author(s): Shuo Sun, Chunbao Ge

Animating expressive faces is a challenging topic in the graphics community. In this paper, we introduce a novel expression ratio image (ERI) driven framework based on SVR and MPEG-4 for automatic 3D facial expression animation. Using support vector regression (SVR), the framework learns and predicts the regression relationship between facial animation parameters (FAPs) and the parameters of the expression ratio image. First, we build a 3D face animation system driven by FAPs. Second, using principal component analysis (PCA), we generate the parameter sets of the eigen-ERI space, from which reasonable expression ratio images can be reconstructed. We then learn the mapping with support vector regression, so that facial animation parameters can be synthesized quickly from the eigen-ERI parameters. Finally, we implement our 3D face animation system driven by the resulting FAPs and show that it works effectively.
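
The pipeline described in the abstract (PCA over expression ratio images, then SVR from eigen-ERI parameters to FAPs) can be outlined with standard tools. The following is a minimal sketch assuming paired training data of flattened ERIs and FAP vectors; the scikit-learn pipeline, data shapes, and hyperparameters are assumptions for illustration, not the paper's actual setup.

```python
# Sketch of the learning stage: PCA projects ERIs into an eigen-ERI space,
# and SVR maps eigen-ERI parameters to MPEG-4 FAPs. Toy data is used below;
# shapes and parameters are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
eri_images = rng.random((200, 64 * 64))   # 200 flattened ERIs (toy data)
fap_vectors = rng.random((200, 68))       # corresponding FAP vectors (toy data)

# Step 1: compact eigen-ERI representation via PCA.
pca = PCA(n_components=20)
eigen_eri = pca.fit_transform(eri_images)

# Step 2: regression from eigen-ERI parameters to FAPs
# (one SVR per FAP dimension via MultiOutputRegressor).
model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0, epsilon=0.01))
model.fit(eigen_eri, fap_vectors)

# Step 3: at synthesis time, map a new ERI to FAPs that drive the 3D face model.
new_eri = rng.random((1, 64 * 64))
predicted_faps = model.predict(pca.transform(new_eri))
```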

