OpenFACS: An Open Source FACS-Based 3D Face Animation System

Author(s):  
Vittorio Cuculo ◽  
Alessandro D’Amelio
2011 ◽  
pp. 317-340
Author(s):  
Zhen Wen ◽  
Pengyu Hong ◽  
Jilin Tu ◽  
Thomas S. Huang

This chapter presents a unified framework for machine-learning-based facial deformation modeling, analysis, and synthesis. It enables flexible, robust face motion analysis and natural synthesis, based on a compact face motion model learned from motion capture data. This model, called Motion Units (MUs), captures the characteristics of real facial motion. The MU space can be used to constrain noisy low-level motion estimation for robust facial motion analysis. For synthesis, a face model can be deformed by adjusting the weights of the MUs. The weights can also be used as visual features to learn audio-to-visual mappings with neural networks for real-time, speech-driven 3D face animation. Moreover, the framework includes parts-based MUs to account for the locality of facial motion, as well as an interpolation scheme that adapts MUs to arbitrary face geometry and mesh topology. Experiments show that the framework achieves natural face animation and robust non-rigid face tracking.
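The abstract describes MUs as a compact motion basis learned from motion capture, with synthesis performed by adjusting MU weights. A minimal sketch of that idea follows, assuming MUs behave like the top principal components of mocap displacement frames; all names, shapes, and the PCA choice are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

# Sketch of MU-style deformation: a neutral mesh is displaced by a weighted
# sum of learned Motion Units (here approximated as PCA components of
# motion-capture displacement data). Names and shapes are illustrative.

def learn_motion_units(displacements, n_units):
    """Learn MUs as top principal components of mocap displacements.

    displacements: (n_frames, 3 * n_vertices) per-frame offsets from the
    neutral face.
    """
    mean = displacements.mean(axis=0)
    centered = displacements - mean
    # SVD yields orthonormal motion directions ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_units]            # (3V,), (n_units, 3V)

def deform(neutral, mean, mus, weights):
    """Synthesize a face by adjusting MU weights, as the chapter describes."""
    offset = mean + weights @ mus        # linear combination of MUs
    return neutral + offset.reshape(-1, 3)

# Usage: 200 mocap frames of a 500-vertex face, reduced to 8 motion units.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 1500))
mean, mus = learn_motion_units(frames, n_units=8)
face = deform(np.zeros((500, 3)), mean, mus, rng.normal(size=8))
```

In this reading, the MU weight vector is exactly the low-dimensional feature the abstract mentions as input/output for the audio-to-visual neural network.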


2014 ◽  
Vol 2014 ◽  
pp. 1-6 ◽  
Author(s):  
Shuo Sun ◽  
Chunbao Ge

Generating expressive facial animation is a challenging topic in the graphics community. In this paper, we introduce a novel ERI (expression ratio image) driven framework, based on SVR and MPEG-4, for automatic 3D facial expression animation. Using support vector regression (SVR), the framework learns and predicts the regression relationship between facial animation parameters (FAPs) and the parameters of the expression ratio image. First, we build a 3D face animation system driven by FAPs. Second, using principal component analysis (PCA), we generate the parameter sets of the eigen-ERI space, from which reasonable expression ratio images can be reconstructed. We then learn a support vector regression mapping so that facial animation parameters can be synthesized quickly from the eigen-ERI parameters. Finally, we drive our 3D face animation system with the resulting FAPs and show that it works effectively.
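The pipeline in this abstract is a PCA compression of ERIs followed by an SVR regression onto FAPs. Below is a hedged sketch of that pipeline using scikit-learn; the data shapes, RBF kernel, and hyperparameters are assumptions for illustration, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Sketch: PCA compresses expression ratio images into eigen-ERI parameters,
# and SVR learns the regression from those parameters to MPEG-4 FAPs.

n_frames, n_pixels, n_faps = 300, 64 * 64, 68
eris = np.random.rand(n_frames, n_pixels)      # flattened ERIs (training set)
faps = np.random.rand(n_frames, n_faps)        # corresponding FAP vectors

pca = PCA(n_components=20).fit(eris)           # eigen-ERI space
eigen_params = pca.transform(eris)

# One SVR per FAP dimension; the RBF kernel is a common default, not a
# choice stated in the paper.
model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0)).fit(eigen_params, faps)

# At runtime: project a new ERI into eigen-ERI space and synthesize the FAPs
# that drive the 3D face animation system.
new_eri = np.random.rand(1, n_pixels)
predicted_faps = model.predict(pca.transform(new_eri))
```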


Author(s):  
Zhen Wen ◽  
Pengyu Hong ◽  
Jilin Tu ◽  
Thomas S. Huang

A synthetic human face is useful for visualizing face-related information. Applications include visual telecommunication (Aizawa & Huang, 1995), virtual environments and synthetic agents (Pandzic, Ostermann, & Millen, 1999), and computer-aided education.


2011 ◽  
pp. 266-294
Author(s):  
Gregor A. Kalberer ◽  
Pascal Müller ◽  
Luc Van Gool

The problem of realistic face animation is a difficult one, and it hampers further breakthroughs in several high-tech domains, such as special effects in movies, the use of 3D face models in communications, the use of avatars and likenesses in virtual reality, and the production of games with more subtle scenarios. This work attempts to improve on the current state of the art in face animation, especially for the creation of highly realistic lip and speech-related motions. To that end, 3D models of faces are used and, based on the latest technology, speech-related 3D face motion is learned from examples. Thus, the chapter belongs to the growing field of image-based modeling and widens its scope to include animation. The exploitation of detailed 3D motion sequences is distinctive, narrowing the gap between modeling and animation. From measured 3D face deformations around the mouth area, typical motions are extracted for different “visemes”. Visemes are the basic motion patterns observed in speech and are comparable to the phonemes of auditory speech. The visemes are studied in sufficient detail to also cover natural variations and differences between individuals. Furthermore, the transition between visemes is analyzed in terms of co-articulation effects, i.e., the visual blending of visemes required for fluent, natural speech. The work presented in this chapter also encompasses the animation of faces for which no visemes have been observed and extracted: the “transplantation” of visemes to novel faces, for which no viseme data have been recorded and only a static 3D model is available, allows such faces to be animated without an extensive learning procedure for each individual.
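The abstract's core mechanism, blending viseme motion patterns with co-articulation effects, can be sketched as overlapping temporal influence curves over per-viseme displacement fields. The cosine dominance function and all names below are assumptions chosen for illustration; the chapter's actual co-articulation model may differ.

```python
import numpy as np

# Sketch of viseme blending with co-articulation: each viseme is a 3D
# displacement field for the mouth region, and transitions are smoothed by
# overlapping bell-shaped dominance curves. All names are illustrative.

def dominance(t, center, width):
    """Bell-shaped influence of a viseme around its temporal center."""
    x = np.clip((t - center) / width, -1.0, 1.0)
    return 0.5 * (1.0 + np.cos(np.pi * x))

def blend_visemes(neutral, visemes, centers, width, t):
    """Co-articulated mouth shape at time t: normalized weighted sum."""
    w = np.array([dominance(t, c, width) for c in centers])
    w /= w.sum() + 1e-9
    offset = sum(wi * v for wi, v in zip(w, visemes))
    return neutral + offset

# Usage: three visemes over a 500-vertex mouth patch, sampled at t = 0.12 s.
rng = np.random.default_rng(1)
neutral = np.zeros((500, 3))
visemes = [rng.normal(scale=0.01, size=(500, 3)) for _ in range(3)]
frame = blend_visemes(neutral, visemes, centers=[0.0, 0.1, 0.2],
                      width=0.15, t=0.12)
```

Under this reading, “transplanting” visemes to a novel face amounts to mapping the learned displacement fields onto a new static 3D model instead of relearning them per individual.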


Author(s):  
Fadi P. Deek ◽  
James A. M. McHugh