3D Face Animation Based on Raw Scanned Data of Real Individuals

Author(s):  
Liyan Zhang ◽  
Gregory M. Nielson
Keyword(s):  
3D Face

2011 ◽  
pp. 317-340
Author(s):  
Zhen Wen ◽  
Pengyu Hong ◽  
Jilin Tu ◽  
Thomas S. Huang

This chapter presents a unified framework for machine-learning-based facial deformation modeling, analysis, and synthesis. It enables flexible, robust face motion analysis and natural synthesis based on a compact face motion model learned from motion capture data. This model, called Motion Units (MUs), captures the characteristics of real facial motion. The MU space can be used to constrain noisy low-level motion estimation for robust facial motion analysis. For synthesis, a face model can be deformed by adjusting the weights of the MUs. The weights can also be used as visual features to learn audio-to-visual mapping with neural networks for real-time, speech-driven 3D face animation. Moreover, the framework includes parts-based MUs, which account for the locality of facial motion, and an interpolation scheme that adapts MUs to arbitrary face geometry and mesh topology. Experiments show that the framework achieves natural face animation and robust non-rigid face tracking.
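The MU-style deformation described above can be illustrated as a linear combination of learned displacement bases applied to a neutral mesh. The sketch below is hypothetical: the array shapes, the function name `deform_face`, and the single toy motion unit are illustrative assumptions, not the chapter's actual implementation.

```python
import numpy as np

def deform_face(neutral, motion_units, weights):
    """Deform a neutral face mesh as a weighted sum of motion units (MUs).

    neutral:      (V, 3) neutral vertex positions
    motion_units: (K, V, 3) learned per-vertex displacement bases
    weights:      (K,) MU weights, e.g. estimated by tracking or
                  predicted from audio features by a neural network
    """
    # contract the K axis: sum_k weights[k] * motion_units[k]
    return neutral + np.tensordot(weights, motion_units, axes=1)

# toy example: one MU that lifts every vertex by 1 unit along z,
# applied at half strength
neutral = np.zeros((4, 3))
mus = np.zeros((1, 4, 3))
mus[0, :, 2] = 1.0
deformed = deform_face(neutral, mus, np.array([0.5]))
```

In this formulation, facial motion analysis amounts to estimating the weight vector that best explains observed vertex motion, which is what lets the low-dimensional MU space regularize noisy low-level estimates.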


Author(s):  
Zhen Wen ◽  
Pengyu Hong ◽  
Jilin Tu ◽  
Thomas S. Huang

A synthetic human face is useful for visualizing information related to the human face. The applications include visual telecommunication (Aizawa & Huang, 1995), virtual environments and synthetic agents (Pandzic, Ostermann, & Millen, 1999), and computer-aided education.


2011 ◽  
pp. 266-294
Author(s):  
Gregor A. Kalberer ◽  
Pascal Müller ◽  
Luc Van Gool

The problem of realistic face animation is a difficult one. This difficulty hampers further breakthroughs in several high-tech domains, such as special effects in movies, the use of 3D face models in communications, the use of avatars and likenesses in virtual reality, and the production of games with more subtle scenarios. This work attempts to improve on the current state of the art in face animation, especially for the creation of highly realistic lip and speech-related motions. To that end, 3D models of faces are used, and speech-related 3D face motion is learned from examples using the latest technology. The chapter thus contributes to the growing field of image-based modeling and widens its scope to include animation. The exploitation of detailed 3D motion sequences is quite unique, narrowing the gap between modeling and animation. From measured 3D face deformations around the mouth area, typical motions are extracted for different "visemes". Visemes are the basic motion patterns observed during speech and are comparable to the phonemes of auditory speech. The visemes are studied in sufficient detail to also cover natural variations and differences between individuals. Furthermore, the transition between visemes is analyzed in terms of co-articulation effects, i.e., the visual blending of visemes required for fluent, natural speech. The work presented in this chapter also encompasses the animation of faces for which no visemes have been observed and extracted. The "transplantation" of visemes to novel faces, for which no viseme data have been recorded and only a static 3D model is available, allows faces to be animated without an extensive learning procedure for each individual.
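The co-articulation idea above, visually blending one viseme into the next, can be sketched as a cross-fade between per-vertex displacement fields. This is a deliberately minimal assumption: a linear fade with a hypothetical `blend_visemes` helper, whereas the chapter analyzes richer co-articulation effects learned from data.

```python
import numpy as np

def blend_visemes(viseme_a, viseme_b, t):
    """Cross-fade between two viseme displacement fields of shape (V, 3).

    t is the transition parameter in [0, 1]: 0 yields viseme_a,
    1 yields viseme_b, intermediate values blend the two.
    """
    t = float(np.clip(t, 0.0, 1.0))
    return (1.0 - t) * viseme_a + t * viseme_b

# toy example: an 'open mouth' viseme fading halfway toward neutral
open_mouth = np.full((5, 3), 2.0)
neutral = np.zeros((5, 3))
halfway = blend_visemes(open_mouth, neutral, 0.5)
```

In practice the transition curve would not be linear; co-articulation means a viseme's shape depends on its neighbors, so the blend weights themselves would be learned from the measured 3D sequences.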


2021 ◽  
Vol 7 (3) ◽  
pp. 209-219
Author(s):  
Iris J Holzleitner ◽  
Alex L Jones ◽  
Kieran J O’Shea ◽  
Rachel Cassar ◽  
Vanessa Fasolt ◽  
...  

Abstract

Objectives
A large literature exists investigating the extent to which physical characteristics (e.g., strength, weight, and height) can be accurately assessed from face images. While most of these studies have employed two-dimensional (2D) face images as stimuli, some recent studies have used three-dimensional (3D) face images because they may contain cues not visible in 2D face images. As equipment required for 3D face images is considerably more expensive than that required for 2D face images, we here investigated how perceptual ratings of physical characteristics from 2D and 3D face images compare.

Methods
We tested whether 3D face images capture cues of strength, weight, and height better than 2D face images do by directly comparing the accuracy of strength, weight, and height ratings of 182 2D and 3D face images taken simultaneously. Strength, height, and weight were rated by 66, 59, and 52 raters, respectively, who viewed both 2D and 3D images.

Results
In line with previous studies, we found that weight and height can be judged somewhat accurately from faces; contrary to previous research, we found that people were relatively inaccurate at assessing strength. We found no evidence that physical characteristics could be judged more accurately from 3D than 2D images.

Conclusion
Our results suggest physical characteristics are perceived with similar accuracy from 2D and 3D face images. They also suggest that the substantial costs associated with collecting 3D face scans may not be justified for research on the accuracy of facial judgments of physical characteristics.
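Accuracy in studies of this kind is typically quantified as the correlation between perceived ratings and the measured characteristic. The sketch below shows that measure with a plain Pearson correlation; the function name and the height data are made-up illustrations, not the study's actual data or analysis pipeline.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# illustrative (fabricated) data: measured height in cm per face vs the
# mean perceived height from raters; a high r means accurate judgments
measured = [160.0, 165.0, 170.0, 175.0, 180.0]
perceived = [162.0, 164.0, 171.0, 173.0, 181.0]
r = pearson_r(measured, perceived)
```

Comparing such correlations computed separately for the 2D and 3D stimulus sets is one straightforward way to test whether one image type supports more accurate judgments than the other.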

