Evaluating motion graphs for character animation

2007 ◽  
Vol 26 (4) ◽  
pp. 18 ◽  
Author(s):  
Paul S. A. Reitsma ◽  
Nancy S. Pollard

2021 ◽  
Vol 2 ◽  
Author(s):  
João Regateiro ◽  
Marco Volino ◽  
Adrian Hilton

This paper introduces Deep4D, a compact generative representation of shape and appearance learned from captured 4D volumetric video sequences of people. 4D volumetric video achieves highly realistic reproduction, replay and free-viewpoint rendering of actor performance from multiple-view video acquisition systems. A deep generative network is trained on 4D video sequences of an actor performing multiple motions to learn a generative model of the dynamic shape and appearance. We demonstrate that the proposed generative model provides a compact encoded representation capable of high-quality synthesis of 4D volumetric video with two orders of magnitude compression. A variational encoder-decoder network is employed to learn an encoded latent space that maps from 3D skeletal pose to 4D shape and appearance. This enables high-quality 4D volumetric video synthesis to be driven by skeletal motion, including skeletal motion capture data. The encoded latent space supports the representation of multiple sequences with dynamic interpolation to transition between motions. Building on this, we introduce Deep4D motion graphs, a direct application of the proposed generative representation. Deep4D motion graphs allow real-time interactive character animation whilst preserving the realism of movement and appearance from the captured volumetric video. They implicitly combine multiple captured motions in a unified representation for character animation from volumetric video, allowing novel character movements to be generated with dynamic shape and appearance detail.
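As a rough illustration of the pose-to-shape mapping described in the abstract, the sketch below shows a pose-conditioned variational encoder-decoder in PyTorch, followed by latent-space interpolation between two poses in the spirit of the motion-graph transitions. The layer sizes, vector dimensions and interpolation scheme are assumptions made for illustration only and are not the network described in the paper.

# Hypothetical sketch of a pose-conditioned variational encoder-decoder.
# All sizes and names are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class PoseToShapeVAE(nn.Module):
    def __init__(self, pose_dim=72, latent_dim=128, shape_dim=3 * 5000, tex_dim=3 * 1024):
        super().__init__()
        # Encoder: skeletal pose -> parameters of a Gaussian latent distribution.
        self.encoder = nn.Sequential(
            nn.Linear(pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        # Decoder: latent code -> flattened per-vertex shape and texture.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
        )
        self.to_shape = nn.Linear(512, shape_dim)
        self.to_texture = nn.Linear(512, tex_dim)

    def forward(self, pose):
        h = self.encoder(pose)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick: sample a latent code differentiably.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        d = self.decoder(z)
        return self.to_shape(d), self.to_texture(d), mu, logvar

# Latent-space interpolation between two poses, mimicking a dynamic
# transition between captured motions.
model = PoseToShapeVAE()
pose_a, pose_b = torch.randn(1, 72), torch.randn(1, 72)
with torch.no_grad():
    z_a = model.to_mu(model.encoder(pose_a))
    z_b = model.to_mu(model.encoder(pose_b))
    for t in torch.linspace(0, 1, steps=5):
        z = (1 - t) * z_a + t * z_b
        shape = model.to_shape(model.decoder(z))
        texture = model.to_texture(model.decoder(z))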


2015 ◽  
Vol 34 (2) ◽  
pp. 1-14 ◽  
Author(s):  
Peng Huang ◽  
Margara Tejera ◽  
John Collomosse ◽  
Adrian Hilton

1986 ◽  
Vol 17 (SI) ◽  
pp. 143-147 ◽  
Author(s):  
J. P. Lewis ◽  
F. I. Parke

2000 ◽  
Vol 38 (2) ◽  
pp. 69-70 ◽  
Author(s):  
David Groh

2006 ◽  
Vol 5 (2) ◽  
pp. 25-30 ◽  
Author(s):  
Christian Knöpfle ◽  
Yvonne Jung

In this paper, we explain our approach to creating and animating virtual characters for real-time rendering applications in an easy and intuitive way. Furthermore, we show how to develop interactive storylines for such real-time environments involving the created characters. We outline useful extensions for character animation based on the VRML97 and X3D standards and describe how to incorporate commercial tools for an optimized workflow. These results were developed within the Virtual Human project, and an overview of the project is included in this paper.
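The paper's authoring pipeline is not reproduced here, but as a purely hypothetical illustration of driving character animation from an interactive storyline, the Python sketch below models a storyline as a small finite-state machine whose nodes trigger named animation clips. The node names, clip names and the minimal API are invented for this example and are not taken from the Virtual Human project or the X3D standard.

# Hypothetical storyline-as-state-machine sketch; names and API are invented.
from dataclasses import dataclass, field

@dataclass
class StoryNode:
    animation: str                                 # animation clip to play on the character
    choices: dict = field(default_factory=dict)    # user input -> next node id

story = {
    "greeting": StoryNode("wave", {"ask": "explain", "leave": "farewell"}),
    "explain":  StoryNode("talk_gesture", {"leave": "farewell"}),
    "farewell": StoryNode("bow", {}),
}

def run(story, start="greeting"):
    node_id = start
    while True:
        node = story[node_id]
        print(f"character plays animation: {node.animation}")
        if not node.choices:
            break
        choice = input(f"choose one of {list(node.choices)}: ").strip()
        # Unknown input keeps the story at the current node.
        node_id = node.choices.get(choice, node_id)

if __name__ == "__main__":
    run(story)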

