ANIMATING AND RENDERING VIRTUAL HUMANS

Author(s):
Nadia Magnenat-Thalmann ◽
Daniel Thalmann


Extending X3D for Real Time Rendering and Animation of Virtual Characters

2006 ◽
Vol 5 (2) ◽
pp. 25-30 ◽
Author(s):
Christian Knöpfle ◽
Yvonne Jung

In this paper, we explain our approach to creating and animating virtual characters for real-time rendering applications in an easy and intuitive way. Furthermore, we show how to develop interactive storylines for such real-time environments involving the created characters. We outline useful extensions for character animation based on the VRML97 and X3D standards, and describe how to incorporate commercial tools for an optimized workflow. These results were developed within the Virtual Human project; an overview of the project is included in this paper.
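The abstract itself contains no code; as a loose illustration of the kind of skeletal character animation that X3D/H-Anim-style extensions address, the following is a minimal Python sketch of a hierarchical joint update. All names (Joint, world_transform, the joint offsets) are hypothetical and not taken from the paper.

    import numpy as np

    class Joint:
        """A skeleton joint, a hypothetical stand-in for an H-Anim-style node."""
        def __init__(self, name, offset, parent=None):
            self.name = name
            self.offset = np.asarray(offset, dtype=float)  # translation from parent
            self.rotation = np.eye(3)                      # local animated rotation
            self.parent = parent

        def world_transform(self):
            """Compose local transforms up the hierarchy (root first)."""
            local = np.eye(4)
            local[:3, :3] = self.rotation
            local[:3, 3] = self.offset
            if self.parent is None:
                return local
            return self.parent.world_transform() @ local

    # Tiny two-joint chain: an upper arm rotating about the shoulder.
    shoulder = Joint("shoulder", offset=[0.0, 1.5, 0.0])
    elbow = Joint("elbow", offset=[0.0, -0.3, 0.0], parent=shoulder)

    theta = np.pi / 4  # animate the shoulder by 45 degrees about z
    shoulder.rotation = np.array([
        [np.cos(theta), -np.sin(theta), 0.0],
        [np.sin(theta),  np.cos(theta), 0.0],
        [0.0,            0.0,           1.0],
    ])

    print(elbow.world_transform()[:3, 3])  # world position of the elbow

An X3D scene graph evaluates essentially this parent-to-child composition each frame, which is why per-joint animation data is sufficient to drive a whole character.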


2006 ◽  
Vol 5 (2) ◽  
pp. 15-24 ◽  
Author(s):  
Nadia Magnenat-Thalmann ◽  
Arjan Egges

In this paper, we present an overview of existing research in the vast area of IVH systems. We also present our ongoing work on improving the expressive capabilities of IVHs. Because of the complexity of interaction, a high level of control is required over the face and body motions of the virtual humans. In order to achieve this, current approaches try to generate face and body motions from a high-level description. Although this indeed allows for precise control over the movement of the virtual human, it is difficult to generate a natural-looking motion from such a high-level description. Another problem that arises when animating IVHs is that motions are not generated all the time. Therefore, a flexible animation scheme is required that ensures a natural posture even when no animation is playing. We present MIRAnim, our animation engine, which uses a combination of motion synthesis from motion capture and a statistical analysis of prerecorded motion clips. As opposed to existing approaches that create new motions with limited flexibility, our model adapts existing motions by automatically adding dependent joint motions. This renders the animation more natural, and since our model does not impose any conditions on the input motion, it can be linked easily with existing gesture synthesis techniques for IVHs. Because we use a linear representation for joint orientations, blending and interpolation are done very efficiently, resulting in an animation engine especially suitable for real-time applications.
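The abstract does not specify which linear orientation representation MIRAnim uses; as a minimal sketch, assuming an exponential-map (axis-angle) encoding, linear blending of joint orientations could look like the Python fragment below. The function and joint names are hypothetical, not the authors' API.

    import numpy as np

    def exp_map_to_matrix(v):
        """Rodrigues' formula: axis-angle vector -> 3x3 rotation matrix."""
        theta = np.linalg.norm(v)
        if theta < 1e-8:
            return np.eye(3)
        k = v / theta
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

    def blend(poses, weights):
        """Blend per-joint orientations linearly in the exponential-map domain.

        poses: list of {joint_name: axis-angle vector} dicts, one per clip.
        weights: blend weights summing to 1.
        """
        return {joint: sum(w * p[joint] for p, w in zip(poses, weights))
                for joint in poses[0]}

    # Two one-joint poses: idle (no rotation) and a 90-degree wave about z.
    idle = {"r_shoulder": np.zeros(3)}
    wave = {"r_shoulder": np.array([0.0, 0.0, np.pi / 2])}

    half = blend([idle, wave], [0.5, 0.5])          # simple linear blend
    print(exp_map_to_matrix(half["r_shoulder"]))    # ~45-degree rotation about z

The appeal of a linear domain is visible here: blending is a plain weighted sum per joint, with the nonlinear conversion to a rotation deferred to a single step at the end of the pipeline.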


2006 ◽  
Author(s):  
Manuel Peinado ◽  
Daniel Meziat ◽  
Ronan Boulic ◽  
Daniel Raunhardt

2010 ◽  
Vol 2010 ◽  
pp. 1-12 ◽  
Author(s):  
Li Zhang ◽  
John Barnden

We report our developments in metaphor and affect sensing for several metaphorical language phenomena, including the affect-as-external-entity, food, animal, size, and anger metaphors. The metaphor and affect sensing component has been embedded in a conversational intelligent agent that interacts with human users under loose scenarios. An evaluation of the detection of these metaphorical language phenomena and of affect is provided. Our paper contributes to the journal themes of believable virtual characters in real-time narrative environments, narrative in digital games and storytelling, and educational gaming with social software.
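The abstract does not describe the sensing algorithm itself; as a loose illustration of one common approach to this kind of task (a rule-based lexical cue matcher, not necessarily the authors' method), a minimal Python sketch follows. The cue lexicons and affect labels are invented for illustration.

    import re

    # Hypothetical cue lexicons; the authors' actual resources are not given here.
    METAPHOR_CUES = {
        "animal": {"pig", "snake", "rat", "chicken"},
        "food": {"toast", "cheesy", "sour", "bitter"},
        "size": {"giant", "tiny", "enormous", "microscopic"},
    }
    AFFECT_BY_TYPE = {"animal": "insulting", "food": "negative", "size": "intensifying"}

    def sense(utterance):
        """Return a (metaphor_type, affect) guess for an utterance, or None."""
        tokens = set(re.findall(r"[a-z']+", utterance.lower()))
        for mtype, cues in METAPHOR_CUES.items():
            if tokens & cues:
                return mtype, AFFECT_BY_TYPE[mtype]
        return None

    print(sense("You are such a pig!"))   # ('animal', 'insulting')
    print(sense("Hello there."))          # None

A conversational agent would typically run a detector like this over each user turn and route the inferred affect label into its dialogue and character-animation logic.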

