A system for real-time synthesis of subtle expressivity for life-like MPEG-4 based virtual characters

Author(s):  
C. Bonamico ◽  
C. Braccini ◽  
F. Lavagetto ◽  
M. Costa
2006 ◽  
Vol 5 (2) ◽  
pp. 25-30 ◽  
Author(s):  
Christian Knöpfle ◽  
Yvonne Jung

In this paper, we explain our approach to creating and animating virtual characters for real-time rendering applications in an easy and intuitive way. Furthermore, we show how to develop interactive storylines for such real-time environments involving the created characters. We outline useful extensions for character animation based on the VRML97 and X3D standards and describe how to incorporate commercial tools for an optimized workflow. These results were developed within the Virtual Human project; an overview of the project is included in this paper.


2010 ◽  
Vol 2010 ◽  
pp. 1-12 ◽  
Author(s):  
Li Zhang ◽  
John Barnden

We report our developments on metaphor and affect sensing for several metaphorical language phenomena, including the affects-as-external-entities metaphor, food metaphor, animal metaphor, size metaphor, and anger metaphor. The metaphor and affect sensing component has been embedded in a conversational intelligent agent that interacts with human users under loose scenarios. An evaluation of the detection of several metaphorical language phenomena and affect is provided. Our paper contributes to the journal themes of believable virtual characters in real-time narrative environments, narrative in digital games, and storytelling and educational gaming with social software.
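As a toy illustration only (not the authors' component), a metaphor-category sensor of the kind described above could start from lexical cues for each listed phenomenon; the cue lists and category names below are invented for the sketch:

```python
# Toy sketch of cue-based metaphor category detection.
# Categories follow the abstract; all cue phrases are illustrative.
CUES = {
    "animal metaphor": ["you pig", "such a rat"],
    "food metaphor": ["piece of cake", "you are toast"],
    "size metaphor": ["huge deal", "tiny problem"],
}

def detect_metaphor(text):
    """Return the categories whose cue phrases occur in the text
    (crude substring matching; a real sensor would use parsing
    and semantic resources rather than fixed phrases)."""
    lowered = text.lower()
    return [category for category, cues in CUES.items()
            if any(cue in lowered for cue in cues)]
```

A real system would combine such surface cues with syntactic and semantic analysis; the sketch only shows the category-labeling interface.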


2020 ◽  
Vol 6 (4) ◽  
pp. 43-54 ◽  
Author(s):  
Martin Klesen ◽  
Patrick Gebhard

In this paper we report on the use of computer-generated affect to control the body and mind of cognitively modeled virtual characters. We use the computational affect model ALMA, which is able to simulate three different affect types in real time. The computation of affect is based on a novel appraisal language. Both the use of elements of the appraisal language and the simulation of the different affect types have been evaluated. Affect is used to control facial expressions, facial complexions, affective animations, posture, and idle behavior on the body layer, and the selection of dialogue strategies on the mind layer. To enable fine-grained control of these aspects, a Player Markup Language (PML) has been developed. PML is player-independent and allows sophisticated control of character actions coordinated by high-level temporal constraints. An Action Encoder module maps the output of ALMA to PML actions using affect display rules. These actions drive the real-time rendering of affect, gesture, and speech parameters of virtual characters, which we call Virtual Humans.
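The mapping from computed affect to character actions could be sketched as follows; this is a minimal illustration of the display-rule idea, not the actual Action Encoder or PML, and every rule, action name, and threshold is an assumption:

```python
# Illustrative display rules: (affect type, minimum intensity) -> body-layer
# actions. Ordered from strongest to weakest so the first match wins.
DISPLAY_RULES = [
    ("joy", 0.7, ["smile_broad", "posture_open", "gesture_expansive"]),
    ("joy", 0.3, ["smile_slight"]),
    ("distress", 0.5, ["frown", "posture_closed"]),
]

def encode_actions(affect_type, intensity):
    """Map an affect reading to actions via the first matching rule,
    falling back to idle behavior when no rule fires."""
    for rule_type, threshold, actions in DISPLAY_RULES:
        if rule_type == affect_type and intensity >= threshold:
            return actions
    return ["idle"]
```

In the described system this mapping produces PML actions coordinated by temporal constraints; the sketch only shows the rule-lookup step.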


2011 ◽  
Vol 2 (1) ◽  
pp. 1
Author(s):  
Roberto Cesar Cavalcante Vieira ◽  
Creto Vidal ◽  
Joaquim Bento Cavalcante-Neto

Three-dimensional virtual creatures are active actors in many types of applications nowadays, such as virtual reality, games, and computer animation. The virtual actors encountered in those applications are very diverse, but usually have humanlike behavior and facial expressions. This paper deals with the mapping of facial expressions between virtual characters, based on anthropometric proportions and geometric manipulations that move influence zones. The facial proportions of a base model are used to transfer expressions to any other model with similar global characteristics (if the base model is human, for instance, the other models need to have two eyes, one nose, and one mouth). With this solution, it is possible to insert new virtual characters into real-time applications without having to go through the tedious process of customizing the characters' emotions.
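A proportion-based transfer of this kind could be sketched as below. This is not the authors' implementation; it assumes a single reference length per feature (e.g. mouth width) and a spherical influence zone with linear falloff, both of which are simplifications:

```python
def transfer_displacement(base_disp, base_ref_length, target_ref_length):
    """Rescale a 3D landmark displacement measured on the base face by the
    ratio of anthropometric reference lengths, so the same expression
    fits a target face with different proportions."""
    scale = target_ref_length / base_ref_length
    return tuple(d * scale for d in base_disp)

def apply_in_zone(vertices, center, radius, disp):
    """Apply a displacement to all vertices inside a spherical influence
    zone, attenuated linearly from full effect at the center to none
    at the zone boundary."""
    moved = []
    for v in vertices:
        dist = sum((a - b) ** 2 for a, b in zip(v, center)) ** 0.5
        weight = max(0.0, 1.0 - dist / radius)
        moved.append(tuple(a + weight * d for a, d in zip(v, disp)))
    return moved
```

The key point of the approach survives even in this toy form: the expression is stored once on the base model, and only the scaling factor depends on the target character.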


2007 ◽  
Vol 6 (4) ◽  
pp. 31-42 ◽  
Author(s):  
Markus Löckelt ◽  
Norbert Pfleger ◽  
Norbert Reithinger

The interactive scenarios realized in the two prototypes of Virtual Human require an approach that allows humans and virtual characters to interact naturally and flexibly. In this article we present how the autonomous control of the virtual characters and the interpretation of user interactions are realized in the Conversational Dialogue Engine (CDE) framework. One CDE is responsible for dialogue processing for each virtual and real interlocutor. We introduce the knowledge needed for the CDE approach and present the modules of a CDE. The real-time requirement led to the integration of deliberative and reactive processing, which is needed, e.g., to generate appropriate nonverbal behavior for virtual characters.
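The one-engine-per-interlocutor idea with interleaved reactive and deliberative processing might be sketched like this; the class and method names are illustrative, not the CDE API:

```python
class DialogueEngine:
    """One engine per interlocutor (real or virtual), combining a fast
    reactive path (e.g. nonverbal feedback) with a slower deliberative
    path (interpretation and response planning)."""

    def __init__(self, interlocutor):
        self.interlocutor = interlocutor

    def reactive(self, event):
        # Cheap, immediate nonverbal reaction to any incoming event.
        return f"{self.interlocutor}: nod ({event})"

    def deliberative(self, utterance):
        # Slower interpretation and planned verbal response.
        return f"{self.interlocutor}: reply to '{utterance}'"

    def process(self, event, is_utterance):
        # The reactive path always fires; deliberation is triggered
        # only for events that carry an utterance.
        acts = [self.reactive(event)]
        if is_utterance:
            acts.append(self.deliberative(event))
        return acts

# One engine per interlocutor, as in the framework described above.
engines = {name: DialogueEngine(name) for name in ("user", "agentA")}
```

The split lets a character keep nodding and gazing in real time even while a full interpretation of the user's utterance is still in progress.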

