Hierarchical Motion Controllers for Real-Time Autonomous Virtual Humans

Author(s): Marcelo Kallmann, Stacy Marsella
2007, Vol 30 (5), pp. 829-842

Author(s): Bing‐Fei Wu, Chao‐Jung Chen, Hsin‐Han Chiang, Hsin‐Yuan Peng, Jau‐Woei Perng, ...

Author(s): Nadia Magnenat-Thalmann, Daniel Thalmann
2006, Vol 5 (2), pp. 15-24
Author(s): Nadia Magnenat-Thalmann, Arjan Egges

In this paper, we present an overview of existing research in the broad area of interactive virtual human (IVH) systems. We also present our ongoing work on improving the expressive capabilities of IVHs. Because of the complexity of interaction, a high level of control is required over the face and body motions of the virtual humans. To achieve this, current approaches try to generate face and body motions from a high-level description. Although this allows precise control over the movement of the virtual human, it is difficult to generate natural-looking motion from such a high-level description. Another problem that arises when animating IVHs is that motions are not generated continuously, so a flexible animation scheme is required that ensures a natural posture even when no animation is playing. We present MIRAnim, our animation engine, which combines motion synthesis from motion capture with a statistical analysis of prerecorded motion clips. As opposed to existing approaches that create new motions with limited flexibility, our model adapts existing motions by automatically adding dependent joint motions. This renders the animation more natural, and since our model does not impose any conditions on the input motion, it can be linked easily with existing gesture synthesis techniques for IVHs. Because we use a linear representation for joint orientations, blending and interpolation are done very efficiently, resulting in an animation engine especially suitable for real-time applications.
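The efficiency claim in this abstract hinges on representing joint orientations linearly, so that blending reduces to a per-joint weighted sum of vectors. The following Python sketch is a minimal illustration of that idea using the exponential-map parameterization; the choice of parameterization, the function names, and the joint names are assumptions made for illustration and are not taken from the MIRAnim paper.

```python
import numpy as np

def log_map(q):
    # Unit quaternion (w, x, y, z) -> 3-D exponential-map (axis-angle) vector.
    w, v = q[0], q[1:]
    n = np.linalg.norm(v)
    if n < 1e-8:
        return np.zeros(3)
    angle = 2.0 * np.arctan2(n, w)
    return (angle / n) * v

def exp_map(e):
    # Exponential-map vector -> unit quaternion (w, x, y, z).
    angle = np.linalg.norm(e)
    if angle < 1e-8:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = e / angle
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

def blend_poses(pose_a, pose_b, w):
    # Because the representation is linear, blending is a plain weighted sum
    # per joint; no per-joint spherical interpolation is needed.
    return {j: (1.0 - w) * pose_a[j] + w * pose_b[j] for j in pose_a}

# Example: blend two orientations of a (hypothetical) shoulder joint halfway.
pose_a = {"l_shoulder": log_map(np.array([1.0, 0.0, 0.0, 0.0]))}        # identity
pose_b = {"l_shoulder": log_map(np.array([0.7071, 0.7071, 0.0, 0.0]))}  # 90 deg about x
halfway = blend_poses(pose_a, pose_b, 0.5)
print(exp_map(halfway["l_shoulder"]))  # ~45 deg rotation about x
```

Dependent joint motions, as described in the abstract, would then be layered on top of such a blended pose by a model learned from the prerecorded clips; that statistical component is omitted here.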


2021, Vol 11 (22), pp. 10713
Author(s): Dong-Gyu Lee

Autonomous driving is a safety-critical application that requires a high-level understanding of computer vision with real-time inference. In this study, we focus on computational efficiency, an important factor for practical applications, by improving running time while performing multiple tasks simultaneously. We propose a fast and accurate multi-task learning architecture for joint segmentation of the drivable area and lane lines and classification of the scene. An encoder-decoder architecture efficiently handles input frames through a shared representation, and a comprehensive understanding of the driving environment is improved by the generalization and regularization contributed by the different tasks. The proposed method is trained end-to-end through multi-task learning on the very challenging Berkeley DeepDrive dataset and shows its robustness across the three autonomous-driving tasks. Experimental results show that the proposed method outperforms other multi-task learning approaches in both speed and accuracy. The method runs at over 93.81 fps at inference, enabling real-time execution.
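To make the shared-representation idea concrete, the sketch below shows a minimal PyTorch multi-task network with one shared encoder, two segmentation decoders (drivable area and lane lines), and a global scene-classification head. The backbone, channel widths, class counts, and loss weighting are assumptions for illustration only; the abstract does not specify the architecture at this level of detail.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared-encoder multi-task network: drivable-area segmentation,
    lane-line segmentation, and scene classification from one representation."""

    def __init__(self, num_scene_classes=3):
        super().__init__()
        # Shared encoder (illustrative; not the paper's actual backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

        # Lightweight decoder that upsamples back to the input resolution.
        def make_decoder(out_channels):
            return nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
            )

        self.drivable_head = make_decoder(2)  # background vs. drivable area
        self.lane_head = make_decoder(2)      # background vs. lane line
        # Global head for the scene label (e.g. highway / city street / residential).
        self.scene_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_scene_classes),
        )

    def forward(self, x):
        shared = self.encoder(x)  # shared representation reused by all three tasks
        return self.drivable_head(shared), self.lane_head(shared), self.scene_head(shared)

# One forward pass; training would minimize a weighted sum of the three task losses.
model = MultiTaskNet()
frame = torch.randn(1, 3, 256, 256)
drivable_logits, lane_logits, scene_logits = model(frame)
```

Sharing the encoder is what keeps inference fast: the costly feature extraction runs once per frame, and only the small task heads run separately.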


Author(s): Yuto Otsuki, Blair Thornton, Toshihiro Maki, Yuya Nishida, Adrian Bodenmann, ...