MOTION DEFORMATION STYLE CONTROL TECHNIQUE FOR 3D HUMANOID CHARACTER BY USING MOCAP DATA

2015 ◽  
Vol 78 (2-2) ◽  
Author(s):  
Ismahafezi Ismail ◽  
Mohd Shahrizal Sunar ◽  
Hoshang Kolivand

Realistic humanoid 3D character movement is very important in computer games, movies, virtual reality and mixed reality environments. This paper presents a technique to deform motion style using Motion Capture (MoCap) data within a computer animation system. By using MoCap data, natural human action styles can be deformed. However, the hierarchical structure of the humanoid in MoCap data is very complex. This method allows a humanoid character to respond naturally based on user motion input. Unlike existing 3D humanoid character motion editors, our method produces realistic final results and simulates new dynamic humanoid motion styles through a simple user interface control.
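
As an illustration of the kind of style deformation the abstract describes, the sketch below (in Python, with illustrative names; the paper itself provides no code) scales each joint's deviation from a rest pose by a user-controlled gain over MoCap-style Euler-angle channels.

```python
# Minimal sketch of motion-style deformation on MoCap joint channels.
# Assumes Euler-angle rotation channels per joint (as in BVH files);
# the names and the simple "exaggeration" style control are illustrative only.
import numpy as np

def deform_motion_style(rotations, rest_pose, style_gain=1.5, joint_weights=None):
    """Scale each joint's deviation from its rest pose by a style gain.

    rotations     : (frames, joints, 3) Euler angles in degrees from MoCap data
    rest_pose     : (joints, 3) reference pose (e.g. the first frame or a T-pose)
    style_gain    : >1 exaggerates the motion, <1 dampens it
    joint_weights : optional (joints,) per-joint blend from the user interface
    """
    deviation = rotations - rest_pose                     # motion relative to the rest pose
    if joint_weights is None:
        joint_weights = np.ones(rotations.shape[1])
    gain = 1.0 + (style_gain - 1.0) * joint_weights       # per-joint effective gain
    return rest_pose + deviation * gain[None, :, None]

# Example: exaggerate a captured clip, but only on a few (hypothetical) arm joints.
frames, joints = 120, 20
mocap = np.random.uniform(-30, 30, size=(frames, joints, 3))  # stand-in for real MoCap data
rest = mocap[0]
weights = np.zeros(joints)
weights[12:16] = 1.0                                           # hypothetical arm joint indices
stylised = deform_motion_style(mocap, rest, style_gain=1.8, joint_weights=weights)
```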

2012 ◽  
Vol 472-475 ◽  
pp. 1357-1360
Author(s):  
Xu Min Liu ◽  
Xu Zhai

Real-time modeling and rendering of natural phenomena has been a hotspot and one of the most difficult tasks in computer graphics; it has found wide application in many domains such as computer animation, computer games, movie special effects, landscaping, battlefield simulation and virtual reality. Realistic simulation generally consists of simulating natural elements and man-made elements, and simulating natural elements is comparatively complicated. Among natural elements, trees are among the most complex to simulate. In this paper, we propose a method for real-time visualization of animated trees in the wind. Compared with previous studies, our work develops a physical model of realistic movement for tree-swaying animation. We describe a method, consistent with natural scenes, for branches moving in the wind, and then present a simple tree-swaying animation computed on the local graphics processor.
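
A minimal sketch of the sort of physical sway model the abstract refers to is shown below: each branch angle follows a damped oscillator driven by a gusting wind torque. The constants and the sinusoidal gust term are illustrative assumptions, not the authors' model; a real implementation would evaluate this per branch on the GPU.

```python
# Branch sway as a damped oscillator driven by a gusting wind term.
# All constants are illustrative; per-frame angles would typically be
# passed to a vertex/skinning shader to bend the branch geometry.
import math

def simulate_branch_sway(stiffness=4.0, damping=0.8, wind_strength=0.6,
                         gust_freq=0.5, dt=1.0 / 60.0, steps=600):
    angle, velocity = 0.0, 0.0          # sway angle (radians) and angular velocity
    trajectory = []
    for i in range(steps):
        t = i * dt
        wind = wind_strength * (1.0 + math.sin(2.0 * math.pi * gust_freq * t))  # gusting wind torque
        accel = wind - stiffness * angle - damping * velocity                   # restoring force + drag
        velocity += accel * dt
        angle += velocity * dt
        trajectory.append(angle)        # one sway angle per simulated frame
    return trajectory

angles = simulate_branch_sway()
print(f"max sway: {max(angles):.3f} rad")
```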


2018 ◽  
pp. 1780-1807
Author(s):  
Daniel Kade ◽  
Rikard Lindell ◽  
Hakan Ürey ◽  
Oğuzhan Özcan

Current and future animations seek more human-like motions to create believable animations for computer games, animated movies and commercial spots. A widely used technology is motion capture, which records actors' movements to enrich digital avatars' motions and emotions. However, a motion capture environment poses challenges to actors, such as short preparation times and the need to rely heavily on their acting and imagination skills. To support these actors, we developed a mixed reality application that shows digital environments while performing, allowing actors to see both the real and the virtual world. We tested our prototype with 6 traditionally trained theatre and TV actors. The actors indicated that our application supported them in getting into the required acting moods with fewer unwanted emotions. The acting scenario was also better understood, with less need for explanation, than when just discussing the scenario, as is commonly done in theatre acting.


2014 ◽  
Vol 5 (1) ◽  
pp. 1
Author(s):  
Marco Santos Souza ◽  
Aldo Wangenheim ◽  
Eros Comunello

Cloth simulation has many application areas, such as computer animation, computer games and virtual reality. Simulating cloth tearing is often a non-trivial process because it requires dynamically updating different cloth representations and data structures. For interactive animations, this must be performed as fast as possible. We present a comprehensive description of a technique that can be used to simulate cloth being torn or cut. Although simplistic and not physically accurate, it is fast and can provide visually pleasing results. It can also be easily adapted to work with nearly any cloth model. As an original contribution, we introduce an optimization of this technique using a specially adapted half-edge data structure. We have implemented the techniques described in this paper in a physics simulator developed for a garment CAD system. Our tests have shown fast and attractive results.
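
To make the half-edge idea concrete, here is a minimal sketch of tension-based tearing over a half-edge mesh. The HalfEdge layout and the simple stretch criterion are illustrative assumptions, not the authors' exact data structure or tearing rule.

```python
# Tension-based tearing sketch: unpairing twin half-edges turns one interior
# edge into two boundary edges, which is what visually opens a cut in the cloth.
import math
from dataclasses import dataclass

@dataclass
class HalfEdge:
    origin: int                          # index of the vertex this half-edge starts from
    twin: "HalfEdge | None" = None       # opposite half-edge across the shared edge
    rest_length: float = 1.0             # unstretched edge length

def edge_length(he, positions):
    return math.dist(positions[he.origin], positions[he.twin.origin])

def tear_overstretched_edges(half_edges, positions, stretch_limit=1.5):
    """Disconnect twin pointers of edges stretched past the limit."""
    torn = []
    for he in half_edges:
        if he.twin is not None and edge_length(he, positions) > stretch_limit * he.rest_length:
            he.twin.twin = None          # detach the opposite side first
            he.twin = None               # then this side; the edge is now a boundary
            torn.append(he)
    return torn

# Example: a single interior edge whose endpoints are pulled to twice the rest length.
a, b = HalfEdge(origin=0), HalfEdge(origin=1)
a.twin, b.twin = b, a
positions = {0: (0.0, 0.0), 1: (2.0, 0.0)}
print(len(tear_overstretched_edges([a, b], positions)))  # -> 1
```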


2003 ◽  
Author(s):  
David Walshe ◽  
Elizabeth Lewis ◽  
Kathleen O'Sullivan ◽  
Brenda K. Wiederhold ◽  
Sun I. Kim

Author(s):  
S Leinster-Evans ◽  
J Newell ◽  
S Luck

This paper looks to expand on the INEC 2016 paper 'The future role of virtual reality within warship support solutions for the Queen Elizabeth Class aircraft carriers' presented by Ross Basketter, Craig Birchmore and Abbi Fisher from BAE Systems in May 2016, and the EAAW VII paper 'Testing the boundaries of virtual reality within ship support' presented by John Newell from BAE Systems and Simon Luck from BMT DSL in June 2017. BAE Systems and BMT have developed a 3D walkthrough training system that supports the teams working closely with the QEC Aircraft Carriers in Portsmouth, and this work was presented at EAAW VII. Since then the work has been extended to demonstrate the art of the possible on Type 26. This latter piece of work is designed to explore the role of 3D immersive environments in the development and fielding of support and training solutions, across the range of support disciplines. The combined team is looking at how this digital thread leads from the design of platforms, both surface and subsurface, through build and into in-service support and training. The paper proposes how this rich data could be used across the whole lifecycle of the ship, from design and development (for spatial acceptance, HazID, etc.) through to operational support and maintenance (in conjunction with big data coming off the ship, coupled with digital technical documents for maintenance procedures), using constantly developing technologies such as 3D, virtual reality, augmented reality and mixed reality. The drive towards gamification in the training environment, to keep younger recruits interested and to shorten course lengths, is also explored. The paper develops the options and looks at how this technology can be used and where the value proposition lies.


Impact ◽  
2020 ◽  
Vol 2020 (2) ◽  
pp. 9-11
Author(s):  
Tomohiro Fukuda

Mixed reality (MR) is rapidly becoming a vital tool, not just in gaming, but also in education, medicine, construction and environmental management. The term refers to systems in which computer-generated content is superimposed over objects in a real-world environment across one or more sensory modalities. Although most of us have heard of the use of MR in computer games, it also has applications in military and aviation training, as well as tourism, healthcare and more. In addition, it has potential for use in architecture and design, where buildings can be superimposed on existing locations to render 3D visualisations of plans. However, one major challenge that remains in MR development is the issue of real-time occlusion: hiding 3D virtual objects behind real objects. Dr Tomohiro Fukuda, who is based at the Division of Sustainable Energy and Environmental Engineering, Graduate School of Engineering at Osaka University in Japan, is an expert in this field. Researchers led by Dr Fukuda are tackling the issue of occlusion in MR. They are currently developing an MR system that achieves real-time occlusion by harnessing deep learning for outdoor landscape design simulation using a semantic segmentation technique. This methodology can be used to automatically estimate the visual environment before and after construction projects.
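
A rough sketch of how a semantic-segmentation mask can drive real-time occlusion when compositing virtual content over a camera frame is given below. The class labels and the pre-computed mask are illustrative assumptions; the actual system runs a deep segmentation network on the live video feed.

```python
# Occlusion handling with a semantic-segmentation mask: virtual pixels are
# suppressed wherever the segmenter labels the real pixel as an occluding class.
import numpy as np

def composite_with_occlusion(camera_rgb, virtual_rgba, class_map, occluder_classes):
    """Blend a rendered virtual layer over the camera image with occlusion.

    camera_rgb       : (H, W, 3) real-world frame
    virtual_rgba     : (H, W, 4) rendered virtual object with alpha channel
    class_map        : (H, W) per-pixel semantic label from the segmentation network
    occluder_classes : labels (e.g. existing trees, buildings) that hide virtual content
    """
    occluded = np.isin(class_map, occluder_classes)       # True where a real object is in front
    alpha = virtual_rgba[..., 3:4] / 255.0
    alpha[occluded] = 0.0                                  # drop virtual pixels behind occluders
    return (camera_rgb * (1.0 - alpha) + virtual_rgba[..., :3] * alpha).astype(np.uint8)

# Example with random stand-in data (a real system would use live frames and network output).
H, W = 120, 160
frame = np.random.randint(0, 255, (H, W, 3), dtype=np.uint8)
render = np.random.randint(0, 255, (H, W, 4), dtype=np.uint8)
labels = np.random.randint(0, 5, (H, W))
out = composite_with_occlusion(frame, render, labels, occluder_classes=[1, 2])
```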

