character animation
Recently Published Documents


TOTAL DOCUMENTS

338
(FIVE YEARS 55)

H-INDEX

21
(FIVE YEARS 3)

2022 ◽  
Vol 355 ◽  
pp. 03043
Author(s):  
Yushan Zhong ◽  
Yifan Jia ◽  
Liang Ma

To cultivate children’s imagination and creativity in the cognitive process, and drawing on the traditional hand-shadow game, a children’s gesture education game based on AI gesture recognition technology is designed and developed. Built on the Unity development platform and centered on children’s recognition of digit gestures, the game designs and implements its basic functions: AI gesture recognition, character animation, interface interaction, AR photo taking, and a question-answering system. The game is released on mobile devices: players perform gestures in front of the phone camera, interact with virtual cartoon characters in the game, watch cartoon character animations, pick up popular-science knowledge, and answer in-game questions. Such educational games can better assist children in learning digit gestures, enrich children’s ways of cognition, expand their imagination, and let them learn easily through happy play.
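The abstract does not publish the recognition model. As an illustrative sketch only, a simple digit-gesture classifier can count extended fingers from hand landmarks; the 21-point layout assumed here follows the MediaPipe Hands convention, and the right-hand thumb rule is a simplifying assumption, not the paper’s method.

```python
# Hypothetical sketch: classify a digit gesture (0-5) by counting extended
# fingers from 21 hand landmarks. MediaPipe-style landmark indexing is
# assumed; image coordinates, so y grows downward.

FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky fingertip indices
FINGER_PIPS = [6, 10, 14, 18]   # corresponding middle-joint (PIP) indices

def count_extended_fingers(landmarks):
    """landmarks: list of 21 (x, y) tuples. Returns the number of
    extended fingers, usable as a digit gesture 0-5."""
    count = 0
    for tip, pip in zip(FINGER_TIPS, FINGER_PIPS):
        # A finger is "extended" if its tip sits above its middle joint.
        if landmarks[tip][1] < landmarks[pip][1]:
            count += 1
    # Thumb heuristic for a right hand facing the camera: tip to the
    # right of the preceding joint means the thumb is out.
    if landmarks[4][0] > landmarks[3][0]:
        count += 1
    return count
```

A real system would feed camera frames through a hand-landmark detector first; this toy only shows the final classification step.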


2021 ◽  
Author(s):  
Ran Dong ◽  
Yangfei Lin ◽  
Qiong Chang ◽  
Junpei Zhong ◽  
Dongsheng Cai ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Weihua Zhang ◽  
Sang-Bing Tsai

In this paper, we design a corpus-based 3D animation digital media system to improve the accuracy of 3D animation generation and realize cross-platform animation display. The corpus module extracts high-precision data through web crawling, web cleaning, Chinese word segmentation, and text classification; the character animation generation module uses a semantic description method to expand the frame-level description of the extracted data, calculates 3D spatial coordinates for each object, and generates 3D character animation with a built-in animation execution script; the digital media player module uses an improved player to display the 3D character animations across platforms. By constructing multidimensional character relationships and combining multiple visualization methods, the system presents complex, many-sided social relationship networks to users in an intuitive and readily understandable form. Extensive user surveys show that the proposed visual analysis method, which combines real and virtual social data, provides a more adequate and reliable basis for friend recommendation and social network analysis; combining multiple character relationships with geographic information and using visualization to describe multidimensional historical character relationships provides a new perspective for research on humanistic neighborhoods. Experimental results show that the designed system can effectively read known content, extract keywords, and generate 3D animation from keyword features, with high accuracy, fast response times, a low frame-loss rate, and cross-platform animation display.
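The corpus module’s pipeline ends in keyword extraction that drives animation generation. A minimal, illustrative sketch of that final step (not the authors’ system — the stop-word list and frequency ranking here are stand-ins for their trained classifier):

```python
# Illustrative sketch: extract keywords from cleaned, segmented text by
# term frequency after stop-word removal -- the kind of output the
# corpus module's classification stage would hand to the animation
# generation module.

from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is"}  # toy list

def extract_keywords(text, top_k=3):
    """Return the top_k most frequent non-stop-word tokens."""
    tokens = [t.lower().strip(".,") for t in text.split()]
    tokens = [t for t in tokens if t and t not in STOP_WORDS]
    return [word for word, _ in Counter(tokens).most_common(top_k)]
```

For Chinese text, the `split()` call would be replaced by a proper word-segmentation step, as the abstract describes.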


2021 ◽  
Author(s):  
◽  
Christopher Dean

<p>Streamlining the editing of motion-capture data and keyframe character animation is a fundamental problem in the animation field. This paper explores a new method for editing character animation that uses a data-driven pose distance as a falloff to interpolate new poses seamlessly into the sequence. This pose distance is the measure given by Green’s function of the pose-space Laplacian. The falloff shape and timing extent are naturally suited to the skeleton’s range of motion, replacing the need for a manually customized falloff spline. This data-driven falloff is somewhat analogous to the difference between a generic spline and the “magic wand” selection in an image editor, but applied to the animation domain. It also supports powerful non-local edit propagation, in which edits are applied to all similar poses in the entire animation sequence.</p>
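The core idea — Green’s function of a pose-space Laplacian as a data-driven falloff — can be sketched in a few lines of NumPy. The Gaussian affinity graph and the normalization below are illustrative choices, not necessarily the paper’s exact construction:

```python
# Sketch of a data-driven falloff: build a weighted graph over pose
# feature vectors, form its graph Laplacian, and take the pseudoinverse
# as Green's function. An edit at one pose then falls off over the other
# poses according to the data, not a hand-tuned spline.

import numpy as np

def laplacian_greens_function(poses, sigma=1.0):
    """poses: (n, d) array of pose feature vectors. Returns (n, n) G."""
    d2 = ((poses[:, None, :] - poses[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))      # dense Gaussian affinity graph
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W               # graph Laplacian
    return np.linalg.pinv(L)                # Green's function (pseudoinverse)

def falloff_weights(G, edited_index):
    """Normalized falloff of an edit at one pose over all poses:
    1 at the edited pose, decaying toward dissimilar poses."""
    g = G[edited_index] - G[edited_index].min()
    return g / g.max() if g.max() > 0 else g
```

Poses similar to the edited pose receive high weight wherever they occur in the sequence, which is exactly the non-local edit propagation the abstract describes.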


2021 ◽  
Author(s):  
◽  
John Lewis

<p>Movie and game production is very laborious, frequently involving hundreds of person-years for a single project. At present this work is difficult to fully automate, since it involves subjective and artistic judgments. Broadly speaking, in this thesis we explore an approach that works with the artist, accelerating their work without attempting to replace them. More specifically, we describe an “example-based” approach, in which artists provide examples of the desired shapes of the character, and the results gradually improve as more examples are given. Since a character’s skin shape deforms as the pose or expression changes, our particular problem will be termed character deformation. The overall goal of this thesis is to contribute a complete investigation and development of an example-based approach to character deformation. A central observation guiding this research is that character animation can be formulated as a high-dimensional problem, rather than the two- or three-dimensional viewpoint that is commonly adopted in computer graphics. A second observation guiding our inquiry is that statistical learning concepts are relevant. We show that example-based character animation algorithms can be informed, developed, and improved using these observations. This thesis provides definitive surveys of example-based facial and body skin deformation. It analyzes the two leading families of example-based character deformation algorithms from the point of view of statistical regression, showing that a wide variety of existing tools in machine learning are applicable to our problem. We also identify several techniques that are not suitable due to the nature of the training data and the high-dimensional nature of this regression problem. We evaluate the design decisions underlying these example-based algorithms, thus providing the groundwork for a “best practice” choice of specific algorithms.
This thesis develops several new algorithms for accelerating example-based facial animation. The first algorithm allows unspecified degrees of freedom to be automatically determined based on the style of previous, completed animations. A second algorithm allows rapid editing and control of the process of transferring motion capture of a human actor to a computer graphics character.  The thesis identifies and develops several unpublished relations between the underlying mathematical techniques.  Lastly, the thesis provides novel tutorial derivations of several mathematical concepts, using only the linear algebra tools that are likely to be familiar to experts in computer graphics.  Portions of the research in this thesis have been published in eight papers, with two appearing in premier forums in the field.</p>
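The regression view of example-based deformation can be made concrete with the classic pose-space-deformation scheme: radial-basis-function interpolation from example poses to sculpted corrective offsets. The Gaussian kernel and function names below are illustrative, not the thesis’s specific algorithms:

```python
# Sketch of example-based deformation as regression: fit RBF weights so
# that each example pose exactly reproduces its sculpted offset, then
# evaluate the interpolant at novel poses.

import numpy as np

def fit_psd(example_poses, example_offsets, sigma=1.0):
    """example_poses: (n, d) pose features; example_offsets: (n, k)
    corrective offsets. Returns (n, k) RBF weights."""
    d2 = ((example_poses[:, None] - example_poses[None, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # kernel (Gram) matrix
    return np.linalg.solve(K, example_offsets)  # exact interpolation

def eval_psd(pose, example_poses, weights, sigma=1.0):
    """Blend the learned offsets for a novel pose."""
    d2 = ((example_poses - pose) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * sigma ** 2))
    return k @ weights
```

Seen this way, kernel choice, regularization, and the high dimensionality of the offsets are standard statistical-regression questions, which is the thesis’s central framing.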


2021 ◽  
Vol 30 (6) ◽  
pp. 1038-1048
Author(s):  
YIN Qinran ◽  
CAO Weiqun
Keyword(s):  

Animation ◽  
2021 ◽  
Vol 16 (3) ◽  
pp. 110-125
Author(s):  
Amy Skjerseth

There is a tendency in animation studies to discuss sound in the language of images, stressing sound’s alignment with visual cues (as in mickey mousing and leitmotifs). But sounds do not only mimic images: they add textures and emotions that change what we see. This article explores grain (texture) and timbre (tone color produced by specific instruments and techniques) as qualities shared by visual and sonic material. To do so, the author closely reads Sand or Peter and the Wolf (1969), where Caroline Leaf’s haptic sand animation is matched by Michael Riesman’s electroacoustic score. Leaf painstakingly molds animals by scraping away individual sand grains, and Riesman sculpts sonic textures with tiny adjustments to knobs and touch-sensitive pads on the Buchla modular synthesizer. Their collective improvisation with sands and sounds reveals new ways to think about artists’ material practices and the friction and interplay between images and sounds. They encourage spectators to perceive the animals as not merely plasmatic, or Sergei Eisenstein’s notion of contour-bending character animation. Instead, Leaf and Riesman deploy what the author calls ‘granular modulation’, expressing sand and animals with sensuous materiality. In Leaf’s and Riesman’s improvisations, grainy textures are the seeds of understanding how sound and vision become symbiotic – and encounter friction – in animation.


2021 ◽  
Vol 2 ◽  
Author(s):  
João Regateiro ◽  
Marco Volino ◽  
Adrian Hilton

This paper introduces Deep4D, a compact generative representation of shape and appearance learned from captured 4D volumetric video sequences of people. 4D volumetric video achieves highly realistic reproduction, replay, and free-viewpoint rendering of actor performance from multiple-view video acquisition systems. A deep generative network is trained on 4D video sequences of an actor performing multiple motions to learn a generative model of the dynamic shape and appearance. We demonstrate that the proposed generative model provides a compact encoded representation capable of high-quality synthesis of 4D volumetric video at two orders of magnitude compression. A variational encoder-decoder network is employed to learn an encoded latent space that maps from 3D skeletal pose to 4D shape and appearance. This enables high-quality 4D volumetric video synthesis to be driven by skeletal motion, including skeletal motion-capture data. The encoded latent space supports the representation of multiple sequences with dynamic interpolation to transition between motions. We therefore introduce Deep4D motion graphs, a direct application of the proposed generative representation. Deep4D motion graphs allow real-time interactive character animation whilst preserving the plausible realism of movement and appearance from the captured volumetric video. They implicitly combine multiple captured motions in a unified representation for character animation from volumetric video, allowing novel character movements to be generated with dynamic shape and appearance detail.
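The motion-graph transitions rest on one operation: blend two motions’ latent codes, then decode. A toy NumPy sketch of that operation — the random linear maps stand in for the learned variational encoder-decoder, which they do not approximate in any real sense:

```python
# Toy sketch of the latent-space transition behind Deep4D motion graphs:
# encode two motions' frames, blend their latent codes, decode the blend.
# ENC/DEC are placeholder linear maps; the real model is a learned
# variational encoder-decoder over 4D shape and appearance.

import numpy as np

rng = np.random.default_rng(0)
ENC = rng.standard_normal((8, 32))   # placeholder encoder: frame(32) -> z(8)
DEC = rng.standard_normal((32, 8))   # placeholder decoder: z(8) -> frame(32)

def transition(frame_a, frame_b, t):
    """Decode a blend of two motions' latent codes; t in [0, 1]
    sweeps from motion A to motion B."""
    z = (1 - t) * (ENC @ frame_a) + t * (ENC @ frame_b)
    return DEC @ z
```

Sweeping `t` over a few frames gives the dynamic interpolation used at graph edges; interpolating in the latent space rather than on raw geometry is what keeps the blended shapes plausible.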

