motion editing
Recently Published Documents

TOTAL DOCUMENTS: 59 (FIVE YEARS: 5)
H-INDEX: 10 (FIVE YEARS: 0)
2021 ◽  
Author(s): Richard Roberts

<p>Motion capture is attractive to visual effects studios because it offers a fast and automatic way to create animation directly from actors' movements. Despite extensive research efforts toward motion capture processing and motion editing, animations created using motion capture are notoriously difficult to edit. We investigate this problem and develop a technique to reverse engineer editable keyframe animation from motion capture. Our technique for converting motion capture into editable animation is to select keyframes from the motion capture that correspond to those an animator might have used to create the motion from scratch. As the first contribution presented by this thesis, we survey both traditional and contemporary animation practice to define the types of keyframes created by animators following conventional practices. As the second contribution, we develop a new keyframe selection algorithm that uses a generic objective function; with different implementations of this function, we can define different criteria by which keyframes are selected. After presenting the algorithm, we return to the problem of converting motion capture into editable animation and design three implementations of the objective function that can be used together to select animator-like keyframes. Finally, as a minor contribution to conclude the thesis, we present a simple interpolation algorithm that constructs a new animation from only the selected keyframes. In contrast to previous research on keyframe selection, our technique is novel in that it provides selections of keyframes similar in structure to those used by animators following conventional practices. Consequently, both animators and motion editors can adjust the resulting animation in much the same way as their own manually created content.
Furthermore, our technique offers an optimality guarantee paired with performance fast enough for practical editing situations, which has not been achieved in previous research. In conclusion, the contributions of this thesis advance the state of the art by introducing the first fast, optimal, and generic keyframe selection algorithm. Ultimately, our technique is not only well suited to recovering editable animation from motion capture, but can also select keyframes for other purposes, such as compression or pattern identification, provided that an appropriate implementation of the objective function can be devised.</p>
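The abstract does not specify the algorithm's internals, but optimal keyframe selection under a reconstruction-error objective is commonly framed as dynamic programming over the index of the previous keyframe. The following is a minimal single-channel sketch under that assumption, with a squared-error linear-interpolation objective standing in for the thesis's generic objective function (the thesis's actual objective implementations differ):

```python
def segment_cost(curve, i, j):
    """Squared error of linearly interpolating frames i..j from the endpoints."""
    span = j - i
    err = 0.0
    for s in range(i, j + 1):
        t = (s - i) / span
        predicted = curve[i] + t * (curve[j] - curve[i])
        err += (curve[s] - predicted) ** 2
    return err

def select_keyframes(curve, k):
    """Choose k keyframes (endpoints included) minimising total interpolation
    error, via dynamic programming over the previous keyframe index."""
    n = len(curve)
    INF = float("inf")
    cost = [[INF] * n for _ in range(k)]   # cost[m][j]: best with m+1 keys ending at j
    back = [[0] * n for _ in range(k)]
    cost[0][0] = 0.0
    for m in range(1, k):
        for j in range(m, n):
            for i in range(m - 1, j):
                c = cost[m - 1][i] + segment_cost(curve, i, j)
                if c < cost[m][j]:
                    cost[m][j], back[m][j] = c, i
    # Trace back from the final frame to recover the selection.
    keys, j = [n - 1], n - 1
    for m in range(k - 1, 0, -1):
        j = back[m][j]
        keys.append(j)
    return keys[::-1]
```

On a triangle-shaped curve, for example, selecting three keyframes recovers the two endpoints plus the apex, since placing the middle key anywhere else leaves interpolation error across the corner.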



2021 ◽  
Author(s):  
Yichen Peng ◽  
Chunqi Zhao ◽  
Zhengyu Huang ◽  
Tsukasa Fukusato ◽  
Haoran Xie ◽  
...  

2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Yong Zhang ◽  
Xinyu Zhang ◽  
Tao Zhang ◽  
Baocai Yin

Computer simulation is a significant technology for creating large crowd scenes in the film industry. However, the current process of animating crowd motion requires extensive manual operations, which is time-consuming and inconvenient. To address this problem, this paper presents an editing method based on mesh deformation that can rapidly and intuitively edit crowd movement trajectories in both time and space. The method can directly generate and adjust crowd movement while avoiding collisions between the crowd and obstacles. For collisions within the crowd that arise from path modification, a time-based solution is proposed that avoids them by retaining the relative positions of individuals. Moreover, an experiment based on a real venue was performed, and the results indicate that the proposed method not only simplifies editing operations but also improves the efficiency of crowd motion editing.
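The idea of avoiding intra-crowd collisions by retaining relative positions can be sketched minimally as follows. This is only an illustration under the assumption that each agent is represented by a per-frame offset from an edited group-centre path; the paper's actual mesh-deformation formulation is richer than this:

```python
def edit_crowd(center_path, agent_offsets, edit):
    """Rebuild each agent's trajectory after a path edit by re-adding the
    agent's per-frame offset to the edited group centre, so relative
    positions within the crowd (and hence spacing) are preserved."""
    new_center = [edit(t, p) for t, p in enumerate(center_path)]
    return [
        [(cx + ox, cy + oy) for (cx, cy), (ox, oy) in zip(new_center, offsets)]
        for offsets in agent_offsets
    ]

# Example edit (hypothetical): sidestep the centre path around an
# obstacle from frame 2 onward.
shift = lambda t, p: (p[0], p[1] + (1.5 if t >= 2 else 0.0))
```

Because every agent receives the same centre displacement at each frame, the pairwise distances within the group are unchanged by the edit, which is what prevents the path modification from introducing new collisions inside the crowd.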


Author(s):  
Natapon Pantuwong

Recently, motion data have become increasingly available for creating computer animation. Motion capture is one system that can generate such motion data. However, capturing large amounts of motion is often impractical because of the cost of motion capture and the difficulty of its post-processing. This paper presents a timeline-based motion-editing system that enables users to perform motion-editing tasks easily and quickly. A motion sequence is summarized and displayed in the 3D environment as a set of editable icons. Users can edit the motion data by performing a sequence of operations on a single keyframe or over an interval. The recorded sequence is then propagated automatically to a set of target keyframes or intervals, which can be either user-defined or system-defined. In addition, we provide a simple interaction method for manipulating the duration of specific intervals in the motion data. Methods for combining and synchronizing two different motions are also provided in this system. In contrast with previous work that allows only temporal editing, the proposed system provides editing functions for both geometry and timing. We describe a user study that demonstrates the efficiency of the proposed system.
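The propagation mechanism described here, recording a sequence of operations once and replaying it on each target keyframe, can be sketched minimally as below. The pose representation and operation names are illustrative assumptions, not the system's actual API:

```python
def propagate(motion, recorded_ops, targets):
    """Replay a recorded editing sequence on every target keyframe,
    mirroring the system's automatic propagation of user edits."""
    for t in targets:
        for op in recorded_ops:
            motion[t] = op(motion[t])
    return motion

# A pose as a joint-angle dict (hypothetical); the recorded sequence
# raises the shoulder, then twists the wrist.
raise_arm = lambda pose: {**pose, "shoulder": pose["shoulder"] + 30.0}
twist_wrist = lambda pose: {**pose, "wrist": pose["wrist"] + 15.0}
```

The targets could equally be whole intervals rather than single keyframes; the point is that the user performs the edit once and the system applies it everywhere it is wanted.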


2017 ◽  
Vol 13 (2) ◽  
pp. 155014771769608 ◽  
Author(s):  
Yejin Kim

Dynamic human movements such as dance are difficult to capture without external markers due to the high complexity of a dancer’s body. This article introduces a marker-free motion capture and composition system for dance motion that uses multiple RGB and depth sensors. Our motion capture system utilizes a set of high-speed RGB and depth sensors to generate skeletal motion data from an expert dancer. During motion acquisition, a skeleton tracking method based on a particle filter estimates the motion parameters for each frame from a sequence of color images and depth features retrieved from the sensors. The expert motion data are archived in a database. The authoring methods in our composition system automate most motion editing processes for general users by providing an online motion search with an input posture and then performing motion synthesis on an arbitrary motion path. Using the proposed system, we demonstrate that various dance performances can be composed intuitively and efficiently on client devices such as tablets and kiosk PCs.


2016 ◽  
Vol 33 (5) ◽  
pp. 585-595
Author(s):  
Xiaobing Feng ◽  
Dengming Zhu ◽  
Zhaoqi Wang ◽  
Yi Wei
