motion capture data
Recently Published Documents


TOTAL DOCUMENTS: 411 (five years: 89)

H-INDEX: 24 (five years: 3)

2021 ◽  
Author(s):  
◽  
Christopher Dean

Streamlining the process of editing motion capture data and keyframe character animation is a fundamental problem in the animation field. This paper explores a new method for editing character animation by using a data-driven pose distance as a falloff to interpolate new poses seamlessly into the sequence. This pose distance is the measure given by Green's function of the pose-space Laplacian. The falloff shape and timing extent are naturally suited to the skeleton's range of motion, replacing the need for a manually customized falloff spline. This data-driven falloff is somewhat analogous to the "magic wand" selection in an image editor, as opposed to a generic spline, but applied to the animation domain. It also supports powerful non-local edit propagation, in which edits are applied to all similar poses in the entire animation sequence.
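The core mechanism lends itself to a short illustration. Below is a minimal sketch, assuming poses are stored as flat joint-angle vectors (one row per frame): a k-nearest-neighbor graph is built in pose space, the Green's function is taken as the pseudoinverse of its Laplacian, and one row of it is normalized into a falloff that weights how strongly an edit propagates to each frame. The graph construction, the normalization, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch, assuming poses are flat joint-angle vectors (one row per
# frame). The k-NN graph, weight normalization, and function names are
# illustrative assumptions, not the paper's exact formulation.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def greens_function_falloff(poses, edit_frame, k=8):
    """poses: (n_frames, n_dofs); returns a per-frame falloff weight in [0, 1]."""
    W = kneighbors_graph(poses, n_neighbors=k, mode="connectivity")
    W = (0.5 * (W + W.T)).toarray()              # symmetric pose-space adjacency
    L = np.diag(W.sum(axis=1)) - W               # combinatorial graph Laplacian
    G = np.linalg.pinv(L)                        # Green's function (pseudoinverse)
    g = G[edit_frame]                            # affinity of every frame to the edit
    g = g - g.min()
    return g / g.max()                           # ~1 at/near the edited pose

def propagate_edit(poses, edit_frame, delta, k=8):
    """Blend a pose offset `delta` into all frames, weighted by the falloff."""
    w = greens_function_falloff(poses, edit_frame, k)
    return poses + w[:, None] * delta            # similar poses get similar edits

# Usage: edited = propagate_edit(poses, edit_frame=42, delta=new_pose - poses[42])
```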


2021 ◽  
Vol 17 (34) ◽  
pp. 170-180
Author(s):  
Juan Camilo Hernandez-Gomez ◽  
Alejandro Restrepo-Martínez ◽  
Juliana Valencia-Aguirre

Classifying human movement has become a technological necessity: defining the position of a subject requires identifying the trajectories of the limbs and the trunk of the body, along with the ability to distinguish that position from other subjects or movements, which creates the need for data and algorithms that support classification. This work evaluates the discriminant capacity of motion capture data in physical rehabilitation, where the subjects' positions are acquired with the Microsoft Kinect and with optical markers, and movement attributes are generated with the Frenet-Serret frame. Their discriminant capacity is evaluated with support vector machines, neural networks, and k-nearest-neighbors algorithms. The results show a classification accuracy of 93.5% with data obtained from the Kinect, and of 100% for movements where the position is defined with optical markers.
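As a rough illustration of the movement attributes involved, the sketch below computes per-sample curvature and torsion from the Frenet-Serret formulas for a single 3D marker trajectory and pools them into a feature vector for the classifiers named above. The pooling statistics and function name are assumptions for illustration; the paper's exact feature set is not specified in the abstract.

```python
# A hedged sketch: curvature and torsion from the Frenet-Serret formulas for
# one 3D marker trajectory, pooled into a small feature vector. The pooling
# statistics are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def frenet_serret_features(traj):
    """traj: (n_samples, 3) marker positions; returns a summary feature vector."""
    d1 = np.gradient(traj, axis=0)               # r'   (velocity)
    d2 = np.gradient(d1, axis=0)                 # r''  (acceleration)
    d3 = np.gradient(d2, axis=0)                 # r''' (jerk)
    cross = np.cross(d1, d2)
    speed = np.linalg.norm(d1, axis=1)
    curvature = np.linalg.norm(cross, axis=1) / np.maximum(speed ** 3, 1e-9)
    torsion = np.einsum("ij,ij->i", cross, d3) / np.maximum(
        np.linalg.norm(cross, axis=1) ** 2, 1e-9)
    return np.array([curvature.mean(), curvature.std(),
                     torsion.mean(), torsion.std()])

# Hypothetical usage with the classifiers named in the abstract:
# X = np.stack([frenet_serret_features(t) for t in trajectories])
# SVC().fit(X, labels); KNeighborsClassifier(n_neighbors=5).fit(X, labels)
```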


2021 ◽  
Vol 2 ◽  
Author(s):  
João Regateiro ◽  
Marco Volino ◽  
Adrian Hilton

This paper introduces Deep4D, a compact generative representation of shape and appearance from captured 4D volumetric video sequences of people. 4D volumetric video achieves highly realistic reproduction, replay, and free-viewpoint rendering of actor performance from multiple-view video acquisition systems. A deep generative network is trained on 4D video sequences of an actor performing multiple motions to learn a generative model of dynamic shape and appearance. We demonstrate that the proposed generative model can provide a compact encoded representation capable of high-quality synthesis of 4D volumetric video with two orders of magnitude compression. A variational encoder-decoder network is employed to learn an encoded latent space that maps from 3D skeletal pose to 4D shape and appearance. This enables high-quality 4D volumetric video synthesis to be driven by skeletal motion, including skeletal motion capture data. The encoded latent space supports the representation of multiple sequences with dynamic interpolation to transition between motions. We therefore introduce Deep4D motion graphs, a direct application of the proposed generative representation. Deep4D motion graphs allow real-time interactive character animation whilst preserving the plausible realism of movement and appearance from the captured volumetric video. They implicitly combine multiple captured motions in a unified representation for character animation from volumetric video, allowing novel character movements to be generated with dynamic shape and appearance detail.
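A minimal sketch of the variational encoder-decoder idea follows: a latent space conditioned on skeletal pose that decodes to a flat shape-and-appearance vector. All layer sizes, dimensions, and the class name `PoseToShapeVAE` are assumptions for illustration; the actual Deep4D architecture is considerably more elaborate.

```python
# A minimal PyTorch sketch of a variational encoder-decoder whose latent space
# is conditioned on skeletal pose and decodes to a flat shape-and-appearance
# vector. Layer sizes, dimensions, and the class name are assumptions.
import torch
import torch.nn as nn

class PoseToShapeVAE(nn.Module):
    def __init__(self, pose_dim=72, latent_dim=64, out_dim=4096):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(pose_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(            # latent -> shape + appearance
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

    def forward(self, pose):
        h = self.encoder(pose)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.decoder(z), mu, logvar

# Skeletal motion (e.g. mocap poses) drives synthesis at inference time:
# shape_appearance, mu, logvar = PoseToShapeVAE()(torch.randn(1, 72))
```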


PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0259464
Author(s):  
Félix Bigand ◽  
Elise Prigent ◽  
Bastien Berret ◽  
Annelies Braffort

Sign Language (SL) is a continuous and complex stream of multiple body-movement features, which raises the challenging issue of providing efficient computational models for describing and analyzing these movements. In the present paper, we used Principal Component Analysis (PCA) to decompose SL motion into elementary movements called principal movements (PMs). PCA was applied to the upper-body motion capture data of six different signers freely producing discourses in French Sign Language. Common PMs were extracted from the whole dataset containing all signers, while individual PMs were extracted separately from the data of individual signers. This study provides three main findings: (1) although the data were not synchronized in time across signers and discourses, the first eight common PMs contained 94.6% of the variance of the movements; (2) the number of PMs needed to represent 94.6% of the variance was nearly the same for individual PMs as for common PMs; (3) the PM subspaces were highly similar across signers. These results suggest that upper-body motion in unconstrained continuous SL discourses can be described through the dynamic combination of a reduced number of elementary movements. This opens up promising perspectives toward efficient automatic SL processing tools based on large mocap datasets, in particular for automatic recognition and generation.
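The decomposition itself is standard PCA, so a short sketch can make the pipeline concrete: stack mocap frames as rows, fit PCA, and count how many principal movements are needed to reach a target variance such as the 94.6% reported above. The data layout and function name are assumptions.

```python
# A hedged sketch of extracting principal movements (PMs) with PCA from
# stacked mocap frames and counting how many reach a target variance.
import numpy as np
from sklearn.decomposition import PCA

def principal_movements(frames, target_variance=0.946):
    """frames: (n_frames, n_markers * 3), pooled across signers/discourses."""
    pca = PCA().fit(frames)                      # PMs = principal components
    cumvar = np.cumsum(pca.explained_variance_ratio_)
    n_pm = int(np.searchsorted(cumvar, target_variance) + 1)
    return pca.components_[:n_pm], n_pm          # elementary-movement basis

# Each pose is then a dynamic combination of a few PMs:
# pms, n_pm = principal_movements(frames)
# scores = (frames - frames.mean(axis=0)) @ pms.T   # per-frame PM activations
```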


2021 ◽  
Author(s):  
Luis Guilherme Silva Rodrigues ◽  
Diego Dias ◽  
Marcelo de Paiva Guimaraes ◽  
Alexandre Fonseca Brandao ◽  
Leonardo Rocha ◽  
...  

2021 ◽  
Author(s):  
Radoslaw Niewiadomski ◽  
Amrita Suresh ◽  
Alessandra Sciutti ◽  
Giuseppe Di Cesare

The form of an action, i.e. the way it is performed, conveys important information about the performer's attitude. In this paper, we investigate the spatiotemporal characteristics of different gestures performed with specific vitality forms, and we study whether these aspects of action can be recognized automatically. As a first step, we created a new dataset of 7 gestures performed with a vitality form (gentle and rude) or without a vitality form (neutral, slow, and fast). One thousand repetitions were collected from two professional actors. Next, we identified 22 features from the motion capture data. According to the results, vitality forms are not merely characterized by a velocity/acceleration modulation but by a combination of different spatiotemporal properties. We also perform automatic classification of vitality forms, achieving an F-score of 87.3%.
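For a concrete sense of such spatiotemporal features, the sketch below computes a handful of velocity, acceleration, jerk, path-length, and duration statistics from one gesture repetition and classifies them with an SVM. The abstract does not list the 22 features, so these particular statistics, the sampling rate, and the macro-averaged F-score are illustrative assumptions.

```python
# An illustrative sketch of spatiotemporal gesture features and vitality-form
# classification. The abstract does not enumerate the 22 features, so these
# statistics, the 100 Hz sampling rate, and the macro F-score are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score

def gesture_features(traj, fps=100.0):
    """traj: (n_samples, 3) hand-marker positions for one gesture repetition."""
    vel = np.gradient(traj, 1.0 / fps, axis=0)
    acc = np.gradient(vel, 1.0 / fps, axis=0)
    jerk = np.gradient(acc, 1.0 / fps, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    path_len = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    return np.array([speed.mean(), speed.max(),
                     np.linalg.norm(acc, axis=1).mean(),
                     np.linalg.norm(jerk, axis=1).mean(),
                     path_len, len(traj) / fps])  # spatial extent and duration

# clf = SVC().fit(X_train, y_train)
# print(f1_score(y_test, clf.predict(X_test), average="macro"))  # cf. 87.3%
```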


2021 ◽  
Author(s):  
Nobuyuki Oishi ◽  
Benedetta Heimler ◽  
Lloyd Pellatt ◽  
Meir Plotnik ◽  
Daniel Roggen
