Comparison of Single-Kinect and Dual-Kinect Motion Capture of Upper-Body Joint Tracking

Author(s): Franziska Schlagenhauf, Prachi Pratyusha Sahoo, William Singhose
2019
Author(s): Heather E. Williams, Craig S. Chapman, Patrick M. Pilarski, Albert H. Vette, Jacqueline S. Hebert

Abstract

Background: Successful hand-object interactions require precise hand-eye coordination with continual movement adjustments. Quantitative measurement of this visuomotor behaviour could provide valuable insight into upper limb impairments. The Gaze and Movement Assessment (GaMA) was developed to provide protocols for simultaneous motion capture and eye tracking during the administration of two functional tasks, along with data analysis methods to generate standard measures of visuomotor behaviour. The objective of this study was to investigate the reproducibility of the GaMA protocol across two independent groups of non-disabled participants, with different raters using different motion capture and eye tracking technology.

Methods: Twenty non-disabled adults performed the Pasta Box Task and the Cup Transfer Task. Upper body and eye movements were recorded using motion capture and eye tracking, respectively. Measures of hand movement, angular joint kinematics, and eye gaze were compared to those from a different sample of twenty non-disabled adults who had previously performed the same protocol with different technology, a different rater, and a different site.

Results: Participants took longer to perform the tasks than those in the earlier study, although the relative time of each movement phase was similar. Measures that differed between the groups included hand distances travelled, hand trajectories, number of movement units, eye latencies, and peak angular velocities. Measures that were similar included all hand velocity and grip aperture measures, eye fixations, and most peak joint angle and range of motion measures.

Discussion: This study confirmed the reproducibility of GaMA, despite a few differences introduced by learning effects, variation in task demonstration, and limitations of the kinematic model. The findings provide confidence in the reliability of normative results obtained by GaMA, indicating that it accurately quantifies the typical behaviours of a non-disabled population. This work supports the use of GaMA in populations with upper limb sensorimotor impairment.


Measurement, 2020, Vol 149, pp. 107024
Author(s): Ryan Sers, Steph Forrester, Esther Moss, Stephen Ward, Jianjia Ma, ...

Author(s): Binu M. Nair, Kimberly D. Kendricks, Vijayan K. Asari, Ronald F. Tuttle

PLoS ONE, 2021, Vol 16 (10), pp. e0259464
Author(s): Félix Bigand, Elise Prigent, Bastien Berret, Annelies Braffort

Sign Language (SL) is a continuous and complex stream of multiple body movement features. This raises the challenge of providing efficient computational models for the description and analysis of these movements. In the present paper, we used Principal Component Analysis (PCA) to decompose SL motion into elementary movements called principal movements (PMs). PCA was applied to the upper-body motion capture data of six different signers freely producing discourses in French Sign Language. Common PMs were extracted from the whole dataset containing all signers, while individual PMs were extracted separately from the data of individual signers. This study provides three main findings: (1) although the data were not synchronized in time across signers and discourses, the first eight common PMs contained 94.6% of the variance of the movements; (2) the number of PMs that represented 94.6% of the variance was nearly the same for individual as for common PMs; (3) the PM subspaces were highly similar across signers. These results suggest that upper-body motion in unconstrained continuous SL discourses can be described through the dynamic combination of a reduced number of elementary movements. This opens up promising perspectives toward providing efficient automatic SL processing tools based on large mocap datasets, in particular for automatic recognition and generation.
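The decomposition described in this abstract can be sketched with standard PCA via the singular value decomposition. The snippet below is a minimal illustration, not the authors' code: the 1000-frame, 60-coordinate synthetic array stands in for real signer mocap data, and the 94.6% variance threshold is taken from the abstract.

```python
import numpy as np

# Hypothetical mocap data: T frames x D marker coordinates
# (synthetic stand-in; real data would be signer joint trajectories)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 60))

# Center each coordinate, then PCA via SVD
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Fraction of variance explained by each principal component
var = S**2 / np.sum(S**2)
cumvar = np.cumsum(var)

# Number of components needed to reach ~94.6% of the variance
k = int(np.searchsorted(cumvar, 0.946)) + 1

# "Principal movements": time series of activations of the leading components
PMs = Xc @ Vt[:k].T
```

On uncorrelated random data the required `k` is large; the abstract's finding is precisely that real SL motion is far more compressible, with only eight common PMs reaching that threshold.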

