A Fully-Automatic Gap Filling Approach for Motion Capture Trajectories

2021 ◽  
Vol 11 (21) ◽  
pp. 9847
Author(s):  
Diana Gomes ◽  
Vânia Guimarães ◽  
Joana Silva

Missing marker information is a common problem in motion capture (MoCap) systems. Commercial MoCap software provides several methods for reconstructing incomplete marker trajectories; however, these methods still rely on manual intervention, and the alternatives proposed in the literature present drawbacks that prevent their widespread adoption. Fully automated, universal solutions for gap filling are still lacking. We propose an automatic frame-wise gap-filling routine that jointly exploits constraints on inter-marker distances and marker dynamics in a least-squares minimization problem. This algorithm, the main contribution of our work, overcomes several limitations of previous methods at once: it requires no manual intervention, no prior training or training data, and no information about the skeleton or a dedicated calibration trial, and it can reconstruct all gaps, even those located in the initial and final frames of a trajectory. We tested our approach on a set of artificially generated gaps using a full-body marker set and compared the results with three methods available in commercial MoCap software: spline, pattern, and rigid-body fill. Our method achieved the best overall performance, presenting lower reconstruction errors in all tested conditions.
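The frame-wise idea described above can be sketched as a small least-squares problem: per frame, find the missing marker position that best satisfies (a) the inter-marker distances observed before the gap and (b) a constant-velocity prediction from the preceding frames. The sketch below is a minimal, hypothetical simplification (the paper's exact cost terms and weights are not given here); `fill_gap_frame` and its weight parameters are illustrative names, and the solver is a plain Gauss-Newton loop.

```python
import numpy as np

def fill_gap_frame(known_pos, dists, prev_pos, prev_vel,
                   w_dist=1.0, w_dyn=0.5, iters=20):
    """Estimate one missing 3D marker position in a single frame.

    known_pos: (k, 3) positions of visible markers in this frame
    dists:     (k,)   inter-marker distances observed before the gap
    prev_pos, prev_vel: last known position/velocity of the missing marker
    """
    p = prev_pos + prev_vel  # constant-velocity initial guess
    for _ in range(iters):
        diff = p - known_pos                       # (k, 3)
        norms = np.linalg.norm(diff, axis=1)       # current marker distances
        # stacked residuals: distance constraints + dynamics prior
        r = np.concatenate([w_dist * (norms - dists),
                            w_dyn * (p - (prev_pos + prev_vel))])
        # Jacobian: d||p - k_i||/dp = (p - k_i)/||p - k_i||; dynamics term is I
        J = np.vstack([w_dist * diff / norms[:, None],
                       w_dyn * np.eye(3)])
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # Gauss-Newton step
        p = p + step
    return p
```

Running this per missing frame (and reversing time for gaps at the start of a trajectory) gives a fully automatic fill; the relative weights trade off rigidity against smoothness.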

Author(s):  
Praneet C. Bala ◽  
Benjamin R. Eisenreich ◽  
Seng Bum Michael Yoo ◽  
Benjamin Y. Hayden ◽  
Hyun Soo Park ◽  
...  

The rhesus macaque is an important model species in several branches of science, including neuroscience, psychology, ethology, and several fields of medicine. The utility of the macaque model would be greatly enhanced by the ability to precisely measure its behavior, specifically its pose (the positions of multiple major body landmarks) in freely moving conditions. Existing approaches do not provide sufficiently accurate tracking. Here, we describe OpenMonkeyStudio, a novel deep learning-based markerless motion capture system for estimating 3D pose in freely moving macaques in large unconstrained environments. Our system makes use of 62 precisely calibrated and synchronized machine vision cameras that encircle an open 2.45 m × 2.45 m × 2.75 m enclosure. The resulting multiview image streams allow for novel data augmentation via 3D reconstruction of hand-annotated images, which in turn trains a robust view-invariant deep neural network model. This view invariance represents an important advance over previous markerless 2D tracking approaches and allows fully automatic pose inference on unconstrained natural motion. We show that OpenMonkeyStudio can be used to accurately recognize actions and track social interactions between two monkeys without human intervention. We also make the training data (195,228 images) and trained detection model publicly available.


2011 ◽  
Author(s):  
Marco Gillies ◽  
Max Worgan ◽  
Hestia Peppe ◽  
Will Robinson ◽  
Nina Kov

Sensors ◽  
2020 ◽  
Vol 20 (13) ◽  
pp. 3805
Author(s):  
Nicolas Kurpiers ◽  
Nicola Petrone ◽  
Matej Supej ◽  
Anna Wisser ◽  
Jakob Hansen ◽  
...  

Biomechanical studies of winter sports are challenging due to environmental conditions that cannot be mimicked in a laboratory. In this study, a methodological approach was developed merging 2D video recordings with sensor-based motion capture to investigate ski jump landings. A reference measurement was carried out in a laboratory, and subsequently, the method was exemplified in a field study by assessing the effect of a ski boot modification on landing kinematics. Landings of four expert skiers were filmed under field conditions in the jump plane, and full-body kinematics were measured with an inertial measurement unit (IMU)-based motion capture suit. This example study revealed that the combination of video and IMU data is viable. However, only one skier was able to make use of the added boot flexibility, likely due to an extended training time with the modified boot. In this case, maximum knee flexion changed by 36° and maximum ankle flexion by 13°, whereas the kinematics of the other three skiers changed only marginally. The results confirm that 2D video merged with IMU data is suitable for jump analyses in winter sports, and that the modified boot will allow for alterations in landing technique provided that enough time for training is given.


2016 ◽  
Author(s):  
Jun-Whan Lee ◽  
Sun-Cheon Park ◽  
Duk Kee Lee ◽  
Jong Ho Lee

Abstract. Timely detection of tsunamis from water-level records is a critical but logistically challenging task because of outliers and gaps. We propose a tsunami arrival time detection system (TADS) that can be applied to discontinuous time-series data containing outliers. TADS consists of three major algorithms, each designed to update at every new data acquisition: outlier detection, gap filling, and tsunami detection. To detect a tsunami from a record containing outliers and gaps, we propose the concept of the event period. In this study, we applied this concept in a test of TADS at the Ulleung-do surge gauge located in the East Sea. We calibrated the thresholds used to identify tsunami arrivals based on the 2011 Tohoku tsunami, and the results show that TADS is effective overall at detecting a small tsunami signal superimposed on outliers and gaps.
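The three-stage update described in the abstract can be illustrated with a toy streaming step: reject isolated spikes, hold the last accepted value across gaps, and flag sustained deviations from a windowed baseline. This is a schematic sketch only; `tads_step` and all thresholds are hypothetical, and the paper's actual outlier, gap-filling, and detection algorithms (and its event-period concept) are more elaborate.

```python
import math

def tads_step(history, new_value, jump_thresh=0.5, event_thresh=0.3, win=60):
    """One update of a simplified TADS-like pipeline (illustrative thresholds).

    history:   previously accepted water levels
    new_value: latest sample, or None if the sample is missing (a gap)
    Returns (filled_value, tsunami_flag).
    """
    recent = [v for v in history[-win:] if not math.isnan(v)]
    # 1. outlier detection: isolated spikes jump far from the last accepted value
    if new_value is not None and recent and abs(new_value - recent[-1]) > jump_thresh:
        new_value = None                      # reject the spike, treat it as a gap
    # 2. gap filling: zero-order hold on the last accepted value
    if new_value is None:
        filled = recent[-1] if recent else float("nan")
    else:
        filled = new_value
    # 3. tsunami detection: sustained deviation from the windowed baseline
    baseline = sum(recent) / len(recent) if recent else filled
    flag = bool(recent) and abs(filled - baseline) > event_thresh
    return filled, flag
```

The key property the real system also needs is that a single spike is rejected by stage 1, while a gradual rise passes the jump test yet eventually exceeds the baseline deviation in stage 3.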


2013 ◽  
Vol 24 ◽  
pp. 1360003 ◽  
Author(s):  
HONG YAO ◽  
XIAO-PING REN ◽  
JIAN WANG ◽  
RUI-LIN ZHONG ◽  
JING-AN DING

This paper reviews the development of robotic mass-measurement systems and representative automatic systems, and then discusses a sub-multiple calibration scheme adopted effectively on a fully automatic CCR10 system. The automatic robot system can disseminate the mass scale without any manual intervention and can rapidly calibrate weight samples against a reference weight. Finally, an evaluation of the expanded uncertainty is presented.


2021 ◽  
Vol 15 ◽  
Author(s):  
Ilja Arent ◽  
Florian P. Schmidt ◽  
Mario Botsch ◽  
Volker Dürr

Motion capture of unrestrained moving animals is a major analytic tool in neuroethology and behavioral physiology. At present, several motion capture methodologies have been developed, all of which have particular limitations regarding experimental application. Whereas marker-based motion capture systems are very robust and easily adjusted to suit different setups, tracked species, or body parts, they cannot be applied in experimental situations where markers obstruct the natural behavior (e.g., when tracking delicate, elastic, and/or sensitive body structures). On the other hand, marker-less motion capture systems typically require setup- and animal-specific adjustments, for example by means of tailored image processing, decision heuristics, and/or machine learning of specific sample data. Among the latter, deep-learning approaches have become very popular because of their applicability to virtually any sample of video data. Nevertheless, concise evaluation of their training requirements has rarely been done, particularly with regard to the transfer of trained networks from one application to another. To address this issue, the present study uses insect locomotion as a showcase example for systematic evaluation of variation and augmentation of the training data. For that, we use artificially generated video sequences with known combinations of observed, real animal postures and randomized body position, orientation, and size. Moreover, we evaluate the generalization ability of networks that have been pre-trained on synthetic videos to video recordings of real walking insects, and estimate the benefit in terms of the reduced requirement for manual annotation. We show that tracking performance is only slightly affected by scaling factors ranging from 0.5 to 1.5. As expected for convolutional networks, translation of the animal has no effect. On the other hand, we show that sufficient variation of rotation in the training data is essential for performance, and we make concise suggestions about how much variation is required. Our results on transfer from synthetic to real videos show that pre-training reduces the amount of necessary manual annotation by about 50%.
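The randomized position, orientation, and size variation described above can be sketched as a random similarity transform applied to a known posture's 2D keypoints. This is a minimal illustration, not the authors' rendering pipeline: `augment_pose` is a hypothetical name, and the scale range (0.5–1.5) is taken from the abstract while the shift and rotation bounds are assumed placeholders.

```python
import numpy as np

def augment_pose(keypoints, rng, scale_range=(0.5, 1.5),
                 max_shift=50.0, max_rot=np.pi):
    """Apply a random similarity transform (scale, rotation, translation)
    to (n, 2) keypoints about their centroid."""
    s = rng.uniform(*scale_range)                    # isotropic body-size factor
    theta = rng.uniform(-max_rot, max_rot)           # in-plane orientation
    t = rng.uniform(-max_shift, max_shift, size=2)   # body position shift (pixels)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    c = keypoints.mean(axis=0)
    return (keypoints - c) @ R.T * s + c + t
```

Because the transform is a similarity, it preserves the posture's shape while varying exactly the factors the study manipulates, which is why insufficient rotation coverage in training shows up directly as degraded tracking.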

