Inertial Motion Capture-Based Whole-Body Inverse Dynamics

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7353
Author(s):  
Mohsen M. Diraneyya ◽  
JuHyeong Ryu ◽  
Eihab Abdel-Rahman ◽  
Carl T. Haas

Inertial Motion Capture (IMC) systems enable in situ studies of human motion free of the severe constraints imposed by Optical Motion Capture systems. Inverse dynamics can use those motions to estimate the forces and moments developing within muscles and joints. We developed an inverse dynamic whole-body model that eliminates the use of force plates (FPs) and uses motion patterns captured by an IMC system to predict the net forces and moments in 14 major joints. We validated the model by comparing its estimates of Ground Reaction Forces (GRFs) to the ground truth obtained from FPs and by comparing the static model’s predictions of net joint moments to those of the 3D Static Strength Prediction Program (3DSSPP). The relative root-mean-square error (rRMSE) of the predicted GRF was 6% and the intraclass correlation of the peak values was 0.95, with both values averaged over the subject population. The rRMSE of the differences between our model’s and 3DSSPP’s predictions of the net moments at the L5/S1 and the right and left shoulder joints was 9.5%, 3.3%, and 5.2%, respectively. We also compared the static and dynamic versions of the model and found that failing to account for body motions can underestimate net joint moments by 90% to 560% of the static estimates.
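
For reference, a minimal sketch of the relative RMSE metric used above, comparing a predicted GRF signal to the force-plate ground truth. The normalization by the ground-truth range is one common convention and may differ from the paper's exact definition; all numbers are illustrative.

```python
import numpy as np

def rrmse(predicted, ground_truth):
    """Relative RMSE (%) between a predicted signal and its ground truth.

    Normalizes the RMSE by the range of the ground-truth signal; the paper
    may use a different normalization (e.g. mean or peak value).
    """
    predicted = np.asarray(predicted, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    rmse = np.sqrt(np.mean((predicted - ground_truth) ** 2))
    return 100.0 * rmse / (ground_truth.max() - ground_truth.min())

# Example: vertical GRF predicted by an IMC-driven model vs. a force-plate measurement
t = np.linspace(0, 1, 200)
fp_grf = 800 + 150 * np.sin(2 * np.pi * t)            # hypothetical force-plate signal (N)
imc_grf = fp_grf + np.random.normal(0, 20, t.size)    # hypothetical model prediction (N)
print(f"rRMSE = {rrmse(imc_grf, fp_grf):.1f}%")
```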

Author(s):  
Wolfgang Seemann ◽  
Günther Stelzner ◽  
Christian Simonidis

Inverse dynamics analysis of human motion requires that the trajectories of selected anatomical points are known, so standard marker-based motion capture is generally used to obtain them. The tracking process, however, introduces high-frequency noise into the trajectories, and the measured data cannot be used directly in the inverse dynamics analysis. A mechanical system is kinematically consistent with the data if the position constraint equations and their time derivatives are satisfied by the data set; using inconsistent data violates the constraint equations and produces spurious reaction forces. Therefore, this paper applies a method in which a new set of trajectories is generated by projecting the observed positions, velocities, and accelerations onto the corresponding constraint manifold, ensuring the consistency of the data. Finally, the kinematics of the system is described with the corrected data set.
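
As an illustration of the idea of projecting noisy measurements onto a constraint manifold, the sketch below enforces a single rigid-segment (constant length) constraint on two marker positions. A full-body model projects positions, velocities, and accelerations onto the complete set of constraints, typically with an iterative least-squares step; the segment, length, and coordinates here are assumed for the example.

```python
import numpy as np

def project_segment_length(p1, p2, length):
    """Project two noisy marker positions onto the constraint ||p1 - p2|| = length.

    Illustrative least-squares projection for one rigid-segment constraint;
    the constraint violation is split equally between the two points.
    """
    d = p2 - p1
    dist = np.linalg.norm(d)
    correction = 0.5 * (dist - length) * d / dist
    return p1 + correction, p2 - correction

# Noisy hip and knee markers and a known thigh length of 0.42 m (hypothetical values)
hip = np.array([0.02, 0.01, 0.95])
knee = np.array([0.03, 0.02, 0.52])
hip_c, knee_c = project_segment_length(hip, knee, 0.42)
print(np.linalg.norm(hip_c - knee_c))  # -> 0.42, constraint satisfied
```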


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1801 ◽  
Author(s):  
Haitao Guo ◽  
Yunsick Sung

The importance of estimating human movement has increased in the field of human motion capture. HTC VIVE is a popular device that provides a convenient way of capturing human motion using several sensors. Recently, however, only the motion of users’ hands has typically been captured, greatly reducing the range of motion covered. This paper proposes a framework to estimate single-arm orientations using soft sensors, mainly by combining a bidirectional long short-term memory network (Bi-LSTM) with a two-layer LSTM. The positions of the two hands are measured with an HTC VIVE set, and the orientations of a single arm, comprising the upper arm and forearm, are estimated by the proposed framework from these hand positions. Because the framework handles a single arm, estimating the orientations of both arms requires running it twice. To obtain ground-truth orientations of single-arm movements, two Myo gesture-control armbands are worn on the arm: one on the upper arm and one on the forearm. The proposed framework analyzes the contextual features of consecutive arm movements, which provides an efficient way to improve the accuracy of arm-movement estimation. Compared with the ground truth, the proposed method estimated arm movements with a dynamic time warping distance that was, on average, 73.90% lower than that of a conventional Bayesian framework. A distinct feature of the proposed framework is that it reduces the number of sensors attached to end users. Additionally, arm orientations can be estimated with any soft sensor while maintaining good accuracy. A further contribution is the proposed combination of the Bi-LSTM and two-layer LSTM.
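
To make the architecture concrete, below is a minimal PyTorch sketch of the kind of network described: a Bi-LSTM over a window of hand-position samples feeding a two-layer LSTM that regresses upper-arm and forearm orientations (here as quaternions). The layer sizes, window length, and output parameterization are assumptions for illustration, not the authors’ settings.

```python
import torch
import torch.nn as nn

class ArmOrientationNet(nn.Module):
    """Bi-LSTM followed by a two-layer LSTM, mapping a window of two-hand
    positions (6 values per frame) to upper-arm and forearm orientations
    (two quaternions, 8 values). Dimensions are illustrative only."""
    def __init__(self, hidden=64):
        super().__init__()
        self.bilstm = nn.LSTM(input_size=6, hidden_size=hidden,
                              batch_first=True, bidirectional=True)
        self.lstm = nn.LSTM(input_size=2 * hidden, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 8)

    def forward(self, x):                # x: (batch, frames, 6)
        h, _ = self.bilstm(x)            # (batch, frames, 2*hidden)
        h, _ = self.lstm(h)              # (batch, frames, hidden)
        q = self.head(h[:, -1])          # orientations for the last frame
        q = q.view(-1, 2, 4)
        return q / q.norm(dim=-1, keepdim=True)  # unit quaternions

window = torch.randn(1, 30, 6)            # 30 frames of VIVE hand positions (dummy data)
print(ArmOrientationNet()(window).shape)  # -> torch.Size([1, 2, 4])
```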


2019 ◽  
Vol 65 ◽  
pp. 68-77 ◽  
Author(s):  
Angelos Karatsidis ◽  
Moonki Jung ◽  
H. Martin Schepers ◽  
Giovanni Bellusci ◽  
Mark de Zee ◽  
...  

Author(s):  
Wenfeng Xu

Capturing human movement has promising applications in sports, animation, medicine and health, and other areas. This article studies a human motion capture system for sports performance based on Internet of Things (IoT) technology and wireless inertial sensors. It first introduces the theory and characteristics of the IoT and of motion capture; next, drawing on the different characteristics of the sensors in an inertial motion capture system, it proposes a two-step Kalman filter that processes the accelerometer and magnetometer measurements separately; finally, a human body motion model is used to analyze the dynamic acceleration error that occurs during motion. In addition, an inertial motion capture system is constructed to obtain and visualize the pose of each motion node. The experimental results show that the Kalman filtering algorithm can ensure accurate angle estimation under different motion states and has good tolerance to external interference; in the static state, the error is reduced by 23.1%.
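
The paper’s filter details are not reproduced here, but the general idea of a two-step update, a gyro-based prediction followed by separate accelerometer (roll/pitch) and magnetometer (yaw) corrections, can be sketched as below. The state parameterization, noise values, and sensor samples are assumptions for illustration.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard linear Kalman measurement update."""
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# State: [roll, pitch, yaw] in rad. Predict by integrating gyro rates, then
# apply two separate updates, mirroring the two-step scheme described above.
dt = 0.01
x = np.zeros(3)                      # orientation estimate
P = np.eye(3) * 0.1                  # estimate covariance
Q = np.eye(3) * 1e-4                 # process noise (assumed)

gyro = np.array([0.02, -0.01, 0.05])         # rad/s (dummy sample)
acc_rollpitch = np.array([0.001, -0.0005])   # roll/pitch derived from accelerometer (rad)
mag_yaw = np.array([0.004])                  # yaw derived from magnetometer (rad)

# Prediction: integrate gyro rates
x = x + gyro * dt
P = P + Q

# Step 1: accelerometer observes roll and pitch only
H_acc = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
x, P = kalman_update(x, P, acc_rollpitch, H_acc, np.eye(2) * 0.05)

# Step 2: magnetometer observes yaw only
H_mag = np.array([[0.0, 0.0, 1.0]])
x, P = kalman_update(x, P, mag_yaw, H_mag, np.eye(1) * 0.1)
print(x)
```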


2013 ◽  
Vol 10 (02) ◽  
pp. 1350003 ◽  
Author(s):  
JUNG-YUP KIM ◽  
YOUNG-SEOG KIM

This paper describes a whole-body motion generation scheme for an android robot using motion capture and an optimization method. Android robots basically require human-like motions due to their human-like appearance. However, they have various limits on joint angles and joint velocities, as well as different numbers of joints and different dimensions compared to humans. Because of these limitations and differences, one appropriate approach is to apply an optimization technique to the motion capture data. Another important issue in whole-body motion generation is the gimbal lock problem, in which a degree of freedom at the three-DOF shoulder disappears. Since gimbal lock causes two DOFs at the shoulder joint to diverge, a simple and effective strategy is required to avoid the divergence. Therefore, we propose a novel algorithm using nonlinear constrained optimization with special cost functions to cope with the aforementioned problems. To verify our algorithm, we chose a fast boxing motion that has a large range of motion and frequent gimbal lock situations, as well as dynamic stepping motions. We then successfully obtained a suitable boxing motion very similar to the captured human motion and also derived a zero moment point (ZMP) trajectory that is realizable for the given android robot model. Finally, quantitative and qualitative evaluations in terms of kinematics and dynamics are carried out for the derived android boxing motion.
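
The sketch below illustrates the general flavor of constrained motion retargeting, not the authors’ cost functions: one frame of captured shoulder angles is fit within assumed robot joint limits while a penalty term biases the solution away from the gimbal-lock configuration. The angles, limits, and weights are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Retarget one frame of captured shoulder angles (rad) onto a robot whose
# joints have tighter limits; a regularizer keeps the elevation angle away
# from the gimbal-lock configuration. All numbers are illustrative.
captured = np.array([1.9, 1.5, 0.4])               # captured [yaw, elevation, roll]
limits = [(-1.6, 1.6), (-1.4, 1.4), (-1.0, 1.0)]   # assumed robot joint limits (rad)
gimbal_elev = np.pi / 2                            # elevation at which two DOFs align

def cost(q):
    tracking = np.sum((q - captured) ** 2)                    # stay close to the human motion
    singularity = 1.0 / ((q[1] - gimbal_elev) ** 2 + 0.05)    # penalize near-gimbal-lock poses
    return tracking + 0.01 * singularity

x0 = np.clip(captured, [lo for lo, _ in limits], [hi for _, hi in limits])
result = minimize(cost, x0=x0, bounds=limits, method="L-BFGS-B")
print(result.x)  # robot joint angles within limits, biased away from the singularity
```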


Author(s):  
MATHIAS FONTMARTY ◽  
PATRICK DANÈS ◽  
FRÉDÉRIC LERASLE

This paper presents a thorough study of particle filter (PF) strategies dedicated to human motion capture from a trinocular vision surveillance setup. An experimental procedure is used, based on a commercial motion capture ring, to provide ground truth. Metrics are proposed to assess performance in terms of accuracy and robustness, but also estimator dispersion, which is often neglected elsewhere. Relative performances are discussed through quantitative and qualitative evaluations on a video database. PF strategies based on Quasi Monte Carlo sampling, a scheme which is surprisingly seldom exploited in the vision community, provide an interesting avenue to explore. Future work is finally discussed.
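
For readers unfamiliar with particle filtering, a minimal predict-weight-resample cycle is sketched below on a one-dimensional state; a capture system would track a high-dimensional pose and, as noted above, Quasi Monte Carlo variants replace the random draws with low-discrepancy sequences. The motion and observation models are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation, motion_std=0.05, obs_std=0.1):
    """One predict-weight-resample cycle of a basic SIR particle filter."""
    # Predict: diffuse particles with a random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Weight: Gaussian likelihood of the observation under each particle
    weights = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # Resample: multinomial resampling proportional to the weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.normal(0.0, 1.0, size=500)
weights = np.full(500, 1.0 / 500)
for z in [0.10, 0.15, 0.20]:          # dummy observations of one pose coordinate
    particles, weights = particle_filter_step(particles, weights, z)
print(particles.mean())               # state estimate after three updates
```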


Author(s):  
Therdsak Tangkuampien ◽  
David Suter

A marker-less motion capture system, based on machine learning, is proposed and tested. Pose information is inferred from images captured by multiple (as few as two) synchronized cameras. The central concept, which we call Kernel Subspace Mapping (KSM), learns the images-to-pose mapping from large numbers of images of a wide variety of people, with the ground-truth poses accurately known. Of course, obtaining ground-truth poses can be problematic, so we choose to use synthetic data, both for learning and for at least some of the testing. The system needs to generalize well to novel inputs: unseen poses (not in the training database) and unseen actors. For learning we use a generic, relatively low-fidelity computer-graphics model, and for testing we sometimes use a more accurate model (made to resemble the first author). What makes machine learning viable for human motion capture is that a high percentage of human motion is coordinated. Indeed, it is now relatively well known that there is large redundancy both in the set of possible images of a human (these images form a relatively smooth, low-dimensional manifold in the huge-dimensional space of all possible images) and in the set of pose angles (again, a low-dimensional, smooth sub-manifold of the moderately high-dimensional space of all possible joint angles). KSM is based on the Kernel PCA (KPCA) algorithm, which is costly; we show that the Greedy Kernel PCA (GKPCA) algorithm can be used to speed up KSM with relatively minor modifications. At the core, then, are two KPCAs (or two GKPCAs): one for learning the pose manifold and one for learning the image manifold. A modification of Locally Linear Embedding (LLE) then bridges the pose and image manifolds.
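
The following sketch conveys the overall pipeline on synthetic data: Kernel PCA compresses image features onto a low-dimensional manifold, and a simple nearest-neighbour regressor stands in for the LLE-style bridge to the pose manifold. It is not the KSM/GKPCA algorithm itself; the feature construction and dimensions are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Synthetic training set: image feature vectors paired with known joint angles,
# mimicking the use of rendered data with exact ground-truth poses.
poses = rng.uniform(-np.pi, np.pi, size=(300, 20))   # 20 joint angles per sample
W = rng.normal(size=(20, 100))                        # fixed synthetic rendering map
images = np.tanh(poses @ W)                           # stand-in image features

# Kernel PCA compresses the image manifold to a low-dimensional embedding
# (GKPCA would approximate this step with a reduced set of training points).
kpca = KernelPCA(n_components=8, kernel="rbf", gamma=0.01)
embedded = kpca.fit_transform(images)

# Nearest-neighbour regression stands in for the LLE-style bridge from the
# image embedding to the pose manifold.
bridge = KNeighborsRegressor(n_neighbors=5).fit(embedded, poses)

test_pose = rng.uniform(-np.pi, np.pi, size=(1, 20))
test_image = np.tanh(test_pose @ W)
estimated_pose = bridge.predict(kpca.transform(test_image))
print(np.abs(estimated_pose - test_pose).mean())      # mean joint-angle error (rad)
```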


2020 ◽  
Vol 99 ◽  
pp. 109520 ◽  
Author(s):  
X. Robert-Lachaine ◽  
H. Mecheri ◽  
A. Muller ◽  
C. Larue ◽  
A. Plamondon

PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0253157
Author(s):  
Saeed Ghorbani ◽  
Kimia Mahdaviani ◽  
Anne Thaler ◽  
Konrad Kording ◽  
Douglas James Cook ◽  
...  

Large high-quality datasets of human body shape and kinematics lay the foundation for modelling and simulation approaches in computer vision, computer graphics, and biomechanics. Creating datasets that combine naturalistic recordings with high-accuracy data about ground truth body shape and pose is challenging because different motion recording systems are either optimized for one or the other. We address this issue in our dataset by using different hardware systems to record partially overlapping information and synchronized data that lend themselves to transfer learning. This multimodal dataset contains 9 hours of optical motion capture data, 17 hours of video data from 4 different points of view recorded by stationary and hand-held cameras, and 6.6 hours of inertial measurement units data recorded from 60 female and 30 male actors performing a collection of 21 everyday actions and sports movements. The processed motion capture data is also available as realistic 3D human meshes. We anticipate use of this dataset for research on human pose estimation, action recognition, motion modelling, gait analysis, and body shape reconstruction.


2021 ◽  
Vol 33 (6) ◽  
pp. 1408-1422
Author(s):  
Alireza Bilesan ◽  
Shunsuke Komizunai ◽  
Teppei Tsujita ◽  
Atsushi Konno ◽  
...  

Kinect has been utilized as a cost-effective, easy-to-use motion capture sensor based on the Kinect skeleton algorithm. However, the limited number of landmarks and inaccuracies in tracking their positions restrict Kinect’s capability. To increase the accuracy of motion capture with Kinect, the Kinect skeleton algorithm and Kinect-based marker tracking were used jointly to track the 3D coordinates of multiple landmarks on the human body. The kinematic parameters of the motion were then calculated from the landmark positions by applying joint constraints and inverse kinematics techniques. The accuracy of the proposed method and of OptiTrack (NaturalPoint, Inc., USA) was evaluated by capturing the joint angles of a humanoid robot (serving as ground truth) in a walking test. To evaluate the accuracy of the proposed method in capturing human kinematic parameters, the lower-body joint angles of five healthy subjects were extracted using a Kinect, and the results were compared with Perception Neuron (Noitom Ltd., China) and OptiTrack data over ten gait trials. Absolute agreement and consistency between each optical system and the robot data in the robot test, and between each motion capture system and the OptiTrack data in the human gait test, were determined using intraclass correlation coefficients (ICC3). The reproducibility between systems was evaluated using Lin’s concordance correlation coefficient (CCC). The correlation coefficients, with 95% confidence intervals (95% CI), were interpreted as substantial for both OptiTrack and the proposed method (ICC > 0.75 and CCC > 0.95) in the humanoid test. The results of the human gait experiments demonstrated the advantage of the proposed method (ICC > 0.75 and RMSE = 1.1460°) over the Kinect skeleton model (ICC < 0.4 and RMSE = 6.5843°).
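
For reference, the two agreement measures reported above can be computed as sketched below: the RMSE between two joint-angle series and Lin’s concordance correlation coefficient in its standard population form. The joint-angle values are hypothetical examples, not data from the study.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two measurement series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(np.mean((a - b) ** 2))

def lins_ccc(a, b):
    """Lin's concordance correlation coefficient between two measurement series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cov = np.mean((a - a.mean()) * (b - b.mean()))
    return 2 * cov / (a.var() + b.var() + (a.mean() - b.mean()) ** 2)

# Hypothetical knee-flexion angles (deg) from the proposed Kinect method and OptiTrack
optitrack = np.array([5.0, 12.3, 25.1, 40.2, 35.6, 20.4, 8.1])
kinect = optitrack + np.random.normal(0, 1.0, optitrack.size)
print(f"RMSE = {rmse(kinect, optitrack):.2f} deg, CCC = {lins_ccc(kinect, optitrack):.3f}")
```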

