Estimation of gait kinematics and kinetics from inertial sensor data using optimal control of musculoskeletal models

2019 ◽  
Vol 95 ◽  
pp. 109278 ◽  
Author(s):  
Eva Dorschky ◽  
Marlies Nitschke ◽  
Ann-Kristin Seifer ◽  
Antonie J. van den Bogert ◽  
Bjoern M. Eskofier

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4535
Author(s):  
Marion Mundt ◽  
William R. Johnson ◽  
Wolfgang Potthast ◽  
Bernd Markert ◽  
Ajmal Mian ◽  
...  

The application of artificial intelligence techniques to wearable sensor data may facilitate accurate analysis outside of controlled laboratory settings, the holy grail for gait clinicians and sports scientists looking to bridge the lab-to-field divide. Using these techniques, parameters that are difficult to measure directly in the wild may be predicted from surrogate, lower-resolution inputs. One example is the prediction of joint kinematics and kinetics from inertial measurement unit (IMU) sensors. Despite increased research, there is a paucity of information examining the most suitable artificial neural network (ANN) for predicting gait kinematics and kinetics from IMUs. This paper compares the performance of three commonly employed ANNs used to predict gait kinematics and kinetics: the multilayer perceptron (MLP), long short-term memory (LSTM), and convolutional neural network (CNN). Overall, high correlations between ground truth and predicted kinematic and kinetic data were found across all investigated ANNs. However, the choice of ANN should be based on the prediction task and the intended use case. For the prediction of joint angles, CNNs appear favourable; however, they show no advantage over an MLP for the prediction of joint moments. If real-time joint angle and joint moment prediction is desired, an LSTM network should be utilised.
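As a hedged illustration of the kind of sequence model compared in this study, the sketch below maps a window of raw IMU channels to joint-angle trajectories with an LSTM; the channel counts, layer sizes, and output dimensions are illustrative assumptions, not the authors' architecture.

```python
# A minimal sketch (not the authors' implementation) of an LSTM that maps a
# window of IMU channels to joint-angle trajectories. Dimensions assume 8 IMUs
# with 6 channels each (accelerometer + gyroscope) and 6 joint angles; all of
# these numbers are illustrative.
import torch
import torch.nn as nn

class ImuToAngleLSTM(nn.Module):
    def __init__(self, n_channels=48, hidden=128, n_outputs=6):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)  # per-time-step regression

    def forward(self, x):          # x: (batch, time, channels)
        out, _ = self.lstm(x)      # out: (batch, time, hidden)
        return self.head(out)      # (batch, time, joint angles)

model = ImuToAngleLSTM()
dummy = torch.randn(4, 100, 48)    # 4 gait cycles, 100 samples each
print(model(dummy).shape)          # torch.Size([4, 100, 6])
```

An MLP or 1-D CNN variant would replace the recurrent layers with dense or convolutional layers over the same input window, which is essentially the comparison the paper describes.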


2020 ◽  
Vol 53 (2) ◽  
pp. 15990-15997
Author(s):  
Felix Laufer ◽  
Michael Lorenz ◽  
Bertram Taetz ◽  
Gabriele Bleser

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Andrew P. Creagh ◽  
Florian Lipsmeier ◽  
Michael Lindemann ◽  
Maarten De Vos

The emergence of digital technologies such as smartphones in healthcare applications has demonstrated the possibility of developing rich, continuous, and objective measures of multiple sclerosis (MS) disability that can be administered remotely and out-of-clinic. Deep Convolutional Neural Networks (DCNN) may capture a richer representation of healthy and MS-related ambulatory characteristics from the raw smartphone-based inertial sensor data than standard feature-based methodologies. To overcome the typical limitations associated with remotely generated health data, such as low subject numbers, sparsity, and heterogeneous data, a transfer learning (TL) model from similar large open-source datasets was proposed. Our TL framework leveraged the ambulatory information learned on human activity recognition (HAR) tasks collected from wearable smartphone sensor data. It was demonstrated that fine-tuning TL DCNN HAR models towards MS disease recognition tasks outperformed previous Support Vector Machine (SVM) feature-based methods, as well as DCNN models trained end-to-end, by upwards of 8–15%. A lack of transparency of “black-box” deep networks remains one of the largest stumbling blocks to the wider acceptance of deep learning for clinical applications. Ensuing work therefore aimed to visualise DCNN decisions as relevance heatmaps using Layer-Wise Relevance Propagation (LRP). Through the LRP framework, the patterns captured from smartphone-based inertial sensor data that distinguish healthy subjects from people with MS (PwMS) could begin to be established and understood. Interpretations suggested that cadence-based measures, gait speed, and ambulation-related signal perturbations were distinct characteristics distinguishing participants with MS disability from healthy participants. Robust and interpretable outcomes, generated from high-frequency out-of-clinic assessments, could greatly augment the current in-clinic assessment picture for PwMS, inform better disease management, and enable the development of better therapeutic interventions.
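As a rough sketch of the transfer-learning pattern described above (not the authors' model), the snippet assumes a small 1-D CNN pre-trained on a HAR task, freezes its convolutional backbone, and swaps the output head for a binary healthy-versus-PwMS classifier before fine-tuning; the architecture, layer sizes, and the `har_weights.pt` file are hypothetical.

```python
# A minimal sketch of HAR-to-MS transfer learning: reuse a (hypothetically)
# pre-trained 1-D CNN backbone and fine-tune only a new classification head.
import torch
import torch.nn as nn

class HarCnn(nn.Module):
    def __init__(self, n_channels=6, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = HarCnn()
# model.load_state_dict(torch.load("har_weights.pt"))  # hypothetical HAR weights
for p in model.features.parameters():           # freeze the pre-trained backbone
    p.requires_grad = False
model.classifier = nn.Linear(64, 2)             # new head: healthy vs. PwMS
optimiser = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
```

In practice the frozen backbone can also be partially unfrozen once the new head has converged, which is one common variant of the fine-tuning strategy the abstract refers to.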


2021 ◽  
Vol 185 ◽  
pp. 282-291
Author(s):  
Nizam U. Ahamed ◽  
Kellen T. Krajewski ◽  
Camille C. Johnson ◽  
Adam J. Sterczala ◽  
Julie P. Greeves ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2480
Author(s):  
Isidoro Ruiz-García ◽  
Ismael Navarro-Marchal ◽  
Javier Ocaña-Wilhelmi ◽  
Alberto J. Palma ◽  
Pablo J. Gómez-López ◽  
...  

In skiing, it is important to know how the skier accelerates and inclines the skis during the turn to avoid injuries and improve technique. The purpose of this pilot study with three participants was to develop and evaluate a compact, wireless, and low-cost system for detecting the inclination and acceleration of skis in the field based on inertial measurement units (IMUs). To that end, a commercial IMU board was placed on each ski behind the skier's boot. Using the attitude and heading reference system algorithm included in the sensor board, the orientation and attitude data of the skis (roll, pitch, and yaw) were obtained by IMU sensor data fusion. Results demonstrate that the proposed IMU-based system can provide reliable, low-drift data for up to 11 min of continuous use in the worst case. Inertial angle data from the IMU-based system were compared with data collected by a video-based 3D kinematic reference system to evaluate its operation in terms of data correlation and system performance. Correlation coefficients between 0.889 (roll) and 0.991 (yaw) were obtained. Mean biases from −1.13° (roll) to 0.44° (yaw) and 95% limits of agreement from 2.87° (yaw) to 6.27° (roll) were calculated for the 1-min trials. Although low mean biases were achieved, some limitations arose in the system's precision for pitch and roll estimation, which could be due to the low sampling rate allowed by the sensor data fusion algorithm and the initial zeroing of the gyroscope.
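A minimal sketch of the agreement statistics reported above (Pearson correlation, mean bias, and 95% limits of agreement) for a single angle channel is given below; the synthetic arrays merely stand in for time-synchronised IMU and video-based reference angles.

```python
# Agreement statistics between an IMU-derived angle and a reference angle:
# Pearson correlation, mean bias, and Bland-Altman 95% limits of agreement.
import numpy as np

def agreement(imu_deg, ref_deg):
    imu_deg, ref_deg = np.asarray(imu_deg), np.asarray(ref_deg)
    r = np.corrcoef(imu_deg, ref_deg)[0, 1]   # Pearson correlation
    diff = imu_deg - ref_deg
    bias = diff.mean()                        # mean bias (deg)
    half = 1.96 * diff.std(ddof=1)            # half-width of 95% limits of agreement
    return r, bias, (bias - half, bias + half)

rng = np.random.default_rng(0)
ref = rng.normal(0, 20, 6000)                 # e.g. 1 min of roll angle at 100 Hz
imu = ref + rng.normal(-1.0, 2.0, ref.size)   # synthetic bias + noise
print(agreement(imu, ref))
```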


2011 ◽  
Vol 467-469 ◽  
pp. 108-113
Author(s):  
Xin Yu Li ◽  
Dong Yi Chen

Accurate tracking for Augmented Reality applications is a challenging task. Multi-sensor hybrid tracking generally provides more stable results than visual tracking alone. This paper presents a new tightly-coupled hybrid tracking approach combining a vision-based system with an inertial sensor. Based on multi-frequency sampling theory for measurement data synchronization, a strong tracking filter (STF) is used to smooth sensor data and estimate position and orientation. By adding a time-varying fading factor to adaptively adjust the filter's prediction error covariance, this method improves tracking performance for fast-moving targets. Experimental results show the efficiency and robustness of the proposed approach.
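The fading-factor idea behind a strong tracking filter can be sketched as a standard linear Kalman update in which the predicted error covariance is inflated by a time-varying factor λ ≥ 1, so that recent measurements are weighted more heavily during fast motion; the constant-velocity model and the fixed λ below are illustrative assumptions, not the paper's formulation.

```python
# One step of a Kalman-style update with a fading factor lam >= 1 inflating
# the predicted error covariance (the core mechanism of a strong tracking filter).
import numpy as np

def stf_step(x, P, z, F, H, Q, R, lam):
    x_pred = F @ x
    P_pred = lam * (F @ P @ F.T) + Q           # fading factor inflates prior covariance
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # gain grows as P_pred is inflated
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# 1-D constant-velocity example (position measurement only)
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
x, P = stf_step(x, P, np.array([0.5]), F, H, Q, R, lam=1.2)
```

In a full strong tracking filter the factor λ is computed online from the innovation sequence rather than fixed, which is what lets the filter react to abrupt target motion.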


2012 ◽  
Vol 27 (2) ◽  
pp. 131-137 ◽  
Author(s):  
Janaine Cunha Polese ◽  
Luci Fuscaldi Teixeira-Salmela ◽  
Lucas Rodrigues Nascimento ◽  
Christina Danielli Morais Faria ◽  
Renata Noce Kirkwood ◽  
...  
