Human Gait-labeling Uncertainty and a Hybrid Model for Gait Segmentation

Author(s):  
Jiaen Wu ◽  
Henrik Maurenbrecher ◽  
Alessandro Schaer ◽  
Barna Becsek ◽  
Chris Awai Easthope ◽  
...  

<div><div><div><p>Motion capture systems are widely accepted as ground truth for gait analysis and are used for the validation of other gait analysis systems. To date, their reliability and limitations in the manual labeling of gait events have not been studied.</p><p><b>Objectives</b>: Evaluate human manual labeling uncertainty and introduce a new hybrid gait analysis model for long-term monitoring.</p><p><b>Methods</b>: Inter-labeler inconsistencies are evaluated and estimated by computing the limits of agreement. A model based on dynamic time warping and a convolutional neural network is developed to identify valid strides and eliminate non-stride data in walking inertial data collected by a wearable device; gait events are then detected within the valid stride regions. This approach makes the subsequent data computation more efficient and robust.</p><p><b>Results</b>: The limits of inter-labeler agreement for the key gait events of heel off, toe off, heel strike, and flat foot are 72 ms, 16 ms, 22 ms, and 80 ms, respectively. The hybrid model's classification accuracies for stride and non-stride data are 95.16% and 84.48%, respectively, and the mean absolute errors of the detected heel-off, toe-off, heel-strike, and flat-foot events are 24 ms, 5 ms, 9 ms, and 13 ms, respectively.</p><p><b>Conclusions</b>: The results show the inherent label uncertainty and the limits of human gait labeling of motion capture data. The proposed hybrid model's performance is comparable to that of human labelers, and it is a valid model for reliably detecting strides in human gait data.</p><p><b>Significance</b>: This work establishes the foundation for fully automated human gait analysis systems with performance comparable to that of human labelers.</p></div></div></div>
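The limits-of-agreement computation named in the Methods is, in essence, a Bland-Altman analysis of paired event timestamps. A minimal sketch, using hypothetical labeler timestamps rather than the study's data:

```python
import numpy as np

def limits_of_agreement(labeler_a, labeler_b):
    """Bland-Altman 95% limits of agreement between two labelers'
    timestamps (ms) for the same set of gait events."""
    a = np.asarray(labeler_a, dtype=float)
    b = np.asarray(labeler_b, dtype=float)
    diff = a - b
    bias = diff.mean()            # systematic offset between labelers
    sd = diff.std(ddof=1)         # sample standard deviation of differences
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical toe-off labels (ms) from two human labelers:
lo, hi = limits_of_agreement([100, 220, 345, 470, 600],
                             [104, 218, 350, 468, 605])
```

The width of the interval `[lo, hi]` is what bounds how precisely any automated detector can be judged against human labels.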

2021 ◽  


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4405
Author(s):  
Diego Guffanti ◽  
Alberto Brunete ◽  
Miguel Hernando ◽  
Javier Rueda ◽  
Enrique Navarro Cabello

Several studies have examined the accuracy of the Kinect V2 sensor during gait analysis. Usually, the data retrieved by the Kinect V2 sensor are compared with the ground truth of certified systems using a Euclidean comparison. Due to the Kinect V2 sensor's latency, applying a uniform temporal alignment is not adequate for comparing the signals. On that basis, the purpose of this study was to explore the ability of the dynamic time warping (DTW) algorithm to compensate for sensor latency (3 samples, or 90 ms) and develop a proper accuracy estimation. During the experimental stage, six iterations were performed using a dual Kinect V2 system. The walking tests were performed at a self-selected speed. The sensor accuracy under Euclidean matching was consistent with that reported in previous studies. After latency compensation, the sensor showed considerably lower error rates for all joints. This demonstrates that the accuracy had been underestimated due to the use of inappropriate comparison techniques. In contrast, DTW is a promising method that compensates for the sensor latency and performs adequately in comparison with certified systems.
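The core argument is that a sample-by-sample Euclidean comparison penalizes a fixed latency that DTW can absorb by warping the time axis. A minimal sketch with a textbook DTW implementation and an illustrative shifted signal (not the study's data or latency model):

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-time-warping distance between two 1-D signals."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Best of match, insertion, deletion from the neighboring cells
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical joint trajectory and a 3-sample-delayed copy of it
t = np.linspace(0, 2 * np.pi, 60)
ref = np.sin(t)
delayed = np.roll(ref, 3)                 # crude stand-in for sensor latency
euclid = np.abs(ref - delayed).sum()      # uniform (sample-wise) comparison
warped = dtw_distance(ref, delayed)       # latency-compensated comparison
```

Because the diagonal path is one valid warping path, `warped` can never exceed `euclid`; with a latency-shifted signal it is substantially smaller, which is exactly why Euclidean matching underestimates the sensor's accuracy.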


2021 ◽  
Vol 17 (4) ◽  
pp. e1008935
Author(s):  
Jan Stenum ◽  
Cristina Rossi ◽  
Ryan T. Roemmich

Human gait analysis is often conducted in clinical and basic research, but many common approaches (e.g., three-dimensional motion capture, wearables) are expensive, immobile, data-limited, and require expertise. Recent advances in video-based pose estimation suggest potential for gait analysis using two-dimensional video collected from readily accessible devices (e.g., smartphones). To date, several studies have extracted features of human gait using markerless pose estimation. However, we currently lack an evaluation of video-based approaches using a dataset of human gait for a wide range of gait parameters on a stride-by-stride basis, as well as a workflow for performing gait analysis from video. Here, we compared spatiotemporal and sagittal kinematic gait parameters measured with OpenPose (open-source video-based human pose estimation) against simultaneously recorded three-dimensional motion capture from overground walking of healthy adults. When assessing all individual steps in the walking bouts, we observed mean absolute errors between motion capture and OpenPose of 0.02 s for temporal gait parameters (i.e., step time, stance time, swing time and double support time) and 0.049 m for step lengths. Accuracy improved when spatiotemporal gait parameters were calculated as individual participant mean values: mean absolute error was 0.01 s for temporal gait parameters and 0.018 m for step lengths. The greatest difference in gait speed between motion capture and OpenPose was less than 0.10 m/s. Mean absolute errors of sagittal-plane hip, knee, and ankle angles between motion capture and OpenPose were 4.0°, 5.6°, and 7.4°, respectively. Our analysis workflow is freely available, involves minimal user input, and does not require prior gait analysis expertise. Finally, we offer suggestions and considerations for future applications of pose estimation for human gait analysis.
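Temporal gait parameters such as step time are differences between consecutive gait events, and the reported accuracy is a mean absolute error between the two systems' per-step values. A minimal sketch with hypothetical heel-strike timestamps (not the study's data):

```python
import numpy as np

def step_times(heel_strikes):
    """Step times (s) as differences between consecutive heel strikes."""
    return np.diff(np.asarray(heel_strikes, dtype=float))

# Hypothetical heel-strike times (s) from motion capture and from OpenPose
mocap_hs = [0.00, 0.52, 1.05, 1.57, 2.10]
openpose_hs = [0.02, 0.51, 1.08, 1.55, 2.13]

# Step-by-step mean absolute error between the two systems
mae = np.abs(step_times(mocap_hs) - step_times(openpose_hs)).mean()
```

The same pattern (event series → per-step parameter → MAE) applies to stance, swing, and double-support times, each defined from its own pair of events.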


Author(s):  
Jan Stenum ◽  
Cristina Rossi ◽  
Ryan T. Roemmich

ABSTRACT: Walking is the primary mode of human locomotion. Accordingly, people have been interested in studying human gait since at least the fourth century BC. Human gait analysis is now common in many fields of clinical and basic research, but gold standard approaches – e.g., three-dimensional motion capture, instrumented mats or footwear, and wearables – are often expensive, immobile, data-limited, and/or require specialized equipment or expertise for operation. Recent advances in video-based pose estimation have suggested exciting potential for analyzing human gait using only two-dimensional video inputs collected from readily accessible devices (e.g., smartphones, tablets). However, we currently lack: 1) data about the accuracy of video-based pose estimation approaches for human gait analysis relative to gold standard measurement techniques and 2) an available workflow for performing human gait analysis via video-based pose estimation. In this study, we compared a large set of spatiotemporal and sagittal kinematic gait parameters as measured by OpenPose (a freely available algorithm for video-based human pose estimation) and three-dimensional motion capture from trials where healthy adults walked overground. We found that OpenPose performed well in estimating many gait parameters (e.g., step time, step length, sagittal hip and knee angles) while some (e.g., double support time, sagittal ankle angles) were less accurate. We observed that mean values for individual participants – as are often of primary interest in clinical settings – were more accurate than individual step-by-step measurements. We also provide a workflow for users to perform their own gait analyses and offer suggestions and considerations for future approaches.
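The observation that per-participant means are more accurate than individual step-by-step measurements can be illustrated with a toy example: random per-step errors partially cancel when averaged within a participant (all values below are hypothetical):

```python
import numpy as np

# Hypothetical step lengths (m): rows = participants, columns = steps,
# measured by motion capture and by video-based pose estimation
mocap = np.array([[0.60, 0.62, 0.58],
                  [0.70, 0.69, 0.72]])
video = np.array([[0.65, 0.57, 0.61],
                  [0.66, 0.74, 0.71]])

# Error of each individual step vs. error of per-participant mean values
step_mae = np.abs(mocap - video).mean()
mean_mae = np.abs(mocap.mean(axis=1) - video.mean(axis=1)).mean()
```

Here `mean_mae` comes out well below `step_mae`, mirroring the abstract's finding that participant-level means, often the quantity of clinical interest, are the more reliable output.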


2017 ◽  
Vol 57 ◽  
pp. 241-242 ◽  
Author(s):  
Elise Klæbo Vonstad ◽  
Else Lervik ◽  
Tomas Holt ◽  
Mildrid Ljosland ◽  
Grethe Sandstrak ◽  
...  

Author(s):  
Ítalo Rodrigues ◽  
Jadiane Dionisio ◽  
Rogério Sales Gonçalves

Author(s):  
Grazia Cicirelli ◽  
Donato Impedovo ◽  
Vincenzo Dentamaro ◽  
Roberto Marani ◽  
Giuseppe Pirlo ◽  
...  
