A Determination Method for Gait Event Based on Acceleration Sensors

Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5499 ◽  
Author(s):  
Chang Mei ◽  
Farong Gao ◽  
Ying Li

Detection of gait events is a crucial step towards the effective assessment and rehabilitation of motor dysfunctions. However, data acquisition with a three-dimensional motion capture (3D Mo-Cap) system requires a costly setup, such as a high-standard laboratory environment, which limits widespread clinical application. Inertial sensors are increasingly being used to recognize and classify physical activities in a variety of applications, and they are now sufficiently small in size and light in weight to be part of a body sensor network for the collection of human gait data. The acceleration signal has found important applications in human gait recognition. In this paper, using experimental data from the heel and toe, the wavelet method was first used to remove noise from the acceleration signal; then, based on a threshold on the comprehensive change rate of the acceleration signal, the signal was coarsely segmented. Subsequently, the vertical acceleration signals from heel and toe were integrated twice to compute their respective vertical displacements. Four gait events were determined in the segmented signal, based on the characteristics of the vertical displacement of the heel and toe. The results indicated that the detected gait events were consistent with the synchronous record of the motion capture system. The method achieves gait event subdivision while ensuring the accuracy of the defined gait events. The work acts as a valuable reference for further study of gait recognition.
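The double-integration step described above can be sketched as follows. This is a minimal illustration assuming a uniformly sampled vertical acceleration signal with gravity/bias removed by mean subtraction; it is not the authors' exact pipeline, which also applies wavelet denoising and change-rate segmentation first.

```python
import numpy as np

def vertical_displacement(acc, fs):
    """Twice-integrate a vertical acceleration signal (m/s^2) sampled at fs (Hz)."""
    acc = np.asarray(acc, dtype=float)
    acc = acc - acc.mean()                     # crude gravity/bias removal
    dt = 1.0 / fs
    # cumulative trapezoidal integration: acceleration -> velocity
    vel = np.concatenate(([0.0], np.cumsum((acc[1:] + acc[:-1]) * dt / 2)))
    vel = vel - vel.mean()                     # crude drift correction
    # velocity -> displacement
    disp = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1]) * dt / 2)))
    return disp
```

In practice the integration would be run per segment between detected strides, since the mean-removal steps above are only a stand-in for proper drift handling over short windows.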

Author(s):  
Pyeong-Gook Jung ◽  
Sehoon Oh ◽  
Gukchan Lim ◽  
Kyoungchul Kong

Motion capture systems play an important role in health-care and sport-training systems. In particular, there is great demand for a mobile motion capture system that enables people to monitor their health condition and practice sport postures anywhere at any time. Motion capture systems with infrared or vision cameras, however, require a special setting, which hinders their application in a mobile system. In this paper, a mobile three-dimensional motion capture system is developed based on inertial sensors and smart shoes. Sensor signals are measured and processed by a mobile computer; thus, the proposed system enables the analysis and diagnosis of postures during outdoor sports as well as indoor activities. The measured signals are transformed into quaternions to avoid the gimbal lock effect. In order to improve the precision of the proposed motion capture system in open and outdoor spaces, a frequency-adaptive sensor fusion method and a kinematic model are utilized to construct the whole-body motion in real time. The reference point is continuously updated by smart shoes that measure the ground reaction forces.
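As an illustration of the quaternion representation mentioned above, a minimal Euler-to-quaternion conversion is sketched below. The ZYX (yaw-pitch-roll) rotation order is an assumption; the paper does not state which convention its system uses.

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert ZYX Euler angles (radians) to a unit quaternion (w, x, y, z)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    w = cr * cp * cy + sr * sp * sy
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    return w, x, y, z
```

Unlike Euler angles, the quaternion form has no singular orientation, which is why attitude filters typically integrate and fuse in quaternion space.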


Proceedings ◽  
2020 ◽  
Vol 49 (1) ◽  
pp. 37
Author(s):  
Sam Gleadhill ◽  
Daniel James ◽  
Raymond Leadbetter ◽  
Tomohito Wada ◽  
Ryu Nagahara ◽  
...  

There are currently no evidence-based, practical, automated injury risk factor estimation tools to monitor low back compressive force in ambulatory or sporting environments. For this purpose, inertial sensors may potentially replace laboratory-based systems with comparable results. The objective was to investigate the validity of inertial sensors for monitoring low back compression force. Thirty participants completed a series of lifting tasks from the floor. Back compression force was estimated using a hand-calculated method, an inertial sensor method and a three-dimensional motion capture method. Results demonstrated that semi-automation with a sensor had higher agreement with motion capture than the hand-calculated method, with angle errors of less than six degrees and back compression force errors of less than 200 Newtons. It was concluded that inertial sensors are valid for static low back compression force estimation.
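Hand-calculated back compression estimates of the kind compared above are typically static single-equivalent-muscle moment balances about L5/S1. The sketch below uses illustrative segment distances and a 5 cm erector spinae moment arm; these values and the model itself are assumptions for demonstration, not the study's actual parameters.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def static_back_compression(trunk_mass, load_mass, trunk_angle_deg,
                            trunk_com_dist=0.25, load_dist=0.45,
                            erector_arm=0.05):
    """Illustrative static L5/S1 compression force (N) during a lift.

    trunk_angle_deg is trunk flexion measured from vertical; distances are
    horizontal lever arms about L5/S1 when the trunk is flexed.
    """
    angle = math.radians(trunk_angle_deg)
    # moments of trunk weight and hand-held load about L5/S1
    trunk_moment = trunk_mass * G * trunk_com_dist * math.sin(angle)
    load_moment = load_mass * G * load_dist * math.sin(angle)
    # erector spinae force needed to balance those moments
    muscle_force = (trunk_moment + load_moment) / erector_arm
    # compression = muscle force plus weight component along the spine axis
    return muscle_force + (trunk_mass + load_mass) * G * math.cos(angle)
```

Because the muscle moment arm is small, even modest flexion angles produce compression forces many times body weight, which is why angle errors of a few degrees matter in this kind of estimation.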


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5466 ◽  
Author(s):  
Xinrui Jiang ◽  
Ye Zhang ◽  
Qi Yang ◽  
Bin Deng ◽  
Hongqiang Wang

At present, there are two obvious problems in radar-based gait recognition. First, the traditional radar frequency band struggles to meet the requirements of fine identification due to its low carrier frequency and limited micro-Doppler resolution. Second, radar signal processing is relatively complex, and existing signal processing algorithms are poor in real-time usability, robustness and universality. This paper focuses on these two basic problems of human gait detection with radar and proposes a human gait classification and recognition method based on millimeter-wave array radar. Based on deep-learning technology, a multi-channel three-dimensional convolutional neural network built on an improved residual network is proposed, which completes the classification and recognition of human gait through hierarchical extraction and fusion of multi-dimensional features. Taking the three-dimensional coordinates, motion speed and intensity of strong scattering points during target motion as network inputs, multi-channel convolution is used to extract motion features, and the classification and recognition of typical daily actions are completed. The experimental results show more than 92.5% recognition accuracy for common gait categories such as jogging and normal walking.
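The core operation of the network described above is 3D convolution over voxelized scattering-point features. A naive single-channel, valid-mode version is sketched below purely to show the mechanics; the paper's network uses learned multi-channel kernels, residual connections and pooling, none of which are reproduced here.

```python
import numpy as np

def conv3d(volume, kernel):
    """Naive valid-mode 3D cross-correlation of a single-channel volume."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # dot product of the kernel with one 3D window of the input
                out[i, j, k] = np.sum(volume[i:i + d, j:j + h, k:k + w] * kernel)
    return out
```

Stacking one such convolution per input channel (coordinates, speed, intensity) and summing the results is the "multi-channel" fusion the abstract refers to.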


Author(s):  
Jan Stenum ◽  
Cristina Rossi ◽  
Ryan T. Roemmich

Walking is the primary mode of human locomotion. Accordingly, people have been interested in studying human gait since at least the fourth century BC. Human gait analysis is now common in many fields of clinical and basic research, but gold standard approaches (e.g., three-dimensional motion capture, instrumented mats or footwear, and wearables) are often expensive, immobile, data-limited, and/or require specialized equipment or expertise for operation. Recent advances in video-based pose estimation have suggested exciting potential for analyzing human gait using only two-dimensional video inputs collected from readily accessible devices (e.g., smartphones, tablets). However, we currently lack: 1) data about the accuracy of video-based pose estimation approaches for human gait analysis relative to gold standard measurement techniques and 2) an available workflow for performing human gait analysis via video-based pose estimation. In this study, we compared a large set of spatiotemporal and sagittal kinematic gait parameters as measured by OpenPose (a freely available algorithm for video-based human pose estimation) and three-dimensional motion capture from trials where healthy adults walked overground. We found that OpenPose performed well in estimating many gait parameters (e.g., step time, step length, sagittal hip and knee angles) while some (e.g., double support time, sagittal ankle angles) were less accurate. We observed that mean values for individual participants, as are often of primary interest in clinical settings, were more accurate than individual step-by-step measurements. We also provide a workflow for users to perform their own gait analyses and offer suggestions and considerations for future approaches.
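Computing a sagittal joint angle from 2D pose keypoints, such as the knee angle from hip, knee and ankle positions, reduces to the angle between two vectors. A minimal sketch follows; the keypoint layout is assumed for illustration and is not OpenPose's exact output format.

```python
import math

def sagittal_angle(a, b, c):
    """Angle (degrees) at joint b formed by 2D keypoints a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # clamp to guard against floating-point values just outside [-1, 1]
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))
```

A fully extended knee gives roughly 180 degrees with this convention; clinical flexion angles are then reported as 180 minus this value.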


2021 ◽  
Vol 17 (4) ◽  
pp. e1008935
Author(s):  
Jan Stenum ◽  
Cristina Rossi ◽  
Ryan T. Roemmich

Human gait analysis is often conducted in clinical and basic research, but many common approaches (e.g., three-dimensional motion capture, wearables) are expensive, immobile, data-limited, and require expertise. Recent advances in video-based pose estimation suggest potential for gait analysis using two-dimensional video collected from readily accessible devices (e.g., smartphones). To date, several studies have extracted features of human gait using markerless pose estimation. However, we currently lack evaluation of video-based approaches using a dataset of human gait for a wide range of gait parameters on a stride-by-stride basis and a workflow for performing gait analysis from video. Here, we compared spatiotemporal and sagittal kinematic gait parameters measured with OpenPose (open-source video-based human pose estimation) against simultaneously recorded three-dimensional motion capture from overground walking of healthy adults. When assessing all individual steps in the walking bouts, we observed mean absolute errors between motion capture and OpenPose of 0.02 s for temporal gait parameters (i.e., step time, stance time, swing time and double support time) and 0.049 m for step lengths. Accuracy improved when spatiotemporal gait parameters were calculated as individual participant mean values: mean absolute error was 0.01 s for temporal gait parameters and 0.018 m for step lengths. The greatest difference in gait speed between motion capture and OpenPose was less than 0.10 m/s. Mean absolute error of sagittal plane hip, knee and ankle angles between motion capture and OpenPose were 4.0°, 5.6° and 7.4°. Our analysis workflow is freely available, involves minimal user input, and does not require prior gait analysis expertise. Finally, we offer suggestions and considerations for future applications of pose estimation for human gait analysis.
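The spatiotemporal parameters evaluated above follow directly from detected heel-strike events. A minimal sketch, assuming alternating left/right strikes and a forward heel coordinate as inputs (the actual workflow derives these from OpenPose keypoint trajectories):

```python
import numpy as np

def step_parameters(strike_frames, heel_x, fps):
    """Step times (s), step lengths (m) and mean gait speed (m/s).

    strike_frames: frame indices of successive, alternating heel strikes.
    heel_x: forward position (m) of the striking heel at each strike.
    """
    strike_frames = np.asarray(strike_frames, dtype=float)
    heel_x = np.asarray(heel_x, dtype=float)
    times = np.diff(strike_frames) / fps      # time between consecutive strikes
    lengths = np.diff(heel_x)                 # forward distance between strikes
    speed = lengths.sum() / times.sum()       # distance covered / time elapsed
    return times, lengths, speed
```

Per-participant means of these step-by-step values are what the study reports as being notably more accurate than the individual steps themselves.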


2020 ◽  
Vol 64 (1-4) ◽  
pp. 701-709
Author(s):  
Shuxia Tian ◽  
Penghui Zhang ◽  
Liping Huang ◽  
Xueqian Song ◽  
Zhenmao Chen ◽  
...  

Hard-point detection is an important part of catenary inspection. In this paper, a pantograph-catenary coupling model was first established. The vertical acceleration of the pantograph during operation was then calculated using three-dimensional modeling software and finite element analysis software. The acceleration signal, mixed with white noise, was filtered with a global default threshold to obtain the hard-point detection feature signal. Finally, a Hidden Markov Model corresponding to each state of the hard point was trained on the characteristic signal, which verified the feasibility of the Hidden Markov Model for hard-point detection.
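The global-threshold filtering described above is a wavelet-threshold denoising step. A one-level Haar version with the universal soft threshold is sketched below; the specific wavelet basis and threshold rule used by the authors are assumptions here, since the abstract does not state them.

```python
import numpy as np

def haar_soft_denoise(x):
    """One-level Haar wavelet decomposition with a universal soft threshold."""
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % 2                       # truncate to even length
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)        # approximation coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)        # detail coefficients
    sigma = np.median(np.abs(d)) / 0.6745         # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(max(n, 2)))  # universal threshold
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft thresholding
    y = np.empty(n)                               # inverse Haar transform
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

The cleaned signal would then be segmented and quantized into the observation sequences from which the per-state Hidden Markov Models are trained.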


2016 ◽  
Vol 6 (2) ◽  
pp. 129-137 ◽  
Author(s):  
Michal Balazia ◽  
Konstantinos N. Plataniotis

Author(s):  
Tomasz Krzeszowski ◽  
Bogdan Kwolek ◽  
Agnieszka Michalczuk ◽  
Adam Świtoński ◽  
Henryk Josiński

2020 ◽  
Vol 98 ◽  
pp. 109429 ◽  
Author(s):  
Rubén Soussé ◽  
Jorge Verdú ◽  
Ricardo Jauregui ◽  
Ventura Ferrer-Roca ◽  
Simone Balocco

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3496
Author(s):  
Li Wang ◽  
Yajun Li ◽  
Fei Xiong ◽  
Wenyu Zhang

Human identification based on motion capture data has received significant attention for its wide applications in authentication and surveillance systems. An optical motion capture system (OMCS) can dynamically capture the high-precision three-dimensional locations of optical trackers attached to a human body, but its potential for gait recognition has not been studied in existing works. On the other hand, a typical OMCS can only support one player at a time, which limits its capability and efficiency. In this paper, our goals are to investigate the performance of OMCS-based gait recognition and to realize gait recognition in an OMCS such that it can support multiple players at the same time. We develop a gait recognition method based on decision fusion that includes the following four steps: feature extraction, unreliable feature calibration, classification of single motion frames, and decision fusion of multiple motion frames. We use a kernel extreme learning machine (KELM) for single-motion-frame classification, and in particular we propose a reliability weighted sum (RWS) decision fusion method to combine the fuzzy decisions of the motion frames. We demonstrate the performance of the proposed method using walking gait data collected from 76 participants; results show that KELM significantly outperforms support vector machine (SVM) and random forest in the single-motion-frame classification task, and that the proposed RWS decision fusion rule achieves better fusion accuracy than conventional fusion rules. Our results also show that, with 10 motion trackers placed on lower-body locations, the proposed method can achieve 100% validation accuracy with fewer than 50 gait motion frames.
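The reliability-weighted-sum idea can be sketched as below, using the margin between each frame's top two class scores as an assumed reliability proxy; the paper's exact reliability measure may differ.

```python
import numpy as np

def rws_fusion(frame_probs):
    """Fuse per-frame fuzzy decisions into one label.

    frame_probs: array of shape (n_frames, n_classes), each row a fuzzy
    decision (class scores) for one motion frame.
    """
    probs = np.asarray(frame_probs, dtype=float)
    sorted_p = np.sort(probs, axis=1)
    # reliability of a frame: gap between its top-2 class scores
    margins = sorted_p[:, -1] - sorted_p[:, -2]
    weights = margins / (margins.sum() + 1e-12)   # normalize to sum to 1
    fused = weights @ probs                       # reliability-weighted sum
    return int(np.argmax(fused)), fused
```

Compared with a plain average, this down-weights ambiguous frames, which is the intuition behind preferring RWS over conventional fusion rules.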

