Prediction of Human Motion with Motion Optimization and Neural Networks
Author(s): Juxing Wang, Zaojun Fang, Linyong Shen, Chen He

2020 ◽ Vol 10 (10) ◽ pp. 3358
Author(s): Jiyuan Song, Aibin Zhu, Yao Tu, Hu Huang, Muhammad Affan Arif, ...

In response to the need for an exoskeleton to quickly identify the wearer's movement mode under a mixed control mode, this paper studies how different feature parameters of the surface electromyography (sEMG) signal affect the accuracy of human motion pattern recognition with multilayer perceptrons and long short-term memory (LSTM) neural networks. sEMG signals are recorded for seven common human motion patterns in daily life, and time-domain and frequency-domain features are extracted to build a feature parameter dataset for training the classifiers. Recognition of human lower-extremity movement patterns is then carried out with the multilayer perceptron and the LSTM network, and the final recognition accuracy rates obtained with different feature parameters and different classifier model parameters are compared. The experimental results show that the best recognition accuracy is 95.53% with the multilayer perceptron and 96.57% with the LSTM neural network.
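The abstract does not give the feature definitions or network configurations, so the following sketch is only an assumed illustration of the described pipeline: it computes a few common time-domain and frequency-domain sEMG features (mean absolute value, RMS, zero crossings, median and mean frequency) for each channel of a signal window and defines small multilayer-perceptron and LSTM classifiers for the seven motion classes. The channel count, sampling rate, window length, feature choices, and layer sizes are assumptions, not values from the paper.

```python
import numpy as np
import torch
import torch.nn as nn

N_CHANNELS = 4   # assumed number of sEMG channels (not stated in the abstract)
N_CLASSES = 7    # seven motion patterns, as in the abstract
FS = 1000        # assumed sampling rate in Hz

def emg_features(window):
    """Common time- and frequency-domain features for one sEMG window.

    window: (n_channels, n_samples) array; the feature set is illustrative.
    """
    feats = []
    for ch in window:
        mav = np.mean(np.abs(ch))                               # mean absolute value
        rms = np.sqrt(np.mean(ch ** 2))                         # root mean square
        zc = np.sum(np.signbit(ch[:-1]) != np.signbit(ch[1:]))  # zero crossings
        spec = np.abs(np.fft.rfft(ch)) ** 2                     # power spectrum
        freqs = np.fft.rfftfreq(ch.size, d=1.0 / FS)
        cum = np.cumsum(spec)
        mdf = freqs[np.searchsorted(cum, cum[-1] / 2)]          # median frequency
        mnf = np.sum(freqs * spec) / np.sum(spec)               # mean frequency
        feats.extend([mav, rms, zc, mdf, mnf])
    return np.asarray(feats, dtype=np.float32)

class MLPClassifier(nn.Module):
    """Multilayer perceptron over a fixed-length feature vector."""
    def __init__(self, n_features, n_classes=N_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

class LSTMClassifier(nn.Module):
    """LSTM over a sequence of per-window feature vectors."""
    def __init__(self, n_features, n_classes=N_CLASSES, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # classify from the last time step

# Toy usage with random data standing in for recorded sEMG.
window = np.random.randn(N_CHANNELS, 256)
features = emg_features(window)
mlp = MLPClassifier(n_features=features.size)
logits = mlp(torch.from_numpy(features).unsqueeze(0))   # (1, 7) class scores
```

In such a setup the LSTM variant would consume a sequence of these per-window feature vectors, which is one plausible way to realize the comparison between classifiers and feature sets that the abstract describes.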


2021
Author(s): Md Sanzid Bin Hossain, Joseph Drantez, Hwan Choi, Zhishan Guo

Measurement of human body movement is an essential step in biomechanical analysis. The current standard for human motion capture uses infrared cameras to track reflective markers placed on the subject. While these systems can accurately track joint kinematics, the analyses are spatially limited to the lab environment. Inertial Measurement Unit (IMU) systems can eliminate this spatial limitation, but they are impractical for use in daily living because they typically require one sensor per body segment. To obtain practical yet accurate estimates of joint kinematics, this study uses a reduced number of IMU sensors and machine learning algorithms to map sensor data to joint angles. Our developed algorithm estimates hip, knee, and ankle angles in the sagittal plane from two shoe-mounted IMU sensors under different practical walking conditions: treadmill, level overground, stair, and slope walking. Specifically, we propose five deep learning networks that combine Convolutional Neural Networks (CNN) and Gated Recurrent Unit (GRU) based Recurrent Neural Networks (RNN) as base learners for our framework. Using these five baseline models, we propose a novel framework, DeepBBWAE-Net, that applies ensemble techniques such as bagging, boosting, and weighted averaging to improve kinematic predictions. DeepBBWAE-Net predicts the three joint angles under all walking conditions with a Root Mean Square Error (RMSE) 6.93-29.0% lower than the individual base models. This is the first study to use a reduced number of IMU sensors to estimate kinematics in multiple walking environments.
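The abstract names CNN and GRU base learners combined by bagging, boosting, and weighted averaging, but does not specify the architecture. The sketch below is a minimal, assumed CNN-GRU regressor that maps a window of shoe-mounted IMU signals (an assumed 12 channels: triaxial accelerometer and gyroscope on each foot) to the three sagittal-plane joint angles, followed by a simple fixed-weight averaging of several base-model predictions. It illustrates the general idea only and is not the authors' DeepBBWAE-Net; channel count, window length, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

N_CHANNELS = 12   # assumed: 3-axis accel + 3-axis gyro on each of two shoes
SEQ_LEN = 200     # assumed window length in samples
N_ANGLES = 3      # hip, knee, and ankle angles in the sagittal plane

class CNNGRURegressor(nn.Module):
    """One assumed base learner: 1-D convolutions followed by a GRU."""
    def __init__(self, n_channels=N_CHANNELS, n_angles=N_ANGLES, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.gru = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_angles)

    def forward(self, x):                # x: (batch, channels, time)
        feats = self.cnn(x)              # (batch, 64, time)
        feats = feats.transpose(1, 2)    # GRU expects (batch, time, features)
        out, _ = self.gru(feats)
        return self.head(out[:, -1])     # predict angles at the window end

def weighted_average_ensemble(models, weights, x):
    """Combine base-model predictions with fixed weights (one ensemble option)."""
    weights = torch.tensor(weights) / sum(weights)
    preds = torch.stack([m(x) for m in models])      # (n_models, batch, n_angles)
    return (weights.view(-1, 1, 1) * preds).sum(dim=0)

# Toy usage with random data standing in for IMU windows.
models = [CNNGRURegressor() for _ in range(3)]
x = torch.randn(8, N_CHANNELS, SEQ_LEN)              # batch of 8 windows
angles = weighted_average_ensemble(models, [1.0, 1.0, 1.0], x)  # (8, 3) joint angles
```

In an ensemble of this kind, the averaging weights would typically be chosen from validation error rather than fixed, and bagging or boosting would vary the training data seen by each base learner.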

