Multiple gait phase recognition using boosted classifiers based on sEMG signal and classification matrix
Author(s): Jae-Hwan Ryu, Deok-Hwan Kim
2021, Vol 70, pp. 1-12
Author(s): Meiyan Zhang, Qisong Wang, Dan Liu, Boqi Zhao, Jiaze Tang, et al.
2017, Vol 2017, pp. 1-11
Author(s): Chunjie Chen, Xinyu Wu, Du-xin Liu, Wei Feng, Can Wang

The wearable full-body exoskeleton robot developed in this study is an application of a mobile cyber-physical system (CPS), a complex mobile system integrating mechanics, electronics, computer science, and artificial intelligence. Steel wire was used as the flexible transmission medium, and a group of special wire-locking structures was designed. Additionally, we designed passive joints for some of the exoskeleton's joints. Finally, we proposed a novel gait phase recognition method for full-body exoskeletons that uses only joint angle sensors, plantar pressure sensors, and inclination sensors. The method consists of four procedures. First, we classified the three main motion patterns: normal walking on level ground, stair climbing and descending, and sit-to-stand movement. Second, we segmented the experimental data into individual gait cycles. Third, we divided each gait cycle into eight gait phases. Finally, we built a gait phase recognition model based on k-Nearest Neighbor (kNN) classification and trained it with the phase-labeled gait data. Experimental results show that, on the testing set, the model achieves an average classification accuracy of 98.52% for the main motion patterns and an average phase recognition accuracy of 95.32%. The exoskeleton robot can therefore recognize human motion intention in real time and coordinate its movement with the wearer.
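The kNN step above can be sketched in a few lines: each labeled gait sample is a feature vector, and a new sample is assigned the majority phase label among its k nearest neighbors. This is a minimal pure-Python illustration; the feature choices (joint angle, plantar pressure, inclination) and the toy data are assumptions for illustration, not the authors' dataset.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training samples (Euclidean distance). `train` is a list of
    (feature_vector, phase_label) pairs."""
    neighbors = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Illustrative training data: (joint angle, plantar pressure, inclination)
train = [
    ((10.0, 0.9, 2.0), "heel-strike"),
    ((12.0, 0.8, 1.5), "heel-strike"),
    ((40.0, 0.0, 8.0), "mid-swing"),
    ((38.0, 0.1, 7.5), "mid-swing"),
]
print(knn_predict(train, (11.0, 0.85, 1.8)))  # nearest samples are heel-strike
```

In practice the phase-labeled training set would be far larger (eight phases across many gait cycles), and k would be tuned on held-out data.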


2018, Vol 47 (1), pp. 223-230
Author(s): Peng-na Wei, Rongfu Xie, Rongnian Tang, Chuang Li, Janis Kim, et al.

Biosensors, 2020, Vol 10 (9), pp. 109
Author(s): Binbin Su, Christian Smith, Elena Gutierrez Farewik

Gait phase recognition is of great importance in the development of assistance-as-needed robotic devices such as exoskeletons. For a powered exoskeleton with phase-based control to provide proper assistance during gait, the wearer's current gait phase must first be identified accurately. Gait phase recognition can potentially be achieved from wearable sensor input. Deep convolutional neural networks (DCNNs) are a machine learning approach widely used in image recognition. User kinematics, measured from inertial measurement unit (IMU) output, can be treated as an 'image', since the sensor data exhibits local 'spatial' patterns when arranged in sequence. We propose a specialized DCNN that distinguishes five phases in a gait cycle from IMU data, with ground-truth labels derived from foot switch information. The DCNN achieved approximately 97% accuracy in an offline evaluation of gait phase recognition; accuracy was highest in the swing phase and lowest in terminal stance.
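The "IMU data as an image" idea amounts to stacking consecutive sensor samples into a 2-D channels-by-time array that a convolutional layer can consume. The sketch below shows only that data arrangement in pure Python (the network itself is omitted); the channel count and window length are illustrative assumptions, not the paper's configuration.

```python
def imu_window_as_image(samples, window=5):
    """Arrange consecutive IMU samples into 2-D windows (channels x time),
    i.e. the 'image' a convolutional network would consume. Each sample is
    a tuple of channel readings taken at one time step."""
    images = []
    for start in range(len(samples) - window + 1):
        frame = samples[start:start + window]
        # transpose: rows = channels, columns = time steps
        images.append([list(col) for col in zip(*frame)])
    return images

# Illustrative 3-channel IMU stream (e.g. accelerometer x/y/z), 6 time steps
stream = [(0.1, 9.8, 0.0), (0.2, 9.7, 0.1), (0.2, 9.6, 0.1),
          (0.3, 9.5, 0.2), (0.1, 9.6, 0.1), (0.0, 9.8, 0.0)]
imgs = imu_window_as_image(stream, window=5)
print(len(imgs), len(imgs[0]), len(imgs[0][0]))  # 2 windows, 3 channels, 5 steps
```

Each such window would then be fed to a 2-D convolution, whose local kernels pick up the 'spatial' patterns across neighboring channels and time steps that the abstract refers to.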


Robotica, 2019, Vol 37 (12), pp. 2195-2208
Author(s): Yu Lou, Rongli Wang, Jingeng Mai, Ninghua Wang, Qining Wang

Summary: Using wearable robots is an effective means of rehabilitation for stroke survivors, and reliable recognition of human motion intention is a key prerequisite for controlling them. In this paper, we propose an inertial measurement unit (IMU)-based gait phase detection system. The system consists of two IMUs, tied to the thigh and the shank respectively, that collect acceleration and angular velocity. Features were extracted with a 150 ms sliding window and fed into a quadratic discriminant analysis (QDA) classifier. We recruited five stroke survivors to test the system; they walked at their own preferred speed on level ground. Experimental results show that the proposed system can recognize the gait phases of stroke survivors: all recognition accuracies are above 96.5%, and detections occur about 5–15 ms in advance. In addition, using only one IMU also gives reliable recognition results. This paper suggests a direction for further research on human–robot interaction for the control of wearable robots.
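The sliding-window front end described above can be sketched as follows: a fixed-length window (150 ms at the sensor's sampling rate) slides over each IMU channel, and simple statistics of each window become the feature vector handed to the classifier. This is an illustrative pure-Python sketch; the 200 Hz sampling rate, the particular statistics (mean, variance, range), and the synthetic signal are assumptions, since the paper does not list its exact features here.

```python
def sliding_window_features(signal, fs_hz=200, window_ms=150, step=1):
    """Extract simple statistical features (mean, variance, range) from a
    1-D sensor stream using a fixed-length sliding window, the typical
    front end before a classifier such as QDA."""
    n = max(1, int(fs_hz * window_ms / 1000))  # samples per window (30 at 200 Hz)
    feats = []
    for start in range(0, len(signal) - n + 1, step):
        w = signal[start:start + n]
        mean = sum(w) / n
        var = sum((x - mean) ** 2 for x in w) / n
        feats.append((mean, var, max(w) - min(w)))
    return feats

# Synthetic angular-velocity trace standing in for one IMU channel
signal = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5] * 10
feats = sliding_window_features(signal, fs_hz=200, window_ms=150)
print(len(feats), len(feats[0]))  # 51 windows, 3 features each
```

With two IMUs, the per-channel features would be concatenated into one vector per window before being passed to the QDA classifier (e.g. scikit-learn's QuadraticDiscriminantAnalysis).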

