A Novel Motion Intention Recognition Approach for Soft Exoskeleton via IMU

Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2176
Author(s):  
Lu Zhu ◽  
Zhuo Wang ◽  
Zhigang Ning ◽  
Yu Zhang ◽  
Yida Liu ◽  
...  

To address the complexity of traditional motion intention recognition methods that rely on multi-modal sensor signals, as well as the lag in the recognition process, this paper proposes an inertial-sensor-based motion intention recognition method for a soft exoskeleton. Compared with traditional motion recognition, in addition to the five classic terrain types, recognition of terrain transitions is also included. During mode acquisition, sensor data from the thigh and calf are collected in different motion modes. After a series of preprocessing steps, such as filtering and normalization, a sliding window is used to augment the data, so that each frame of inertial measurement unit (IMU) data retains the last half of the previous frame's history. Finally, we design a deep convolutional neural network that learns to extract discriminative features from the temporal gait cycle to classify different terrains. The experimental results show that the proposed method can recognize the pose of the soft exoskeleton on different terrains, including walking on flat ground, going up and down stairs, and walking up and down slopes, with a recognition accuracy of 97.64%. In addition, the recognition delay for transition patterns, i.e., transitions between the five modes, accounts for only 23.97% of a gait cycle. Finally, oxygen consumption was measured with a wearable metabolic system (COSMED K5, The Metabolic Company, Rome, Italy) and compared with the case without the recognition method; the net metabolic cost was reduced by 5.79%. The proposed method can greatly improve the control performance of the flexible lower-limb exoskeleton system and achieve natural, seamless switching of the exoskeleton between multiple motion modes according to the human motion intention.
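As an illustration of the 50%-overlap windowing described in the abstract, the following is a minimal sketch; the window length, sampling rate, and channel count are placeholders, not the paper's settings:

```python
import numpy as np

def sliding_windows(imu_data: np.ndarray, window_len: int = 128) -> np.ndarray:
    """Split a (T, C) IMU stream into overlapping frames.

    Each frame starts half a window after the previous one, so the second
    half of frame i is repeated as the first half of frame i+1, preserving
    the "last half of the previous frame" history described in the abstract.
    """
    step = window_len // 2                      # 50% overlap
    n_frames = (len(imu_data) - window_len) // step + 1
    return np.stack([imu_data[i * step: i * step + window_len]
                     for i in range(n_frames)])

# Example: 10 s of 100 Hz data from thigh and calf IMUs (12 channels assumed)
frames = sliding_windows(np.random.randn(1000, 12))
print(frames.shape)   # (n_frames, 128, 12), ready to feed a CNN classifier
```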

2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Li Zhang ◽  
Geng Liu ◽  
Bing Han ◽  
Zhe Wang ◽  
Tong Zhang

Human motion intention recognition is key to achieving perfect human-machine coordination and wearing comfort in wearable robots. Surface electromyography (sEMG), as a bioelectrical signal, is generated prior to the corresponding motion and directly reflects the human motion intention. Thus, better human-machine interaction can be achieved by using sEMG-based motion intention recognition. In this paper, we review and discuss in detail the state of the art of sEMG-based motion intention recognition. According to the method adopted, motion intention recognition is divided into two groups: sEMG-driven musculoskeletal (MS) model based recognition and machine learning (ML) model based recognition. The specific models and recognition performance of each study are analyzed and systematically compared. Finally, a discussion of the existing problems in current studies, major advances, and future challenges is presented.
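To make the ML branch of this taxonomy concrete, here is a hypothetical minimal sketch of a classical sEMG pipeline (time-domain features feeding a linear classifier); the feature set, classifier, and synthetic data are illustrative assumptions, not drawn from any specific study in the review:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def td_features(window: np.ndarray) -> np.ndarray:
    """Classic per-channel time-domain sEMG features: MAV, RMS, WL, ZC."""
    mav = np.mean(np.abs(window), axis=0)                           # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))                     # root mean square
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)            # waveform length
    zc = np.sum(np.abs(np.diff(np.signbit(window).astype(int), axis=0)), axis=0)  # zero crossings
    return np.concatenate([mav, rms, wl, zc])

# Synthetic stand-in data: 200 windows of 8-channel sEMG, 4 motion classes.
rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 256, 8))
labels = rng.integers(0, 4, size=200)

X = np.stack([td_features(w) for w in windows])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print(clf.predict(X[:5]))   # predicted motion intentions for the first 5 windows
```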


2021 ◽  
Vol 3 (1) ◽  
pp. 37-47
Author(s):  
Baixin Sun ◽  
Guang Cheng ◽  
Quanmin Dai ◽  
Tianlin Chen ◽  
Weifeng Liu ◽  
...  

Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2563 ◽  
Author(s):  
Eric Frick ◽  
Salam Rahmatalla

Human motion capture is driven by joint center location estimates, and error in their estimation can be compounded by subsequent kinematic calculations. Soft tissue artifact (STA), the motion of tissue relative to the underlying bones, is a primary cause of error in joint center calculations. A method for mitigating the effects of STA, single-frame optimization (SFO), was introduced and numerically verified in Part 1 of this work, and the purpose of this article (Part 2) is to experimentally compare the results of SFO with a marker-based solution. The experiments employed a single-degree-of-freedom pendulum to simulate human joint motion, and the effects of STA were simulated by affixing the inertial measurement unit to the pendulum indirectly through raw, vacuum-sealed meat. The inertial sensor was outfitted with an optical marker adapter so that its location could be optically determined by a camera-based motion-capture system. During the motion, inertial effects and the non-rigid attachment of the inertial sensor caused the simulated STA to manifest as unrestricted motion (six degrees of freedom) relative to the rigid pendulum. The redundant inertial and optical instrumentation allowed a time-varying joint center solution to be determined both by optical markers and by SFO, enabling direct comparison. The experimental results suggest that SFO can achieve accuracy comparable to that of state-of-the-art joint center determination methods that use optical skin markers (root mean square error of 7.87–37.86 mm), and that the time variations of the SFO solutions are correlated (r = 0.58–0.99) with the true, time-varying joint center solutions. This suggests that SFO could help fill a gap in the existing literature by improving the characterization and mitigation of STA in human motion capture.
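A small sketch of how the two reported comparison metrics (RMSE and Pearson correlation between the SFO and marker-based joint-center trajectories) can be computed; the trajectories below are synthetic stand-ins, not the paper's experimental data:

```python
import numpy as np

def rmse_mm(est: np.ndarray, ref: np.ndarray) -> float:
    """Root-mean-square error (mm) between two (T, 3) joint-center trajectories."""
    return float(np.sqrt(np.mean(np.sum((est - ref) ** 2, axis=1))))

def per_axis_correlation(est: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Pearson r between estimated and reference trajectories, per axis."""
    return np.array([np.corrcoef(est[:, k], ref[:, k])[0, 1] for k in range(3)])

# Hypothetical trajectories in mm: marker-based reference vs. SFO-style estimate.
t = np.linspace(0, 10, 1000)
ref = np.stack([100 * np.sin(t), 100 * np.cos(t), 20 * np.sin(2 * t) + 50], axis=1)
est = ref + np.random.default_rng(1).normal(scale=10.0, size=ref.shape)

print(rmse_mm(est, ref), per_axis_correlation(est, ref))
```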


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Chunjing Tao ◽  
Qingyang Yan ◽  
Yitong Li

A hierarchical shared-control method for a walking-aid robot, covering both human motion intention recognition and an obstacle emergency-avoidance method based on the artificial potential field (APF), is proposed in this paper. The human motion intention is obtained from the interaction force measurements of a sensory system composed of four force-sensing resistors (FSRs) and a torque sensor. Meanwhile, a forward-mounted laser range finder (LRF) is used to detect obstacles and guide the operator based on the repulsive force calculated from the artificial potential field. An obstacle emergency-avoidance method comprising different control strategies is also designed according to the state of the obstacles or the emergency case. To ensure the user's safety, the hierarchical shared-control method combines the intention recognition method with the obstacle emergency-avoidance method based on the distance between the walking-aid robot and the obstacles. Finally, experiments validate the effectiveness of the proposed hierarchical shared-control method.
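For context, a minimal sketch of the classic APF repulsive force that such guidance relies on; the gain and influence distance are illustrative values, not the paper's parameters:

```python
import numpy as np

def apf_repulsive_force(robot_xy, obstacle_xy, eta=1.0, d0=1.5):
    """Classic APF repulsive force pushing the robot away from an obstacle.

    Zero beyond the influence distance d0 (m); grows sharply as the obstacle
    gets closer. eta (gain) and d0 are placeholder values for illustration.
    """
    diff = np.asarray(robot_xy, float) - np.asarray(obstacle_xy, float)
    d = np.linalg.norm(diff)
    if d >= d0 or d == 0.0:
        return np.zeros(2)
    magnitude = eta * (1.0 / d - 1.0 / d0) / d ** 2
    return magnitude * (diff / d)            # directed from obstacle toward robot

print(apf_repulsive_force([0.0, 0.0], [1.0, 0.0]))   # pushes the robot in -x, away from the obstacle
```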


2021 ◽  
Vol 11 (4) ◽  
pp. 1902
Author(s):  
Liqiang Zhang ◽  
Yu Liu ◽  
Jinglin Sun

Pedestrian navigation systems can serve as a good supplement to other navigation methods or extend navigation into areas where other systems are unavailable. Due to the accumulation of inertial sensing errors, foot-mounted inertial-sensor-based pedestrian navigation systems (PNSs) suffer from drift, especially heading drift. To mitigate heading drift, and considering the complexity of human motion and the environment, we introduce a novel hybrid framework that integrates: a foot-state classifier that triggers the zero-velocity update (ZUPT) algorithm, the zero-angular-rate update (ZARU) algorithm, and a state lock; a magnetic disturbance detector; a human-motion-classifier-aided adaptive fusion module (AFM) that outputs an adaptive heading-error measurement by fusing heuristic and magnetic algorithms rather than simply switching between them; and an error-state Kalman filter (ESKF) that estimates the optimal systematic error. The validation datasets include a Vicon loop dataset spanning 324.3 m in a single room over approximately 300 s and challenging walking datasets covering large indoor and outdoor environments with a total distance of 12.98 km. Five frameworks with different heading-drift correction methods, including the proposed one, were validated on these datasets; the results demonstrate that our proposed ZUPT–ZARU–AFM–ESKF-aided PNS outperforms the other frameworks and clearly mitigates heading drift.
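A minimal sketch of two well-known ingredients of such a framework, stance detection and the zero-velocity pseudo-measurement update; the thresholds, error-state layout, and noise values are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

GRAVITY = 9.81

def is_zero_velocity(acc: np.ndarray, gyro: np.ndarray,
                     acc_tol: float = 0.5, gyro_tol: float = 0.2) -> bool:
    """Simple stance detector for a foot-mounted IMU sample.

    Flags stance when the specific force stays near gravity and the angular
    rate is small; thresholds are illustrative, not tuned values.
    """
    return (abs(np.linalg.norm(acc) - GRAVITY) < acc_tol
            and np.linalg.norm(gyro) < gyro_tol)

def zupt_update(x, P, H, R):
    """Generic Kalman measurement update with a zero-velocity pseudo-measurement.

    x: error state, P: covariance, H selects the velocity-error components,
    R: measurement noise. Returns the corrected (x, P).
    """
    z = np.zeros(H.shape[0])                  # measured velocity is zero during stance
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Example: velocity errors occupy indices 3..5 of a hypothetical 9-state error vector.
x, P = np.zeros(9), np.eye(9) * 0.1
H = np.zeros((3, 9)); H[:, 3:6] = np.eye(3)
x, P = zupt_update(x, P, H, R=np.eye(3) * 1e-4)
```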

