Motion Intention Recognition Based on Air Bladders

2021 ◽  
pp. 586-595
Author(s):  
Weifeng Wu ◽  
Chengqi Lin ◽  
Gengliang Lin ◽  
Siqi Cai ◽  
Longhan Xie
2015 ◽  
Vol 12 (4) ◽  
pp. 1257-1270 ◽  
Author(s):  
Jian Huang ◽  
Weiguang Huo ◽  
Wenxia Xu ◽  
Samer Mohammed ◽  
Yacine Amirat

2020 ◽  
Vol 75 ◽  
pp. 45-48 ◽  
Author(s):  
Zheng Wang ◽  
Yinfeng Fang ◽  
Dalin Zhou ◽  
Kairu Li ◽  
Christophe Cointet ◽  
...  

2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Li Zhang ◽  
Geng Liu ◽  
Bing Han ◽  
Zhe Wang ◽  
Tong Zhang

Human motion intention recognition is key to achieving perfect human-machine coordination and wearing comfort in wearable robots. Surface electromyography (sEMG), a bioelectrical signal, is generated prior to the corresponding motion and directly reflects the human motion intention. Better human-machine interaction can therefore be achieved through sEMG-based motion intention recognition. In this paper, we review and discuss in detail the state of the art of sEMG-based motion intention recognition. According to the method adopted, the approaches are divided into two groups: sEMG-driven musculoskeletal (MS) model based motion intention recognition and machine learning (ML) model based motion intention recognition. The specific models and recognition performance of each study are analyzed and systematically compared. Finally, we discuss existing problems in current studies, major advances, and future challenges.
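Below is a minimal sketch of the ML branch described in this review, assuming windowed multi-channel sEMG classified from hand-crafted time-domain features with an SVM; the window length, feature set, and classifier choice are illustrative assumptions, not the exact pipelines of the reviewed studies.

```python
# Minimal sketch of ML-based sEMG intention recognition:
# time-domain features per sliding window, then a classifier.
# Window length, feature set, and the SVM choice are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def td_features(window):
    """Classic time-domain features for one window of shape (samples, channels)."""
    mav = np.mean(np.abs(window), axis=0)                 # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))           # root mean square
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)  # waveform length
    zc = np.sum((window[:-1] * window[1:]) < 0, axis=0)   # zero crossings
    return np.concatenate([mav, rms, wl, zc])

def segment(emg, win=200, step=100):
    """Slide a 50%-overlapping window over sEMG of shape (samples, channels)."""
    return np.stack([td_features(emg[i:i + win])
                     for i in range(0, len(emg) - win + 1, step)])

# X: (n_windows, n_features) built with segment(); y: intended motion per window
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```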


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2176
Author(s):  
Lu Zhu ◽  
Zhuo Wang ◽  
Zhigang Ning ◽  
Yu Zhang ◽  
Yida Liu ◽  
...  

To address the complexity of traditional motion intention recognition methods that rely on multi-modal sensor signals, as well as the lag in the recognition process, this paper proposes an inertial sensor-based motion intention recognition method for a soft exoskeleton. Compared with traditional motion recognition, the method recognizes not only the five classic terrain types but also transitions between them. For mode acquisition, data from sensors on the thigh and calf are collected in different motion modes. After preprocessing steps such as filtering and normalization, a sliding window is used to augment the data so that each frame of inertial measurement unit (IMU) data retains the last half of the previous frame as historical information. Finally, we designed a deep convolutional neural network that learns to extract discriminative features from the temporal gait cycle to classify different terrains. The experimental results show that the proposed method can recognize the pose of the soft exoskeleton in different terrains, including walking on flat ground, going up and down stairs, and walking up and down slopes. The recognition accuracy reaches 97.64%. In addition, the recognition delay for transitions between the five modes accounts for only 23.97% of a gait cycle. Finally, oxygen consumption was measured with a wearable metabolic system (COSMED K5, The Metabolic Company, Rome, Italy); compared with operation without the recognition method, net metabolism was reduced by 5.79%. The proposed method can greatly improve the control performance of the soft lower-limb exoskeleton system and enables natural, seamless switching between multiple motion modes according to the wearer's motion intention.
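A minimal sketch of the described pipeline follows, assuming 6-axis IMU streams from the thigh and calf (12 channels) cut into 50%-overlapping frames; the layer sizes and the class count are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: 50%-overlapping IMU frames fed to a small 1D CNN terrain classifier.
# Channel count, frame length, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

def sliding_frames(imu, frame=100, step=50):
    """imu: (timesteps, channels) -> (n_frames, channels, frame); each frame
    keeps the last half of the previous frame as historical context."""
    frames = [imu[i:i + frame].T for i in range(0, len(imu) - frame + 1, step)]
    return torch.stack(frames)

class TerrainCNN(nn.Module):
    def __init__(self, channels=12, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, channels, frame)
        return self.classifier(self.features(x).squeeze(-1))

# frames = sliding_frames(torch.randn(1000, 12))   # synthetic IMU stream
# logits = TerrainCNN()(frames)                    # terrain class scores per frame
```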


Author(s):  
Ting Wang ◽  
Jingna Mao ◽  
Ruozhou Xiao ◽  
Wuqi Wang ◽  
Guangxin Ding ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Mofei Wen ◽  
Yuwei Wang

With the development of microelectronic technology and computer systems, research on motion intention recognition based on multimodal sensors has attracted the attention of the academic community. Deep learning and other nonlinear neural network models are widely applied to large data sets. We propose a motion intention recognition algorithm based on the fusion of multimodal long-term and short-term spatiotemporal features. The target data are divided into multiple segments, and a three-dimensional convolutional neural network is used to extract short-term spatiotemporal features. The three types of features from the same segment are fused and fed into an LSTM network for time-series modeling, further fusing the features to obtain multimodal long-term spatiotemporal features with higher discriminability. Based on the lower-limb movement pattern recognition model, the minimum number of muscles and the EMG signal features required to accurately recognize the movement state of the lower limbs are determined. This minimizes the model's redundant computation and ensures real-time output of the system's results.
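A minimal sketch of the described long-term/short-term fusion is given below, assuming three modality-specific short-term feature vectors per segment (e.g., from per-segment convolutional extractors); the feature dimensions, single-layer LSTM, and class count are illustrative assumptions.

```python
# Sketch: per-segment fusion of three modality features, then an LSTM that
# models the segment sequence to form long-term spatiotemporal features.
# Dimensions and the single-layer LSTM are illustrative assumptions.
import torch
import torch.nn as nn

class LongShortFusion(nn.Module):
    def __init__(self, feat_dims=(128, 64, 32), hidden=128, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(sum(feat_dims), hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, feats_a, feats_b, feats_c):
        # each input: (batch, n_segments, dim) short-term features per modality
        fused = torch.cat([feats_a, feats_b, feats_c], dim=-1)  # per-segment fusion
        out, _ = self.lstm(fused)            # long-term temporal modeling
        return self.classifier(out[:, -1])   # predict from the final time step

# scores = LongShortFusion()(torch.randn(8, 10, 128),
#                            torch.randn(8, 10, 64),
#                            torch.randn(8, 10, 32))
```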


Author(s):  
Felipe Antonio Sulez Gomez ◽  
Diego Enrique Guzman Villamarin ◽  
William Alejandro Ruiz Guacheta ◽  
Jeronimo Londono Prieto ◽  
Juan Pablo Diago Rodriguez
