movement recognition
Recently Published Documents


TOTAL DOCUMENTS: 211 (last five years: 92)

H-INDEX: 14 (last five years: 4)

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Wentao Wei ◽  
Xuhui Hu ◽  
Hua Liu ◽  
Ming Zhou ◽  
Yan Song

As a machine-learning-driven decision-making problem, surface electromyography (sEMG)-based hand movement recognition is one of the key issues in the robust control of noninvasive neural interfaces such as myoelectric prostheses and rehabilitation robots. Despite recent success with end-to-end deep feature learning based on deep learning models, the performance of today's sEMG-based hand movement recognition systems is still limited by the noisy, random, and nonstationary nature of sEMG signals, and researchers have proposed a number of methods that improve recognition via feature engineering. Aiming at higher sEMG-based hand movement recognition accuracy while enabling a trade-off between performance and computational complexity, this study proposed a progressive fusion network (PFNet) framework that improves sEMG-based hand movement recognition by integrating domain knowledge-guided feature engineering with deep feature learning. In particular, PFNet learns high-level feature representations from raw sEMG signals and from engineered time-frequency domain features via a feature learning network and a domain knowledge network, respectively, and then employs a three-stage progressive fusion strategy to progressively fuse the two networks and obtain the final decisions. Extensive experiments on five sEMG datasets showed that the proposed PFNet achieved average hand movement recognition accuracies of 87.8%, 85.4%, 68.3%, 71.7%, and 90.3%, respectively, outperforming state-of-the-art methods.
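The abstract's "engineered time-frequency domain features" are not enumerated, but the domain knowledge branch of such a system typically consumes classic time-domain sEMG descriptors. A minimal sketch of that kind of feature extraction, assuming a standard feature set (MAV, RMS, zero crossings, waveform length) rather than PFNet's exact one:

```python
import numpy as np

def semg_time_domain_features(window: np.ndarray) -> np.ndarray:
    """Classic engineered time-domain sEMG features for one analysis window.

    These are standard descriptors from the sEMG literature; the exact
    feature set fed to PFNet's domain knowledge network may differ.
    """
    mav = np.mean(np.abs(window))                         # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))                   # root mean square
    zc = np.sum(window[:-1] * window[1:] < 0)             # zero crossings
    wl = np.sum(np.abs(np.diff(window)))                  # waveform length
    return np.array([mav, rms, float(zc), wl])

# Example: one 200-sample synthetic window for a single sEMG channel
rng = np.random.default_rng(0)
window = rng.standard_normal(200)
features = semg_time_domain_features(window)
print(features.shape)  # (4,)
```

In a two-branch design like the one described, a vector like this would feed the domain knowledge network while the raw window feeds the feature learning network.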


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Hua Zhao ◽  
Aibo Wang ◽  
Ying Fan

This study uses a deep learning approach for aerobics movement recognition. By embedding lightweight multi-scale convolution modules in a 3D convolutional residual network to enlarge the local receptive field at each layer, the model's complexity is significantly reduced while multi-scale features of the target are extracted at a fine-grained level, significantly improving target characterization. A channel attention mechanism then extracts the key features from the multi-scale features. To create a dual frame rate detection model, the fast-slow idea is fused into the 3D convolutional network: the model samples the video at different frame rates to obtain spatial semantic information and motion information, and the two channels of information are fused via lateral connections. The resulting features are fed into a temporal detection network to identify temporal actions, and a behavior recognition system is designed around the network model to demonstrate its applicability. In the application study, the average scores of students in the experimental group were significantly higher than those in the control group on seven indicators: set accuracy, movement amplitude, movement strength, body coordination, coordination of movement and music, movement expression, and aesthetics. The experimental group's average scores for movement proficiency and body control were also higher than the control group's, but those differences were not significant. The differences on the eight indicators between the experimental group and its pre-experiment scores were not significant; overall, the results indicate that intensive rhythm training improves secondary school students' comprehension, proficiency, and presentation of aerobics sets.
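The channel attention step described above can be illustrated with a squeeze-and-excitation style gate: globally pool each channel, pass the pooled vector through a small bottleneck, and rescale channels by the resulting sigmoid weights. This is a generic sketch in NumPy; the paper's exact attention module and weight shapes are assumptions:

```python
import numpy as np

def channel_attention(features: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-excitation style channel attention on a (C, T, H, W) tensor.

    Illustrative only: the reduction ratio and weight matrices (w1, w2)
    are hypothetical, not taken from the paper.
    """
    c = features.shape[0]
    squeezed = features.reshape(c, -1).mean(axis=1)      # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)              # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))          # sigmoid weights in (0, 1)
    return features * gate[:, None, None, None]          # rescale each channel

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 4, 6, 6))   # (channels, time, height, width)
w1 = rng.standard_normal((2, 8))        # bottleneck: 8 -> 2
w2 = rng.standard_normal((8, 2))        # expand back: 2 -> 8
y = channel_attention(x, w1, w2)
print(y.shape)  # (8, 4, 6, 6)
```

Because the gate is a sigmoid, each channel is attenuated rather than amplified, which is how "key features" get emphasized relative to the rest.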


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Wang Lu ◽  
JiangYuan Hou

Current methods of human body movement recognition neglect depth denoising and edge restoration of the movement image, which leads to large errors in recognizing athletes' incorrect movements and to poor application intelligence. Therefore, an intelligent recognition method for athletes' incorrect movements based on image vision is proposed. The basic principles, structure, and 3D applications of computer image vision technology are defined. By capturing human body images and point cloud data, a three-dimensional dynamic model of the athlete's action is constructed. A color camera with CCD and CMOS sensors is selected to collect images of the athlete's incorrect movements and to provide image data for their recognition. Wavelet transform coefficients and a quantization matrix threshold are introduced to denoise these images. On this basis, features of the athlete's motion contour image are extracted in the spatial frequency domain, and the image edges are further recovered with the Canny operator. Experimental results show that the proposed method accurately identifies athletes' incorrect movements with no redundancy in the recognition results; the image denoising is effective and fast, and the method can provide a reliable basis for related fields.
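The wavelet-threshold denoising step can be sketched with a one-level Haar transform and soft thresholding of the detail coefficients. The abstract does not specify the wavelet family or threshold scheme, so both are assumptions here:

```python
import numpy as np

def haar_soft_denoise(signal: np.ndarray, threshold: float) -> np.ndarray:
    """One-level Haar wavelet decomposition with soft thresholding of the
    detail coefficients, then inverse transform. Illustrative only; the
    paper's wavelet coefficients and quantization matrix threshold are
    not specified in the abstract.
    """
    assert signal.size % 2 == 0, "needs an even number of samples"
    even, odd = signal[0::2], signal[1::2]
    approx = (even + odd) / np.sqrt(2)   # low-pass (approximation) coefficients
    detail = (even - odd) / np.sqrt(2)   # high-pass (detail) coefficients
    # Soft threshold: shrink details toward zero, killing small (noisy) ones
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # Inverse Haar transform
    out = np.empty_like(signal, dtype=float)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

t = np.linspace(0.0, 1.0, 64)
clean = np.sin(2 * np.pi * 3 * t)
rng = np.random.default_rng(2)
noisy = clean + 0.3 * rng.standard_normal(64)
denoised = haar_soft_denoise(noisy, threshold=0.3)
```

A denoised image or contour signal like this is what the spatial-frequency feature extraction and Canny edge recovery would then operate on.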


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Hui Wang

The use of computer vision for target detection and recognition has been an interesting and challenging area of research for the past three decades. Professional athletes, and sports enthusiasts in general, can be trained with appropriate systems for corrective and assistive training. Such needs have motivated researchers to combine artificial intelligence with sports. For the image task of yoga movement recognition, this paper proposes a Mask Region-Convolutional Neural Network (MR-CNN)-based method. The improved MR-CNN model follows the framework of region-based convolutional networks: it proposes a number of candidate regions for the image via feature extraction, classifies them, outputs these regions as detected bounding boxes, and performs mask prediction for the candidate regions using a segmentation branch. The model uses an improved deep residual network as the backbone for feature extraction, applies bilinear interpolation to the extracted candidate regions via Region of Interest (RoI) Align, then performs target classification and detection, and segments the image with the segmentation branch. The convolution in the segmentation branch is improved by replacing the original standard convolution with depthwise separable convolution to increase network efficiency. The algorithm is evaluated in simulation on experimentally constructed polygon-labeled datasets. The deeper network and the use of depthwise separable convolution improve detection accuracy while maintaining the reliability of the network, validating the effectiveness of the improved MR-CNN.
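RoI Align's key idea is sampling the feature map at sub-pixel locations with bilinear interpolation instead of snapping to the integer grid. A minimal single-channel sketch of one such sampling operation (the full operator averages several sample points per output bin):

```python
import numpy as np

def bilinear_sample(feature_map: np.ndarray, y: float, x: float) -> float:
    """Bilinearly interpolate one sub-pixel point on a 2D feature map,
    as RoI Align does for each sampling point inside an RoI bin.
    Simplified single-channel, single-point sketch.
    """
    h, w = feature_map.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)   # clamp at the border
    dy, dx = y - y0, x - x0
    top = (1 - dx) * feature_map[y0, x0] + dx * feature_map[y0, x1]
    bot = (1 - dx) * feature_map[y1, x0] + dx * feature_map[y1, x1]
    return (1 - dy) * top + dy * bot

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_sample(fmap, 1.5, 2.5))  # 8.5, the mean of the 4 neighbours
```

Because the sample location is never rounded, the quantization error that RoI Pooling introduces at box boundaries is avoided, which is what makes the pixel-accurate mask branch workable.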


2021 ◽  
Vol 210 ◽  
pp. 106377
Author(s):  
Mostefa Mesbah ◽  
Mohamed S. Khlif ◽  
Siamak Layeghy ◽  
Christine E. East ◽  
Shiying Dong ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6528
Author(s):  
Chen Bai ◽  
Yu-Peng Chen ◽  
Adam Wolach ◽  
Lisa Anthony ◽  
Mamoun T. Mardini

Frequent spontaneous facial self-touches, predominantly during outbreaks, have the theoretical potential to be a mechanism of contracting and transmitting diseases. Despite the recent advent of vaccines, behavioral approaches remain an integral part of reducing the spread of COVID-19 and other respiratory illnesses. The aim of this study was to exploit the capabilities and widespread adoption of smartwatches by developing a smartwatch application that identifies motion signatures mapped accurately to face touching. Participants (n = 10, five women, aged 20-83) performed 10 physical activities classified into face touching (FT) and non-face touching (NFT) categories in a standardized laboratory setting. We developed a smartwatch application for the Samsung Galaxy Watch to collect raw accelerometer data from participants. Data features were extracted from consecutive non-overlapping windows varying from 2 to 16 s. We examined the performance of state-of-the-art machine learning methods on face-touching movement recognition (FT vs. NFT) and individual activity recognition (IAR): logistic regression, support vector machine, decision trees, and random forest. While all machine learning models were accurate in recognizing FT categories, logistic regression achieved the best performance across all metrics (accuracy: 0.93 ± 0.08, recall: 0.89 ± 0.16, precision: 0.93 ± 0.08, F1-score: 0.90 ± 0.11, AUC: 0.95 ± 0.07) at a window size of 5 s. IAR models showed lower performance, with the random forest classifier achieving the best results across all metrics (accuracy: 0.70 ± 0.14, recall: 0.70 ± 0.14, precision: 0.70 ± 0.16, F1-score: 0.67 ± 0.15) at a window size of 9 s. In conclusion, wearable devices, powered by machine learning, are effective in detecting facial touches. This is highly significant during respiratory infection outbreaks, as it has the potential to limit face touching as a transmission vector.
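The windowing step described above (features from consecutive non-overlapping windows of raw accelerometer data) can be sketched as follows. The sampling rate and the two per-axis features here are illustrative assumptions; the study's full feature set is larger:

```python
import numpy as np

def window_features(acc: np.ndarray, fs: int, window_s: int) -> np.ndarray:
    """Split a (n_samples, 3) accelerometer stream into consecutive
    non-overlapping windows and compute simple per-axis features
    (mean, std) for each window. Any trailing partial window is dropped.

    fs (sampling rate) and the feature choice are assumptions for
    illustration, not the study's exact pipeline.
    """
    win = fs * window_s
    n_windows = acc.shape[0] // win
    feats = []
    for i in range(n_windows):
        seg = acc[i * win:(i + 1) * win]
        feats.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
    return np.asarray(feats)

rng = np.random.default_rng(3)
acc = rng.standard_normal((1000, 3))          # ~20 s of 3-axis data at an assumed 50 Hz
X = window_features(acc, fs=50, window_s=5)   # 5-s windows, the best FT window size
print(X.shape)  # (4, 6)
```

Each row of `X` would then be one training example for a classifier such as the logistic regression model that performed best for FT vs. NFT.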

