Real-Time and Offline Evaluation of Myoelectric Pattern Recognition for the Decoding of Hand Movements

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5677
Author(s):  
Sara Abbaspour ◽  
Autumn Naber ◽  
Max Ortiz-Catalan ◽  
Hamid GholamHosseini ◽  
Maria Lindén

Pattern recognition algorithms have been widely used to map surface electromyographic signals to target movements as a source for prosthetic control. However, most investigations have been conducted offline by performing the analysis on pre-recorded datasets. While real-time data analysis (i.e., classification when new data becomes available, with limits on latency under 200–300 milliseconds) plays an important role in the control of prosthetics, less knowledge has been gained with respect to real-time performance. Recent literature has underscored the differences between offline classification accuracy, the most common performance metric, and the usability of upper limb prostheses. Therefore, a comparative offline and real-time performance analysis between common algorithms had yet to be performed. In this study, we investigated the offline and real-time performance of nine different classification algorithms, decoding ten individual hand and wrist movements. Surface myoelectric signals were recorded from fifteen able-bodied subjects while they performed the ten movements. The offline decoding demonstrated that linear discriminant analysis (LDA) and maximum likelihood estimation (MLE) significantly (p < 0.05) outperformed other classifiers, with an average classification accuracy of above 97%. On the other hand, the real-time investigation revealed that, in addition to LDA and MLE, the multilayer perceptron also outperformed the other algorithms and achieved a classification accuracy and completion rate of above 68% and 69%, respectively.
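
A minimal, self-contained sketch of the offline evaluation step described in this abstract is given below: windowed sEMG features are fed to an LDA classifier and scored by cross-validated classification accuracy. The Hudgins-style time-domain features, window length and the synthetic stand-in data are illustrative assumptions; the study's actual recording and processing pipeline is not specified in the abstract.

```python
# Sketch of offline evaluation: windowed sEMG features -> LDA -> cross-validated accuracy.
# Feature choice (Hudgins-style time-domain features) and window length are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def td_features(window: np.ndarray) -> np.ndarray:
    """Hudgins-style time-domain features for one analysis window (samples x channels)."""
    mav = np.mean(np.abs(window), axis=0)                       # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)        # waveform length
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)  # zero crossings
    ssc = np.sum(np.diff(np.sign(np.diff(window, axis=0)), axis=0) != 0, axis=0)  # slope sign changes
    return np.concatenate([mav, wl, zc, ssc])

# Synthetic stand-in for recorded sEMG: 10 movements x 40 windows each,
# 200-sample windows from 8 channels (all values are placeholders).
rng = np.random.default_rng(0)
n_classes, n_windows, win_len, n_ch = 10, 40, 200, 8
X, y = [], []
for movement in range(n_classes):
    for _ in range(n_windows):
        emg = rng.normal(scale=1.0 + 0.1 * movement, size=(win_len, n_ch))
        X.append(td_features(emg))
        y.append(movement)
X, y = np.array(X), np.array(y)

# Offline accuracy estimated with cross-validation, as in a pre-recorded-dataset analysis.
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"Offline LDA accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```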

2020 ◽  
Author(s):  
Sara Abbaspour ◽  
Autumn Naber ◽  
Max Ortiz-Catalan ◽  
Hamid Gholamhosseini ◽  
Maria Lindén

Pattern recognition algorithms have been widely used to map surface electromyographic signals to target movements as a source for prosthetic control. Recent literature has underscored differences between offline classification accuracy, the most common performance metric, and the usability of upper limb prostheses. Since the majority of investigations on pattern recognition algorithms have been conducted offline by performing the analysis on pre-recorded datasets, less knowledge has been gained with respect to real-time performance (i.e., classification when new data becomes available, with limits on latency under 200–300 milliseconds). Therefore, a comparative offline and real-time performance analysis between common algorithms had yet to be performed. In this study, we investigated the offline and real-time performance of nine different classification algorithms decoding ten individual hand and wrist movements. Surface myoelectric signals were recorded from the dominant forearm of fifteen able-bodied subjects while they performed the ten movements. The offline decoding demonstrated that linear discriminant analysis (LDA) and maximum likelihood estimation (MLE) significantly (p < 0.05) outperformed other classifiers with an average classification accuracy of above 97%. The real-time investigation revealed that in addition to LDA and MLE, the multilayer perceptron also outperformed the other algorithms in classification accuracy (above 68%) and completion rate (above 69%).


2020 ◽  
Vol 53 (5-6) ◽  
pp. 824-832
Author(s):  
Hao Li ◽  
Xia Mao ◽  
Lijiang Chen

Electroencephalogram data are easily affected by artifacts, and drift may occur during signal acquisition. At present, most research focuses on the automatic detection and elimination of artifacts from electrooculograms, electromyograms and electrocardiograms, whereas electroencephalogram drift data, which degrade real-time performance, are mostly handled by manual calibration or simply discarded. An emotion classification method based on 1/f fluctuation theory is proposed to classify electroencephalogram data without removing artifacts or drift data. The results show that the proposed method still achieves a classification accuracy of 75% with a support vector machine classifier even when artifacts and drift data are present. In addition, the real-time performance of the proposed method is guaranteed.
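
The following sketch illustrates, under stated assumptions, how a 1/f-fluctuation feature could be paired with a support vector machine as the abstract describes: the slope of the log-log power spectrum is fitted per channel and used as the feature vector. The frequency band, channel count and placeholder data are illustrative and not taken from the paper.

```python
# Hedged sketch: per-channel 1/f spectral slope features classified with an SVM.
# Band limits, channel count and the exact feature definition are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def one_over_f_slope(eeg: np.ndarray, fs: float, fmin: float = 1.0, fmax: float = 40.0) -> np.ndarray:
    """Spectral slope (1/f exponent) per channel, fitted in log-log coordinates."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 512), axis=0)
    band = (freqs >= fmin) & (freqs <= fmax)
    logf = np.log10(freqs[band])
    slopes = [np.polyfit(logf, np.log10(psd[band, ch]), 1)[0] for ch in range(eeg.shape[1])]
    return np.array(slopes)

# Placeholder data: 120 trials, 2-second epochs, 4 channels, 3 emotion classes.
rng = np.random.default_rng(1)
fs, n_trials, n_ch = 256, 120, 4
X = np.array([one_over_f_slope(rng.normal(size=(2 * fs, n_ch)), fs) for _ in range(n_trials)])
y = rng.integers(0, 3, size=n_trials)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:90], y[:90])
print("Held-out accuracy:", clf.score(X[90:], y[90:]))
```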


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2402 ◽  
Author(s):  
Ali Al-Timemy ◽  
Guido Bugmann ◽  
Javier Escudero

Electromyogram (EMG)-based Pattern Recognition (PR) systems for upper-limb prosthesis control provide a promising way to enable intuitive control of prostheses with multiple degrees of freedom and fast reaction times. However, the lack of robustness of the PR systems may limit their usability. In this paper, a novel adaptive time windowing framework is proposed to enhance the performance of the PR systems by focusing on their windowing and classification steps. The proposed framework estimates the output probabilities of each class and outputs a movement only if a decision with a probability above a certain threshold is achieved. Otherwise (i.e., all probability values are below the threshold), the window size of the EMG signal increases. We demonstrate our framework on EMG datasets collected from nine transradial amputees performing nine movement classes, using Time Domain Power Spectral Descriptors (TD-PSD), Wavelet and Time Domain (TD) feature extraction (FE) methods and a Linear Discriminant Analysis (LDA) classifier. Nonetheless, the concept can be applied to other types of features and classifiers. In addition, the proposed framework is validated with different movement and EMG channel combinations. The results indicate that the proposed framework works well with different FE methods and movement/channel combinations, achieving classification error rates of approximately 13% with TD-PSD FE. Thus, we expect our proposed framework to be a straightforward, yet important, step towards the improvement of the control methods for upper-limb prostheses.
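
A hedged sketch of the adaptive time windowing idea described above follows: the current EMG window is classified, a movement is output only if the top class probability clears a threshold, and otherwise the window is enlarged. The feature function, probability threshold and window sizes are illustrative assumptions, not the paper's actual parameters.

```python
# Sketch of adaptive windowing: emit a class only when the classifier is confident enough,
# otherwise grow the analysis window. Thresholds and window sizes are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def adaptive_decision(emg, clf, feature_fn, fs, start_ms=150, step_ms=50, max_ms=400, threshold=0.9):
    """Return (predicted_class, window_ms), or (None, max_ms) if no confident decision is reached."""
    win_ms = start_ms
    while win_ms <= max_ms:
        window = emg[: int(fs * win_ms / 1000)]            # current analysis window
        probs = clf.predict_proba([feature_fn(window)])[0]
        if probs.max() >= threshold:                        # confident enough: output movement
            return int(np.argmax(probs)), win_ms
        win_ms += step_ms                                    # otherwise enlarge the window
    return None, max_ms                                      # defer the decision

# Toy usage with a classifier trained on placeholder features.
rng = np.random.default_rng(2)
feature_fn = lambda w: np.concatenate([np.mean(np.abs(w), axis=0), np.std(w, axis=0)])
X_train = rng.normal(size=(200, 16))
y_train = rng.integers(0, 9, size=200)   # nine movement classes, as in the study
clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

fs = 1000
emg_stream = rng.normal(size=(fs, 8))    # 1 s of 8-channel EMG (placeholder)
print(adaptive_decision(emg_stream, clf, feature_fn, fs))
```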


Author(s):  
Alexander E. Olsson ◽  
Nebojša Malešević ◽  
Anders Björkman ◽  
Christian Antfolk

Abstract Background Processing the surface electromyogram (sEMG) to decode movement intent is a promising approach for natural control of upper extremity prostheses. To this end, this paper introduces and evaluates a new framework which allows for simultaneous and proportional myoelectric control over multiple degrees of freedom (DoFs) in real time. The framework uses multitask neural networks and domain-informed regularization in order to automatically find nonlinear mappings from the forearm sEMG envelope to multivariate and continuous encodings of concurrent hand and wrist kinematics, despite only requiring categorical movement instruction stimuli signals for calibration. Methods Forearm sEMG with 8 channels was collected from healthy human subjects (N = 20) and used to calibrate two myoelectric control interfaces, each with two output DoFs. The interfaces were built from (I) the proposed framework, termed Myoelectric Representation Learning (MRL), and, to allow for comparisons, from (II) a standard pattern recognition framework based on Linear Discriminant Analysis (LDA). The online performances of both interfaces were assessed with a Fitts's law type test generating 5 quantitative performance metrics. The temporal stabilities of the interfaces were evaluated by conducting identical tests without recalibration 7 days after the initial experiment session. Results Metric-wise two-way repeated measures ANOVA with factors method (MRL vs LDA) and session (day 1 vs day 7) revealed a significant (p < 0.05) advantage for MRL over LDA in 5 out of 5 performance metrics, with metric-wise effect sizes (Cohen's d) separating MRL from LDA ranging from |d| = 0.62 to |d| = 1.13. No significant effect on any metric was detected for either session or the interaction between method and session, indicating that neither method deteriorated significantly in control efficacy during one week of intermission. Conclusions The results suggest that MRL is able to successfully generate stable mappings from EMG to kinematics, thereby enabling myoelectric control with real-time performance superior to that of the current commercial standard for pattern recognition (as represented by LDA). It is thus postulated that the presented MRL approach can be of practical utility for muscle-computer interfaces.
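
This is not the authors' MRL implementation, but a minimal sketch of the general multitask idea it builds on: a shared-trunk network maps an 8-channel sEMG envelope to two continuous DoF outputs through separate heads. Layer sizes, the loss and the training data are placeholders, and the domain-informed regularization described in the paper is omitted.

```python
# Minimal multitask regressor sketch: shared trunk, one head per DoF.
# All sizes and data are illustrative assumptions, not the paper's MRL model.
import torch
from torch import nn

class MultitaskRegressor(nn.Module):
    def __init__(self, n_channels: int = 8, n_dofs: int = 2, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_channels, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        # One small head per degree of freedom, sharing the trunk representation.
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_dofs)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.trunk(x)
        return torch.cat([head(z) for head in self.heads], dim=1)

# Placeholder calibration data: sEMG envelopes and continuous kinematic targets.
torch.manual_seed(0)
envelopes = torch.rand(1024, 8)
targets = torch.tanh(envelopes @ torch.rand(8, 2) - 0.5)

model = MultitaskRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(200):                       # short illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(envelopes), targets)
    loss.backward()
    optimizer.step()
print("final training loss:", float(loss))
```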


2021 ◽  
Author(s):  
Haiqiang Duan ◽  
Chenyun Dai ◽  
Wei Chen

Abstract Background: The transmission of human body movements to other devices through wearable smart bracelets has attracted increasing attention in the field of human-machine interface (HMI) applications. However, because of the limited collection range of wearable bracelets, the relationship between the superposition of independent wrist and finger motions and their cooperative motion must be studied in order to simplify the acquisition system of the device. Methods: Multi-channel high-density surface electromyogram (HD-sEMG) signals offer high spatial resolution and can improve the accuracy of multi-channel fitting. In this study, we quantified the forearm spatial activation features of 256 HD-sEMG channels during hand movements and performed a linear fitting of the quantified features of finger and wrist movements to verify the linear superposition relationship between finger-wrist cooperative movements and their independent movements. The fitted and the actually measured cooperative actions were then classified and predicted with four commonly used classifiers: linear discriminant analysis (LDA), k-nearest neighbor (KNN), support vector machine (SVM) and random forest (RF), and the performance of the four classifiers in gesture fitting was evaluated in detail on the basis of the classification results. Results: For a total of 12 synthetic gesture actions, classification was performed with the LDA, SVM, RF and KNN classifiers for fitting-channel counts of 8, 32 and 64. With 8 fitting channels, the prediction accuracy of LDA was 99.70%, KNN 99.40%, SVM 99.20% and RF 93.75%. With 32 fitting channels, the accuracy of LDA was 98.51%, KNN 97.92%, SVM 96.73% and RF 86.61%. With 64 fitting channels, the accuracy of LDA was 95.83%, KNN 91.67%, SVM 86.90% and RF 83.30%. Conclusion: The results show that with 8 fitting channels the accuracies of LDA, KNN and SVM are essentially the same, but SVM requires far less computation time, so SVM should be preferred when the amount of data is large. As the number of fitting channels increases, LDA becomes more accurate than the other three classifiers and is therefore the more appropriate choice. The accuracy of the RF classifier remained far below that of the other three classifiers throughout, so RF is not recommended for gesture-superposition work of this kind.
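
As a rough illustration of the classifier comparison reported above, the sketch below scores LDA, KNN, SVM and RF with cross-validation for fitting-channel counts of 8, 32 and 64. The synthetic features and hyperparameters are placeholders; they do not reproduce the study's HD-sEMG data or its accuracy figures.

```python
# Hedged sketch of the four-classifier comparison across channel-subset sizes.
# Synthetic per-channel activation features stand in for the real fitted HD-sEMG features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_classes, trials_per_class, n_total_channels = 12, 30, 64   # 12 synthetic gestures
y = np.repeat(np.arange(n_classes), trials_per_class)
# One activation feature per channel; class-dependent means are placeholders.
X_full = rng.normal(loc=y[:, None] * 0.2, scale=1.0, size=(len(y), n_total_channels))

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}

for n_channels in (8, 32, 64):               # fitting-channel counts used in the study
    X = X_full[:, :n_channels]
    for name, clf in classifiers.items():
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{n_channels:>2} channels  {name:<3}  accuracy = {acc:.3f}")
```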

