EMG Pattern Recognition Using Convolutional Neural Network with Different Scale Signal/Spectra Input

2019 ◽  
Vol 16 (04) ◽  
pp. 1950013 ◽  
Author(s):  
Wei Yang ◽  
Dapeng Yang ◽  
Yu Liu ◽  
Hong Liu

Deep learning (DL) has made tremendous contributions to image processing. Recently, DL has also attracted attention in the specialized field of neural decoding from raw myoelectric signals (electromyograms, EMGs). However, to our knowledge, most existing methods require some degree of preprocessing of the raw EMGs. Moreover, research to date has not accounted for the variability of the signal across time sequences. In this paper, we propose a new convolutional neural network (CNN) structure that can process raw EMG signals directly for hand gesture classification. More specifically, we assess the effects of various window sizes and of two different EMG representations (time sequences and frequency spectra) on open EMG datasets. We found that frequency spectra derived from raw EMGs are more suitable as the model input for gesture classification. Meanwhile, using longer windows improved the classification accuracy (CA), and the 1024 ms window achieved the best results on two open datasets ([Formula: see text]% and [Formula: see text]%). Further, our model requires no feature extraction procedures and performs comparably to the optimal feature-classifier combinations used by traditional methods on specific tasks.
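A minimal sketch of the windowing-and-spectra pipeline the abstract describes, assuming an 8-channel recording at 1 kHz and a 52-class gesture set; these values and the layer sizes are illustrative assumptions, not the authors' configuration:

```python
# Sketch of the spectra-input pipeline described above (not the authors' code).
# Assumed: 8-channel EMG at 1 kHz, 1024 ms windows, 52 gesture classes.
import numpy as np
import torch
import torch.nn as nn

FS = 1000          # assumed sampling rate (Hz)
WIN_MS = 1024      # window length reported as best in the abstract
N_CH = 8           # assumed electrode count
N_CLASSES = 52     # assumed number of gestures

def emg_to_spectra(window: np.ndarray) -> np.ndarray:
    """Convert one raw EMG window (channels x samples) to per-channel magnitude spectra."""
    spec = np.abs(np.fft.rfft(window, axis=-1))
    return np.log1p(spec).astype(np.float32)   # log scale stabilises training

class SpectraCNN(nn.Module):
    """Small CNN that consumes per-channel spectra directly (no hand-crafted features)."""
    def __init__(self, n_ch=N_CH, n_classes=N_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_ch, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, channels, freq_bins)
        return self.classifier(self.features(x).squeeze(-1))

# Example: one 1024 ms window -> class logits
window = np.random.randn(N_CH, int(FS * WIN_MS / 1000))
spectra = torch.from_numpy(emg_to_spectra(window)).unsqueeze(0)
logits = SpectraCNN()(spectra)
```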

Author(s):  
Too Jing Wei ◽  
Abdul Rahim Bin Abdullah ◽  
Norhashimah Binti Mohd Saad ◽  
Nursabillilah Binti Mohd Ali ◽  
Tengku Nor Shuhada Binti Tengku Zawawi

In this paper, the performance of featureless EMG pattern recognition in classifying hand and wrist movements is presented. A time-frequency distribution (TFD), the spectrogram, is employed to transform the raw EMG signals into a time-frequency representation (TFR). The TFRs, or spectrogram images, are then fed directly into a convolutional neural network (CNN) for classification. Two CNN models are proposed to learn features automatically from the images without the need for manual feature extraction. The performance of the CNN with different numbers of convolutional layers is examined. The proposed CNN models are evaluated using EMG data from 10 intact and 11 amputee subjects from the publicly accessible NinaPro database. Our results show that the CNN classifier achieved the best mean classification accuracy of 88.04% in recognizing hand and wrist movements.
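A short sketch of the featureless spectrogram-to-CNN idea, assuming a NinaPro-style 2 kHz sampling rate and a 17-class movement set (both assumptions); the two-convolutional-layer model below mirrors only the general shape of the shallower proposed model:

```python
# Minimal sketch of a spectrogram-to-CNN pipeline (assumptions noted inline).
import numpy as np
from scipy.signal import spectrogram
import torch
import torch.nn as nn

FS = 2000  # assumed sampling rate (Hz)

def emg_to_tfr(signal_1d: np.ndarray) -> np.ndarray:
    """Raw EMG channel -> log-scaled spectrogram image (freq x time)."""
    _, _, Sxx = spectrogram(signal_1d, fs=FS, nperseg=128, noverlap=64)
    return np.log1p(Sxx).astype(np.float32)

class SpectrogramCNN(nn.Module):
    """Two-convolutional-layer variant; the paper also examines other depths."""
    def __init__(self, n_classes=17):          # class count is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):                      # x: (batch, 1, freq, time)
        return self.net(x)

tfr = emg_to_tfr(np.random.randn(FS))          # 1 s of one EMG channel
logits = SpectrogramCNN()(torch.from_numpy(tfr)[None, None])
```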


Author(s):  
Wanli Wang ◽  
Botao Zhang ◽  
Kaiqi Wu ◽  
Sergey A Chepinskiy ◽  
Anton A Zhilenkov ◽  
...  

In this paper, a hybrid method based on deep learning is proposed to visually classify terrains encountered by mobile robots. Considering the limited computing resources on mobile robots and the requirement for high classification accuracy, the proposed hybrid method combines a convolutional neural network with a support vector machine to maintain high classification accuracy while improving efficiency. The key idea is that the convolutional neural network performs a multi-class classification while the support vector machine simultaneously performs a two-class classification. The two-class classification performed by the support vector machine targets the one kind of terrain that users are most concerned with. The results of the two classifications are consolidated to obtain the final classification result. The convolutional neural network used in this method is modified for on-board use on mobile robots; to enhance efficiency, it has a simple architecture. The convolutional neural network and the support vector machine are trained and tested using RGB images of six kinds of common terrains. Experimental results demonstrate that this method helps robots classify terrains accurately and efficiently. Therefore, the proposed method has significant potential for on-board use on mobile robots.
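The consolidation step can be illustrated with a small sketch; the terrain names, feature dimensionality, and override rule below are assumptions chosen only to show how a binary SVM decision for one critical terrain might be merged with the CNN's multi-class output:

```python
# Sketch of the consolidation idea: a lightweight CNN gives a multi-class terrain label,
# an SVM gives a binary decision for one terrain of special interest, and the SVM
# overrides the CNN for that class. All names and thresholds are assumptions.
import numpy as np
from sklearn.svm import SVC

TERRAINS = ["asphalt", "grass", "gravel", "sand", "mud", "snow"]
CRITICAL = "mud"   # assumed terrain the user cares most about

def consolidate(cnn_probs: np.ndarray, svm_is_critical: bool) -> str:
    """Combine the CNN's multi-class output with the SVM's two-class output."""
    if svm_is_critical:
        return CRITICAL                      # binary classifier takes precedence
    ranked = np.argsort(cnn_probs)[::-1]
    for idx in ranked:                       # best non-critical CNN prediction
        if TERRAINS[idx] != CRITICAL:
            return TERRAINS[idx]
    return TERRAINS[ranked[0]]

# Usage with a toy feature vector and a placeholder-trained SVM
svm = SVC(kernel="rbf")
svm.fit(np.random.randn(20, 128), np.r_[np.zeros(10), np.ones(10)])  # placeholder fit
features = np.random.randn(1, 128)
cnn_probs = np.random.dirichlet(np.ones(len(TERRAINS)))
label = consolidate(cnn_probs, bool(svm.predict(features)[0]))
```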


2021 ◽  
Vol 13 (3) ◽  
pp. 335
Author(s):  
Yuhao Qing ◽  
Wenyi Liu

In recent years, image classification on hyperspectral imagery using deep learning algorithms has attained good results. Spurred by these findings, and to further improve classification accuracy, we propose a multi-scale residual convolutional neural network fused with an efficient channel attention network (MRA-NET) that is appropriate for hyperspectral image classification. The suggested technique comprises a multi-stage architecture: first, the spectral information of the hyperspectral image is reduced to a two-dimensional tensor using principal component analysis (PCA). The resulting low-dimensional image is then input to the proposed network, which exploits the advantages of its core components, i.e., a multi-scale residual structure and channel attention mechanisms. We evaluate the performance of the proposed MRA-NET on three publicly available hyperspectral datasets and demonstrate that the overall classification accuracy of our method is 99.82%, 99.81%, and 99.37%, respectively, which is higher than that of current networks such as the 3D convolutional neural network (CNN), the three-dimensional residual convolution structure (RES-3D-CNN), and the space-spectrum joint deep network (SSRN).
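Two of the named building blocks, PCA spectral reduction and an ECA-style channel attention gate inside a residual unit, can be sketched as follows; the component count, patch size, and layer widths are illustrative assumptions rather than the paper's exact design:

```python
# Sketch of PCA spectral reduction and an efficient-channel-attention residual unit.
# Layer sizes and the number of retained components are assumptions.
import numpy as np
from sklearn.decomposition import PCA
import torch
import torch.nn as nn

def reduce_spectral(cube: np.ndarray, n_components: int = 30) -> np.ndarray:
    """Hyperspectral cube (H, W, B) -> (H, W, n_components) via PCA on the spectral axis."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    return PCA(n_components=n_components).fit_transform(flat).reshape(h, w, n_components)

class ECAResidualBlock(nn.Module):
    """Residual block whose output channels are re-weighted by a 1D-conv attention gate."""
    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        self.attn = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        y = self.conv(x)
        w = y.mean(dim=(2, 3), keepdim=True)                # global average pool per channel
        w = self.attn(w.squeeze(-1).transpose(1, 2))        # 1D conv across channels: (B, 1, C)
        w = torch.sigmoid(w).transpose(1, 2).unsqueeze(-1)  # back to (B, C, 1, 1)
        return torch.relu(x + y * w)

patch = torch.randn(2, 30, 9, 9)      # batch of 9x9 patches with 30 PCA bands
out = ECAResidualBlock(30)(patch)
```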


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Saad Albawi ◽  
Oguz Bayat ◽  
Saad Al-Azawi ◽  
Osman N. Ucan

Recently, social touch gesture recognition has been considered an important topic for the touch modality, which can lead to highly efficient and realistic human-robot interaction. In this paper, a deep convolutional neural network is used to implement a social touch recognition system operating on raw input samples (sensor data) only. Touch gesture recognition is performed on a previously collected dataset in which numerous subjects performed varying social gestures on a mannequin arm; this dataset is known as the Corpus of Social Touch. A leave-one-subject-out cross-validation method is used to evaluate system performance. The proposed method can recognize gestures in nearly real time after acquiring a minimum number of frames (on average, between 0.2% and 4.19% of the original frame lengths) with a classification accuracy of 63.7%. The achieved classification accuracy is competitive with the performance of existing algorithms. Furthermore, the proposed system outperforms other classification algorithms on the same dataset in terms of classification rate and touch recognition time, without data preprocessing.
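A minimal sketch of the "decide after a minimum number of frames" idea; the frame size, buffer threshold, and dummy model below are assumptions used only to illustrate the streaming decision logic:

```python
# Sketch of early classification once a minimum number of raw sensor frames has arrived.
# Frame shape and the minimum-frame threshold are assumptions, not values from the paper.
import numpy as np

MIN_FRAMES = 5                       # assumed minimum before a decision is attempted

class StreamingGestureClassifier:
    """Buffers raw pressure-sensor frames and classifies once enough have arrived."""
    def __init__(self, model):
        self.model = model           # any callable: (frames, H, W) -> class label
        self.buffer = []

    def push(self, frame: np.ndarray):
        self.buffer.append(frame)
        if len(self.buffer) >= MIN_FRAMES:
            return self.model(np.stack(self.buffer))   # early, near-real-time decision
        return None                                     # not enough evidence yet

# Usage with a dummy model on 8x8 sensor frames
clf = StreamingGestureClassifier(model=lambda frames: int(frames.mean() > 0))
for _ in range(6):
    label = clf.push(np.random.randn(8, 8))
```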


2020 ◽  
Author(s):  
Alessandro Lopopolo ◽  
Antal van den Bosch

Neural decoding of speech and language refers to the extraction of information about the stimulus and the mental state of subjects from recordings of their brain activity while they perform linguistic tasks. Recent years have seen significant progress in the decoding of speech from cortical activity. This study instead focuses on decoding linguistic information. We present a deep parallel temporal convolutional neural network (1DCNN) trained on part-of-speech (PoS) classification from magnetoencephalography (MEG) data collected during natural language reading. The network is trained separately on data from each of 15 human subjects and yields above-chance accuracies on test data for all of them. The PoS level was targeted because it offers a clean linguistic benchmark that represents syntactic information and abstracts away from semantic or conceptual representations.
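A rough sketch of a parallel temporal 1D-CNN in the spirit described above, with several kernel widths applied to the same MEG window and concatenated; the sensor count, kernel widths, and tag-set size are assumptions:

```python
# Sketch of a parallel-branch temporal 1D-CNN for PoS classification from MEG epochs.
# Sensor count, kernel widths, and number of PoS tags are assumptions.
import torch
import torch.nn as nn

class Parallel1DCNN(nn.Module):
    def __init__(self, n_sensors=204, n_pos_tags=10, widths=(3, 5, 9)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv1d(n_sensors, 32, w, padding=w // 2), nn.ReLU(),
                          nn.AdaptiveMaxPool1d(1))
            for w in widths
        )
        self.head = nn.Linear(32 * len(widths), n_pos_tags)

    def forward(self, x):                      # x: (batch, sensors, time)
        feats = [branch(x).squeeze(-1) for branch in self.branches]
        return self.head(torch.cat(feats, dim=1))

logits = Parallel1DCNN()(torch.randn(4, 204, 120))   # 4 epochs of 120 time samples
```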


2020 ◽  
Vol 12 (6) ◽  
pp. 944 ◽  
Author(s):  
Jin Zhang ◽  
Hao Feng ◽  
Qingli Luo ◽  
Yu Li ◽  
Jujie Wei ◽  
...  

Oil spill detection plays an important role in marine environment protection. Quad-polarimetric Synthetic Aperture Radar (SAR) has been proven to have great potential for this task, and different SAR polarimetric features offer advantages in distinguishing oil spill areas from look-alikes. In this paper we propose an oil spill detection method based on a convolutional neural network (CNN) and Simple Linear Iterative Clustering (SLIC) superpixels. Experiments were conducted on three Single Look Complex (SLC) quad-polarimetric SAR images obtained by Radarsat-2 and the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR). Several groups of polarimetric parameters, including the H/A/Alpha decomposition, Single-Bounce Eigenvalue Relative Difference (SERD), correlation coefficients, conformity coefficients, the Freeman 3-component decomposition, and the Yamaguchi 4-component decomposition, were extracted as feature sets. Among all considered polarimetric features, the Yamaguchi parameters achieved the highest performance, with a total Mean Intersection over Union (MIoU) of 90.5%. The SLIC superpixel method significantly improved the oil spill classification accuracy on all polarimetric feature sets. The classification accuracy of all target types was improved, and the largest increase in mean MIoU across all feature sets was for emulsions, at 21.9%.
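One way SLIC superpixels can improve a pixel-wise classification map is by majority voting inside each superpixel; the sketch below illustrates that idea with an assumed segment count and a toy three-band feature image, and is not the authors' implementation:

```python
# Sketch of superpixel-based smoothing of a pixel-wise classification map.
# The segment count, compactness, and toy inputs are assumptions.
import numpy as np
from skimage.segmentation import slic

def superpixel_smooth(feature_img: np.ndarray, pixel_labels: np.ndarray,
                      n_segments: int = 500) -> np.ndarray:
    """Assign every pixel in a superpixel the majority label of that superpixel."""
    segments = slic(feature_img, n_segments=n_segments, compactness=10,
                    channel_axis=-1, start_label=0)
    smoothed = np.empty_like(pixel_labels)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        values, counts = np.unique(pixel_labels[mask], return_counts=True)
        smoothed[mask] = values[np.argmax(counts)]    # majority vote inside the superpixel
    return smoothed

# Usage: a 3-band polarimetric feature image and a noisy 3-class label map
feat = np.random.rand(256, 256, 3)
labels = np.random.randint(0, 3, (256, 256))
clean = superpixel_smooth(feat, labels)
```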


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Shipu Xu ◽  
Runlong Li ◽  
Yunsheng Wang ◽  
Yong Liu ◽  
Wenwen Hu ◽  
...  

With the increasing depth and complexity of convolutional neural networks, parameter dimensionality and the volume of computation have greatly restricted their applications. Based on the SqueezeNet network structure, this study introduces block convolution and uses channel shuffle between blocks to alleviate the resulting information blockage. The method aims to reduce the parameter dimensionality of the original network structure and improve the efficiency of network operation. Verification on the ORL dataset shows that classification accuracy and convergence efficiency are not reduced, and are even slightly improved, when the network parameters are reduced, which supports the validity of block convolution for lightweight structure design. Moreover, on the classic CIFAR-10 dataset, the network decreases parameter dimensionality while accelerating computation, with excellent convergence stability and efficiency, while network accuracy is reduced by only 1.3%.
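The block (grouped) convolution and channel shuffle can be sketched as a ShuffleNet-style unit; the channel and group counts below are assumptions:

```python
# Sketch of grouped ("block") convolution with a channel shuffle that mixes information
# between the groups. Channel and group counts are assumptions.
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels so that the next grouped conv sees all groups."""
    b, c, h, w = x.shape
    return (x.view(b, groups, c // groups, h, w)
             .transpose(1, 2)
             .reshape(b, c, h, w))

class ShuffleBlock(nn.Module):
    def __init__(self, channels=64, groups=4):
        super().__init__()
        self.groups = groups
        self.conv1 = nn.Conv2d(channels, channels, 1, groups=groups, bias=False)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, groups=groups, bias=False)
        self.bn1, self.bn2 = nn.BatchNorm2d(channels), nn.BatchNorm2d(channels)

    def forward(self, x):
        y = torch.relu(self.bn1(self.conv1(x)))
        y = channel_shuffle(y, self.groups)     # relieves the information jam between groups
        return torch.relu(x + self.bn2(self.conv2(y)))

out = ShuffleBlock()(torch.randn(1, 64, 32, 32))
```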


2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Jianghui Wen ◽  
Yeshu Liu ◽  
Yu Shi ◽  
Haoran Huang ◽  
Bing Deng ◽  
...  

Background: Long-chain non-coding RNA (lncRNA) is closely related to many biological activities. Since its sequence structure is similar to that of messenger RNA (mRNA), it is difficult to distinguish between the two based only on sequence biometrics. Therefore, it is particularly important to construct a model that can effectively identify lncRNA and mRNA.
Results: First, the difference in k-mer frequency distributions between lncRNA and mRNA sequences is considered in this paper, and the sequences are transformed into k-mer frequency matrices. In addition, the most informative k-mers are screened by relative entropy. A classification model for lncRNA and mRNA sequences is then built by feeding the k-mer frequency matrix into a convolutional neural network. Finally, the optimal k-mer combination for the classification model is determined and compared with other machine learning methods on human, mouse and chicken data. The results indicate that the proposed model has the highest classification accuracy. Furthermore, the recognition ability of the model is verified on individual sequences.
Conclusion: We established a classification model for lncRNA and mRNA based on k-mers and a convolutional neural network. The model combining 1-mers, 2-mers and 3-mers achieved the highest classification accuracy, with an accuracy of 0.9872 in humans, 0.8797 in mice and 0.9963 in chickens, which is better than that of random forest, logistic regression, decision tree and support vector machine classifiers.
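A small sketch of the k-mer pipeline: per-sequence k-mer frequency vectors and a relative-entropy (KL-style) score for ranking discriminative k-mers. The toy sequences and the exact screening rule are assumptions, not the authors' code:

```python
# Sketch of k-mer frequency counting and relative-entropy screening of k-mers.
from collections import Counter
from itertools import product
import numpy as np

def kmer_freqs(seq: str, k: int) -> np.ndarray:
    """Normalised frequency vector over all 4**k possible k-mers."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    vec = np.array([counts.get(m, 0) for m in kmers], dtype=float)
    return vec / max(vec.sum(), 1.0)

def relative_entropy(p: np.ndarray, q: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Per-k-mer contribution to KL(p || q); large values mark discriminative k-mers."""
    p, q = p + eps, q + eps
    return p * np.log(p / q)

# Usage: mean 3-mer profiles of toy lncRNA and mRNA sets, then screen the top k-mers
lnc = np.mean([kmer_freqs(s, 3) for s in ["ATGCGTACGTT", "GGCATTACGTA"]], axis=0)
mrna = np.mean([kmer_freqs(s, 3) for s in ["ATGAAAGGGTAA", "ATGCCCGGGTGA"]], axis=0)
top = np.argsort(relative_entropy(lnc, mrna))[::-1][:10]   # most informative 3-mers
```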


2019 ◽  
Vol 9 (16) ◽  
pp. 3362 ◽  
Author(s):  
Shang Shang ◽  
Ling Long ◽  
Sijie Lin ◽  
Fengyu Cong

Zebrafish eggs are widely used in biological experiments to study environmental and genetic influences on embryo development. Because of the high throughput of microscopic imaging, automated analysis of zebrafish egg microscopic images is in high demand. However, machine learning algorithms for zebrafish egg image analysis suffer from small, imbalanced training datasets and subtle inter-class differences. In this study, we developed an automated zebrafish egg microscopic image analysis algorithm based on a deep convolutional neural network (CNN). To tackle the problem of insufficient training data, transfer learning and data augmentation were used. We also adopted global average pooling to handle the subtle phenotype differences between fertilized and unfertilized eggs. Experimental results from a five-fold cross-validation test showed that the proposed method yielded a mean classification accuracy of 95.0% and a maximum accuracy of 98.8%. The network also demonstrated higher classification accuracy and better convergence performance than conventional CNN methods. This study extends deep learning techniques to zebrafish egg phenotype classification and paves the way for automatic bright-field microscopic image analysis.
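A compact sketch of a transfer-learning setup of the kind described above, using an ImageNet-pretrained ResNet-18 backbone (an assumed choice, not the paper's architecture), standard augmentation, and a global-average-pooled two-class head:

```python
# Sketch of transfer learning + augmentation + global average pooling for a two-class
# (fertilized / unfertilized) task. Backbone and augmentation parameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(30),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.avgpool = nn.AdaptiveAvgPool2d(1)      # global average pooling before the classifier
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Fine-tune only the last block and the head, a common choice for small datasets
for name, p in backbone.named_parameters():
    p.requires_grad = name.startswith(("layer4", "fc"))

logits = backbone(torch.randn(1, 3, 224, 224))
```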

