Supervision of time-frequency features selection in EEG signals by a human expert for brain-computer interfacing based on motor imagery

Author(s):  
Alban Dupres ◽  
Francois Cabestaing ◽  
Jose Rouillard


2021 ◽
Vol 15 ◽  
Author(s):  
Xiongliang Xiao ◽  
Yuee Fang

Brain-computer interaction (BCI) based on EEG can help patients with limb dyskinesia carry out daily life and rehabilitation training. However, owing to the low signal-to-noise ratio of EEG and large inter-individual differences, EEG feature extraction and classification suffer from low accuracy and efficiency. To address this problem, this paper proposes a motor imagery EEG recognition method based on a deep convolutional network. To overcome the low quality of the raw EEG feature data, the method first preprocesses the collected experimental datasets with the short-time Fourier transform (STFT) and the continuous Morlet wavelet transform (CMWT), exploiting their time-series characteristics to obtain distinct EEG representations with clear time-frequency structure. An improved CNN model then recognizes these EEG signals efficiently, achieving high-quality feature extraction and classification and ensuring high accuracy and precision of EEG signal recognition. Finally, the proposed method is validated on a BCI competition dataset and on laboratory-measured data. Experimental results show that the method achieves an accuracy of 0.9324, a precision of 0.9653, and an AUC of 0.9464 for EEG signal recognition, demonstrating good practicality and applicability.
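As a rough illustration of the STFT preprocessing step described above, the sketch below (Python with NumPy) turns a single-channel signal into a magnitude time-frequency image; the window length, hop size, and the synthetic 10 Hz test signal are illustrative choices, not the paper's parameters:

```python
import numpy as np

def stft_image(signal, win_len=64, hop=32):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    window = np.hanning(win_len)
    frames = [np.abs(np.fft.rfft(signal[s:s + win_len] * window))
              for s in range(0, len(signal) - win_len + 1, hop)]
    return np.array(frames).T  # shape: (freq_bins, time_frames)

fs = 250                                  # a common EEG sampling rate
t = np.arange(fs) / fs                    # one second of signal
sig = np.sin(2 * np.pi * 10 * t)          # 10 Hz oscillation (mu-band-like)
img = stft_image(sig)
print(img.shape)                          # (win_len // 2 + 1, number of frames)
```

A two-dimensional image like `img` is the kind of time-frequency input a CNN can consume directly in place of the raw time series.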


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2739 ◽  
Author(s):  
Rami Alazrai ◽  
Rasha Homoud ◽  
Hisham Alwanni ◽  
Mohammad Daoud

Accurate recognition and understanding of human emotions is an essential skill that can improve the collaboration between humans and machines. In this vein, electroencephalogram (EEG)-based emotion recognition is considered an active research field with challenging issues regarding the analysis of nonstationary EEG signals and the extraction of salient features that can be used to achieve accurate emotion recognition. In this paper, an EEG-based emotion recognition approach with a novel time-frequency feature extraction technique is presented. In particular, a quadratic time-frequency distribution (QTFD) is employed to construct a high-resolution time-frequency representation of the EEG signals and capture their spectral variations over time. To reduce the dimensionality of the constructed QTFD-based representation, a set of 13 time- and frequency-domain features is extended to the joint time-frequency domain and employed to quantify the QTFD-based time-frequency representation of the EEG signals. Moreover, to describe different emotion classes, we have utilized the 2D arousal-valence plane to develop four emotion labeling schemes for the EEG signals, such that each labeling scheme defines a set of emotion classes. The extracted time-frequency features are used to construct a set of subject-specific support vector machine classifiers to classify the EEG signals of each subject into the emotion classes defined by each of the four labeling schemes. The performance of the proposed approach is evaluated using a publicly available EEG dataset, namely the DEAP dataset.
Moreover, we design three performance evaluation analyses, namely the channel-based analysis, the feature-based analysis, and the neutral-class exclusion analysis, to quantify the effects of utilizing different groups of EEG channels that cover various regions of the brain, reducing the dimensionality of the extracted time-frequency features, and excluding the EEG signals that correspond to the neutral class, on the capability of the proposed approach to discriminate between different emotion classes. The results reported in the current study demonstrate the efficacy of the proposed QTFD-based approach in recognizing different emotion classes. In particular, the average classification accuracies obtained in differentiating between the emotion classes defined by each of the four labeling schemes are within the range of 73.8%–86.2%. Moreover, the emotion classification accuracies achieved by our proposed approach are higher than the results reported in several existing state-of-the-art EEG-based emotion recognition studies.
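The abstract does not enumerate the 13 joint time-frequency features, so the sketch below computes a few generic statistics of a time-frequency representation (mean, variance, temporal flux, entropy) as hypothetical stand-ins, assuming the TFR is given as a non-negative matrix with frequency bins along rows and time frames along columns:

```python
import numpy as np

def tf_features(tfr):
    """A few joint time-frequency statistics of a non-negative TF matrix
    (hypothetical stand-ins for the 13 features used in the paper)."""
    flat = tfr.ravel()
    p = flat / (flat.sum() + 1e-12)          # normalize to a distribution
    return {
        "mean": float(flat.mean()),
        "variance": float(flat.var()),
        "flux": float(np.abs(np.diff(tfr, axis=1)).mean()),  # change over time
        "entropy": float(-(p * np.log(p + 1e-12)).sum()),    # energy spread
    }

uniform = np.ones((4, 5))                    # a flat TFR: 20 equal cells
feats = tf_features(uniform)
print(feats["entropy"])                      # close to log(20) for a uniform TFR
```

Feature vectors of this kind, computed per channel, would then feed the subject-specific SVM classifiers.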


Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3496
Author(s):  
Jiacan Xu ◽  
Hao Zheng ◽  
Jianhui Wang ◽  
Donglin Li ◽  
Xiaoke Fang

Recognition of motor imagery intention is one of the current research hotspots in brain-computer interface (BCI) studies. It can help patients with physical dyskinesia convey their movement intentions. In recent years, breakthroughs have been made in recognizing motor imagery tasks with deep learning, but if the important features related to motor imagery are ignored, the recognition performance of the algorithm may decline. This paper proposes a new deep multi-view feature learning method for the classification of motor imagery electroencephalogram (EEG) signals. To obtain more representative motor imagery features, we introduce a multi-view feature representation based on the characteristics of EEG signals and the differences between features. Different extraction methods are used to obtain the time-domain, frequency-domain, time-frequency-domain, and spatial features of the EEG signals, so that they cooperate with and complement one another. Then, a deep restricted Boltzmann machine (RBM) network improved by t-distributed stochastic neighbor embedding (t-SNE) is adopted to learn the multi-view features, so that the algorithm removes feature redundancy while accounting for the global characteristics of the multi-view feature sequence, reduces the dimensionality of the multi-view features, and enhances their recognizability. Finally, a support vector machine (SVM) is chosen to classify the deep multi-view features. Applying the proposed method to the BCI Competition IV 2a dataset, we obtained excellent classification results, which show that deep multi-view feature learning further improves the classification accuracy of motor imagery tasks.
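A minimal sketch of the multi-view idea, using simple stand-in descriptors for each view (log-variance for the time domain, mu-band power for the frequency domain, and the channel covariance for the spatial view); the paper's actual extractors and the RBM/t-SNE fusion stage are not reproduced here:

```python
import numpy as np

def multi_view_features(trial, fs):
    """trial: (channels, samples) array for one motor imagery trial.
    Concatenates three simple views: time (log-variance per channel),
    frequency (log mu-band power per channel), and spatial (upper
    triangle of the channel covariance). Illustrative stand-ins only."""
    time_view = np.log(trial.var(axis=1) + 1e-12)
    power = np.abs(np.fft.rfft(trial, axis=1)) ** 2
    freqs = np.fft.rfftfreq(trial.shape[1], d=1.0 / fs)
    mu_band = (freqs >= 8) & (freqs <= 13)              # mu rhythm, 8-13 Hz
    freq_view = np.log(power[:, mu_band].mean(axis=1) + 1e-12)
    spatial_view = np.cov(trial)[np.triu_indices(trial.shape[0])]
    return np.concatenate([time_view, freq_view, spatial_view])

rng = np.random.default_rng(0)
trial = rng.standard_normal((3, 250))                   # 3 channels, 1 s at 250 Hz
print(multi_view_features(trial, fs=250).shape)         # (3 + 3 + 6,) = (12,)
```

In the paper's pipeline, a concatenated vector like this would be compressed by the RBM network before reaching the SVM.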


Author(s):  
Subrota Mazumdar ◽  
Rohit Chaudhary ◽  
Suruchi Suruchi ◽  
Suman Mohanty ◽  
Divya Kumari ◽  
...  

In this chapter, a k-nearest neighbor (k-NN)-based method for efficient classification of motor imagery using EEG for brain-computer interfacing (BCI) applications is proposed. Electroencephalogram (EEG) signals are obtained from multiple channels over the scalp. These EEG signals are taken as input features and given to the k-NN-based classifier to classify motor imagery. More specifically, the chapter gives an outline of the Berlin brain-computer interface, which can be operated with minimal subject training. All design and simulation work is carried out in MATLAB. The k-NN-based classifier is trained with data from continuous EEG channel signals. After the classifier is trained, it is tested with various test cases. Its performance is measured as percentage accuracy, which is found to be 99.25%. The results suggest that the proposed method is accurate for BCI applications.
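The k-NN decision rule itself is standard; a minimal pure-NumPy version (Euclidean distance, majority vote, and toy 2-D points standing in for real EEG feature vectors) might look like:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    return Counter(y_train[nearest].tolist()).most_common(1)[0][0]

# Toy 2-D features standing in for EEG feature vectors (not real data).
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.2, 0.1])))    # the nearest neighbours vote 0
```

Choosing k odd avoids ties in two-class problems such as left- vs. right-hand motor imagery.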


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 551 ◽  
Author(s):  
Mengxi Dai ◽  
Dezhi Zheng ◽  
Rui Na ◽  
Shuai Wang ◽  
Shuailei Zhang

Successful applications of brain-computer interface (BCI) approaches to motor imagery (MI) are still limited. In this paper, we propose a classification framework for MI electroencephalogram (EEG) signals that combines a convolutional neural network (CNN) architecture with a variational autoencoder (VAE) for classification. The decoder of the VAE generates a Gaussian distribution, so it can be used to fit the Gaussian distribution of EEG signals. A new input representation was developed by combining the time, frequency, and channel information of the EEG signal, and the CNN-VAE method was designed and optimized accordingly for this form of input. In this network, the classification of the extracted CNN features is performed via the deep VAE network. Our framework, with an average kappa value of 0.564, outperforms the best classification method in the literature for BCI Competition IV dataset 2b by 3%. Furthermore, on our own dataset, the CNN-VAE framework also yields the best performance for both three-electrode and five-electrode EEG, achieving average kappa values of 0.568 and 0.603, respectively. Our results show that the proposed CNN-VAE method raises performance to the current state of the art.
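The VAE machinery this framework relies on centers on the reparameterization trick and the KL regularizer; a NumPy sketch of those two pieces (independent of the paper's CNN architecture, with a fixed random seed for illustration) is:

```python
import numpy as np

rng = np.random.default_rng(42)

def reparameterize(mu, log_var):
    """z = mu + sigma * eps: the reparameterization trick that keeps the
    VAE's sampling step differentiable with respect to mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal-Gaussian encoder, the
    regularizer added to the reconstruction loss during training."""
    return float(-0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var)))

z = reparameterize(np.zeros(4), np.zeros(4))    # a sample from N(0, I)
print(kl_divergence(np.zeros(4), np.zeros(4)))  # 0.0: posterior equals prior
```

In a trained CNN-VAE, `mu` and `log_var` would be produced by the encoder from the CNN features rather than set to zero.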


Author(s):  
Pholpat Durongbhan ◽  
Yifan Zhao ◽  
Liangyu Chen ◽  
Panagiotis Zis ◽  
Matteo De Marco ◽  
...  
