EEG Emotion Classification Using an Improved SincNet-Based Deep Learning Model

2019 ◽  
Vol 9 (11) ◽  
pp. 326 ◽  
Author(s):  
Hong Zeng ◽  
Zhenhua Wu ◽  
Jiaming Zhang ◽  
Chen Yang ◽  
Hua Zhang ◽  
...  

Deep learning (DL) methods have been used increasingly widely, such as in the fields of speech and image recognition. However, designing an appropriate DL model to accurately and efficiently classify electroencephalogram (EEG) signals remains a challenge, mainly because EEG signals differ significantly between subjects, vary over time within a single subject, and are non-stationary, highly random, and low in signal-to-noise ratio. SincNet is an efficient classifier for speaker recognition, but it has some drawbacks when applied to EEG signal classification. In this paper, we propose an improved SincNet-based classifier, SincNet-R, which consists of three convolutional layers and three deep neural network (DNN) layers. We then use SincNet-R to test classification accuracy and robustness on emotional EEG signals. Comparisons with the original SincNet model and other traditional classifiers such as CNN, LSTM, and SVM show that our proposed SincNet-R model achieves higher classification accuracy and better algorithm robustness.
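SincNet's defining trait is that its first convolutional layer learns only the two cutoff frequencies of a band-pass sinc filter instead of all kernel weights. A minimal sketch of such a kernel follows; the cutoffs, kernel length, and sampling rate are illustrative assumptions, not values from the paper:

```python
import numpy as np

def sinc_bandpass_kernel(f1, f2, kernel_len=101, fs=128.0):
    """Band-pass FIR kernel parameterized only by its cutoff
    frequencies f1 < f2 (Hz), as in SincNet's first layer."""
    t = (np.arange(kernel_len) - (kernel_len - 1) / 2) / fs
    # difference of two low-pass sinc filters yields a band-pass filter
    h = 2 * f2 * np.sinc(2 * f2 * t) - 2 * f1 * np.sinc(2 * f1 * t)
    h *= np.hamming(kernel_len)          # smooth the truncation edges
    return h / np.abs(h).sum()           # normalize the kernel

k = sinc_bandpass_kernel(4.0, 8.0)       # e.g., a theta-band filter
```

Because only `f1` and `f2` are trained per filter, the first layer stays interpretable as a learned filter bank, which is part of what makes the architecture attractive for EEG.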

Author(s):  
Rahul Sharma ◽  
Pradip Sircar ◽  
Ram Bilas Pachori

A seizure, the manifestation of a neurological abnormality in the brain, is the prime risk of epilepsy, and early, accurate detection of epileptic seizures is the foremost task in diagnosing the condition. In this chapter, a nonlinear deep neural network is used for seizure classification. The proposed network is based on an autoencoder that effectively explores the nonlinear dynamics of electroencephalogram (EEG) signals. It replaces traditional domain expertise for feature extraction: features are learned from the raw data to fit a deep neural network-based learning model that predicts the class of unknown seizures. The EEG signals are fed to an autoencoder-based neural network that automatically extracts the significant attributes, which are then applied to a softmax classifier. The achieved classification accuracy is up to 100% on different classes of the publicly available Bonn University database. The proposed algorithm is suitable for real-time implementation.
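The pipeline described (encoder learns a compact code, softmax classifies it) can be sketched at the shape level as below; the layer sizes, the use of a ReLU encoder, and the two-class output are illustrative assumptions, and the random weights stand in for trained ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# hypothetical sizes: 4097-sample EEG segment -> 64-d code -> 2 classes
W_enc = rng.normal(scale=0.01, size=(4097, 64))
W_dec = rng.normal(scale=0.01, size=(64, 4097))  # used for the reconstruction loss
W_clf = rng.normal(scale=0.01, size=(64, 2))

x = rng.normal(size=(8, 4097))        # a mini-batch of raw EEG segments
code = relu(x @ W_enc)                # unsupervised feature extraction
recon = code @ W_dec                  # reconstruction target during pre-training
probs = softmax(code @ W_clf)         # seizure / non-seizure posterior
```

In the actual method the encoder would first be trained to minimize the reconstruction error, after which the softmax head is trained on the learned codes.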


2020 ◽  
Vol 13 (4) ◽  
pp. 627-640 ◽  
Author(s):  
Avinash Chandra Pandey ◽  
Dharmveer Singh Rajpoot

Background: Sentiment analysis is contextual mining of text that determines the viewpoint of users with respect to sentimental topics commonly found on social networking websites. Twitter is one of the social sites where people express their opinions about any topic in the form of tweets. These tweets can be examined using various sentiment classification methods to find the opinion of users. Traditional sentiment analysis methods use manually extracted features for opinion classification. The manual feature extraction process is a complicated task since it requires predefined sentiment lexicons. Deep learning methods, on the other hand, automatically extract relevant features from data; hence, they provide better performance and richer representational competency than traditional methods. Objective: The main aim of this paper is to enhance sentiment classification accuracy and to reduce computational cost. Method: To achieve this objective, a hybrid deep learning model based on a convolutional neural network and a bidirectional long short-term memory (BiLSTM) network has been introduced. Results: The proposed sentiment classification method achieves the highest accuracy for most of the datasets. Further, the efficacy of the proposed method has been validated through statistical analysis. Conclusion: Sentiment classification accuracy can be improved by creating veracious hybrid models. Moreover, performance can also be enhanced by tuning the hyperparameters of deep learning models.
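The hybrid design (convolution extracts local n-gram features, a bidirectional recurrence summarizes them in both directions) can be sketched at the shape level; a plain tanh RNN stands in here for the LSTM cell, and all sizes and weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
V, D, T, H = 5000, 50, 40, 32          # vocab, embed dim, seq len, hidden (assumed)

emb = rng.normal(size=(V, D))
tokens = rng.integers(0, V, size=T)
x = emb[tokens]                        # (T, D) embedded tweet

# 1-D convolution over time with kernel width 3 (local feature extractor)
Wc = rng.normal(scale=0.1, size=(3 * D, H))
feat = np.stack([np.tanh(x[t:t + 3].ravel() @ Wc) for t in range(T - 2)])

# bidirectional recurrence (plain tanh RNN standing in for the LSTM)
Wx = rng.normal(scale=0.1, size=(H, H))
Wh = rng.normal(scale=0.1, size=(H, H))

def run(seq):
    h = np.zeros(H)
    for f in seq:
        h = np.tanh(f @ Wx + h @ Wh)
    return h

# forward and backward passes concatenated into one sentence vector
sentence_vec = np.concatenate([run(feat), run(feat[::-1])])
```

A softmax layer over `sentence_vec` would then produce the sentiment classes; the point of the hybrid is that convolution captures local phrases while the bidirectional recurrence captures long-range order.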


Author(s):  
Shaoqiang Wang ◽  
Shudong Wang ◽  
Song Zhang ◽  
Yifan Wang

Abstract The aim of this work is to automatically detect dynamic EEG signals and thereby reduce the time cost of epilepsy diagnosis. In electroencephalogram (EEG) signal recognition for epilepsy, traditional machine learning and statistical methods require manual feature-labeling engineering to show excellent results on a single dataset, and the manually selected features may carry bias and cannot guarantee validity and extensibility on real-world data. In practical applications, deep learning methods can free people from feature engineering to a certain extent: as long as data quality and quantity keep expanding, the model can learn automatically and improve. In addition, deep learning can extract many features that are difficult for humans to perceive, making the algorithm more robust. Based on the design idea of the ResNeXt deep neural network, this paper designs a Time-ResNeXt network structure suitable for time-series EEG epilepsy detection. The accuracy of Time-ResNeXt in EEG epilepsy detection reaches 91.50%. The Time-ResNeXt network structure produces highly competitive performance on the benchmark Bern-Barcelona dataset and has great potential for improving clinical practice.
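ResNeXt's design idea is "split-transform-merge": a residual block runs several parallel low-dimensional paths (the cardinality) and sums them. A shape-level sketch of one such block on a 1-D EEG feature vector follows; the cardinality, path width, and random weights are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

def resnext_block_1d(x, cardinality=8, width=4):
    """Split-transform-merge residual block on a 1-D feature vector:
    several parallel low-dimensional paths are summed, then added
    back to the input (the residual connection)."""
    C = x.shape[0]
    out = np.zeros_like(x)
    for _ in range(cardinality):                 # parallel low-dim paths
        W_in = rng.normal(scale=0.1, size=(C, width))
        W_out = rng.normal(scale=0.1, size=(width, C))
        out += np.maximum(x @ W_in, 0) @ W_out   # transform each branch
    return np.maximum(x + out, 0)                # merge + residual + ReLU

h = resnext_block_1d(rng.normal(size=64))
```

Stacking such blocks over the time axis is, in spirit, what a "Time-ResNeXt" would do; increasing cardinality adds capacity without widening any single path.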


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Sunil Kumar Prabhakar ◽  
Dong-Ok Won

To unlock the information present in clinical descriptions, automatic medical text classification is highly useful in the arena of natural language processing (NLP). Machine learning techniques are quite effective for medical text classification tasks; however, they require extensive human effort to create labeled training data. For clinical and translational research, a huge quantity of detailed patient information, such as disease status, lab tests, medication history, side effects, and treatment outcomes, has been collected in electronic format and serves as a valuable data source for further analysis. Processing this wealth of medical text efficiently is therefore a major challenge. In this work, a medical text classification paradigm using two novel deep learning architectures is proposed to mitigate the human effort. In the first approach, a quad-channel hybrid long short-term memory (QC-LSTM) deep learning model is implemented utilizing four channels; in the second, a hybrid bidirectional gated recurrent unit (BiGRU) deep learning model with multihead attention is developed and implemented. The proposed methodology is validated on two medical text datasets, and a comprehensive analysis is conducted. The best classification accuracy, 96.72%, is obtained with the proposed QC-LSTM deep learning model, while the proposed hybrid BiGRU deep learning model achieves 95.76%.
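The multihead-attention component of the BiGRU model can be sketched in isolation: several scaled dot-product attention heads run in parallel over the recurrent outputs and their results are concatenated. The head count, dimensions, and random projection weights below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def multihead_attention(x, n_heads=4):
    """Scaled dot-product self-attention over a sequence x of shape
    (T, d); random matrices stand in for the learned projections."""
    T, d = x.shape
    dh = d // n_heads
    heads = []
    for _ in range(n_heads):
        Wq, Wk, Wv = (rng.normal(scale=d ** -0.5, size=(d, dh)) for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = q @ k.T / np.sqrt(dh)           # (T, T) attention logits
        a = np.exp(scores - scores.max(axis=1, keepdims=True))
        a /= a.sum(axis=1, keepdims=True)        # row-wise softmax
        heads.append(a @ v)                      # weighted value summary
    return np.concatenate(heads, axis=1)         # (T, d) context vectors

ctx = multihead_attention(rng.normal(size=(10, 32)))
```

In the full model, `x` would be the BiGRU's hidden states over a clinical note, and the attended context feeds the final classifier.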


2021 ◽  
Author(s):  
Ana Siravenha ◽  
Walisson Gomes ◽  
Renan Tourinho ◽  
Sergio Viademonte ◽  
Bruno Gomes

Classification of electroencephalography (EEG) signals is a complex task: EEG is a non-stationary time process with a low signal-to-noise ratio. Among the many methods used for EEG classification, those based on deep learning (DL) have been relatively successful in providing high classification accuracies. In the present study we aimed to classify resting-state EEGs measured from workers of a mining complex. Just after the EEG was collected, the workers underwent training in a 4D virtual-reality simulator that emulates iron-ore excavation; parameters related to their performance were analyzed by the technical staff, who classified the workers into four groups based on their productivity. Two convolutional neural networks (ConvNets) were then used to classify the workers' EEGs based on the same productivity labels provided by the technical staff. The neural data was used in three configurations in order to evaluate the amount of data required for high-accuracy classification. In isolation, channel T5 achieved 83% accuracy, the subtraction of channels P3 and Pz achieved 99%, and using all channels simultaneously reached 99.40%. This study adds to the recent literature showing that even simple DL architectures can handle complex time series such as the EEG. In addition, it pinpoints an application in industry with vast possibilities for expansion.
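The three input configurations compared (one channel, a channel difference, all channels) are simple montage operations; a sketch follows, where the channel list and segment length are illustrative assumptions:

```python
import numpy as np

CHANNELS = ["Fp1", "Fp2", "T5", "P3", "Pz", "O1"]   # assumed montage subset

def channel_input(eeg, mode):
    """Build the three input configurations compared in the study:
    one channel (T5), a channel difference (P3 - Pz), or all channels.
    eeg has shape (n_channels, n_samples)."""
    if mode == "T5":
        return eeg[CHANNELS.index("T5")][None, :]
    if mode == "P3-Pz":
        diff = eeg[CHANNELS.index("P3")] - eeg[CHANNELS.index("Pz")]
        return diff[None, :]
    return eeg                                       # all channels

x = channel_input(np.zeros((6, 256)), "P3-Pz")
```

Comparing these configurations is what lets the study quantify how much spatial information the ConvNets actually need.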


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-15 ◽  
Author(s):  
Hao Chao ◽  
Liang Dong ◽  
Yongli Liu ◽  
Baoyun Lu

Emotion recognition based on multichannel electroencephalogram (EEG) signals is a key research area in the field of affective computing. Traditional methods extract EEG features from each channel based on extensive domain knowledge and ignore the spatial characteristics and global synchronization information across all channels. This paper proposes a global feature extraction method that encapsulates the multichannel EEG signals into gray images. The maximal information coefficient (MIC) for all channels was first measured. Subsequently, an MIC matrix was constructed according to the electrode arrangement rules and represented by an MIC gray image. Finally, a deep learning model designed with two principal component analysis convolutional layers and a nonlinear transformation operation extracted the spatial characteristics and global interchannel synchronization features from the constructed feature images, which were then input to support vector machines to perform the emotion recognition tasks. Experiments were conducted on the benchmark dataset for emotion analysis using EEG, physiological, and video signals. The experimental results demonstrated that the global synchronization features and spatial characteristics are beneficial for recognizing emotions and the proposed deep learning model effectively mines and utilizes the two salient features.
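The first stage of the pipeline (a channel-by-channel coupling matrix rendered as a gray image) can be sketched as follows; the absolute Pearson correlation stands in here for the maximal information coefficient, since MIC itself needs a dedicated estimator, and the channel count is an illustrative assumption:

```python
import numpy as np

def coupling_image(eeg):
    """Channel-by-channel synchronization matrix rendered as an 8-bit
    gray image; |Pearson r| stands in for the paper's maximal
    information coefficient (MIC). eeg: (channels, samples)."""
    m = np.abs(np.corrcoef(eeg))                        # (C, C) coupling
    img = 255 * (m - m.min()) / (m.max() - m.min())     # rescale to [0, 255]
    return img.astype(np.uint8)

rng = np.random.default_rng(4)
img = coupling_image(rng.normal(size=(32, 512)))        # 32-channel segment
```

In the actual method, the rows and columns are ordered by the electrode arrangement rules before the image is handed to the PCA-convolutional feature extractor and the SVM.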


2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Jian Kui Feng ◽  
Jing Jin ◽  
Ian Daly ◽  
Jiale Zhou ◽  
Yugang Niu ◽  
...  

Background. Due to the redundant information contained in multichannel electroencephalogram (EEG) signals, the classification accuracy of brain-computer interface (BCI) systems may deteriorate substantially. Channel selection methods can help to remove task-independent EEG signals and hence improve the performance of BCI systems. However, the brain areas associated with motor imagery are not exactly the same across frequency bands, so traditional channel selection methods may fail to extract effective EEG features. New Method. To address this problem, this paper proposes a novel method based on common spatial pattern (CSP)-rank channel selection for multifrequency-band EEG (CSP-R-MF). It combines multiband signal decomposition filtering with CSP-rank channel selection to select significant channels, and then uses linear discriminant analysis (LDA) to calculate the classification accuracy. Results. The results showed that our proposed CSP-R-MF method significantly improves the average classification accuracy compared with the CSP-rank channel selection method alone.
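The CSP step at the heart of the method finds spatial filters that maximize the variance ratio between two classes of trials. A numpy-only whitening-based sketch follows; the trial counts, channel count, and number of retained filter pairs are illustrative assumptions:

```python
import numpy as np

def csp_filters(X1, X2, n_pairs=2):
    """Common spatial patterns for two classes of trials
    (each X: trials x channels x samples), via whitening plus an
    eigendecomposition of the whitened class-1 covariance."""
    def avg_cov(X):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)
    d, U = np.linalg.eigh(C1 + C2)
    P = U @ np.diag(d ** -0.5) @ U.T          # whitening transform
    _, V = np.linalg.eigh(P @ C1 @ P.T)       # eigenvalues sorted ascending
    W = V.T @ P                               # spatial filters as rows
    idx = np.r_[:n_pairs, -n_pairs:0]         # most discriminative pairs
    return W[idx]

rng = np.random.default_rng(5)
W = csp_filters(rng.normal(size=(10, 8, 100)),
                rng.normal(size=(10, 8, 100)))
```

CSP-rank then scores channels by the magnitude of their weights in these filters; in CSP-R-MF this ranking is repeated per frequency band after the multiband decomposition.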


2020 ◽  
Vol 10 (9) ◽  
pp. 3036 ◽  
Author(s):  
Hongquan Qu ◽  
Yiping Shan ◽  
Yuzhe Liu ◽  
Liping Pang ◽  
Zhanli Fan ◽  
...  

Excessive mental workload reduces work efficiency, while low mental workload wastes human resources, so it is important to study the mental workload status of operators. Existing mental workload classification methods based on electroencephalogram (EEG) features often have low classification accuracy because the channel signals recorded by the EEG electrodes are a mixture of brain signals, similar to multi-source mixed speech signals, and it is unwise to analyze the mixed signals directly to distinguish EEG features. In this study, we propose a mental workload classification method based on the features of EEG independent components (ICs), which borrows the blind source separation (BSS) idea from mixed speech signal processing. The presented method uses independent component analysis (ICA) to obtain purer signals, i.e., the ICs, and extracts their energy features directly for mental workload classification. Compared with the existing solution, the proposed method obtains better classification results and might provide a way to realize fast, accurate, and automatic mental workload classification.
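The ICA-then-energy pipeline can be sketched with a minimal symmetric FastICA (tanh nonlinearity); the channel count, sample count, and iteration budget are illustrative assumptions, and a production system would use a library implementation instead:

```python
import numpy as np

def ica_energy_features(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA: unmix the channel recordings X
    (channels x samples) into independent components, then take
    each IC's mean energy as a workload feature."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Xw = (E @ np.diag(d ** -0.5) @ E.T) @ X            # whiten the data
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[0], X.shape[0]))
    for _ in range(n_iter):
        g = np.tanh(W @ Xw)                            # contrast nonlinearity
        W = g @ Xw.T / Xw.shape[1] - np.diag((1 - g ** 2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W)
        W = u @ vt                                     # symmetric decorrelation
    S = W @ Xw                                         # independent components
    return (S ** 2).mean(axis=1)                       # per-IC energy features

rng = np.random.default_rng(6)
feats = ica_energy_features(rng.normal(size=(4, 1000)))
```

The resulting per-IC energies would then be fed to a classifier, replacing the mixed-channel features of the existing approach.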


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 174
Author(s):  
Minkoo Kang ◽  
Gyeongsik Yang ◽  
Yeonho Yoo ◽  
Chuck Yoo

This paper presents “Proactive Congestion Notification” (PCN), a congestion-avoidance technique for distributed deep learning (DDL). DDL is widely used to scale out and accelerate deep neural network training. In DDL, each worker trains a copy of the deep learning model with different training inputs and synchronizes the model gradients at the end of each iteration. However, it is well known that the network communication for synchronizing model parameters is the main bottleneck in DDL. Our key observation is that the DDL architecture makes each worker generate burst traffic every iteration, which causes network congestion and in turn degrades the throughput of DDL traffic. Based on this observation, the key idea behind PCN is to prevent potential congestion by proactively regulating the switch queue length before DDL burst traffic arrives at the switch, which prepares the switches for handling incoming DDL bursts. In our evaluation, PCN improves the throughput of DDL traffic by 72% on average.
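The core policy (make room in the switch queue before a predictable DDL burst lands, rather than reacting after congestion) can be sketched in a few lines; the buffer capacity and pre-burst target below are illustrative assumptions, not values from the paper:

```python
# Hedged sketch of the PCN idea: regulate the switch queue before a
# known DDL burst arrives, instead of reacting after congestion.

QUEUE_CAP = 100          # switch buffer size in packets (assumed)
TARGET = 20              # pre-burst queue-length target (assumed)

def proactive_regulate(queue_len, burst_expected):
    """Return how many packets' worth of room the switch should free
    (by draining or pacing senders) before the burst lands."""
    if burst_expected and queue_len > TARGET:
        return queue_len - TARGET     # make room proactively
    return 0                          # queue already short, or no burst due

drain = proactive_regulate(queue_len=65, burst_expected=True)
```

Because DDL traffic is bursty on a per-iteration schedule, the switch can anticipate when bursts arrive, which is what makes a proactive policy feasible here.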

