Classifier comparison using EEG features for emotion recognition process

Author(s):  
Laura Alejandra Martinez-Tejada ◽  
Natsue Yoshimura ◽  
Yasuharu Koike
2014 ◽  
Vol 144 ◽  
pp. 560-568 ◽  
Author(s):  
Giyoung Lee ◽  
Mingu Kwon ◽  
Swathi Kavuri Sri ◽  
Minho Lee

2017 ◽  
Vol 21 (6) ◽  
pp. 1003-1013 ◽  
Author(s):  
M. L. R. Menezes ◽  
A. Samara ◽  
L. Galway ◽  
A. Sant’Anna ◽  
A. Verikas ◽  
...  

2021 ◽  
Author(s):  
Zhen Liang ◽  
Xihao Zhang ◽  
Rushuang Zhou ◽  
Li Zhang ◽  
Linling Li ◽  
...  

How to effectively and efficiently extract valid and reliable features from high-dimensional electroencephalography (EEG) signals, and in particular how to fuse spatial and temporal dynamic brain information into a better feature representation, is a critical issue in brain data analysis. Most current EEG studies work in a task-driven manner and explore valid EEG features with a supervised model, which is limited to a great extent by the given labels. In this paper, we propose a practical hybrid unsupervised deep convolutional recurrent generative adversarial network for EEG feature characterization and fusion, termed EEGFuseNet. EEGFuseNet is trained in an unsupervised manner, and deep EEG features covering both spatial and temporal dynamics are characterized automatically. Compared to existing features, the characterized deep EEG features can be considered more generic and independent of any specific EEG task. The performance of the deep, low-dimensional features extracted by EEGFuseNet is carefully evaluated in an unsupervised emotion recognition application based on three public emotion databases. The results demonstrate that the proposed EEGFuseNet is a robust and reliable model that is easy to train and performs efficiently in the representation and fusion of dynamic EEG features. In particular, EEGFuseNet is established as an optimal unsupervised fusion model with promising cross-subject emotion recognition performance. This shows that EEGFuseNet is capable of characterizing and fusing deep features that reflect the cortical dynamics associated with changes in emotion state, and also demonstrates the possibility of realizing EEG-based cross-subject emotion recognition in a purely unsupervised manner.
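For orientation, the sketch below shows one way a hybrid CNN+GRU encoder-decoder can be paired with a discriminator to learn fused, low-dimensional EEG features without labels, in the spirit of the unsupervised convolutional recurrent GAN described above. The class names, layer sizes, kernel shapes, and the 32-channel/128-sample segment format are illustrative assumptions, not the published EEGFuseNet architecture.

```python
# Minimal sketch: CNN + GRU encoder-decoder with a discriminator, trained
# without labels so the encoder output can serve as a fused EEG feature.
# All sizes below are illustrative, not the published EEGFuseNet design.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_channels=32, feat_dim=64):
        super().__init__()
        # spatial filter collapses the channel axis, temporal filter keeps time
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=(1, 9), padding=(0, 4)),
            nn.ReLU(),
        )
        self.gru = nn.GRU(32, feat_dim, batch_first=True)  # temporal dynamics

    def forward(self, x):                 # x: (batch, 1, channels, time)
        h = self.conv(x).squeeze(2)       # (batch, 32, time)
        h = h.permute(0, 2, 1)            # (batch, time, 32)
        _, z = self.gru(h)                # final hidden state
        return z.squeeze(0)               # fused low-dimensional feature

class Decoder(nn.Module):
    def __init__(self, n_channels=32, n_times=128, feat_dim=64):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_channels * n_times)
        self.shape = (n_channels, n_times)

    def forward(self, z):
        return self.fc(z).view(-1, 1, *self.shape)  # reconstructed segment

class Discriminator(nn.Module):
    def __init__(self, n_channels=32, n_times=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(n_channels * n_times, 128),
            nn.LeakyReLU(0.2), nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)                # probability the segment is real

# usage: training would alternate generator/reconstruction and discriminator
# updates; afterwards the encoder output is used as the EEG feature vector.
enc, dec, disc = Encoder(), Decoder(), Discriminator()
x = torch.randn(8, 1, 32, 128)            # 8 segments, 32 channels, 128 samples
features = enc(x)                          # (8, 64) fused features
recon = dec(features)                      # (8, 1, 32, 128)
realness = disc(recon)                     # adversarial training signal
```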


Author(s):  
Atiqul Islam Chowdhury ◽  
Mohammad Munem Shahriar ◽  
Ashraful Islam ◽  
Eshtiak Ahmed ◽  
Asif Karim ◽  
...  

2021 ◽  
Vol 15 ◽  
Author(s):  
Pengwei Zhang ◽  
Chongdan Min ◽  
Kangjia Zhang ◽  
Wen Xue ◽  
Jingxia Chen

Inspired by neuroscience findings that the human brain produces dynamic responses to different emotions, a new electroencephalogram (EEG)-based human emotion classification model, named R2G-ST-BiLSTM, was proposed; it uses a hierarchical neural network to learn more discriminative spatiotemporal EEG features from local to global brain regions. First, a bidirectional long short-term memory (BiLSTM) network is used to capture the internal spatial relationships of EEG signals across channels within and between brain regions. Considering the different effects of various cerebral regions on emotion, a regional attention mechanism is introduced in the R2G-ST-BiLSTM model to determine the weight of each brain region, enhancing or weakening its contribution to emotion recognition. A hierarchical BiLSTM network is then used to learn spatiotemporal EEG features from regional to global brain areas, which are fed into an emotion classifier. In particular, we introduce a domain discriminator that works together with the classifier to reduce the domain shift between training and testing data. Finally, we conduct experiments on EEG data from the DEAP and SEED datasets to test and compare the performance of the models. The results show that our method achieves higher accuracy than state-of-the-art methods and offers a good way to develop affective brain–computer interface applications.
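The sketch below illustrates the regional-to-global idea: a local BiLSTM summarizes channels within each brain region, a regional attention layer weights the region summaries, a global BiLSTM aggregates them, and a gradient-reversal domain head discourages subject- or session-specific features. The region grouping, layer sizes, and gradient-reversal coefficient are illustrative assumptions, not the published R2G-ST-BiLSTM configuration.

```python
# Minimal sketch of a regional-to-global hierarchical BiLSTM with regional
# attention and a domain discriminator head. Illustrative only.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None     # reverse gradients for domain adaptation

class R2GModel(nn.Module):
    def __init__(self, regions, n_times=128, hid=32, n_emotions=2, n_domains=2):
        super().__init__()
        self.regions = regions                       # list of channel-index lists
        # local BiLSTM over the channels within each brain region
        self.local = nn.LSTM(n_times, hid, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hid, 1)            # regional attention scores
        # global BiLSTM over the sequence of region summaries
        self.global_ = nn.LSTM(2 * hid, hid, batch_first=True, bidirectional=True)
        self.emotion_head = nn.Linear(2 * hid, n_emotions)
        self.domain_head = nn.Linear(2 * hid, n_domains)

    def forward(self, x, lam=1.0):                   # x: (batch, channels, time)
        region_feats = []
        for idx in self.regions:
            out, _ = self.local(x[:, idx, :])        # (batch, len(idx), 2*hid)
            region_feats.append(out.mean(dim=1))     # summarize each region
        r = torch.stack(region_feats, dim=1)         # (batch, n_regions, 2*hid)
        w = torch.softmax(self.attn(r), dim=1)       # weight per region
        out, _ = self.global_(r * w)                 # region-to-global features
        g = out[:, -1, :]                            # global representation
        emotion = self.emotion_head(g)
        domain = self.domain_head(GradReverse.apply(g, lam))
        return emotion, domain

# usage: four hypothetical regions of an 8-channel montage
model = R2GModel(regions=[[0, 1], [2, 3], [4, 5], [6, 7]])
emotion_logits, domain_logits = model(torch.randn(4, 8, 128))
```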


2021 ◽  
Author(s):  
Zeyu Wang ◽  
Ziqun Zhou ◽  
Haibin Shen ◽  
Qi Xu ◽  
Kejie Huang

Electroencephalography (EEG) emotion recognition, an important task in Human-Computer Interaction (HCI), has made great strides with the help of deep learning algorithms. Although applying attention mechanisms to conventional models has improved performance, most previous research rarely considers multiple EEG feature dimensions jointly and lacks a compact model with unified attention modules. This study proposes the Joint-Dimension-Aware Transformer (JDAT), a robust model based on a squeezed Multi-head Self-Attention (MSA) mechanism for EEG emotion recognition. The adaptive squeezed MSA applied to multidimensional features enables JDAT to attend to diverse EEG information, including space, frequency, and time. Under this joint attention, JDAT is sensitive to complicated brain activities such as signal activation, phase-intensity coupling, and resonance. Moreover, its gradually compressed structure contains no recurrent or parallel modules, greatly reducing memory use and complexity and accelerating inference. The proposed JDAT is evaluated on the DEAP, DREAMER, and SEED datasets, and experimental results show that it outperforms state-of-the-art methods while offering greater flexibility.
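To make the joint-dimension attention idea concrete, the sketch below applies standard multi-head self-attention separately along the spatial, frequency, and temporal axes of an EEG feature tensor before pooling and classifying. The axis ordering, embedding size, and pooling strategy are illustrative assumptions, not the published JDAT design.

```python
# Minimal sketch: self-attention along the space, frequency, and time axes of
# an EEG feature tensor, then pooling and classification. Illustrative only.
import torch
import torch.nn as nn

class AxisAttention(nn.Module):
    """Self-attention along one axis of a (batch, space, freq, time, dim) tensor."""
    def __init__(self, dim=32, heads=4):
        super().__init__()
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, axis):
        # move the attended axis next to the feature dim, flatten the rest
        x = x.movedim(axis, -2)
        shape = x.shape
        seq = x.reshape(-1, shape[-2], shape[-1])
        out, _ = self.mha(seq, seq, seq)
        out = self.norm(out + seq)                    # residual + norm
        return out.reshape(shape).movedim(-2, axis)

class JointDimensionAttention(nn.Module):
    def __init__(self, dim=32, n_classes=2):
        super().__init__()
        self.space = AxisAttention(dim)
        self.freq = AxisAttention(dim)
        self.time = AxisAttention(dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                             # x: (batch, S, F, T, dim)
        x = self.space(x, axis=1)                     # attend over electrodes
        x = self.freq(x, axis=2)                      # attend over frequency bands
        x = self.time(x, axis=3)                      # attend over time windows
        return self.head(x.mean(dim=(1, 2, 3)))       # pool and classify

# usage: 8 samples, 32 electrodes, 5 frequency bands, 10 time windows, dim 32
model = JointDimensionAttention()
logits = model(torch.randn(8, 32, 5, 10, 32))
```

Attending along one axis at a time keeps the attention matrices small compared with flattening all dimensions into a single sequence, which is one reasonable reading of the memory and complexity reduction emphasized in the abstract.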


2018 ◽  
Vol 12 ◽  
Author(s):  
Xiang Li ◽  
Dawei Song ◽  
Peng Zhang ◽  
Yazhou Zhang ◽  
Yuexian Hou ◽  
...  
