Emotion recognition based on multi-channel EEG signals

2021 ◽  
Vol 2078 (1) ◽  
pp. 012028
Author(s):  
Huiping Shi ◽  
Hong Xie ◽  
Mengran Wu

Emotion recognition is a key technology of human-computer emotional interaction, which plays an important role in various fields and has attracted the attention of many researchers. However, the interactivity and correlation between multi-channel EEG signals have not received much attention. For this reason, an EEG emotion recognition method based on 2DCNN-BiGRU and an attention mechanism is tentatively proposed. This method first arranges the channels into a two-dimensional matrix according to electrode position, then takes the pre-processed two-dimensional feature matrix as input, extracts spatial features with a two-dimensional convolutional neural network (2DCNN) and time-domain features with a bidirectional gated recurrent unit (BiGRU) equipped with an attention mechanism layer, and finally classifies with a softmax function. The experimental results show that the average classification accuracies of this model are 93.66% and 94.32% in the valence and arousal dimensions, respectively.
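The first step above, arranging per-channel features into a matrix by electrode position, can be sketched as follows. The electrode coordinates here are a small illustrative subset of a 10-20-style layout on a 9x9 grid, not the paper's exact mapping:

```python
import numpy as np

# Illustrative (row, col) grid coordinates for a subset of 10-20 electrodes;
# the paper's exact layout may differ.
ELECTRODE_POS = {
    "Fp1": (0, 3), "Fp2": (0, 5),
    "F3": (2, 2), "Fz": (2, 4), "F4": (2, 6),
    "C3": (4, 2), "Cz": (4, 4), "C4": (4, 6),
    "P3": (6, 2), "Pz": (6, 4), "P4": (6, 6),
    "O1": (8, 3), "O2": (8, 5),
}

def to_2d_frame(channel_values, grid=(9, 9)):
    """Place each channel's feature value at its electrode coordinate;
    positions without an electrode stay zero."""
    frame = np.zeros(grid)
    for name, value in channel_values.items():
        r, c = ELECTRODE_POS[name]
        frame[r, c] = value
    return frame

# Toy per-channel feature values 1..13
frame = to_2d_frame({name: i + 1.0 for i, name in enumerate(ELECTRODE_POS)})
```

The resulting frame preserves which channels are physically adjacent, which is what lets the 2DCNN learn spatial features.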

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yifeng Zhao ◽  
Deyun Chen

Due to the complexity of human emotions, different emotions share some similar features. Existing emotion recognition methods suffer from difficult feature extraction and low accuracy, so a multimodal expression-EEG emotion recognition method based on a bidirectional LSTM and an attention mechanism is proposed. First, facial expression features are extracted with a bilinear convolution network (BCN), EEG signals are transformed into three groups of frequency-band image sequences, and the BCN fuses the image features to obtain multimodal expression-EEG emotion features. Then, an LSTM with an attention mechanism extracts the important data during temporal modeling, which effectively avoids the randomness or blindness of sampling methods. Finally, a feature fusion network with a three-layer bidirectional LSTM structure fuses the expression and EEG features, which helps improve the accuracy of emotion recognition. The proposed method is tested on the MAHNOB-HCI and DEAP datasets using the MATLAB simulation platform. Experimental results show that the attention mechanism enhances the visual effect of the image and that, compared with other methods, the proposed method extracts emotion features from expressions and EEG signals more effectively and achieves higher emotion recognition accuracy.
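The attention step described above, picking out important timesteps during temporal modeling, amounts to a softmax-weighted pooling over the LSTM's hidden states. A minimal numpy sketch, with a hypothetical learned query vector `w` standing in for the trained attention parameters:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

def attention_pool(hidden, w):
    """hidden: (T, d) per-timestep LSTM features; w: (d,) attention query.
    Returns the attention-weighted context vector and the weights."""
    scores = hidden @ w            # (T,) relevance score per timestep
    alpha = softmax(scores)        # normalised attention weights
    return alpha @ hidden, alpha   # (d,) context, (T,) weights

hidden = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
context, alpha = attention_pool(hidden, np.array([1.0, 0.0]))
```

Timesteps whose features align with the query receive higher weight, so the pooled context is dominated by the informative frames rather than a random sample of them.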


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4543 ◽  
Author(s):  
Heekyung Yang ◽  
Jongdae Han ◽  
Kyungha Min

Visual contents such as movies and animation evoke various human emotions. We examine the argument that the emotion evoked by visual contents may vary according to the contrast of the scenes they contain. To test this argument, we sample scenes for three emotion categories (positive, neutral and negative) from visual contents, manipulate the contrast of the scenes, and measure the resulting change in valence and arousal of human participants who watch the contents, using a deep emotion recognition module based on electroencephalography (EEG) signals. We conclude that enhancing contrast increases valence, while reducing contrast decreases it. Meanwhile, contrast control affects arousal only on a very minute scale.
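The contrast manipulation described above can be sketched as a simple linear rescaling of pixel deviations around the frame mean; the paper does not specify its exact contrast operator, so this is one common formulation:

```python
import numpy as np

def adjust_contrast(frame, factor):
    """Scale pixel deviations from the frame mean: factor > 1 enhances
    contrast, factor < 1 reduces it. Output stays clipped to [0, 1]."""
    mean = frame.mean()
    return np.clip(mean + factor * (frame - mean), 0.0, 1.0)

frame = np.array([[0.3, 0.5], [0.5, 0.7]])  # toy grayscale frame
high = adjust_contrast(frame, 1.5)          # enhanced contrast
low = adjust_contrast(frame, 0.5)           # reduced contrast
```

Because the mean is preserved, the manipulation changes the spread of intensities (the contrast) without changing overall brightness, isolating the variable the study measures against valence and arousal.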


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1870
Author(s):  
Tianjiao Kong ◽  
Jie Shao ◽  
Jiuyuan Hu ◽  
Xin Yang ◽  
Shiyiling Yang ◽  
...  

Emotion recognition, as a challenging and active research area, has received considerable attention in recent years. In this study, an attempt was made to extract complex network features from electroencephalogram (EEG) signals for emotion recognition. We propose a novel method of constructing forward weighted horizontal visibility graphs (FWHVG) and backward weighted horizontal visibility graphs (BWHVG) based on angle measurement. The two types of complex networks are used to extract network features, and the two feature matrices are then fused into a single feature matrix to classify EEG signals. The average emotion recognition accuracies of the proposed method based on complex network features are 97.53% and 97.75% in the valence and arousal dimensions, respectively, rising to 98.12% and 98.06% when combined with time-domain features.
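A horizontal visibility graph connects two samples when every sample between them is lower than both. The sketch below builds a forward weighted variant with an angle-based edge weight; the arctangent of the slope is one plausible angle measurement, not necessarily the paper's exact definition, and the backward graph (BWHVG) would be the same construction run over the time-reversed series:

```python
import numpy as np

def forward_whvg(x):
    """Weighted adjacency matrix of a forward horizontal visibility graph.
    Samples i < j are linked if all samples strictly between them are lower
    than both; the edge weight is the view angle arctan((x[j]-x[i])/(j-i))."""
    n = len(x)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                w[i, j] = np.arctan((x[j] - x[i]) / (j - i))
    return w

series = np.array([1.0, 3.0, 2.0, 4.0])
w = forward_whvg(series)
```

Network statistics of this adjacency matrix (degree, strength, clustering) then serve as the complex-network features fed to the classifier.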


2021 ◽  
Vol 8 (8) ◽  
pp. 201976
Author(s):  
Zhihang Tian ◽  
Dongmin Huang ◽  
Sijin Zhou ◽  
Zhidan Zhao ◽  
Dazhi Jiang

In recent years, more and more researchers have focused on emotion recognition methods based on electroencephalogram (EEG) signals. However, most studies only consider the spatio-temporal characteristics of EEG and model that feature alone, without considering personality factors, let alone the potential correlation between different subjects. Given the particularity of emotions, different individuals may have different subjective responses to the same physical stimulus, so emotion recognition methods based on EEG signals should tend to be personalized. This paper models personalized EEG emotion recognition at the macro and micro levels. At the macro level, we use personality characteristics to group individuals from the perspective of 'birds of a feather flock together'. At the micro level, we employ deep learning models to extract the spatio-temporal feature information of EEG. To evaluate the effectiveness of our method, we conduct an EEG emotion recognition experiment on the ASCERTAIN dataset. Our experimental results demonstrate that the recognition accuracy of our proposed method is 72.4% and 75.9% on valence and arousal, respectively, which is 10.2% and 9.1% higher than without personalization.
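The macro-level grouping, clustering subjects whose personality traits are similar, could be sketched with a plain k-means over per-subject trait vectors. The paper does not state its grouping algorithm, so this is an assumed illustration with toy 2-D trait vectors and a deterministic first-k initialisation:

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Plain k-means; centroids start at the first k points for determinism."""
    centroids = points[:k].copy()
    for _ in range(iters):
        # squared distance of every point to every centroid
        dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for c in range(k):
            members = points[labels == c]
            if len(members):
                centroids[c] = members.mean(0)
    return labels, centroids

# Toy personality-trait vectors for four subjects: two clear groups
traits = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
labels, _ = kmeans(traits, 2)
```

Each resulting group would then get its own micro-level spatio-temporal model, so subjects with similar personalities share a recognizer.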


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shruti Garg ◽  
Rahul Kumar Patro ◽  
Soumyajit Behera ◽  
Neha Prerna Tigga ◽  
Ranjita Pandey

Purpose: The purpose of this study is to propose an alternative, efficient 3D emotion recognition model for variable-length electroencephalogram (EEG) data.

Design/methodology/approach: The classical AMIGOS data set, which comprises multimodal records of varying lengths on mood, personality and other physiological aspects of emotional response, is used for empirical assessment of the proposed overlapping sliding window (OSW) modelling framework. Two features are extracted using Fourier and wavelet transforms: normalised band power (NBP) and normalised wavelet energy (NWE), respectively. The arousal, valence and dominance (AVD) emotions are predicted using one-dimensional (1D) and two-dimensional (2D) convolutional neural networks (CNN) for both single and combined features.

Findings: The 2D CNN outcomes on EEG signals of the AMIGOS data set are observed to yield the highest accuracy, that is, 96.63%, 95.87% and 96.30% for AVD, respectively, which is at least 6% higher than that of the other available competitive approaches.

Originality/value: The present work focuses on the less explored, complex AMIGOS (2018) data set, which is imbalanced and of variable length; EEG emotion recognition work is widely available on simpler data sets. The following challenges of the AMIGOS data set are addressed in the present work: handling tensor-form data; proposing an efficient method for generating sufficient equal-length samples from imbalanced, variable-length data; selecting a suitable machine learning/deep learning model; and improving the accuracy of the applied model.
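The overlapping sliding window idea, turning variable-length trials into a sufficient number of equal-length samples, can be sketched as follows; window length and stride here are arbitrary toy values, not the study's chosen parameters:

```python
import numpy as np

def overlapping_windows(signal, win_len, stride):
    """Segment a (channels, T) recording into overlapping fixed-length
    windows. A stride smaller than win_len makes windows overlap, which
    multiplies the number of training samples from one trial."""
    _, T = signal.shape
    starts = range(0, T - win_len + 1, stride)
    return np.stack([signal[:, s:s + win_len] for s in starts])

eeg = np.arange(20.0).reshape(2, 10)  # toy 2-channel, 10-sample trial
segments = overlapping_windows(eeg, win_len=4, stride=2)
```

Trials of different lengths simply yield different numbers of windows, so every sample the CNN sees has the same shape regardless of the original trial length.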


Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3491 ◽  
Author(s):  
Jungchan Cho ◽  
Hyoseok Hwang

Emotion recognition plays an important role in the field of human–computer interaction (HCI). An electroencephalogram (EEG) is widely used to estimate human emotion owing to its convenience and mobility. Deep neural network (DNN) approaches using an EEG for emotion recognition have recently shown remarkable improvement in terms of their recognition accuracy. However, most studies in this field still require a separate process for extracting handcrafted features despite the ability of a DNN to extract meaningful features by itself. In this paper, we propose a novel method for recognizing an emotion based on the use of three-dimensional convolutional neural networks (3D CNNs), with an efficient spatio-temporal representation of EEG signals. First, we spatially reconstruct raw EEG signals, represented as stacks of one-dimensional (1D) time series data, into two-dimensional (2D) EEG frames according to the original electrode positions. We then represent a 3D EEG stream by concatenating the 2D EEG frames along the time axis. These 3D reconstructions of the raw EEG signals can be efficiently combined with 3D CNNs, which have shown remarkable feature representation for spatio-temporal data. Herein, we demonstrate the accuracy of the emotional classification of the proposed method through extensive experiments on the DEAP (Dataset for Emotion Analysis using EEG, Physiological, and video signals) dataset. Experimental results show that the proposed method achieves classification accuracies of 99.11% and 99.74% in the binary classification of valence and arousal, respectively, and 99.73% in four-class classification. We investigate the spatio-temporal effectiveness of the proposed method by comparing it to several types of input methods with 2D/3D CNNs, and experimentally verify the best-performing shapes of both the kernel and the input data. We verify that an efficient representation of an EEG and a network that fully takes advantage of the data characteristics can outperform methods that apply handcrafted features.
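The reconstruction pipeline above, 1D channel stacks to 2D frames to a 3D stream, can be sketched directly. The channel-to-coordinate map below is a toy 4-channel layout on a 3x3 grid, standing in for the real electrode grid used in the paper:

```python
import numpy as np

# Toy channel -> (row, col) layout; the paper maps real electrode positions.
POS = {0: (0, 1), 1: (1, 0), 2: (1, 2), 3: (2, 1)}

def to_3d_stream(raw, grid=(3, 3)):
    """Place each channel of a (channels, T) recording at its 2-D electrode
    coordinate for every time step, yielding a (T, H, W) stream that a
    3D CNN can convolve over space and time jointly."""
    ch, T = raw.shape
    stream = np.zeros((T, *grid))
    for c in range(ch):
        r, col = POS[c]
        stream[:, r, col] = raw[c]
    return stream

raw = np.arange(8.0).reshape(4, 2)  # 4 channels, 2 time steps
stream = to_3d_stream(raw)
```

Because no features are computed beforehand, this representation keeps the raw signal intact and leaves feature extraction entirely to the 3D CNN.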


2021 ◽  
Vol 15 ◽  
Author(s):  
Yanling An ◽  
Shaohai Hu ◽  
Xiaoying Duan ◽  
Ling Zhao ◽  
Caiyun Xie ◽  
...  

As one of the key technologies of affective computing, emotion recognition has received great attention. Electroencephalogram (EEG) signals are spontaneous and difficult to camouflage, so they are used for emotion recognition in academic and industrial circles. To overcome the disadvantage that traditional machine-learning-based emotion recognition relies too heavily on manual feature extraction, we propose an EEG emotion recognition algorithm based on 3D feature fusion and a convolutional autoencoder (CAE). First, the differential entropy (DE) features of different frequency bands of the EEG signals are fused to construct 3D features that retain the spatial information between channels. Then, the constructed 3D features are input into the CAE built in this paper for emotion recognition. Extensive experiments are carried out on the open DEAP dataset, and the recognition accuracies in the valence and arousal dimensions are 89.49% and 90.76%, respectively. The proposed method is therefore suitable for emotion recognition tasks.
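The 3D feature construction above can be sketched as one 2-D DE map per frequency band, stacked into a cube. DE is computed here under the usual Gaussian assumption, 0.5·ln(2πe·var); the 4-channel 2x2 layout is a toy stand-in for the real electrode grid:

```python
import numpy as np

POS = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}  # toy 4-channel layout

def differential_entropy(x):
    """DE of a band-passed signal under a Gaussian assumption:
    0.5 * ln(2 * pi * e * variance)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def de_cube(band_signals, grid=(2, 2)):
    """band_signals: list of bands, each a list of per-channel sample arrays.
    Returns a (bands, H, W) cube: one spatial DE map per frequency band."""
    cube = np.zeros((len(band_signals), *grid))
    for b, channels in enumerate(band_signals):
        for c, sig in enumerate(channels):
            r, col = POS[c]
            cube[b, r, col] = differential_entropy(sig)
    return cube

sig = np.array([0.0, 2.0])            # toy samples with unit variance
cube = de_cube([[sig] * 4, [sig] * 4])  # 2 bands x 4 channels
```

Stacking bands as the depth axis keeps channel adjacency in the spatial axes, which is exactly the structure the convolutional autoencoder exploits.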

