Two-Level Domain Adaptation Neural Network for EEG-Based Emotion Recognition

2021 ◽  
Vol 14 ◽  
Author(s):  
Guangcheng Bao ◽  
Ning Zhuang ◽  
Li Tong ◽  
Bin Yan ◽  
Jun Shu ◽  
...  

Emotion recognition plays an important part in human-computer interaction (HCI). Currently, the main challenge in electroencephalogram (EEG)-based emotion recognition is the non-stationarity of EEG signals, which causes the performance of a trained model to degrade over time. In this paper, we propose a two-level domain adaptation neural network (TDANN) to construct a transfer model for EEG-based emotion recognition. Specifically, deep features, which preserve the topological information of the EEG signals, are extracted from the topological graph using a deep neural network. These features are then passed through TDANN for two-level domain confusion. The first level uses the maximum mean discrepancy (MMD) to reduce the distribution discrepancy of the deep features between the source and target domains, and the second uses a domain adversarial neural network (DANN) to force the deep features closer to their corresponding class centers. We evaluated the domain-transfer performance of the model on both our self-built data set and the public SEED data set. In the cross-day transfer experiment, the ability to accurately discriminate joy from other emotions was high on the self-built data set: sadness (84%), anger (87.04%), and fear (85.32%). The accuracy reached 74.93% on the SEED data set. In the cross-subject transfer experiment, the ability to accurately discriminate joy from other emotions was equally high on the self-built data set: sadness (83.79%), anger (84.13%), and fear (81.72%). The average accuracy reached 87.9% on the SEED data set, higher than that of WGAN-DA. The experimental results demonstrate that the proposed TDANN can effectively handle the domain transfer problem in EEG-based emotion recognition.
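The first level of domain confusion relies on the MMD statistic. As a minimal sketch (not the authors' implementation), the squared MMD between source- and target-domain feature batches can be estimated with an RBF kernel; the `gamma` bandwidth here is an assumed hyperparameter:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of X and Y.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(source, target, gamma=1.0):
    # Biased estimate of the squared maximum mean discrepancy:
    # large when the two feature distributions differ, near zero when they match.
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

Minimizing this quantity as an auxiliary loss pulls the deep feature distributions of the two domains together.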

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shruti Garg ◽  
Rahul Kumar Patro ◽  
Soumyajit Behera ◽  
Neha Prerna Tigga ◽  
Ranjita Pandey

Purpose: The purpose of this study is to propose an alternative, efficient 3D emotion recognition model for variable-length electroencephalogram (EEG) data.
Design/methodology/approach: The classical AMIGOS data set, which comprises multimodal records of varying lengths on mood, personality and other physiological aspects of emotional response, is used for empirical assessment of the proposed overlapping sliding window (OSW) modelling framework. Two features are extracted using the Fourier and wavelet transforms: normalised band power (NBP) and normalised wavelet energy (NWE), respectively. The arousal, valence and dominance (AVD) emotions are predicted using one-dimensional (1D) and two-dimensional (2D) convolutional neural networks (CNN) for both single and combined features.
Findings: The 2D CNN outcomes on the EEG signals of the AMIGOS data set are observed to yield the highest accuracies, namely 96.63%, 95.87% and 96.30% for arousal, valence and dominance, respectively, which is at least 6% higher than the other available competitive approaches.
Originality/value: The present work focuses on the less explored, complex AMIGOS (2018) data set, which is imbalanced and of variable length, whereas EEG emotion recognition work is widely available for simpler data sets. The challenges of the AMIGOS data set addressed in the present work are: handling tensor-form data; proposing an efficient method for generating sufficient equal-length samples from imbalanced, variable-length data; selecting a suitable machine learning/deep learning model; and improving the accuracy of the applied model.
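The overlapping sliding window idea can be sketched as follows (illustrative only; the window length and step are placeholder values, not the paper's settings): variable-length recordings are cut into fixed-length, overlapping segments so that every trial contributes equal-length samples.

```python
def sliding_windows(signal, win_len, step):
    # Split a 1-D sequence into fixed-length overlapping windows.
    # Overlap is (win_len - step) samples; trailing samples that do
    # not fill a whole window are dropped.
    return [signal[i:i + win_len]
            for i in range(0, len(signal) - win_len + 1, step)]
```

Because the step is smaller than the window length, short recordings still yield many samples, which also helps rebalance an imbalanced data set.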


2021 ◽  
Vol 4 (3) ◽  
pp. 23-29
Author(s):  
Areej H. Al-Anbary ◽  
Salih M. Al-Qaraawi ‎

Recently, machine learning algorithms have been widely used in the field of electroencephalography (EEG)-based brain-computer interfaces (BCI). In this paper, a sign language software model based on EEG brain signals was implemented to help speechless persons communicate their thoughts to others. The preprocessing stage for the EEG signals was performed by applying the Principal Component Analysis (PCA) algorithm to extract the important features and reduce data redundancy. A model for classifying ten classes of EEG signals, including facial expression (FE) and some motor execution (ME) processes, was designed. A neural network with three hidden layers and a deep learning classifier was used in this work. Data sets from four different subjects were collected using a 14-channel Emotiv EPOC+ device. A classification accuracy of 95.75% was obtained for the collected samples. An optimization process was performed on the predicted class with the aid of the user, and the sign class is then mapped to the specified sentence through a predesigned lookup table.
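The PCA preprocessing step can be sketched generically (an SVD-based projection, not the authors' exact pipeline): centered EEG feature vectors are projected onto the top principal directions, discarding redundant dimensions.

```python
import numpy as np

def pca_reduce(X, n_components):
    # Project the rows of X (samples x features) onto the top
    # n_components principal components via SVD of the centered data.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Rows of Vt are principal directions, ordered by decreasing variance.
    return Xc @ Vt[:n_components].T
```

The retained components carry the largest share of the variance, so downstream classifiers see fewer, less redundant inputs.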


Sensors ◽  
2019 ◽  
Vol 19 (21) ◽  
pp. 4736 ◽  
Author(s):  
Heekyung Yang ◽  
Jongdae Han ◽  
Kyungha Min

We present a multi-column CNN-based model for emotion recognition from EEG signals. Recently, deep neural networks have been widely employed for extracting features and recognizing emotions from various biosignals, including EEG signals. A decision from a single CNN-based emotion recognition module already shows improved accuracy over conventional handcrafted-feature-based modules. To further improve the accuracy of the CNN-based modules, we devise a multi-column structured model whose decision is produced by a weighted sum of the decisions from the individual recognizing modules. We apply the model to EEG signals from the DEAP dataset for comparison and demonstrate the improved accuracy of our model.
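The weighted-sum decision rule can be sketched as follows (the column weights here are placeholders; how the paper sets its weights is not shown in the abstract):

```python
import numpy as np

def ensemble_decision(probas, weights):
    # probas: list of (n_samples, n_classes) softmax outputs, one per column.
    # weights: one scalar per column; the fused decision is their weighted sum.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the fused scores stay a convex combination
    fused = sum(wi * p for wi, p in zip(w, probas))
    return fused.argmax(axis=1)
```

A column given a larger weight dominates the fused decision, so a strong recognizing module can outvote weaker ones without discarding their evidence entirely.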


2009 ◽  
Vol 18 (08) ◽  
pp. 1353-1367 ◽  
Author(s):  
DONG-CHUL PARK

A Centroid Neural Network with Weighted Features (CNN-WF) is proposed and presented in this paper. The proposed CNN-WF is based on the Centroid Neural Network (CNN), an effective clustering tool that has been successfully applied to various problems. In order to evaluate the importance of each feature in a data set, a feature weighting concept is introduced into the Centroid Neural Network in the proposed algorithm. The weight update equations for CNN-WF are derived by applying the Lagrange multiplier procedure to the objective function constructed for CNN-WF in this paper. The use of weighted features makes it possible to assess the importance of each feature and to reject features that can be considered noise. Experiments on a synthetic data set and a typical image compression problem show that the proposed CNN-WF can assess the importance of each feature and outperforms conventional algorithms, including the Self-Organizing Map (SOM) and CNN, in terms of clustering accuracy.
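The effect of feature weighting on centroid assignment can be illustrated with a toy sketch (this is not the paper's Lagrange-derived update rule, only the distance computation it feeds into): a noisy feature given a near-zero weight stops dominating the cluster assignment.

```python
import numpy as np

def weighted_assign(X, centroids, feature_weights):
    # Assign each sample to its nearest centroid under a
    # feature-weighted squared Euclidean distance.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2
         * feature_weights).sum(axis=-1)
    return d.argmin(axis=1)
```

With equal weights a large-magnitude noise feature can flip assignments; down-weighting it restores the clustering implied by the informative feature.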


2021 ◽  
Author(s):  
Tomochika Fujisawa ◽  
Victor Noguerales ◽  
Emmanouil Meramveliotakis ◽  
Anna Papadopoulou ◽  
Alfried P Vogler

Complex bulk samples of invertebrates from biodiversity surveys present a great challenge for taxonomic identification, especially if obtained from unexplored ecosystems. High-throughput imaging combined with machine learning for rapid classification could overcome this bottleneck. Developing such procedures requires that taxonomic labels from an existing source data set are used for model training and prediction of an unknown target sample. Yet the feasibility of transfer learning for the classification of unknown samples remains to be tested. Here, we assess the efficiency of deep learning and domain transfer algorithms for family-level classification of below-ground bulk samples of Coleoptera from understudied forests of Cyprus. We trained neural network models with images from local surveys versus global databases of above-ground samples from tropical forests and evaluated how prediction accuracy was affected by: (a) the quality and resolution of images, (b) the size and complexity of the training set and (c) the transferability of identifications across very disparate source-target pairs that do not share any species or genera. Within-dataset classification accuracy reached 98% and depended on the number and quality of training images and on dataset complexity. The accuracy of between-datasets predictions was reduced to a maximum of 82% and depended greatly on the standardisation of the imaging procedure. When the source and target images were of similar quality and resolution, albeit from different faunas, the reduction of accuracy was minimal. Application of algorithms for domain adaptation significantly improved the prediction performance of models trained on non-standardised, low-quality images. Our findings demonstrate that existing databases can be used to train models and successfully classify images from unexplored biota, when the imaging conditions and classification algorithms are carefully considered. Our results also provide guidelines for data acquisition and algorithmic development for high-throughput image-based biodiversity surveys.
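One widely used family of domain adaptation algorithms aligns second-order feature statistics between source and target. As an illustrative sketch only (a CORAL-style alignment, not necessarily the algorithm applied in this study), source features can be whitened and then re-colored to match the target covariance:

```python
import numpy as np

def _sqrt_and_inv_sqrt(C):
    # Matrix square root and inverse square root of a symmetric PSD matrix.
    vals, vecs = np.linalg.eigh(C)
    vals = np.clip(vals, 1e-12, None)
    sqrt = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return sqrt, inv_sqrt

def coral_align(source, target, eps=1e-6):
    # Whiten the centered source features, re-color them with the target
    # covariance, and shift them to the target mean.
    Xs = source - source.mean(axis=0)
    Xt = target - target.mean(axis=0)
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    _, Cs_inv_sqrt = _sqrt_and_inv_sqrt(Cs)
    Ct_sqrt, _ = _sqrt_and_inv_sqrt(Ct)
    return Xs @ Cs_inv_sqrt @ Ct_sqrt + target.mean(axis=0)
```

After alignment, a classifier trained on the adjusted source features sees inputs whose first- and second-order statistics match the target imagery, which mirrors the benefit the study reports for non-standardised, low-quality images.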


Entropy ◽  
2020 ◽  
Vol 22 (1) ◽  
pp. 96 ◽  
Author(s):  
Xingliang Tang ◽  
Xianrui Zhang

Decoding motor imagery (MI) electroencephalogram (EEG) signals for brain-computer interfaces (BCIs) is a challenging task because of the severe non-stationarity of perceptual decision processes. Recently, deep learning techniques have had great success in EEG decoding because of their prominent ability to learn features from raw EEG signals automatically. However, deep learning methods face two obstacles: labeled EEG signals are scarce, and EEGs sampled from other subjects cannot be used directly to train a convolutional neural network (ConvNet) for a target subject. To solve this problem, in this paper we present a novel conditional domain adaptation neural network (CDAN) framework for MI EEG signal decoding. Specifically, in the CDAN, a densely connected ConvNet is first applied to obtain high-level discriminative features from the raw EEG time series. Then, a novel conditional domain discriminator is introduced to act as an adversary to the label classifier, so that the network learns EEG features shared across subjects. As a result, the CDAN model, trained with sufficient EEG signals from other subjects, can be used to classify signals from the target subject efficiently. Competitive experimental results on a public EEG dataset (High Gamma Dataset) against state-of-the-art methods demonstrate the efficacy of the proposed framework in recognizing MI EEG signals, indicating its effectiveness in automatic perceptual decision decoding.
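The conditioning idea behind conditional domain discriminators can be sketched as follows (in the spirit of CDAN-style multilinear conditioning; this is an assumption about the mechanism, not the authors' exact architecture): the discriminator is fed the outer product of each feature vector with the classifier's softmax output, so the adversary sees class-conditional structure rather than features alone.

```python
import numpy as np

def multilinear_map(features, class_probs):
    # Per-sample outer product of the feature vector (n_features,) with the
    # classifier's softmax output (n_classes,), flattened so a domain
    # discriminator can consume it as one conditioned input vector.
    n = features.shape[0]
    return np.einsum('nf,nc->nfc', features, class_probs).reshape(n, -1)
```

Conditioning on the predicted class lets the adversarial game align the feature distributions class by class instead of only in aggregate.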


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1262
Author(s):  
Fangyao Shen ◽  
Yong Peng ◽  
Wanzeng Kong ◽  
Guojun Dai

Emotion recognition has a wide range of potential applications in the real world. Among emotion recognition data sources, electroencephalography (EEG) signals record neural activity across the human brain, providing a reliable way to recognize emotional states. Most existing EEG-based emotion recognition studies directly concatenate the features extracted from all EEG frequency bands for emotion classification. This approach implicitly assumes that all frequency bands are equally important; however, it cannot always achieve optimal performance. In this paper, we present a novel multi-scale frequency bands ensemble learning (MSFBEL) method for emotion recognition from EEG signals. Concretely, we first re-organize all frequency bands into several local scales and one global scale. Then we train a base classifier on each scale. Finally, we fuse the results of all scales with an adaptive weight learning method that automatically assigns larger weights to more important scales, further improving performance. The proposed method is validated on two public data sets. For the SEED IV data set, MSFBEL achieves average accuracies of 82.75%, 87.87%, and 78.27% on the three sessions under the within-session experimental paradigm. For the DEAP data set, it obtains an average accuracy of 74.22% for four-category classification under 5-fold cross-validation. The experimental results demonstrate that the scale of the frequency bands influences the emotion recognition rate, and that the global scale, which directly concatenates all frequency bands, cannot always guarantee the best performance. Different scales provide complementary information, and the proposed adaptive weight learning method can effectively fuse them to further enhance performance.
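One simple way to realize accuracy-driven scale weighting is a softmax over per-scale validation accuracies (a hedged sketch; the temperature and the weighting scheme are assumptions, not the paper's learned weights):

```python
import numpy as np

def adaptive_weights(val_accuracies, temperature=10.0):
    # Softmax over per-scale validation accuracies: better-performing
    # scales receive exponentially larger fusion weights.
    a = np.asarray(val_accuracies, dtype=float)
    w = np.exp(temperature * (a - a.max()))  # shift by max for stability
    return w / w.sum()
```

The temperature controls how sharply the fusion favors the best scale; at temperature 0 all scales would be weighted equally, recovering plain averaging.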


2016 ◽  
Vol 78 (12-2) ◽  
Author(s):  
Norma Alias ◽  
Husna Mohamad Mohsin ◽  
Maizatul Nadirah Mustaffa ◽  
Siti Hafilah Mohd Saimi ◽  
Ridhwan Reyaz

Eye movement behaviour is related to human brain activation, whether asleep or awake. The aim of this paper is to measure three types of eye movement using classification of electroencephalogram (EEG) signals. The classifier is trained using an artificial neural network (ANN), in which the measurement of eye movement is based on eye blinks (close and open), movement to the left and right, and movement upwards and downwards. The ANN is integrated with the EEG digital data signals to train on the large-scale digital data and thus predict eye movement behaviour under stress activity. Since this study uses large-scale digital data, the integrated ANN-EEG pipeline has been parallelized on the Compute Unified Device Architecture (CUDA), supported by heterogeneous CPU-GPU systems. A real data set from the eye therapy industry, IC Herbz Sdn Bhd, was used to validate and simulate eye movement behaviour. Parallel performance was analysed in terms of execution time, speedup, efficiency, and computational complexity.
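The parallel performance metrics mentioned above have standard definitions, sketched here for reference:

```python
def speedup(serial_time, parallel_time):
    # Ratio of serial to parallel execution time for the same workload.
    return serial_time / parallel_time

def efficiency(serial_time, parallel_time, n_processors):
    # Speedup normalised by processor count; 1.0 is ideal linear scaling.
    return speedup(serial_time, parallel_time) / n_processors
```

For example, a job that drops from 10 s serially to 2 s on 8 GPU cores achieves a speedup of 5 at 62.5% efficiency.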

