The challenges of emotion recognition methods based on electroencephalogram signals: a literature review

Author(s):  
I Made Agus Wirawan ◽  
Retantyo Wardoyo ◽  
Danang Lelono

Electroencephalogram (EEG) signals offer several advantages for emotion recognition. However, the success of such studies is strongly influenced by: i) the distribution of the data used, ii) differences in participant characteristics, and iii) the characteristics of the EEG signals themselves. In response to these issues, this study examines three important points that affect the success of emotion recognition, packaged in several research questions: i) What factors need to be considered when generating and distributing EEG data? ii) How can EEG signals be processed with consideration for differences in participant characteristics? and iii) How do the characteristics of EEG signals manifest among their features for emotion recognition? The results indicate several important challenges for further study in EEG-based emotion recognition research: i) determining robust methods for imbalanced EEG data, ii) determining appropriate smoothing methods to eliminate disturbances in the baseline signals, iii) determining the best baseline reduction methods to reduce the effect of participant differences on the EEG signals, and iv) determining a robust capsule network architecture that overcomes the loss of knowledge information and applying it to more diverse data sets.
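Baseline reduction, one of the challenges listed above, is commonly implemented as a subtraction of pre-stimulus activity from the trial segment. A minimal sketch of that idea (not the review's own method; the function name, channel count, and sampling rate below are illustrative assumptions):

```python
import numpy as np

def baseline_reduce(trial, baseline):
    """Subtract the per-channel mean of a pre-stimulus baseline
    segment from a trial segment (both arrays: channels x samples)."""
    # Mean baseline activity per channel, kept as a column vector
    # so it broadcasts across the time axis of the trial.
    base_mean = baseline.mean(axis=1, keepdims=True)
    return trial - base_mean

rng = np.random.default_rng(0)
# Toy data: 32 channels with a shared 2.0 offset standing in for
# participant-specific baseline activity (1 s baseline, 6 s trial at 128 Hz).
baseline = rng.normal(loc=2.0, scale=0.1, size=(32, 128))
trial = rng.normal(loc=2.0, scale=0.1, size=(32, 768))
corrected = baseline_reduce(trial, baseline)
print(corrected.shape)  # (32, 768)
```

After the subtraction the participant-specific offset is largely removed, which is the intent of the baseline reduction methods the review calls for.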

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5092
Author(s):  
Tran-Dac-Thinh Phan ◽  
Soo-Hyung Kim ◽  
Hyung-Jeong Yang ◽  
Guee-Sang Lee

Besides facial- or gesture-based emotion recognition, electroencephalogram (EEG) data have been drawing attention thanks to their capability to counter the effect of deceptive external expressions, such as facial expressions or speech. Emotion recognition based on EEG signals heavily relies on the features and their delineation, which requires selecting the feature categories converted from the raw signals and the types of expressions that can display the intrinsic properties of an individual signal or a group of them. Moreover, the correlation or interaction among channels and frequency bands also contains crucial information for emotional state prediction, and it is commonly disregarded in conventional approaches. Therefore, in our method, the correlations among the 32 channels and the frequency bands were put to use to enhance emotion prediction performance. The extracted features, chosen from the time domain, were arranged into feature-homogeneous matrices, with their positions following the corresponding electrodes placed on the scalp. Based on this 3D representation of EEG signals, the model must be able to learn the local and global patterns that describe the short- and long-range relations of EEG channels, along with the embedded features. To deal with this problem, we proposed a 2D CNN with convolutional layers of different kernel sizes assembled into a convolution block, combining features distributed in small and large regions. Ten-fold cross-validation was conducted on the DEAP dataset to prove the effectiveness of our approach. We achieved average accuracies of 98.27% and 98.36% for arousal and valence binary classification, respectively.
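The multi-kernel convolution block described above can be illustrated with a toy NumPy sketch: parallel convolutions with different kernel sizes over an electrode-feature grid, stacked into one set of feature maps. This is a schematic of the idea only, not the authors' 2D CNN; `conv2d_same`, `multi_kernel_block`, and the 9×9 grid are illustrative assumptions:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2D cross-correlation of one feature map."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def multi_kernel_block(x, kernels):
    """Apply kernels of different sizes in parallel and stack the
    resulting feature maps: small kernels capture local patterns,
    large kernels capture long-range relations between electrodes."""
    maps = [np.maximum(conv2d_same(x, k), 0.0) for k in kernels]  # ReLU
    return np.stack(maps, axis=0)

rng = np.random.default_rng(1)
grid = rng.normal(size=(9, 9))      # electrode-feature map on the scalp grid
small = rng.normal(size=(3, 3))     # local-region kernel
large = rng.normal(size=(5, 5))     # broad-region kernel
feats = multi_kernel_block(grid, [small, large])
print(feats.shape)  # (2, 9, 9)
```

In the actual model this stacking would be a learned convolution block inside a CNN; the sketch only shows how differently sized kernels over the same scalp grid yield complementary feature maps.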


Author(s):  
Shaoqiang Wang ◽  
Shudong Wang ◽  
Song Zhang ◽  
Yifan Wang

Abstract The aim is to automatically detect dynamic EEG signals and thereby reduce the time cost of epilepsy diagnosis. In electroencephalogram (EEG) signal recognition for epilepsy, traditional machine learning and statistical methods require manual feature-labeling engineering in order to show excellent results on a single data set, and the manually selected features may carry bias and cannot guarantee validity and extensibility on real-world data. In practical applications, deep learning methods can free researchers from feature engineering to a certain extent: as long as the focus is on expanding data quality and quantity, the model can learn automatically and keep improving. In addition, deep learning methods can extract many features that are difficult for humans to perceive, thereby making the algorithm more robust. Based on the design idea of the ResNeXt deep neural network, this paper designs a Time-ResNeXt network structure suitable for time-series EEG epilepsy detection. The accuracy of Time-ResNeXt in detecting EEG epilepsy reaches 91.50%. The Time-ResNeXt network structure produces highly advanced performance on the benchmark Bern-Barcelona dataset and has great potential for improving clinical practice.
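The core ResNeXt idea that Time-ResNeXt builds on, splitting channels into parallel groups ("cardinality") and adding a residual skip connection, can be sketched in NumPy for a 1D time series. This is a schematic of the grouped-convolution pattern under assumed shapes, not the published Time-ResNeXt architecture:

```python
import numpy as np

def grouped_conv1d(x, kernels):
    """Split channels into groups and convolve each group with its own
    kernel -- the 'cardinality' idea behind ResNeXt, here in 1D."""
    groups = np.split(x, len(kernels), axis=0)
    outs = []
    for g, k in zip(groups, kernels):
        # 'same'-padded temporal convolution applied per channel
        outs.append(np.stack([np.convolve(ch, k, mode="same") for ch in g]))
    return np.concatenate(outs, axis=0)

def time_block(x, kernels):
    """One residual block: grouped temporal convolution + ReLU + skip,
    so the block learns a correction on top of the identity mapping."""
    return x + np.maximum(grouped_conv1d(x, kernels), 0.0)

rng = np.random.default_rng(2)
x = rng.normal(size=(8, 256))                            # 8 channels, 256 steps
kernels = [rng.normal(size=3) * 0.1 for _ in range(4)]   # cardinality 4
y = time_block(x, kernels)
print(y.shape)  # (8, 256)
```

The residual skip is what lets such blocks be stacked deeply without degrading the signal, which is the property the time-series variant exploits for EEG.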


2014 ◽  
Vol 577 ◽  
pp. 1236-1240
Author(s):  
Dian Zhang ◽  
Bo Wang ◽  
Qing Liang Qin

A wireless portable electroencephalogram (EEG) recording system for animals was designed, manufactured and then tested in rats. The system consisted of four modules: 1) an EEG collecting module with a wireless transmitter and receiver (built around the NRF24LE1), 2) a filter bank consisting of a pre-amplifier, a band-pass filter and a 50 Hz notch filter, 3) a power management module and 4) a display interface for showing EEG signals. The EEG data were amplified and filtered, then modulated and emitted by the wireless transmitter; the receiver demodulated the signals and displayed them in voltage through a serial port. The system was built with surface-mount devices (SMD) for small size (20 mm × 25 mm × 3 mm) and light weight (4 g), and was fabricated from commercially available electronic components. The test results indicated that, in the given environment, the system could record stably for more than 8 hours and transmit EEG signals over a distance of 20 m. The system's small size, low power consumption and high accuracy make it suitable for EEG telemetry in rats.


2021 ◽  
Vol 2078 (1) ◽  
pp. 012044
Author(s):  
Lingzhi Chen ◽  
Wei Deng ◽  
Chunjin Ji

Abstract Pattern recognition is the most important part of a brain-computer interface (BCI) system. More and more deep learning methods have been applied in BCI to improve pattern recognition accuracy, especially in BCIs based on electroencephalogram (EEG) signals. Convolutional Neural Networks (CNNs) hold great promise and have been extensively employed for feature classification in BCI. This paper reviews the application of CNN methods in BCIs based on various EEG signals.


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Lin Gan ◽  
Mu Zhang ◽  
Jiajia Jiang ◽  
Fajie Duan

People continuously take in information through different sense organs to complete different cognitive tasks, and the brain integrates and regulates this information. The two major sensory channels for receiving external information, sight and hearing, have received extensive attention. This paper mainly studies the effect of music and combined visual-auditory stimulation on the electroencephalogram (EEG) of happy emotion recognition based on a complex system. In the experiment, presentation software was used to prepare the experimental stimulation program, and a cognitive neuroscience experimental paradigm of EEG evoked by happy emotion pictures was established. Using 93 videos as natural stimuli, fMRI data were collected. Finally, eye artifacts and baseline drift were removed from the collected EEG signals, and the t-test was used to analyze significant differences in the EEG data across leads. The experimental data show that, by adjusting the parameters of the convolutional neural network, the highest accuracy of the binary classification algorithm can reach 98.8%, and the average accuracy can reach 83.45%. The results show that the brain source under combined visual and auditory stimulation is not a simple superposition of the brain sources of single visual and auditory stimulation; rather, a new interactive source is generated.
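The lead-wise significance analysis mentioned above rests on the paired t-test. Assuming one summary value per subject per condition, it reduces to a short computation; the amplitude values below are made up purely for illustration:

```python
import numpy as np

def paired_t(a, b):
    """Paired t statistic: mean of per-subject differences divided by
    the standard error of those differences."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

# Hypothetical per-subject mean amplitudes for one EEG lead under
# audio-visual vs. visual-only stimulation (synthetic numbers).
av = np.array([5.1, 4.8, 5.6, 5.0, 5.3, 4.9, 5.4, 5.2])
v  = np.array([4.2, 4.0, 4.5, 4.1, 4.6, 3.9, 4.4, 4.3])
t = paired_t(av, v)
print(round(t, 2))  # compare against the critical t for df = 7
```

A paired design is the natural choice here because the same subjects experience both stimulation conditions, so between-subject variability cancels in the differences.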


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7251
Author(s):  
Hong Zeng ◽  
Jiaming Zhang ◽  
Wael Zakaria ◽  
Fabio Babiloni ◽  
Borghini Gianluca ◽  
...  

Electroencephalogram (EEG) is an effective indicator for the detection of driver fatigue. Due to the significant differences in EEG signals across subjects and the difficulty of collecting sufficient EEG samples for analysis during driving, detecting fatigue across subjects using EEG signals remains a challenge. EasyTL is a transfer-learning model that has demonstrated good performance in the field of image recognition but has not yet been applied in cross-subject EEG-based applications. In this paper, we propose an improved EasyTL-based classifier, InstanceEasyTL, to perform EEG-based analysis for cross-subject fatigue mental-state detection. Experimental results show that InstanceEasyTL not only requires less EEG data, but also obtains better accuracy and robustness than EasyTL, as well as existing machine-learning models such as Support Vector Machines (SVM), Transfer Component Analysis (TCA), Geodesic Flow Kernel (GFK), and Domain-Adversarial Neural Networks (DANN).
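The abstract does not give EasyTL's internals, but the cross-subject evaluation protocol it targets, training on some subjects and testing on an unseen one, can be sketched with a deliberately simple stand-in classifier (nearest centroid). All data and the per-subject offsets below are synthetic assumptions:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Compute one centroid per class from training features."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(cents, X):
    """Assign each sample the label of the closest class centroid."""
    labels = list(cents)
    d = np.stack([np.linalg.norm(X - cents[c], axis=1) for c in labels])
    return np.array(labels)[d.argmin(axis=0)]

rng = np.random.default_rng(3)
# Toy data: 3 'subjects', 2 mental states; each subject gets its own
# feature offset to mimic cross-subject distribution shift.
subjects, accs = [], []
for s in range(3):
    off = rng.normal(scale=0.5, size=4)
    X0 = rng.normal(size=(20, 4)) + off          # state 0 (e.g. alert)
    X1 = rng.normal(size=(20, 4)) + off + 3.0    # state 1 (e.g. fatigued)
    subjects.append((np.vstack([X0, X1]), np.array([0] * 20 + [1] * 20)))

# Leave-one-subject-out: train on the other subjects, test on the held-out one.
for i in range(3):
    Xtr = np.vstack([subjects[j][0] for j in range(3) if j != i])
    ytr = np.concatenate([subjects[j][1] for j in range(3) if j != i])
    Xte, yte = subjects[i]
    pred = nearest_centroid_predict(nearest_centroid_fit(Xtr, ytr), Xte)
    accs.append((pred == yte).mean())
print(np.mean(accs))
```

Transfer-learning methods such as EasyTL aim to close the gap that this protocol exposes when the per-subject offsets are large relative to the class separation.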


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7103
Author(s):  
Heekyung Yang ◽  
Jongdae Han ◽  
Kyungha Min

Electroencephalogram (EEG) biosignals are widely used to measure human emotional reactions, and the recent progress of deep learning-based classification models has improved the accuracy of emotion recognition from EEG signals. We apply a deep learning-based emotion recognition model to EEG biosignals to prove that illustrated surgical images reduce the negative emotional reactions that photographic surgical images generate. The strong negative emotional reactions caused by surgical images, which show the internal structure of the human body (including blood, flesh, muscle, fatty tissue, and bone), act as an obstacle when explaining the images to patients or discussing the images with non-professionals. We claim that the negative emotional reactions generated by illustrated surgical images are less severe than those caused by raw surgical images. To demonstrate the difference in emotional reaction, we produce several illustrated surgical images from photographs and measure the emotional reactions they engender using EEG biosignals; a deep learning-based emotion recognition model is applied to extract the emotional reactions. Through this experiment, we show that the negative emotional reactions associated with photographic surgical images are much stronger than those caused by illustrated versions of identical images. We further conduct a self-assessed user survey to verify that the emotions recognized from EEG signals effectively represent user-annotated emotions.


2020 ◽  
Vol 65 (4) ◽  
pp. 393-404
Author(s):  
Ali Momennezhad

Abstract In this paper, we suggest an efficient, accurate and user-friendly brain-computer interface (BCI) system for recognizing and distinguishing different emotion states. For this, we used the multimodal dataset "MAHNOB-HCI", which can be freely obtained through an email request. This research is based on electroencephalogram (EEG) signals carrying emotions and excludes other physiological features, as we find EEG signals more reliable for extracting deep and true emotions than other physiological features. EEG signals have low information content and low signal-to-noise ratios (SNRs), so proposing a robust and dependable emotion recognition algorithm is a considerable challenge. To resolve this imperfection, we utilized a new method based on the matching pursuit (MP) algorithm, applying MP to increase the quality and SNRs of the original signals. In order to obtain a signal of high quality, we created a new dictionary of 5000 Gabor atoms across five scales. For feature extraction, we used a 9-scale wavelet algorithm. A 32-electrode configuration was used for signal collection, but we used just eight of those electrodes; our method is therefore highly user-friendly and convenient for users. To evaluate the results, we compared our algorithm with other similar works. In average accuracy, the suggested algorithm is superior to the same algorithm without MP by 2.8%, and in terms of f-score by 0.03. In comparison with corresponding works, the accuracy and f-score of the proposed algorithm are better by 10.15% and 0.1, respectively. Our method thus improves on past works in terms of accuracy, f-score and user-friendliness despite using just eight electrodes.
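Matching pursuit, the denoising step described above, greedily approximates a signal as a sparse sum of dictionary atoms. A minimal NumPy sketch of the algorithm, with a toy dictionary of windowed cosines standing in for the paper's 5000-atom Gabor dictionary:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit: repeatedly pick the unit-norm atom most
    correlated with the residual and subtract its projection."""
    residual = signal.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_iter):
        corr = dictionary @ residual          # inner products with all atoms
        best = np.argmax(np.abs(corr))
        approx += corr[best] * dictionary[best]
        residual -= corr[best] * dictionary[best]
    return approx, residual

n = 128
t = np.arange(n)
# Toy dictionary of unit-norm Gabor-like atoms: Gaussian-windowed cosines
# at a few frequencies and centers.
atoms = []
for f in (2, 5, 9, 14):
    for c in (32, 64, 96):
        g = np.exp(-0.5 * ((t - c) / 12.0) ** 2) * np.cos(2 * np.pi * f * t / n)
        atoms.append(g / np.linalg.norm(g))
D = np.stack(atoms)

signal = 3.0 * D[4] + 1.5 * D[7]              # sparse combination of two atoms
approx, residual = matching_pursuit(signal, D, n_iter=5)
print(np.linalg.norm(residual) < np.linalg.norm(signal))  # True
```

Each iteration removes the best-matching component, so the residual norm shrinks monotonically; keeping only the first few atoms acts as the SNR-raising reconstruction the paper relies on.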


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1262
Author(s):  
Fangyao Shen ◽  
Yong Peng ◽  
Wanzeng Kong ◽  
Guojun Dai

Emotion recognition has a wide range of potential applications in the real world. Among emotion recognition data sources, electroencephalography (EEG) signals record the neural activities across the human brain, providing a reliable way to recognize emotional states. Most existing EEG-based emotion recognition studies directly concatenate features extracted from all EEG frequency bands for emotion classification. This approach assumes by default that all frequency bands are equally important; however, it cannot always obtain optimal performance. In this paper, we present a novel multi-scale frequency bands ensemble learning (MSFBEL) method to perform emotion recognition from EEG signals. Concretely, we first re-organize all frequency bands into several local scales and one global scale. Then we train a base classifier on each scale. Finally, we fuse the results of all scales with an adaptive weight learning method that automatically assigns larger weights to more important scales to further improve the performance. The proposed method is validated on two public data sets. For the "SEED IV" data set, MSFBEL achieves average accuracies of 82.75%, 87.87%, and 78.27% on the three sessions under the within-session experimental paradigm. For the "DEAP" data set, it obtains an average accuracy of 74.22% for four-category classification under 5-fold cross-validation. The experimental results demonstrate that the scale of the frequency bands influences the emotion recognition rate, and that the global scale that directly concatenates all frequency bands cannot always guarantee the best emotion recognition performance. Different scales provide complementary information, and the proposed adaptive weight learning method can effectively fuse them to further enhance performance.
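The fusion idea, weighting each scale's classifier by how well it performs, can be sketched as a validation-accuracy-weighted average of per-scale class probabilities. This is an illustrative stand-in, not the paper's exact adaptive weight learning method; all numbers below are hypothetical:

```python
import numpy as np

def fuse(probas, val_acc):
    """Weight per-scale class-probability predictions by softmaxed
    validation accuracy, so better-performing scales contribute more."""
    w = np.exp(val_acc) / np.exp(val_acc).sum()   # softmax weights over scales
    return np.tensordot(w, probas, axes=1)        # weighted average of scales

# Hypothetical outputs of three base classifiers (one per scale) for
# 4 trials and 3 emotion classes; each row is a probability distribution.
p1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4], [0.6, 0.2, 0.2]])
p2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.2, 0.2, 0.6], [0.5, 0.3, 0.2]])
p3 = np.array([[0.4, 0.4, 0.2], [0.3, 0.5, 0.2], [0.1, 0.2, 0.7], [0.3, 0.4, 0.3]])
probas = np.stack([p1, p2, p3])           # shape: (scales, trials, classes)
val_acc = np.array([0.82, 0.78, 0.65])    # per-scale validation accuracy
fused = fuse(probas, val_acc)
print(fused.argmax(axis=1))               # fused per-trial class decisions
```

Because the weighted average of probability distributions is itself a distribution, the fused output can be argmaxed directly; a learned weighting would replace the fixed softmax-of-accuracy rule.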


2020 ◽  
Vol 6 (3) ◽  
pp. 255-287
Author(s):  
Wanrou Hu ◽  
Gan Huang ◽  
Linling Li ◽  
Li Zhang ◽  
Zhiguo Zhang ◽  
...  

Emotions, formed in the process of perceiving the external environment, directly affect human daily life, including social interaction, work efficiency, physical wellness, and mental health. In recent decades, emotion recognition has become a promising research direction with significant application value. Taking advantage of EEG signals' high time resolution and the rich media information of video-based external emotion evoking, video-triggered emotion recognition with electroencephalogram (EEG) signals has proven a useful tool for conducting emotion-related studies in a laboratory environment and provides constructive technical support for establishing real-time emotion interaction systems. In this paper, we focus on video-triggered EEG-based emotion recognition and present a systematic introduction to the currently available video-triggered EEG-based emotion databases together with the corresponding analysis methods. First, current video-triggered EEG databases for emotion recognition (e.g., the DEAP, MAHNOB-HCI, and SEED series databases) are presented in full detail. Then, the commonly used EEG feature extraction, feature selection, and modeling methods in video-triggered EEG-based emotion recognition are systematically summarized, and a brief review of the current state of video-triggered EEG-based emotion studies is provided. Finally, the limitations and possible prospects of the existing video-triggered EEG-emotion databases are fully discussed.

