Multi-Feature Input Deep Forest for EEG-Based Emotion Recognition

2021 ◽  
Vol 14 ◽  
Author(s):  
Yinfeng Fang ◽  
Haiyang Yang ◽  
Xuguang Zhang ◽  
Han Liu ◽  
Bo Tao

Due to the rapid development of human–computer interaction, affective computing has attracted more and more attention in recent years. For emotion recognition, electroencephalogram (EEG) signals are easier to record than other physiological measurements and are not easily camouflaged. Because of the high-dimensional nature of EEG data and the diversity of human emotions, it is difficult to extract effective EEG features and recognize emotion patterns. This paper proposes a multi-feature deep forest (MFDF) model to identify human emotions. The EEG signals are first divided into several frequency bands, and the power spectral density (PSD) and differential entropy (DE) are then extracted from each band and from the original signal as features. A five-class emotion model is used to label five emotions: neutral, angry, sad, happy, and pleasant. With either the original features or dimension-reduced features as input, a deep forest is constructed to classify the five emotions. The experiments are conducted on a public dataset for emotion analysis using physiological signals (DEAP). The experimental results are compared with traditional classifiers, including K-Nearest Neighbors (KNN), Random Forest (RF), and Support Vector Machine (SVM). The MFDF achieves an average recognition accuracy of 71.05%, which is 3.40%, 8.54%, and 19.53% higher than that of RF, KNN, and SVM, respectively. Moreover, the accuracies with dimension-reduced features and raw EEG signals as input are only 51.30% and 26.71%, respectively. The results of this study show that the method can effectively contribute to EEG-based emotion classification tasks.
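The two features named above can be sketched as follows: a minimal numpy sketch of per-band PSD and differential entropy (DE) extraction, assuming a 128 Hz sampling rate and conventional band limits; the paper's exact band definitions and estimators may differ.

```python
import numpy as np

FS = 128  # assumed sampling rate in Hz (DEAP EEG is commonly downsampled to 128 Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_features(signal, fs=FS):
    """Return (PSD, DE) per frequency band for a single-channel epoch."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)  # periodogram
    feats = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        psd = power[mask].mean()  # mean band power
        # DE of a Gaussian band-limited signal: 0.5 * ln(2*pi*e*variance),
        # with the band variance approximated by the summed band power
        de = 0.5 * np.log(2 * np.pi * np.e * power[mask].sum())
        feats[name] = (psd, de)
    return feats

rng = np.random.default_rng(0)
feats = band_features(rng.standard_normal(FS * 4))  # 4 s of synthetic "EEG"
```

Each band thus contributes a two-element feature pair; concatenating the pairs over bands and channels yields the input vector for the forest.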

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5135
Author(s):  
Ngoc-Dau Mai ◽  
Boon-Giin Lee ◽  
Wan-Young Chung

In this research, we develop an affective computing method based on machine learning for emotion recognition using a wireless protocol and a custom-designed wearable electroencephalography (EEG) device. The system collects EEG signals using an eight-electrode placement on the scalp: two of these electrodes are placed on the frontal lobe, and the other six on the temporal lobe. We performed experiments on eight subjects while they watched emotive videos. Six entropy measures were employed to extract suitable features from the EEG signals. Next, we evaluated our proposed models using three popular classifiers for emotion classification: a support vector machine (SVM), a multi-layer perceptron (MLP), and a one-dimensional convolutional neural network (1D-CNN); both subject-dependent and subject-independent strategies were used. Our experimental results showed that the highest average accuracies achieved in the subject-dependent and subject-independent cases were 85.81% and 78.52%, respectively; these accuracies were achieved using a combination of the sample entropy measure and the 1D-CNN. Moreover, through electrode selection, our study identifies the T8 position (above the right ear) in the temporal lobe as the most critical channel among the proposed measurement positions for emotion classification. Our results demonstrate the feasibility and efficiency of our proposed EEG-based affective computing method for emotion recognition in real-world applications.
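The sample entropy measure that produced the best results above can be sketched as follows; this is a naive O(n²) implementation with the common parameter choices m = 2 and r = 0.2·σ, which are assumptions rather than the paper's reported settings.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal (naive O(n^2) version)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()  # tolerance as a fraction of the signal's std
    def match_count(length):
        # embed the signal into overlapping windows of the given length
        windows = np.lib.stride_tricks.sliding_window_view(x, length)
        # Chebyshev distance between every pair of windows
        d = np.max(np.abs(windows[:, None, :] - windows[None, :, :]), axis=-1)
        return (np.sum(d <= r) - len(windows)) / 2  # exclude self-matches
    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(1)
irregular = sample_entropy(rng.standard_normal(300))               # white noise
regular = sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 300)))  # pure sine
```

A more irregular signal yields a higher value, which is what makes the measure useful as an EEG feature.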


2020 ◽  
Vol 11 (1) ◽  
pp. 1-16
Author(s):  
Rana Seif Fathalla ◽  
Wafa Saad Alshehri

Affective computing aims to create smart systems able to interact emotionally with users. For effective affective computing experiences, emotions should be detected accurately. The influence of emotion appears in all human modalities, such as facial expression, voice, and body language, as well as in the different bio-parameters of the agent, such as electrodermal activity (EDA), respiration patterns, skin conductance, and temperature, and in brainwaves measured by electroencephalography (EEG). This review provides an overview of the emotion recognition process, its methodology, and its methods. It also explains EEG-based emotion recognition as an example, demonstrating the required steps: capturing the EEG signals during the emotion elicitation process, extracting features using techniques such as empirical mode decomposition (EMD) and variational mode decomposition (VMD), and finally classifying emotions using classifiers including the support vector machine (SVM) and deep neural networks (DNN).


2021 ◽  
Vol 335 ◽  
pp. 04001
Author(s):  
Didar Dadebayev ◽  
Goh Wei Wei ◽  
Tan Ee Xion

Emotion recognition, as a branch of affective computing, has attracted great attention in recent decades, as it can enable more natural brain–computer interface systems. Electroencephalography (EEG) has proven to be an effective modality for emotion recognition, with which user affective states can be tracked and recorded, especially for primitive emotional events such as arousal and valence. Although brain signals have been shown to correlate with emotional states, the effectiveness of proposed models is somewhat limited. The challenge is improving accuracy, and appropriate extraction of valuable features might be a key to success. This study proposes a framework that incorporates fractal dimension features and a recursive feature elimination approach to enhance the accuracy of EEG-based emotion recognition. Fractal dimension and spectrum-based features will be extracted and used for more accurate emotional state recognition. Recursive Feature Elimination will be used as the feature selection method, whereas the classification of emotions will be performed by the Support Vector Machine (SVM) algorithm. The proposed framework will be tested on a widely used public database, and the results are expected to demonstrate higher accuracy and robustness compared to other studies. The contribution of this study is primarily the improvement of EEG-based emotion classification accuracy. There is a potential restriction on how generalizable the results can be, as different EEG datasets might yield different results for the same framework. Therefore, experimenting with different EEG datasets and testing alternative feature selection schemes would be interesting future work.
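The recursive feature elimination step of the proposed framework can be sketched as follows. The sketch uses a least-squares linear fit as a stand-in for the SVM weight criterion, so it illustrates the elimination loop rather than the authors' exact pipeline.

```python
import numpy as np

def recursive_elimination(X, y, n_keep):
    """Drop the feature with the smallest absolute weight in a linear fit,
    repeatedly, until n_keep features remain (the RFE loop)."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        keep.pop(int(np.argmin(np.abs(w))))  # eliminate the weakest feature
    return keep

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 6))
# binary labels driven by features 0 and 3 only
y = np.sign(X[:, 0] + 0.8 * X[:, 3] + 0.1 * rng.standard_normal(200))
selected = recursive_elimination(X, y, n_keep=2)
```

On this synthetic data the loop keeps exactly the two informative features; with real EEG features the retained set would feed the SVM classifier.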


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-15 ◽  
Author(s):  
Hao Chao ◽  
Liang Dong ◽  
Yongli Liu ◽  
Baoyun Lu

Emotion recognition based on multichannel electroencephalogram (EEG) signals is a key research area in the field of affective computing. Traditional methods extract EEG features from each channel based on extensive domain knowledge and ignore the spatial characteristics and global synchronization information across all channels. This paper proposes a global feature extraction method that encapsulates the multichannel EEG signals into gray images. The maximal information coefficient (MIC) for all channels was first measured. Subsequently, an MIC matrix was constructed according to the electrode arrangement rules and represented by an MIC gray image. Finally, a deep learning model designed with two principal component analysis convolutional layers and a nonlinear transformation operation extracted the spatial characteristics and global interchannel synchronization features from the constructed feature images, which were then input to support vector machines to perform the emotion recognition tasks. Experiments were conducted on the benchmark dataset for emotion analysis using EEG, physiological, and video signals. The experimental results demonstrated that the global synchronization features and spatial characteristics are beneficial for recognizing emotions and the proposed deep learning model effectively mines and utilizes the two salient features.
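The construction of the channel-synchronization gray image can be sketched as follows; absolute Pearson correlation is used here as a simple stand-in for the maximal information coefficient (MIC), and the row/column ordering ignores the electrode arrangement rules described above.

```python
import numpy as np

def synchrony_image(eeg):
    """Map multichannel EEG (channels x samples) to an 8-bit gray image of
    pairwise channel synchronization (absolute Pearson correlation)."""
    sync = np.abs(np.corrcoef(eeg))  # channels x channels, values in [0, 1]
    return np.round(sync * 255).astype(np.uint8)

rng = np.random.default_rng(3)
eeg = rng.standard_normal((32, 512))  # 32 channels, 512 samples
eeg[1] = eeg[0] + 0.05 * rng.standard_normal(512)  # a strongly synchronized pair
img = synchrony_image(eeg)
```

Strongly synchronized channel pairs appear as bright pixels, which is the spatial structure the subsequent convolutional layers mine.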


Author(s):  
Bülent Yılmaz ◽  
Cengiz Gazeloğlu ◽  
Fatih Altındiş

Neuromarketing is the application of neuroscientific approaches to analyze and understand economically relevant behavior. In this study, the effect of loud and rhythmic music in a sample neuromarketing setup is investigated. The second aim was to develop an approach for predicting preference using only brain signals. In this work, 19-channel EEG signals were recorded and two experimental paradigms were implemented while participants viewed women's shoes: no music/silence, and rhythmic, loud music played through headphones. For each 10-sec epoch, the normalized power spectral density (PSD) of the EEG data in six frequency bands was estimated using the Burg method. The effect of music was investigated by comparing the mean differences between the music and no-music groups using an independent two-sample t-test. In the preference prediction part, sequential forward selection, k-nearest neighbors (k-NN), support vector machines (SVM), and 5-fold cross-validation were used. It was found that music did not affect "like" decisions in any of the power bands; on the contrary, music affected "dislike" decisions in all bands without exception. Furthermore, the accuracies obtained in the preference prediction study were between 77.5% and 82.5% for the k-NN and SVM techniques. The results of the study showed the feasibility of using EEG signals to investigate the effect of music on purchasing behavior and to predict an individual's preference.
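The per-epoch normalized band-power feature can be sketched as follows; a periodogram is used here as a stand-in for the Burg PSD estimator, and the six band limits are illustrative assumptions rather than the study's reported definitions.

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "low-gamma": (30, 45), "high-gamma": (45, 60)}

def normalized_band_power(epoch, fs=FS):
    """Per-band power of one epoch, normalized so the six bands sum to 1."""
    freqs = np.fft.rfftfreq(len(epoch), 1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2  # periodogram
    powers = np.array([psd[(freqs >= lo) & (freqs < hi)].sum()
                       for lo, hi in BANDS.values()])
    return powers / powers.sum()

rng = np.random.default_rng(4)
p = normalized_band_power(rng.standard_normal(FS * 10))  # one 10-sec epoch
```

Normalizing across bands makes epochs comparable regardless of overall signal amplitude, which is what allows group means to be compared with a t-test.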


2021 ◽  
pp. 330-342
Author(s):  
Nilima Gautam ◽  
Jagdish Lal Raheja ◽  
Rajesh Bhadada

Human health is affected by the physical ventures and emotional states endured during regular activities, which frequently shape attitudes and substantially affect health outcomes. Human emotions play a vital role in perception, cognition, memory, attention, reasoning, and decision-making. Several approaches have been used to automatically recognize users' sentiment through images, speech, text, video, and physiological signals. Truthful detection of human emotions and personality behaviours can be advantageous in many situations, such as interviews, group discussions, polygraph tests, and for depressed persons, paralytic patients, blind people, shooters, etc. So, there is a need for an emotion recognizer. Though researchers have tried several methods for emotion recognition, the accuracy of detection is always in question. The main aim is to develop a precise classification model for better accuracy of the emotion recognition system. Therefore, an emotion detector using a GSR sensor (Grove GSR Sensor V1.2) is proposed in the current research work. Twenty groups of pupils were observed during six different activities, viz. happy, relax, stress, pain, reading, and math calculation. This research work was carried out in the machine vision lab of the Central Electronics Engineering Research Institute (CEERI), Pilani, Jhunjhunu, India. The moving average window method was used for data pre-processing. Supervised machine learning models, viz. k-nearest neighbours (KNN), support vector machine (SVM), and decision tree (DT), were used for emotion classification. The decision tree model gives the best results, with an average accuracy of 97.61%. Pain is the most correctly recognized activity, with greater than 99% accuracy.
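The moving-average pre-processing step applied to the raw GSR trace can be sketched as follows; the window length is an illustrative assumption.

```python
import numpy as np

def moving_average(x, window=9):
    """Smooth a raw GSR trace with a simple moving-average window."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

rng = np.random.default_rng(5)
# synthetic noisy GSR-like trace: slow oscillation plus measurement noise
raw = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.3 * rng.standard_normal(200)
smooth = moving_average(raw)
```

The averaging suppresses high-frequency measurement noise while preserving the slow skin-conductance trend the classifiers rely on.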


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Ayan Seal ◽  
Puthi Prem Nivesh Reddy ◽  
Pingali Chaithanya ◽  
Arramada Meghana ◽  
Kamireddy Jahnavi ◽  
...  

Human emotion recognition has been a major field of research in recent decades owing to its noteworthy academic and industrial applications. However, most state-of-the-art methods identify emotions by analyzing facial images; emotion recognition using electroencephalogram (EEG) signals has received less attention, even though EEG signals have the advantage of capturing real emotion. Moreover, very few EEG signal databases are publicly available for affective computing. In this work, we present a database consisting of EEG signals of 44 volunteers, twenty-three of whom are female. A 32-channel CLARITY EEG traveler sensor is used to record four emotional states, namely happy, fear, sad, and neutral, elicited by showing 12 videos, so that 3 videos are devoted to each emotion. Participants are mapped to the emotion that they felt after watching each video. The recorded EEG signals are then used to classify the four types of emotions based on the discrete wavelet transform and an extreme learning machine (ELM), in order to report an initial benchmark classification performance. The ELM algorithm is used for channel selection followed by subband selection. The proposed method performs best when features are captured from the gamma subband of the FP1-F7 channel, with 94.72% accuracy. The presented database will be made available to researchers for affective recognition applications.
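The extreme learning machine used for benchmarking can be sketched as follows: a fixed random hidden layer followed by a least-squares output fit, shown here on synthetic data rather than the presented EEG database.

```python
import numpy as np

def train_elm(X, y, hidden=128, seed=0):
    """Extreme learning machine: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))  # random, never trained
    b = rng.standard_normal(hidden)
    beta, *_ = np.linalg.lstsq(np.tanh(X @ W + b), y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(6)
X = rng.standard_normal((300, 4))
y = np.sign(X[:, 0] * X[:, 1])  # a non-linear two-class target
W, b, beta = train_elm(X, y)
acc = np.mean(np.sign(elm_predict(X, W, b, beta)) == y)
```

Because only the output weights are solved for, training reduces to one least-squares problem, which is what makes ELM fast enough for channel and subband search.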


2020 ◽  
Vol 10 (5) ◽  
pp. 1619 ◽  
Author(s):  
Chao Pan ◽  
Cheng Shi ◽  
Honglang Mu ◽  
Jie Li ◽  
Xinbo Gao

Emotion plays a central role in human attention, decision-making, and communication. Electroencephalogram (EEG)-based emotion recognition has developed considerably due to the application of brain–computer interfaces (BCI) and its effectiveness compared to body expressions and other physiological signals. Despite significant progress in affective computing, emotion recognition is still not a fully solved problem. This paper introduces Logistic Regression (LR) with a Gaussian kernel and a Laplacian prior for EEG-based emotion recognition. The Gaussian kernel enhances the separability of the EEG data in the transformed space, while the Laplacian prior promotes the sparsity of the learned LR regressors to avoid over-specification. The LR regressors are optimized using the logistic regression via variable splitting and augmented Lagrangian (LORSAL) algorithm; for simplicity, the introduced method is denoted LORSAL. Experiments were conducted on the dataset for emotion analysis using EEG, physiological and video signals (DEAP). Various spectral features and features combining electrodes (power spectral density (PSD), differential entropy (DE), differential asymmetry (DASM), rational asymmetry (RASM), and differential caudality (DCAU)) were extracted from different frequency bands (Delta, Theta, Alpha, Beta, Gamma, and Total) of the EEG signals. Naive Bayes (NB), the support vector machine (SVM), linear LR with L1-regularization (LR_L1), and linear LR with L2-regularization (LR_L2) were used for comparison in the binary emotion classification for valence and arousal. LORSAL obtained the best classification accuracies (77.17% and 77.03% for valence and arousal, respectively) on the DE features extracted from the total frequency band. This paper also investigates the critical frequency bands in emotion recognition. The experimental results showed the superiority of the Gamma and Beta bands in classifying emotions; DE was the most informative feature, while DASM and DCAU had lower computational complexity with relatively good accuracies. An analysis of LORSAL and recent deep learning (DL) methods is included in the discussion, and conclusions and future work are presented in the final section.
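The DE-derived features compared above can be sketched as follows, using the standard closed forms from the EEG emotion literature (DE of a Gaussian band-limited signal, DASM as a left-minus-right DE difference, RASM as a left/right DE ratio); the electrode pairing here is illustrative.

```python
import numpy as np

def differential_entropy(band_signal):
    """DE of a band-limited signal under a Gaussian assumption:
    0.5 * ln(2 * pi * e * variance)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(band_signal))

rng = np.random.default_rng(7)
left, right = rng.standard_normal((2, 512))  # an illustrative left/right electrode pair
de_left, de_right = differential_entropy(left), differential_entropy(right)
dasm = de_left - de_right  # differential asymmetry
rasm = de_left / de_right  # rational asymmetry
```

The asymmetry features halve the dimensionality relative to raw per-electrode DE, which is why DASM and DCAU trade a little accuracy for lower computational cost.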


2020 ◽  
pp. 1-9
Author(s):  
Alejandro JARILLO-SILVA ◽  
Víctor A. GOMEZ-PEREZ ◽  
Eduardo A. ESCOTTO-CÓRDOVA ◽  
Omar A. DOMÍNGUEZ-RAMÍREZ

The objective of this work is to present a procedure for the classification of basic emotions based on the analysis of EEG (electroencephalogram) signals. In this case, 25 subjects between 20 and 35 years of age were stimulated, of whom 17 were men and 9 women. The stimulus used to induce positive, negative, and neutral emotions with a certain level of excitation (activation) was a set of previously evaluated video clips. The processed and analyzed signals belong to the gamma and beta frequency bands of the F3, F4, P7, P8, T7, T8, O1, and O2 electrodes. The characteristic variables with the best results are the entropies of each band of each electrode. Cross-validation algorithms are applied, followed by the principal component analysis algorithm. Finally, four classifier algorithms are used: classification trees, Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), and k-Nearest Neighbors (KNN). The results confirm that, by carrying out the proposed procedure, the EEG signals contain enough information to allow the recognition of basic emotions.
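The principal component analysis step can be sketched as follows via SVD of the centered feature matrix; the feature dimensions (entropies of 8 electrodes in 2 bands) are illustrative.

```python
import numpy as np

def pca(X, n_components):
    """Project data onto its leading principal components via SVD."""
    Xc = X - X.mean(axis=0)  # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # component scores

rng = np.random.default_rng(8)
X = rng.standard_normal((100, 16))  # e.g. entropies of 8 electrodes x 2 bands
scores = pca(X, n_components=4)
```

The projected scores are mutually uncorrelated and ordered by decreasing variance, which is what makes them compact inputs for the four classifiers.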


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2694
Author(s):  
Sang-Yeong Jo ◽  
Jin-Woo Jeong

Visual memorability is a measure of how easily media content can be memorized. Predicting the visual memorability of media content has recently become more important because it can affect the design principles of multimedia visualization, advertisement, etc. Previous studies on predicting the visual memorability of images generally exploited visual features (e.g., color intensity and contrast) or semantic information (e.g., class labels) that can be extracted from images. Some other works tried to exploit electroencephalography (EEG) signals of human subjects to predict the memorability of text (e.g., word pairs). Compared to previous works, we focus on predicting the visual memorability of images based on human biological feedback (i.e., EEG signals). For this, we design a visual memory task where each subject is asked to answer whether they correctly remember a particular image 30 min after glancing at a set of images sampled from the LaMem dataset. During the visual memory task, EEG signals are recorded from the subjects as human biological feedback. The collected EEG signals are then used to train various classification models for the prediction of image memorability. Finally, we evaluate and compare the performance of the classification models, including deep convolutional neural networks and classical methods such as support vector machines, decision trees, and k-nearest neighbors. The experimental results show that EEG-based prediction of memorability is still challenging, but a promising approach with various opportunities and potential.
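One of the classical baselines compared above, k-nearest neighbors, can be sketched as follows on synthetic two-class data; Euclidean distance and k = 3 are illustrative choices.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-nearest-neighbors classifier: Euclidean distance, majority vote."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]  # indices of the k closest points
    return np.array([np.bincount(y_train[idx]).argmax() for idx in nearest])

rng = np.random.default_rng(9)
X_train = np.vstack([rng.standard_normal((50, 2)),         # class 0 around (0, 0)
                     rng.standard_normal((50, 2)) + 4.0])  # class 1 around (4, 4)
y_train = np.array([0] * 50 + [1] * 50)
pred = knn_predict(X_train, y_train, np.array([[0.0, 0.0], [4.0, 4.0]]))
```

With EEG-derived feature vectors in place of the synthetic points, the same vote over nearest recordings yields a memorability prediction.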

