Entropy and the Emotional Brain: Overview of a Research Field

2021 ◽  
Author(s):  
Beatriz García-Martínez ◽  
Antonio Fernández-Caballero ◽  
Arturo Martínez-Rodrigo

In recent years, there has been a notable increase in the number of studies assessing brain dynamics for the recognition of emotional states by means of nonlinear methodologies. More precisely, different entropy metrics have been applied to the analysis of electroencephalographic (EEG) recordings for the detection of emotions. In this sense, regularity-based entropy metrics, symbolic predictability-based entropy indices, and multiscale and multilag variants of these methods have been successfully tested in a series of studies on emotion recognition from EEG recordings. This chapter aims to unify all those contributions to this scientific area, summarizing the main discoveries recently achieved in this research field.
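
As a rough illustration of the regularity-based metrics surveyed here, the sketch below computes sample entropy together with the coarse-graining step behind its multiscale variants; the parameters and the synthetic signal are assumptions, not material from the chapter.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D signal; r is a fraction of the signal's std."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(dim):
        # all dim-length templates; count pairs within tolerance (Chebyshev)
        t = np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        return (np.sum(d <= tol) - len(t)) / 2    # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale):
    """Non-overlapping window averages, as used by multiscale variants."""
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

eeg = np.random.randn(1000)                    # stand-in for one EEG channel
mse = [sample_entropy(coarse_grain(eeg, s)) for s in (1, 2, 3)]
```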

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 142
Author(s):  
Chunting Wan ◽  
Dongyi Chen ◽  
Zhiqi Huang ◽  
Xi Luo

Multimodal bio-signal acquisition based on wearable devices, with virtual reality (VR) as the stimulus source, is a promising technique in the emotion recognition research field. Numerous studies have shown that emotional states can be better evoked through Immersive Virtual Environments (IVE). The main goal of this paper is to provide researchers with a system for emotion recognition in VR environments. In this paper, we present a wearable forehead bio-signal acquisition pad that attaches to a Head-Mounted Display (HMD), termed the HMD Bio Pad. This system can simultaneously record emotion-related two-channel electroencephalography (EEG), one-channel electrodermal activity (EDA), photoplethysmography (PPG) and skin temperature (SKT) signals. In addition, we develop a human-computer interaction (HCI) interface with which researchers can carry out emotion recognition research using a VR HMD as the stimulus presentation device. To evaluate the performance of the proposed system, we conducted separate experiments to validate the quality of each bio-signal. To validate the EEG signal, we assessed performance on an eyes-blink task and an eyes-open/eyes-closed task. The eyes-blink task indicates that the proposed system achieves EEG signal quality comparable to that of a dedicated bio-signal measuring device. The eyes-open/eyes-closed task shows that the proposed system can efficiently record the alpha rhythm. We then used the signal-to-noise ratio (SNR) and the skin conductance response (SCR) signal to validate the performance of the EDA acquisition system. A filtered EDA signal, with a high mean SNR of 28.52 dB, is plotted on the HCI interface. Moreover, the SCR signal related to the stimulus response can be correctly extracted from the EDA signal. The SKT acquisition system was validated by a temperature-change experiment in which subjects experienced unpleasant emotion. The pulse rate (PR) estimated from the PPG signal achieved a low mean average absolute error (AAE) of 1.12 beats per minute (BPM) over 8 recordings. In summary, the proposed HMD Bio Pad offers a portable, comfortable and easy-to-wear device for recording bio-signals. The proposed system could contribute to emotion recognition research in VR environments.
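
As a hedged illustration of the EDA validation step described above (not the authors' code), the sketch below low-pass filters a synthetic EDA trace and reports the SNR of the filtered signal in dB; the sampling rate, filter order and cutoff, and the SNR definition are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 32.0                                     # assumed EDA sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
# synthetic trace: tonic level + one SCR-like bump + sensor noise
raw_eda = 2 + 0.3 * np.exp(-((t - 30) ** 2) / 8) + 0.05 * np.random.randn(t.size)

# EDA is slow; a ~1 Hz low-pass Butterworth keeps SCRs and removes sensor noise
b, a = butter(4, 1.0 / (fs / 2), btype="low")
filtered = filtfilt(b, a, raw_eda)

noise = raw_eda - filtered                    # what the filter removed
snr_db = 10 * np.log10(np.sum(filtered ** 2) / np.sum(noise ** 2))
print(f"SNR of filtered EDA: {snr_db:.2f} dB")
```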


Author(s):  
Miao Cheng ◽  
Ah Chung Tsoi

As a general means of expression, audio has attracted much attention, and audio analysis and recognition have wide applications in the real world. Audio emotion recognition (AER) attempts to understand the emotional state of a human from given utterance signals, and has been studied broadly for its role in the development of friendly human-machine interfaces. Though several state-of-the-art auditory methods have been devised for audio recognition, most of them focus on the discriminative use of acoustic features, while the efficiency demands of recognition feedback are ignored. This limits the practical application of AER, where rapid learning of emotion patterns is desired. In order to make prediction of audio emotion feasible, the speaker-dependent patterns of audio emotions are learned with multiresolution analysis, and fractal dimension (FD) features are calculated for acoustic feature extraction. This makes it possible to efficiently learn the intrinsic characteristics of auditory emotions, while the utterance features are learned from the FDs of each sub-band. Experimental results show the proposed method is able to provide comparable performance for AER.
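
A minimal sketch of the feature extraction idea follows: decompose a signal into sub-bands by multiresolution analysis and compute a fractal dimension per band. The choice of Higuchi's algorithm and the wavelet settings are assumptions, and PyWavelets is assumed available.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalized curve length at scale k (Higuchi's definition)
            lengths.append(np.sum(np.abs(np.diff(x[idx]))) * (n - 1)
                           / ((len(idx) - 1) * k * k))
        lk.append(np.mean(lengths))
    ks = np.arange(1, kmax + 1)
    # the slope of log L(k) versus log(1/k) estimates the fractal dimension
    return np.polyfit(np.log(1.0 / ks), np.log(lk), 1)[0]

utterance = np.random.randn(4000)                   # stand-in for a speech signal
subbands = pywt.wavedec(utterance, "db4", level=4)  # multiresolution analysis
fd_features = [higuchi_fd(band) for band in subbands]
```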


PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258089
Author(s):  
Amelie M. Hübner ◽  
Ima Trempler ◽  
Corinna Gietmann ◽  
Ricarda I. Schubotz

Emotional sensations and inferring another's emotional states have been suggested to depend on predictive models of the causes of bodily sensations, so-called interoceptive inferences. In this framework, higher sensibility for interoceptive changes (IS) reflects higher precision of interoceptive signals. The present study examined the link between IS and emotion recognition, testing whether individuals with higher IS recognize others' emotions more easily and learn more readily from biased probabilities of emotional expressions. We recorded skin conductance responses (SCRs) from forty-six healthy volunteers performing a speeded-response task, which required them to indicate whether a neutral facial expression dynamically turned into a happy or fearful expression. Moreover, we varied the probabilities of emotional expressions via their block-wise base rates, aiming to generate a bias for the more frequently encountered emotion. We found that individuals with higher IS showed lower thresholds for emotion recognition, reflected in decreased reaction times for emotional expressions, especially those of high intensity. Moreover, individuals with increased IS benefited more from a biased probability of an emotion, reflected in decreased reaction times for expected emotions. Lastly, we found weak evidence supporting a differential modulation of SCR by IS as a function of varying probabilities. Our results indicate that higher interoceptive sensibility facilitates the recognition of emotional changes and is accompanied by a more precise adaptation to emotion probabilities.


2021 ◽  
Author(s):  
Talieh Seyed Tabtabae

Automatic Emotion Recognition (AER) is an emerging research area in the Human-Computer Interaction (HCI) field. As computers become more popular every day, the study of interaction between humans (users) and computers is attracting more attention. In order to have a more natural and friendly interface between humans and computers, it would be beneficial to give computers the ability to recognize situations the same way a human does. Equipped with an emotion recognition system, computers would be able to recognize their users' emotional states and react appropriately. In today's HCI systems, machines can recognize the speaker and the content of the speech using speaker identification and speech recognition techniques. If machines are also equipped with emotion recognition techniques, they can know "how it is said," react more appropriately, and make the interaction more natural. One of the most important human communication channels is the auditory channel, which carries speech and vocal intonation. In fact, people can perceive each other's emotional state by the way they talk. Therefore, in this work speech signals are analyzed in order to build an automatic system that recognizes the human emotional state. Six discrete emotional states are considered in this research: anger, happiness, fear, surprise, sadness, and disgust. A set of novel spectral features is proposed in this contribution. Two approaches are applied and their results compared. In the first approach, all the acoustic features are extracted from consecutive frames along the speech signals, and the statistical values of the features constitute the feature vectors. A Support Vector Machine (SVM), a relatively new approach in the field of machine learning, is used to classify the emotional states. In the second approach, spectral features are extracted from non-overlapping logarithmically-spaced frequency sub-bands, and, in order to make use of all the extracted information, sequence-discriminant SVMs are adopted. The empirical results show that the employed techniques are very promising.
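
A hedged sketch of the first approach follows: frame-level spectral features summarized by per-utterance statistics and classified with an SVM. The specific features here (spectral centroid and rolloff) are illustrative stand-ins, not the novel set the thesis proposes, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

def utterance_features(signal, fs=16000, frame=400, hop=160):
    """Frame-wise spectral centroid and rolloff, summarized by mean and std."""
    feats = []
    for start in range(0, len(signal) - frame, hop):
        spec = np.abs(np.fft.rfft(signal[start:start + frame]))
        freqs = np.fft.rfftfreq(frame, 1 / fs)
        centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
        rolloff = freqs[np.searchsorted(np.cumsum(spec), 0.85 * np.sum(spec))]
        feats.append((centroid, rolloff))
    feats = np.array(feats)
    # statistics over frames form the utterance-level feature vector
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

X = np.array([utterance_features(np.random.randn(16000)) for _ in range(60)])
y = np.random.randint(0, 6, size=60)          # six emotion classes (synthetic)
clf = SVC(kernel="rbf").fit(X, y)
```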


2021 ◽  
Vol 335 ◽  
pp. 04001
Author(s):  
Didar Dadebayev ◽  
Goh Wei Wei ◽  
Tan Ee Xion

Emotion recognition, as a branch of affective computing, has attracted great attention in the last decades, as it can enable more natural brain-computer interface systems. Electroencephalography (EEG) has proven to be an effective modality for emotion recognition, with which user affective states can be tracked and recorded, especially for primitive emotional events such as arousal and valence. Although brain signals have been shown to correlate with emotional states, the effectiveness of proposed models is somewhat limited. The challenge is improving accuracy, and appropriate extraction of valuable features may be the key to success. This study proposes a framework that incorporates fractal dimension features and a recursive feature elimination approach to enhance the accuracy of EEG-based emotion recognition. Fractal dimension and spectrum-based features will be extracted and used for more accurate emotional state recognition. Recursive Feature Elimination will be used as the feature selection method, and the classification of emotions will be performed by the Support Vector Machine (SVM) algorithm. The proposed framework will be tested on a widely used public database, and the results are expected to demonstrate higher accuracy and robustness compared to other studies. The contribution of this study is primarily the improvement of EEG-based emotion classification accuracy. There is a potential restriction on how generalizable the results can be, as different EEG datasets might yield different results for the same framework. Therefore, experimenting with different EEG datasets and testing alternative feature selection schemes would be very interesting future work.
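
Since the framework is built from standard components, a minimal sketch is easy to give; the synthetic features below merely stand in for fractal dimension and spectral features extracted from a public EEG database, and the RFE settings are assumptions.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 64))    # 200 trials x 64 candidate features
y = rng.integers(0, 2, size=200)      # binary label, e.g. high vs. low valence

# RFE needs an estimator exposing feature weights, hence the linear kernel here
selector = RFE(SVC(kernel="linear"), n_features_to_select=16, step=4)
X_sel = selector.fit_transform(X, y)

acc = cross_val_score(SVC(kernel="rbf"), X_sel, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```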


2016 ◽  
Vol 7 (1) ◽  
pp. 58-68 ◽  
Author(s):  
Imen Trabelsi ◽  
Med Salim Bouhlel

Automatic Speech Emotion Recognition (SER) is a current research topic in the field of Human-Computer Interaction (HCI) with a wide range of applications. The purpose of a speech emotion recognition system is to automatically classify a speaker's utterances into different emotional states such as disgust, boredom, sadness, neutral, and happiness. The speech samples in this paper are from the Berlin emotional database. Mel-frequency cepstral coefficients (MFCC), linear prediction coefficients (LPC), linear prediction cepstral coefficients (LPCC), perceptual linear prediction (PLP) and relative spectral perceptual linear prediction (RASTA-PLP) features are used to characterize the emotional utterances, using a combination of Gaussian mixture models (GMM) and Support Vector Machines (SVM) based on the Kullback-Leibler divergence kernel. In this study, the effects of feature type and dimension are comparatively investigated. The best results are obtained with 12-coefficient MFCCs. Utilizing the proposed features, a recognition rate of 84% has been achieved, which is close to the performance of humans on this database.
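
A sketch of the front end is given below: 12-coefficient MFCCs (the configuration reported as best) feeding one GMM per emotion, scored by average log-likelihood. The paper's actual back end, an SVM with a Kullback-Leibler divergence kernel over the GMMs, is omitted here, and the file paths and utterance lists are hypothetical.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path):
    """Load an utterance and return its (frames x 12) MFCC matrix."""
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12).T

def train_models(utterances_by_emotion, n_components=8):
    """Fit one GMM per emotion over the pooled MFCC frames of its utterances."""
    return {emo: GaussianMixture(n_components=n_components).fit(
                np.vstack([mfcc_frames(p) for p in paths]))
            for emo, paths in utterances_by_emotion.items()}

def classify(path, models):
    """Pick the emotion whose GMM assigns the highest average log-likelihood."""
    frames = mfcc_frames(path)
    return max(models, key=lambda emo: models[emo].score(frames))
```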


2019 ◽  
Vol 75 (8) ◽  
pp. 1658-1667 ◽  
Author(s):  
Ted Ruffman ◽  
Jamin Halberstadt ◽  
Janice Murray ◽  
Fiona Jack ◽  
Tina Vater

Objectives: We examined empathic accuracy, comparing young versus older perceivers, and young versus older emoters. Empathic accuracy is related to but distinct from emotion recognition because perceiver judgments of emotion are based not on what an emoter looks to be feeling, but on what an emoter says s/he is actually feeling. Method: Young (≤30 years) and older (≥60 years) adults ("emoters") were unobtrusively videotaped while watching movie clips designed to elicit specific emotional states. The emoter videos were then presented to young and older "perceivers," who were instructed to infer what the emoters were feeling. Results: As predicted, older perceivers were less accurate than young perceivers. In addition, the emotions of young emoters were considerably easier to read than those of older emoters. There was also some evidence of an own-age advantage in emotion recognition in that older adults had particular difficulty assessing emotion in young faces. Discussion: These findings have important implications for real-world social adjustment, with older adults experiencing a combination of less emotional transparency and worse understanding of emotional experience.


Entropy ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. 511 ◽  
Author(s):  
Lizheng Pan ◽  
Zeming Yin ◽  
Shigang She ◽  
Aiguo Song

Emotion recognition, which reveals human inner perception, has very important application prospects in human-computer interaction. In order to improve the accuracy of emotion recognition, a novel method combining fused nonlinear features and a team-collaboration identification strategy is proposed for emotion recognition using physiological signals. Four nonlinear features, namely approximate entropy (ApEn), sample entropy (SaEn), fuzzy entropy (FuEn) and wavelet packet entropy (WpEn), are employed to reflect emotional states deeply within each type of physiological signal. The features of the different physiological signals are then fused to represent the emotional states from multiple perspectives. Every classifier has its own advantages and disadvantages. In order to make full use of the advantages of several classifiers and avoid the limitations of a single one, a team-collaboration model is built and a team-collaboration decision-making mechanism is designed according to the proposed team-collaboration identification strategy, which is based on the fusion of a support vector machine (SVM), a decision tree (DT) and an extreme learning machine (ELM). Through analysis, the SVM is selected as the main classifier, with the DT and ELM as auxiliary classifiers. According to the designed decision-making mechanism, the proposed team-collaboration identification strategy can effectively employ different classification methods, making decisions based on the characteristics of the samples as assessed by SVM classification. For samples that the SVM identifies easily, the SVM directly determines the identification result, whereas for the remaining samples SVM, DT and ELM collaboratively determine the result, which effectively utilizes the characteristics of each classifier and improves the classification accuracy. The effectiveness and universality of the proposed method are verified on the Augsburg database and the Database for Emotion Analysis Using Physiological Signals (DEAP). The experimental results consistently indicate that the proposed method, combining fused nonlinear features and the team-collaboration identification strategy, performs better than existing methods.
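
A minimal sketch of the decision-making mechanism as described: the SVM decides alone when it is confident, and otherwise all three learners vote. An MLP stands in for the ELM here (scikit-learn ships no ELM), and the confidence threshold and synthetic data are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

def team_predict(x, svm, dt, elm, threshold=0.8):
    proba = svm.predict_proba(x.reshape(1, -1))[0]
    if proba.max() >= threshold:               # easy sample: SVM decides alone
        return svm.classes_[proba.argmax()]
    votes = [int(m.predict(x.reshape(1, -1))[0]) for m in (svm, dt, elm)]
    return np.bincount(votes).argmax()         # hard sample: majority vote

X, y = np.random.randn(120, 16), np.random.randint(0, 4, 120)
svm = SVC(probability=True).fit(X, y)
dt = DecisionTreeClassifier().fit(X, y)
elm = MLPClassifier(max_iter=500).fit(X, y)    # stand-in for the ELM
predictions = [team_predict(x, svm, dt, elm) for x in X]
```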


2019 ◽  
Vol 18 (04) ◽  
pp. 1359-1378
Author(s):  
Jianzhuo Yan ◽  
Hongzhi Kuai ◽  
Jianhui Chen ◽  
Ning Zhong

Emotion recognition is a highly noteworthy and challenging task in both cognitive science and affective computing. Neurobiological studies have revealed a partially synchronous oscillating phenomenon within the brain, which needs to be analyzed in terms of oscillatory synchronization. This combination of oscillation and synchronism is worth exploring further to achieve inspiring learning of emotion recognition models. In this paper, we propose a novel approach to valence- and arousal-based emotion recognition using EEG data. First, we construct the emotional oscillatory brain network (EOBN), inspired by the partially synchronous oscillating phenomenon, for emotional valence and arousal. Then, a feature selection method based on the coefficient of variation and Welch's t-test is used to identify the core pattern (cEOBN) within the EOBN for the different emotional dimensions. Finally, an emotion recognition model (ERM) is built by combining the cEOBN-inspired information obtained in the above process with different classifiers. The proposed approach combines the oscillation and synchronization characteristics of multi-channel EEG signals to recognize different emotional states along the valence and arousal dimensions. The cEOBN-based information effectively reduces the dimensionality of the data. The experimental results show that the proposed method can detect affective states at a reasonable level of accuracy.
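
The feature-selection step lends itself to a short sketch (the thresholds and the synthetic data are assumptions): rank candidate network features by their coefficient of variation and by Welch's t-test between the two classes, keeping those that are both stable and discriminative.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
X = rng.standard_normal((80, 32))    # trials x candidate EOBN features
y = rng.integers(0, 2, size=80)      # e.g. high vs. low valence

cv = X.std(axis=0) / (np.abs(X.mean(axis=0)) + 1e-12)  # coefficient of variation
# equal_var=False turns the two-sample t-test into Welch's t-test
_, pvals = ttest_ind(X[y == 0], X[y == 1], axis=0, equal_var=False)

# keep features that are discriminative (low p) and stable (low variation)
core = np.where((pvals < 0.05) & (cv < np.median(cv)))[0]
```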


Entropy ◽  
2019 ◽  
Vol 21 (6) ◽  
pp. 609 ◽  
Author(s):  
Gao ◽  
Cui ◽  
Wan ◽  
Gu

Exploring the manifestation of emotion in electroencephalogram (EEG) signals is helpful for improving the accuracy of emotion recognition. This paper introduces novel features based on multiscale information analysis (MIA) of EEG signals for distinguishing emotional states in four dimensions based on Russell's circumplex model. The algorithms were applied to extract features from the DEAP database, including a multiscale EEG complexity index in the time domain, and ensemble empirical mode decomposition (EEMD)-enhanced energy and fuzzy entropy in the frequency domain. A support vector machine and cross-validation were applied to assess classification accuracy. The classification performance of the MIA methods (accuracy = 62.01%, precision = 62.03%, recall/sensitivity = 60.51%, and specificity = 82.80%) was much higher than that of classical methods (accuracy = 43.98%, precision = 43.81%, recall/sensitivity = 41.86%, and specificity = 70.50%), whose extracted features carry similar energy information based on a discrete wavelet transform, fractal dimension, and sample entropy. In this study, we found that emotion recognition is more associated with high-frequency oscillations (51–100 Hz) of EEG signals than with low-frequency oscillations (0.3–49 Hz), and that the significance of the frontal and temporal regions is higher than that of other regions. Such information has predictive power and may provide more insights into analyzing the multiscale information of high-frequency oscillations in EEG signals.
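
Of the frequency-domain features named above, fuzzy entropy is compact enough to sketch; it replaces sample entropy's hard tolerance with an exponential membership function. Parameters and the synthetic input are assumptions.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Fuzzy entropy of a 1-D signal; r is a fraction of the signal's std."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def phi(dim):
        # zero-mean templates, Chebyshev distances, fuzzy membership degrees
        t = np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])
        t = t - t.mean(axis=1, keepdims=True)
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        mu = np.exp(-(d ** n) / tol)
        return (mu.sum() - len(t)) / (len(t) * (len(t) - 1))
    return -np.log(phi(m + 1) / phi(m))

band = np.random.randn(800)   # stand-in for one EEMD mode of an EEG channel
print(fuzzy_entropy(band))
```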

