EEG-Based Emotion Recognition Using an Improved Weighted Horizontal Visibility Graph

Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1870
Author(s):  
Tianjiao Kong ◽  
Jie Shao ◽  
Jiuyuan Hu ◽  
Xin Yang ◽  
Shiyiling Yang ◽  
...  

Emotion recognition, as a challenging and active research area, has received considerable attention in recent years. In this study, an attempt was made to extract complex network features from electroencephalogram (EEG) signals for emotion recognition. We proposed a novel method of constructing forward weighted horizontal visibility graphs (FWHVG) and backward weighted horizontal visibility graphs (BWHVG) based on angle measurement. The two types of complex networks were used to extract network features, and the two feature matrices were then fused into a single feature matrix to classify EEG signals. Using only the complex network features, the proposed method achieved average recognition accuracies of 97.53% and 97.75% in the valence and arousal dimensions, respectively, and 98.12% and 98.06% when combined with time-domain features.
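As a rough illustration of the horizontal visibility idea behind the abstract above, the sketch below builds a weighted horizontal visibility graph from a 1-D signal. The angle-based weight (arctangent of the slope between the two visible samples) and the use of the reversed signal for the "backward" graph are assumptions for illustration; the paper's exact FWHVG/BWHVG construction may differ.

```python
# Minimal sketch of a weighted horizontal visibility graph (WHVG) from a 1-D signal.
# The angle-based weight below is an assumption, not necessarily the paper's weighting.
import numpy as np

def weighted_hvg(signal):
    """Return a weighted adjacency matrix of the horizontal visibility graph."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Horizontal visibility: every sample strictly between i and j
            # must lie below both endpoints.
            if np.all(x[i + 1:j] < min(x[i], x[j])):
                weight = np.arctan(abs(x[j] - x[i]) / (j - i))  # assumed angle-based weight
                adj[i, j] = adj[j, i] = weight
    return adj

eeg_segment = np.random.randn(256)            # stand-in for one EEG window
fwhvg = weighted_hvg(eeg_segment)             # "forward" graph
bwhvg = weighted_hvg(eeg_segment[::-1])       # "backward" graph on the reversed signal
features = np.concatenate([fwhvg.sum(axis=1), bwhvg.sum(axis=1)])  # e.g. node strengths
```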

Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4543 ◽  
Author(s):  
Heekyung Yang ◽  
Jongdae Han ◽  
Kyungha Min

Visual content such as movies and animation evokes various human emotions. We examine the hypothesis that the emotion elicited by visual content may vary with the contrast of the scenes it contains. We consider three emotion categories (positive, neutral, and negative) to test this hypothesis, sample scenes associated with these emotions from visual content, and manipulate their contrast. We then measure the change in valence and arousal of human participants who watch the content, using a deep emotion recognition module based on electroencephalography (EEG) signals. We conclude that enhancing contrast increases valence, while reducing contrast decreases it; the effect of contrast on arousal is very small.
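Under one simple assumption, the contrast manipulation described above could be implemented as a linear scaling of pixel deviations about each frame's mean intensity; the sketch below illustrates that operation only and is not the study's actual procedure.

```python
# Minimal sketch of a linear contrast adjustment, assuming contrast is scaled
# about the mean intensity of each frame; the study's exact manipulation may differ.
import numpy as np

def adjust_contrast(frame, factor):
    """Scale pixel deviations from the mean by `factor` (>1 enhances, <1 reduces)."""
    frame = frame.astype(float)
    mean = frame.mean()
    return np.clip(mean + factor * (frame - mean), 0, 255).astype(np.uint8)

frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in video frame
enhanced = adjust_contrast(frame, 1.5)   # higher contrast
reduced = adjust_contrast(frame, 0.5)    # lower contrast
```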


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7083
Author(s):  
Agnieszka Wosiak ◽  
Aleksandra Dura

Given the growing interest in encephalography to enhance human–computer interaction (HCI) and to develop brain–computer interfaces (BCIs) for control and monitoring applications, efficient information retrieval from EEG sensors is of great importance. This is difficult due to noise from internal and external artifacts and physiological interference. EEG-based emotion recognition can be enhanced by selecting the features to be used in further analysis, so automatic feature selection for EEG signals is an important research area. We propose a multistep hybrid approach incorporating the Reversed Correlation Algorithm for the automated selection of frequency band–electrode combinations. Our method is simple to use and significantly reduces the number of sensors to only three channels. The proposed method was verified by experiments on the DEAP dataset, and the results were evaluated in terms of classification accuracy for two emotion dimensions: valence and arousal. Compared to other studies, our method achieved classification results that were 4.20–8.44% higher. Moreover, it can be regarded as a universal EEG signal classification technique, as it belongs to the family of unsupervised methods.
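The abstract does not detail the Reversed Correlation Algorithm, but the general idea of correlation-driven channel reduction can be sketched as follows: score each frequency band–electrode combination by its correlation with the emotion labels and keep a small channel subset. The array sizes and the scoring rule below are illustrative assumptions, not the authors' method.

```python
# Hedged sketch of selecting band-electrode combinations by their correlation with
# emotion labels; only illustrates the general idea of correlation-driven reduction.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_bands = 40, 32, 4           # DEAP-like layout (assumed)
band_power = rng.random((n_trials, n_channels, n_bands))
valence = rng.integers(0, 2, n_trials)               # binary high/low labels

scores = np.zeros((n_channels, n_bands))
for ch in range(n_channels):
    for band in range(n_bands):
        scores[ch, band] = abs(np.corrcoef(band_power[:, ch, band], valence)[0, 1])

# Keep the three best-scoring channels (the abstract reports a three-channel subset).
best = np.argsort(scores.max(axis=1))[::-1][:3]
print("selected channels:", best)
```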


2021 ◽  
Vol 2078 (1) ◽  
pp. 012028
Author(s):  
Huiping Shi ◽  
Hong Xie ◽  
Mengran Wu

Emotion recognition is a key technology for human-computer emotional interaction; it plays an important role in many fields and has attracted the attention of many researchers. However, the interactivity and correlation between multi-channel EEG signals have received little attention. For this reason, we propose an EEG emotion recognition method based on a 2DCNN-BiGRU network with an attention mechanism. The method first arranges the channels into a two-dimensional matrix according to electrode position, then feeds the pre-processed two-dimensional feature matrix into a two-dimensional convolutional neural network (2DCNN) and a bidirectional gated recurrent unit (BiGRU) with an attention layer to extract spatial and time-domain features, and finally classifies emotions with a softmax function. The experimental results show that the average classification accuracies of this model are 93.66% and 94.32% for valence and arousal, respectively.
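A minimal PyTorch sketch of the 2DCNN-BiGRU-attention pipeline described above is given below, assuming each time frame is mapped to a 9x9 electrode grid. Layer widths, the grid shape, and the temporal attention are assumptions, not the authors' exact configuration.

```python
# Minimal PyTorch sketch of a 2DCNN + BiGRU + attention classifier for EEG frames
# arranged on a 9x9 electrode grid (grid shape and layer sizes are assumed).
import torch
import torch.nn as nn

class CNNBiGRUAttention(nn.Module):
    def __init__(self, n_classes=2, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # -> (B*T, 32, 1, 1)
        )
        self.gru = nn.GRU(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                                # x: (B, T, 1, 9, 9)
        b, t = x.shape[:2]
        spatial = self.cnn(x.flatten(0, 1)).flatten(1)   # per-frame spatial features (B*T, 32)
        seq, _ = self.gru(spatial.view(b, t, -1))        # temporal modelling (B, T, 2*hidden)
        weights = torch.softmax(self.attn(seq), dim=1)   # attention over time steps
        context = (weights * seq).sum(dim=1)             # weighted temporal summary
        return self.fc(context)                          # logits; softmax applied at loss time

model = CNNBiGRUAttention()
logits = model(torch.randn(8, 10, 1, 9, 9))              # 8 samples, 10 time frames
```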


Author(s):  
Nhat Le ◽  
Khanh Nguyen ◽  
Anh Nguyen ◽  
Bac Le

Human emotion recognition is an active research area in artificial intelligence that has made substantial progress over the past few years. Many recent works focus mainly on facial regions to infer human affect, while the surrounding context information is not effectively utilized. In this paper, we propose a new deep network that recognizes human emotions using a novel global-local attention mechanism. Our network extracts features from the facial and context regions independently, then learns them together using the attention module. In this way, both facial and contextual information is used to infer human emotions, enhancing the discrimination of the classifier. Extensive experiments show that our method surpasses current state-of-the-art methods on recent emotion datasets by a fair margin. Qualitatively, our global-local attention module extracts more meaningful attention maps than previous methods. The source code and trained model of our network are available at https://github.com/minhnhatvt/glamor-net.
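A hedged sketch of the global-local idea follows: encode the face crop and the surrounding context separately, then fuse them with a learned attention weight. The encoders, feature sizes, and fusion rule below are placeholders, not the authors' architecture.

```python
# Minimal sketch of global-local fusion: separate face and context encoders combined
# with learned per-branch attention weights. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    def __init__(self, feat_dim=128, n_classes=7):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
            )
        self.face_enc = encoder()          # local branch (face crop)
        self.context_enc = encoder()       # global branch (surrounding scene)
        self.attn = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, face, context):
        f, c = self.face_enc(face), self.context_enc(context)
        w = self.attn(torch.cat([f, c], dim=1))            # per-sample branch weights
        fused = w[:, :1] * f + w[:, 1:] * c                # attention-weighted fusion
        return self.classifier(fused)

model = GlobalLocalFusion()
out = model(torch.randn(4, 3, 96, 96), torch.randn(4, 3, 96, 96))
```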


2021 ◽  
Vol 8 (8) ◽  
pp. 201976
Author(s):  
Zhihang Tian ◽  
Dongmin Huang ◽  
Sijin Zhou ◽  
Zhidan Zhao ◽  
Dazhi Jiang

In recent years, more and more researchers have focused on emotion recognition methods based on electroencephalogram (EEG) signals. However, most studies consider only the spatio-temporal characteristics of EEG and build models on that feature alone, without considering personality factors, let alone the potential correlations between different subjects. Given the particularity of emotion, different individuals may respond differently to the same physical stimulus, so EEG-based emotion recognition methods should tend toward personalization. This paper models personalized EEG emotion recognition at the macro and micro levels. At the macro level, we group individuals by personality characteristics, following the principle that 'birds of a feather flock together'. At the micro level, we employ deep learning models to extract the spatio-temporal features of the EEG. To evaluate the effectiveness of our method, we conduct an EEG emotion recognition experiment on the ASCERTAIN dataset. The experimental results demonstrate that the recognition accuracy of our proposed method is 72.4% and 75.9% for valence and arousal, respectively, which is 10.2% and 9.1% higher than the same approach without personalization.
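The "macro" grouping step could be approximated by clustering personality-trait vectors, as in the hedged sketch below; KMeans, three groups, and the Big Five trait vectors are illustrative assumptions rather than the paper's actual grouping criterion.

```python
# Hedged sketch of the macro step: group subjects by personality traits before
# training group-specific EEG models. Clustering choice and data are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
big_five = rng.random((58, 5))                 # ASCERTAIN-sized cohort, Big Five traits (assumed)
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(big_five)

for g in np.unique(groups):
    subjects = np.where(groups == g)[0]
    # A separate spatio-temporal EEG model would be trained per group here (the "micro" step).
    print(f"group {g}: {len(subjects)} subjects")
```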


2021 ◽  
Vol 15 ◽  
Author(s):  
Yanling An ◽  
Shaohai Hu ◽  
Xiaoying Duan ◽  
Ling Zhao ◽  
Caiyun Xie ◽  
...  

As one of the key technologies of affective computing, emotion recognition has received great attention. Electroencephalogram (EEG) signals are spontaneous and difficult to disguise, so they are used for emotion recognition in both academia and industry. To overcome the heavy reliance of traditional machine learning based emotion recognition on manual feature extraction, we propose an EEG emotion recognition algorithm based on 3D feature fusion and a convolutional autoencoder (CAE). First, the differential entropy (DE) features of different frequency bands of the EEG signals are fused to construct 3D features that retain the spatial information between channels. The constructed 3D features are then fed into the CAE for emotion recognition. Extensive experiments on the public DEAP dataset show recognition accuracies of 89.49% and 90.76% in the valence and arousal dimensions, respectively. The proposed method is therefore well suited to emotion recognition tasks.
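The 3D feature construction lends itself to a short sketch: compute the per-band differential entropy (DE) of each channel and place it on a 2D electrode map, stacking bands along the third axis. The 9x9 map, band edges, and filter settings below are common DEAP-style choices assumed for illustration, not taken from the paper.

```python
# Minimal sketch of 3-D DE feature construction: per-band, per-channel differential
# entropy mapped onto a 2-D electrode grid and stacked across frequency bands.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 128
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def differential_entropy(x):
    # DE of a Gaussian-distributed signal: 0.5 * ln(2*pi*e*var)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def de_cube(eeg, channel_positions, grid=(9, 9)):
    """eeg: (channels, samples) -> (grid_h, grid_w, n_bands) DE feature cube."""
    cube = np.zeros(grid + (len(bands),))
    for b_idx, (lo, hi) in enumerate(bands.values()):
        b_coef, a_coef = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        for ch, (r, c) in enumerate(channel_positions):
            cube[r, c, b_idx] = differential_entropy(filtfilt(b_coef, a_coef, eeg[ch]))
    return cube

eeg = np.random.randn(32, fs * 3)                       # 32 channels, 3 s window
positions = [(i // 8 + 1, i % 8) for i in range(32)]    # toy electrode layout (assumed)
features = de_cube(eeg, positions)                      # input to the CAE
```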


2020 ◽  
Vol 2020 (9) ◽  
pp. 323-1-323-8
Author(s):  
Litao Hu ◽  
Zhenhua Hu ◽  
Peter Bauer ◽  
Todd J. Harris ◽  
Jan P. Allebach

Image quality assessment has been a very active research area in image processing, and numerous methods have been proposed. However, most existing methods focus on digital images that only or mainly contain pictures or photos taken by digital cameras. Traditional approaches evaluate an input image as a whole and estimate a single quality score, to give viewers an idea of how "good" the image looks. In this paper, we focus instead on evaluating the quality of symbolic content such as text, barcodes, QR codes, lines, and handwriting in target images. A quality score for this kind of information can be based on whether it is readable by a human or recognizable by a decoder. Moreover, we mainly study the viewing quality of scanned documents of printed images. For this purpose, we propose a novel image quality assessment algorithm that determines the readability of a scanned document or of regions within it. Experimental results on a set of test images demonstrate the effectiveness of our method.
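As a loose illustration of readability scoring (not the algorithm proposed in the paper), the sketch below uses the variance of a discrete Laplacian response as a sharpness cue for a scanned region; the threshold is a hypothetical placeholder that would need tuning on labeled scans.

```python
# Hedged sketch of a simple readability proxy: Laplacian-response variance as a
# blur/degradation cue for a scanned text or barcode region. Generic heuristic only.
import numpy as np

def laplacian_variance(gray):
    """Higher values suggest sharper strokes/edges; low values suggest blur."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

region = np.random.rand(200, 200)          # stand-in for a scanned text/barcode region
score = laplacian_variance(region)
readable = score > 0.05                    # hypothetical threshold
```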


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 52
Author(s):  
Tianyi Zhang ◽  
Abdallah El Ali ◽  
Chen Wang ◽  
Alan Hanjalic ◽  
Pablo Cesar

Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimulus or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) that recognizes the valence and arousal (V-A) of each instance (a fine-grained segment of signals) using only wearable physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances of the same video stimulus (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE), and then on an outdoor-mobile affect dataset (MERCA) that we collected using a smart wristband and a wearable eye tracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: 76.37% and 74.03% for V-A on CASE, and 70.29% and 68.15% for V-A on MERCA. Our findings show that: (1) instance segment lengths between 1 and 4 s yield the highest recognition accuracies; (2) accuracies of laboratory-grade and wearable sensors are comparable, even at low sampling rates (≤64 Hz); and (3) the large number of neutral V-A labels, an artifact of continuous affect annotation, results in varied recognition performance.
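The two feature families named above (intra-instance and correlation-based) can be sketched with simple statistics, as below; the concrete features are illustrative stand-ins for CorrNet's learned representations, and the segment length and sampling rate are assumptions.

```python
# Hedged sketch of the two feature families: statistics computed inside each short
# instance, plus correlations between instances of the same video stimulus.
import numpy as np

def instance_features(signal, fs=64, seg_len=2.0):
    """Split a physiological signal into fixed-length instances and featurize them."""
    step = int(fs * seg_len)
    instances = [signal[i:i + step] for i in range(0, len(signal) - step + 1, step)]
    instances = np.array(instances)                                   # (n_instances, step)

    intra = np.stack([instances.mean(1), instances.std(1)], axis=1)   # per-instance statistics
    corr = np.corrcoef(instances)                                     # instance-to-instance correlation
    inter = corr.mean(axis=1, keepdims=True)                          # how typical each instance is
    return np.hstack([intra, inter])                                  # (n_instances, 3)

eda = np.random.randn(64 * 60)          # one minute of electrodermal activity at 64 Hz
feats = instance_features(eda)          # per-instance features for V-A classification
```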


2021 ◽  
pp. 1-12
Author(s):  
Gregorio González-Alcaide ◽  
Mercedes Fernández-Ríos ◽  
Rosa Redolat ◽  
Emilia Serra

Background: The study of emotion recognition could be crucial for detecting alterations in certain cognitive areas or as an early sign of neurological disorders. Objective: The main objective of the study is to characterize the development of research on emotion recognition, identifying the intellectual structure that supports this area of knowledge and the main lines of research attracting investigators' interest. Methods: We identified publications on emotion recognition and dementia included in the Web of Science Core Collection, analyzing the scientific output and the main disciplines involved in generating knowledge in the area. A co-citation analysis and an analysis of the bibliographic coupling between the retrieved documents elucidated the thematic orientations of the research and the reference works that constitute the foundation for development in the field. Results: A total of 345 documents, with 24,282 bibliographic references between them, were included. This is an emerging research area, attracting the interest of investigators in Neurosciences, Psychology, Clinical Neurology, and Psychiatry, among other disciplines. Four prominent topic areas were identified, linked to frontotemporal dementia, autism spectrum disorders, Alzheimer's disease, and Parkinson's and Huntington's diseases. Many recent papers focus on the detection of mild cognitive impairment. Conclusion: Impaired emotion recognition may be a key sign facilitating the diagnosis and early treatment of different neurodegenerative diseases, as well as triggering the necessary provision of social and family support, explaining the growing research interest in this area.
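Co-citation analysis, mentioned in the Methods, reduces to counting how often two references appear together in the same reference list; the toy sketch below illustrates that counting step with placeholder reference labels (not real citations from the study).

```python
# Minimal sketch of co-citation counting: two references are co-cited when they
# appear together in the reference list of the same document. Data are placeholders.
from itertools import combinations
from collections import Counter

reference_lists = [
    ["Ref A", "Ref B", "Ref C"],
    ["Ref A", "Ref B"],
    ["Ref C", "Ref A"],
]

co_citations = Counter()
for refs in reference_lists:
    for pair in combinations(sorted(set(refs)), 2):
        co_citations[pair] += 1

print(co_citations.most_common(3))   # most frequently co-cited reference pairs
```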

