Emotion recognition using time–frequency ridges of EEG signals based on multivariate synchrosqueezing transform

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Ahmet Mert ◽  
Hasan Huseyin Celik

Abstract The feasibility of time–frequency (TF) ridge estimation is investigated on multi-channel electroencephalogram (EEG) signals for emotion recognition. Multivariate ridge estimation is examined as a way to extract informative components at low computational cost without reducing valence/arousal recognition accuracy. The advanced TF representation technique called multivariate synchrosqueezing transform (MSST) is used to obtain well-localized components of multi-channel EEG signals. The maximum-energy components in the 2D TF distribution are determined by TF-ridge estimation, and the instantaneous frequency and instantaneous amplitude of each ridge are extracted. Statistical values of the estimated ridges form a feature vector fed to machine learning algorithms; component information in the multi-channel EEG signals is thus captured and compressed into a low-dimensional space for emotion recognition. The mean and variance of the five maximum-energy ridges in the MSST-based TF distribution are adopted as features: properties of the five TF ridges in the frequency and energy planes (i.e., mean frequency, frequency deviation, mean energy, and energy deviation over time) yield a 20-dimensional feature space. The proposed method is benchmarked on the DEAP emotional EEG recordings, yielding recognition rates of up to 71.55% and 70.02% for high/low arousal and high/low valence, respectively.
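To make the ridge-feature construction concrete, here is a minimal NumPy sketch, assuming the MSST (e.g., from a synchrosqueezing package such as ssqueezepy) has already produced a non-negative TF energy matrix. The greedy per-column maximum with local suppression and the `suppress_bins` window are illustrative choices, not details from the paper.

```python
import numpy as np

def extract_ridges(tf_energy, freqs, n_ridges=5, suppress_bins=3):
    """Greedily extract maximum-energy ridges from a TF energy matrix.

    tf_energy : (n_freqs, n_times) non-negative TF distribution (e.g., |MSST|^2)
    freqs     : (n_freqs,) frequency axis of the TF representation
    Returns instantaneous frequency and energy, each shaped (n_ridges, n_times).
    """
    E = tf_energy.copy()
    n_times = E.shape[1]
    inst_freq = np.zeros((n_ridges, n_times))
    inst_energy = np.zeros((n_ridges, n_times))
    for r in range(n_ridges):
        idx = np.argmax(E, axis=0)               # max-energy bin per time step
        inst_freq[r] = freqs[idx]
        inst_energy[r] = E[idx, np.arange(n_times)]
        for t in range(n_times):                 # suppress the found ridge so the
            lo = max(idx[t] - suppress_bins, 0)  # next pass picks a new component
            E[lo:idx[t] + suppress_bins + 1, t] = 0.0
    return inst_freq, inst_energy

def ridge_features(inst_freq, inst_energy):
    """Mean/deviation of frequency and energy per ridge -> 20-D feature vector."""
    return np.concatenate([inst_freq.mean(axis=1), inst_freq.std(axis=1),
                           inst_energy.mean(axis=1), inst_energy.std(axis=1)])
```

Stacking the four statistics over the five ridges reproduces the 20-dimensional feature space described in the abstract.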

Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2739 ◽  
Author(s):  
Rami Alazrai ◽  
Rasha Homoud ◽  
Hisham Alwanni ◽  
Mohammad Daoud

Accurate recognition and understanding of human emotions is an essential skill that can improve collaboration between humans and machines. In this vein, electroencephalogram (EEG)-based emotion recognition is an active research field with challenging issues regarding the analysis of nonstationary EEG signals and the extraction of salient features that can be used to achieve accurate emotion recognition. In this paper, an EEG-based emotion recognition approach with a novel time-frequency feature extraction technique is presented. In particular, a quadratic time-frequency distribution (QTFD) is employed to construct a high-resolution time-frequency representation of the EEG signals and capture their spectral variations over time. To reduce the dimensionality of the constructed QTFD-based representation, a set of 13 time- and frequency-domain features is extended to the joint time-frequency domain and employed to quantify the QTFD-based time-frequency representation of the EEG signals. Moreover, to describe different emotion classes, the 2D arousal-valence plane is used to develop four emotion labeling schemes, each of which defines a set of emotion classes. The extracted time-frequency features are used to construct subject-specific support vector machine classifiers that classify the EEG signals of each subject into the emotion classes defined by each of the four labeling schemes. The performance of the proposed approach is evaluated using a publicly available EEG dataset, namely the DEAP dataset. Moreover, three performance evaluation analyses are designed, namely channel-based analysis, feature-based analysis, and neutral-class exclusion analysis, to quantify the effects of utilizing different groups of EEG channels covering various brain regions, reducing the dimensionality of the extracted time-frequency features, and excluding the EEG signals that correspond to the neutral class. The results reported in the current study demonstrate the efficacy of the proposed QTFD-based approach in recognizing different emotion classes: the average classification accuracies obtained in differentiating between the emotion classes defined by the four labeling schemes are within the range of 73.8%–86.2%, higher than the results reported in several existing state-of-the-art EEG-based emotion recognition studies.
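The abstract does not specify which QTFD kernel or which 13 features the authors use, so the following is only a sketch of the general idea: compute a plain Wigner-Ville distribution (the simplest QTFD) and summarize it with a few illustrative joint time-frequency descriptors.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete Wigner-Ville distribution, a basic quadratic TFD.

    x : real 1-D signal; returns an (n, n) real-valued TF matrix.
    """
    z = hilbert(x)                  # analytic signal suppresses cross-terms
    n = len(z)                      # between positive and negative frequencies
    qtfd = np.zeros((n, n))
    for t in range(n):
        tau_max = min(t, n - 1 - t)
        tau = np.arange(-tau_max, tau_max + 1)
        kernel = np.zeros(n, dtype=complex)
        kernel[tau % n] = z[t + tau] * np.conj(z[t - tau])  # lag autocorrelation
        qtfd[:, t] = np.real(np.fft.fft(kernel))            # FFT over lag -> freq
    return qtfd

def tf_features(qtfd):
    """A few illustrative joint TF descriptors (not the paper's 13-feature set)."""
    m = np.abs(qtfd)
    p = m / m.sum()                               # normalized TF energy surface
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    flatness = np.exp(np.mean(np.log(m + 1e-12))) / (m.mean() + 1e-12)
    return np.array([entropy, flatness, m.mean(), m.std()])
```

In the paper's setting, such descriptors would be computed per channel and concatenated into the feature vector passed to the subject-specific SVM classifiers.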


Author(s):  
Mehmet Akif Ozdemir ◽  
Ozlem Karabiber Cura ◽  
Aydin Akan

Epilepsy is one of the most common brain disorders worldwide. The most frequently used clinical tool for detecting epileptic events and monitoring epilepsy patients is the EEG recording. Many computer-aided diagnosis systems using EEG signals have been proposed for the detection and prediction of seizures. In this study, a novel method based on the Fourier-based synchrosqueezing transform (SST), a high-resolution time-frequency (TF) representation, and a convolutional neural network (CNN) is proposed to detect and predict seizure segments. SST is based on the reassignment of signal components in the TF plane, which provides highly localized TF energy distributions. Epileptic seizures cause sudden energy discharges that are well represented in the TF plane by the SST method. The proposed SST-based CNN method is evaluated using the IKCU dataset we collected and the publicly available CHB-MIT dataset. Experimental results demonstrate that the proposed approach yields high average segment-based seizure detection precision and accuracy rates for both datasets (IKCU: 98.99% PRE and 99.06% ACC; CHB-MIT: 99.81% PRE and 99.63% ACC). Additionally, the SST-based CNN approach provides significantly higher segment-based seizure prediction performance, with 98.54% PRE and 97.92% ACC, than similar approaches presented in the literature using the CHB-MIT dataset.
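As a rough illustration of the second stage of this pipeline, below is a minimal PyTorch sketch of a CNN that classifies SST magnitude images into seizure/non-seizure segments. The two-conv-block architecture and the input size are assumptions for illustration, not the paper's network.

```python
import torch
import torch.nn as nn

class SeizureCNN(nn.Module):
    """Small CNN over |SST| images (1 x n_freqs x n_times), binary output."""
    def __init__(self, n_freqs=64, n_times=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # halve both TF axes
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (n_freqs // 4) * (n_times // 4), 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of 8 TF magnitude images, e.g., produced by a
# synchrosqueezing package applied to EEG segments beforehand.
logits = SeizureCNN()(torch.randn(8, 1, 64, 128))
```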


2022 ◽  
Vol 15 ◽  
Author(s):  
Zhaobo Li ◽  
Xinzui Wang ◽  
Weidong Shen ◽  
Shiming Yang ◽  
David Y. Zhao ◽  
...  

Purpose: Tinnitus is a common but poorly understood auditory disease. This study examines whether connectivity features of electroencephalography (EEG) signals can serve as biomarkers for an efficient and fast diagnostic method for chronic tinnitus.

Methods: Resting-state EEG signals of tinnitus patients with different tinnitus locations were recorded. Four connectivity features [the phase-locking value (PLV), phase lag index (PLI), Pearson correlation coefficient (PCC), and transfer entropy (TE)] and two time-frequency domain features were extracted from the EEG signals, and four machine learning algorithms, including two support vector machine (SVM) models, a multi-layer perceptron network (MLP), and a convolutional neural network (CNN), were applied to the selected features to classify the possible tinnitus sources.

Results: Classification accuracy was highest when the SVM or the MLP algorithm was applied to the PCC feature sets, achieving final average classification accuracies of 99.42% and 99.10%, respectively. Classification based on the PLV feature also performed particularly well. The MLP ran the fastest, with an average computing time of only 4.2 s, making it more suitable than the other methods when a real-time diagnosis is required.

Conclusion: Connectivity features of resting-state EEG signals can characterize differences in tinnitus location. The PCC and PLV connectivity features are the most suitable biomarkers for the objective diagnosis of tinnitus, and these results can help clinicians in the initial diagnosis of tinnitus.
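For reference, here is a minimal sketch of one of the four connectivity features (the PLV), computed with the standard Hilbert-phase formulation; it assumes the EEG has already been band-pass filtered, since the paper's exact preprocessing is not given in the abstract.

```python
import numpy as np
from scipy.signal import hilbert

def plv_matrix(eeg):
    """Phase-locking value between all channel pairs.

    eeg : (n_channels, n_samples) band-filtered EEG
    Returns an (n_channels, n_channels) symmetric PLV matrix in [0, 1].
    """
    phase = np.angle(hilbert(eeg, axis=1))      # instantaneous phase per channel
    n_ch = eeg.shape[0]
    plv = np.ones((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            dphi = phase[i] - phase[j]
            plv[i, j] = plv[j, i] = np.abs(np.mean(np.exp(1j * dphi)))
    return plv
```

The upper triangle of the matrix can be flattened into a feature vector for the SVM/MLP classifiers; the PCC feature follows the same pattern with `np.corrcoef` in place of the phase statistic.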


2021 ◽  
Vol 15 ◽  
Author(s):  
Jing Cai ◽  
Ruolan Xiao ◽  
Wenjie Cui ◽  
Shang Zhang ◽  
Guangda Liu

Emotion recognition has become increasingly prominent in the medical field and human-computer interaction. When people's emotions change under external stimuli, various physiological signals of the human body fluctuate. Electroencephalography (EEG) is closely related to brain activity, making it possible to judge a subject's emotional changes through EEG signals. Meanwhile, machine learning algorithms, which excel at extracting data features from a statistical perspective and making judgments, have developed by leaps and bounds. Therefore, using machine learning to extract feature vectors related to emotional states from EEG signals and constructing a classifier that separates emotions into discrete states has broad development prospects for emotion recognition. This paper reviews the acquisition, preprocessing, feature extraction, and classification of EEG signals in sequence, following the progress of EEG-based machine learning algorithms for emotion recognition, and may help beginners who will use such algorithms understand the development status of this field. The journals selected were all retrieved from the Web of Science platform, and the publication dates of most of the selected articles are concentrated in 2016–2021.
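A minimal end-to-end sketch of the pipeline this review describes (preprocessing, feature extraction, classification) is shown below; the frequency bands, filter cutoffs, sampling rate, and choice of an SVM are illustrative assumptions rather than recommendations from the review.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # illustrative

def band_power_features(eeg, fs=128):
    """eeg: (n_channels, n_samples) -> mean band power per channel and band."""
    b, a = butter(4, [1, 45], btype="band", fs=fs)   # basic preprocessing filter
    eeg = filtfilt(b, a, eeg, axis=1)
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=1)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)                      # n_channels * n_bands dims

# trials: list of (n_channels, n_samples) arrays; labels: discrete emotion codes
# X = np.stack([band_power_features(t) for t in trials])
# print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```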


Author(s):  
Fabian Parsia George ◽  
Istiaque Mannafee Shaikat ◽  
Prommy Sultana Ferdawoos Hossain ◽  
Mohammad Zavid Parvez ◽  
Jia Uddin

Emotion recognition is of vast significance and has been a rapidly developing field of research in recent years. Its applications have left an exceptional mark in various fields, including education and research. Traditional approaches used facial expressions or voice intonation to detect emotions; however, facial gestures and spoken language can lead to biased and ambiguous results. This is why researchers have turned to the electroencephalogram (EEG), a well-defined technique for emotion recognition. Some approaches used standard, pre-defined signal-processing methods, and some worked with either fewer channels or fewer subjects when recording EEG signals. This paper proposes an emotion detection method based on time-frequency domain statistical features. A box-and-whisker plot is used to select the optimal features, which are later fed to an SVM classifier for training and testing on the DEAP dataset, in which 32 participants of different genders and age groups are considered. The experimental results show that the proposed method achieves 92.36% accuracy on the tested dataset and outperforms state-of-the-art methods.
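The abstract does not define the box-and-whisker selection rule. One plausible reading, sketched below under that assumption, keeps only the features whose class-wise interquartile-range boxes do not overlap between the two emotion classes.

```python
import numpy as np
from sklearn.svm import SVC

def boxplot_select(X, y, classes=(0, 1)):
    """Keep features whose class-wise IQR boxes do not overlap.

    X : (n_trials, n_features), y : (n_trials,) binary labels
    Returns the indices of the selected feature columns.
    """
    keep = []
    for f in range(X.shape[1]):
        q1a, q3a = np.percentile(X[y == classes[0], f], [25, 75])
        q1b, q3b = np.percentile(X[y == classes[1], f], [25, 75])
        if q3a < q1b or q3b < q1a:          # separated boxes -> discriminative
            keep.append(f)
    return np.array(keep)

# selected = boxplot_select(X_train, y_train)
# clf = SVC(kernel="rbf").fit(X_train[:, selected], y_train)
# accuracy = clf.score(X_test[:, selected], y_test)
```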


Diseases ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 89 ◽  
Author(s):  
Wilbert McClay

In Phase I, we collected data on five subjects, yielding over 90% positive performance on magnetoencephalographic (MEG) mid- and post-movement activity. In addition, a driver was developed that substituted the actions of the brain-computer interface (BCI) for mouse button presses in real-time visual simulations. The process was interfaced to a flight visualization demonstration in which left or right brainwave thought movement turns the aircraft in the chosen direction, and to an iOS mobile Warfighter videogame application. The BCI's analytics of a subject's MEG brain waves and flight visualization videogame performance were stored and analyzed using the Hadoop ecosystem as a quick-retrieval data warehouse.

Phase II of the project involves Emotiv electroencephalographic (EEG) wireless BCIs, which allow people to establish a novel communication channel between the human brain and a machine, in this case iOS mobile applications. The EEG BCI utilizes advanced and novel machine learning algorithms, as well as the Spark directed acyclic graph (DAG), the Cassandra NoSQL database environment, and the competing NoSQL MongoDB database for housing BCI analytics of subjects' responses and users' intent for both MEG and EEG brainwave signal acquisition. The wireless EEG signals acquired from OpenVibe and the Emotiv EPOC headset can be connected via Bluetooth to an iPhone using a thin-client architecture. NoSQL databases were chosen for their schema-less architecture and the MapReduce computational paradigm for housing a user's brain signals from each referencing sensor. Thus, if multiple users are playing over an online network connection and an MEG/EEG sensor fails, or the connection between the smartphone and the web server is lost due to low battery power or failed data transmission, the document-oriented (MongoDB) or column-oriented (Cassandra) NoSQL databases are not nullified. Additionally, NoSQL databases offer fast querying and indexing methodologies, which are well suited to online game analytics.

In Phase II, we collected data on five MEG subjects, yielding over 90% positive performance on iOS mobile applications written in Objective-C and C++. On EEG signals, however, with three subjects using the Emotiv wireless headsets and (n < 10) subjects from the OpenVibe EEG database, the variational Bayesian factor analysis (VBFA) algorithm yielded below 60% performance; we are currently pursuing an extension of the VBFA algorithm to the time-frequency domain, referred to as VBFA-TF, to enhance EEG performance in the near future. The novel use of the NoSQL databases Cassandra and MongoDB was the primary enhancement of the Phase II MEG/EEG brain signal data acquisition, queries, and rapid analytics, with MapReduce and Spark DAG demonstrating future implications for next-generation biometric MEG/EEG NoSQL databases.
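To illustrate the schema-less storage idea, here is a minimal pymongo sketch of one document per sensor epoch; the document layout, field names, and index are hypothetical, as the paper's actual schema is not given in the abstract.

```python
from datetime import datetime, timezone
from pymongo import MongoClient, ASCENDING

# Hypothetical document layout for per-sensor EEG epochs.
client = MongoClient("mongodb://localhost:27017")
epochs = client["bci"]["eeg_epochs"]
epochs.create_index([("subject_id", ASCENDING), ("t0", ASCENDING)])

epochs.insert_one({
    "subject_id": "S01",
    "sensor": "AF3",                  # one document per referencing sensor
    "headset": "Emotiv EPOC",
    "fs_hz": 128,
    "t0": datetime.now(timezone.utc),
    "samples": [4.1, 3.9, 4.3],       # truncated example payload
})

# Indexed retrieval of the most recent epochs for online game analytics
latest = epochs.find({"subject_id": "S01"}).sort("t0", -1).limit(10)
```

Because documents are schema-less, a failed sensor simply stops producing documents without invalidating the collection, which matches the fault-tolerance argument made above.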

