Classification of complex emotions using EEG and virtual environment: proof of concept and therapeutic implication

2020 ◽  
Author(s):  
Eleonora De Filippi ◽  
Mara Wolter ◽  
Bruno Melo ◽  
Carlos J. Tierra-Criollo ◽  
Tiago Bortolini ◽  
...  

Abstract
During the last decades, neurofeedback training for emotional self-regulation has received significant attention from both the scientific and clinical communities. However, most studies have focused on broader emotional states such as “negative vs. positive”, primarily due to our poor understanding of the functional anatomy of more complex emotions at the electrophysiological level. Our proof-of-concept study aims at investigating the feasibility of classifying two complex emotions that have been implicated in mental health, namely tenderness and anguish, using features extracted from the electroencephalogram (EEG) signal in healthy participants. Electrophysiological data were recorded from fourteen participants during a block-designed experiment consisting of emotional self-induction trials combined with a multimodal virtual scenario. For the within-subject classification, a linear Support Vector Machine was trained with two sets of samples: 1) random cross-validation of the sliding windows of all trials; and 2) strategic cross-validation, assigning all the windows of one trial to the same fold. Spectral features, together with the frontal-alpha asymmetry, were extracted using Complex Morlet Wavelet analysis. Classification with these features showed an average accuracy of 79.3% with random cross-validation and 73.3% with strategic cross-validation. We extracted a second set of features from the amplitude time-series correlation analysis, which significantly enhanced random cross-validation accuracy while performing similarly to the spectral features under strategic cross-validation.
These results suggest that complex emotions show distinct electrophysiological correlates, which paves the way for future EEG-based, real-time neurofeedback training of complex emotional states.

Significance statement
There is still little understanding of the correlates of high-order emotions (i.e., anguish and tenderness) in the physiological signals recorded with the EEG. Most studies have investigated emotions using functional magnetic resonance imaging (fMRI), including the real-time application in neurofeedback training. However, concerning the therapeutic application, EEG is a more suitable tool with regard to costs and practicability. Therefore, our proof-of-concept study aims at establishing a method for classifying complex emotions that can later be used for EEG-based neurofeedback on emotion regulation. We recorded EEG signals during a multimodal, near-immersive emotion-elicitation experiment. Results demonstrate that intraindividual classification of discrete emotions with features extracted from the EEG is feasible and may be implemented in real time to enable neurofeedback.
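The two validation schemes described in the abstract can be contrasted in a short sketch. This is a minimal illustration, not the authors' pipeline: the feature matrix is random noise, and the trial/window counts and feature dimension are assumptions. Random cross-validation lets windows from the same trial land in both train and test folds, whereas strategic (trial-grouped) cross-validation keeps each trial's windows together:

```python
# Sketch of random vs. strategic (trial-grouped) cross-validation for a
# linear SVM on sliding-window features; all shapes and data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, windows_per_trial, n_features = 20, 10, 32
X = rng.normal(size=(n_trials * windows_per_trial, n_features))
y = np.repeat(np.arange(n_trials) % 2, windows_per_trial)   # tenderness=0 / anguish=1
trial_ids = np.repeat(np.arange(n_trials), windows_per_trial)

clf = SVC(kernel="linear")

# 1) Random cross-validation: windows of one trial may appear in both
#    train and test folds, which can give optimistic accuracy estimates.
random_acc = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0))

# 2) Strategic cross-validation: all windows of a trial stay in one fold,
#    so the classifier is always evaluated on windows from unseen trials.
strategic_acc = cross_val_score(clf, X, y, cv=GroupKFold(5), groups=trial_ids)
```

The gap the authors report between the two schemes (79.3% vs. 73.3%) is consistent with the leakage that grouped validation removes.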

2021 ◽  
Vol 15 ◽  
Author(s):  
Eleonora De Filippi ◽  
Mara Wolter ◽  
Bruno R. P. Melo ◽  
Carlos J. Tierra-Criollo ◽  
Tiago Bortolini ◽  
...  

During the last decades, neurofeedback training for emotional self-regulation has received significant attention from the scientific and clinical communities. Most studies have investigated emotions using functional magnetic resonance imaging (fMRI), including the real-time application in neurofeedback training. However, the electroencephalogram (EEG) is a more suitable tool for therapeutic application. Our study aims at establishing a method to classify discrete complex emotions (e.g., tenderness and anguish) elicited through a near-immersive scenario that can later be used for EEG-neurofeedback. EEG-based affective computing studies have mainly focused on emotion classification based on dimensions, commonly using passive elicitation through single-modality stimuli. Here, we integrated both passive and active elicitation methods. We recorded electrophysiological data during emotion-evoking trials, combining emotional self-induction with a multimodal virtual environment. We extracted correlational and time-frequency features, including frontal-alpha asymmetry (FAA), using Complex Morlet Wavelet convolution. With future real-time applications in mind, we performed within-subject classification using 1-s windows as samples and applied trial-specific cross-validation. We opted for a traditional machine-learning classifier with low computational complexity and sufficient validation in online settings, the Support Vector Machine. Results of individual-based cross-validation using the whole feature sets showed considerable between-subject variability. The individual accuracies ranged from 59.2% to 92.9% using time-frequency/FAA features and from 62.4% to 92.4% using correlational features. We found that features of the temporal, occipital, and left-frontal channels were the most discriminative between the two emotions.
Our results show that the suggested pipeline is suitable for individual-based classification of discrete emotions, paving the way for future personalized EEG-neurofeedback training.
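The frontal-alpha asymmetry feature mentioned above can be sketched with a complex Morlet wavelet convolution. This is a minimal illustration under stated assumptions: the channel names (F3/F4), sampling rate, alpha band centre (10 Hz), cycle count, and the sign convention (log right-alpha minus log left-alpha) are assumptions, and the signals are synthetic 1-s windows:

```python
# Minimal sketch: frontal-alpha asymmetry (FAA) from two frontal channels
# via complex Morlet wavelet convolution; data and parameters are assumed.
import numpy as np
from scipy.signal import fftconvolve

fs = 250                                  # sampling rate (Hz), assumed
t = np.arange(0, 1, 1 / fs)               # one 1-s window
f0, n_cycles = 10.0, 7                    # alpha-band centre frequency

# Complex Morlet wavelet: a Gaussian-windowed complex exponential.
sigma_t = n_cycles / (2 * np.pi * f0)
wt = np.arange(-0.5, 0.5, 1 / fs)
wavelet = np.exp(2j * np.pi * f0 * wt) * np.exp(-wt**2 / (2 * sigma_t**2))

def alpha_power(signal):
    """Mean squared magnitude of the wavelet-filtered signal."""
    analytic = fftconvolve(signal, wavelet, mode="same")
    return np.mean(np.abs(analytic) ** 2)

# Synthetic left (F3) and right (F4) frontal channels: the right channel
# carries weaker 10 Hz activity, so FAA comes out negative here.
rng = np.random.default_rng(1)
f3 = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)
f4 = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)

faa = np.log(alpha_power(f4)) - np.log(alpha_power(f3))
```

In practice libraries such as MNE-Python provide tested time-frequency routines; the point here is only the shape of the computation.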


2015 ◽  
Vol 112 (32) ◽  
pp. 9978-9983 ◽  
Author(s):  
David Calligaris ◽  
Daniel R. Feldman ◽  
Isaiah Norton ◽  
Olutayo Olubiyi ◽  
Armen N. Changelian ◽  
...  

We present a proof of concept study designed to support the clinical development of mass spectrometry imaging (MSI) for the detection of pituitary tumors during surgery. Using matrix-assisted laser desorption/ionization (MALDI) MSI, we analyzed six nonpathological (NP) human pituitary glands and 45 hormone-secreting and nonsecreting (NS) human pituitary adenomas. We show that the distribution of pituitary hormones such as prolactin (PRL), growth hormone (GH), adrenocorticotropic hormone (ACTH), and thyroid-stimulating hormone (TSH) in both normal and tumor tissues can be assessed by using this approach. The presence of most of the pituitary hormones was confirmed by using MS/MS and pseudo-MS/MS methods, and subtyping of pituitary adenomas was performed by using principal component analysis (PCA) and support vector machine (SVM). Our proof of concept study demonstrates that MALDI MSI could be used to directly detect excessive hormonal production from functional pituitary adenomas and generally classify pituitary adenomas by using statistical and machine learning analyses. The tissue characterization can be completed in fewer than 30 min and could therefore be applied for the near-real-time detection and delineation of pituitary tumors for intraoperative surgical decision-making.
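The PCA-plus-SVM subtyping step named above can be sketched as a standard dimensionality-reduction-then-classify pipeline. This is a hedged illustration, not the authors' analysis: the spectra are random stand-ins, and the spectrum count, m/z bin count, and component count are assumptions:

```python
# Sketch of PCA + linear SVM subtyping on synthetic stand-ins for
# MALDI MSI spectra; all dimensions and data are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_spectra, n_mz_bins = 120, 500
X = rng.normal(size=(n_spectra, n_mz_bins))
y = rng.integers(0, 3, size=n_spectra)    # e.g. three hypothetical adenoma subtypes

# Shift the class means slightly so the toy data is separable.
X += y[:, None] * 0.3

# PCA compresses the high-dimensional spectra before the SVM sees them.
pipeline = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
scores = cross_val_score(pipeline, X, y, cv=5)
```

Fitting PCA inside the pipeline (rather than on all spectra up front) keeps the cross-validation estimate honest, since the components are learned on each training fold only.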


Diagnostics ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 1136
Author(s):  
Duc Long Duong ◽  
Quoc Duy Nam Nguyen ◽  
Minh Son Tong ◽  
Manh Tuan Vu ◽  
Joseph Dy Lim ◽  
...  

Dental caries has been considered the heaviest worldwide oral health burden, affecting a significant proportion of the population. To prevent dental caries, an appropriate and accurate early detection method is needed. This proof-of-concept study aims to develop a two-stage computational system that can detect early occlusal caries from smartphone color images of unrestored extracted teeth according to modified International Caries Detection and Assessment System (ICDAS) criteria (3 classes: Code 0; Code 1-2; Code 3-6): in the first stage, carious lesion areas were identified and extracted from sound tooth regions. Then, five characteristic features of these areas were deliberately selected and computed as inputs to the classification stage, where five classifiers (Support Vector Machine, Random Forests, K-Nearest Neighbors, Gradient Boosted Tree, Logistic Regression) were evaluated to determine the best one among them. On a set of 587 smartphone images of extracted teeth, our system achieved accuracy, sensitivity, and specificity of 87.39%, 89.88%, and 68.86%, respectively, in the detection stage when compared to modified visual and image-based ICDAS criteria. For the classification stage, the Support Vector Machine was the best model, with accuracy, sensitivity, and specificity of 88.76%, 92.31%, and 85.21%, respectively. As the first step in developing the technology, our present findings confirm the feasibility of using smartphone color images with Artificial Intelligence algorithms for caries detection. To improve the performance of the proposed system, further development of both in vitro and in vivo modeling is needed. In addition, a practical system for accurately taking intra-oral images that can capture entire dental arches, including the occlusal surfaces of premolars and molars, also needs to be developed.
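The classifier comparison in the second stage can be sketched as a loop over the five named models, each scored with cross-validation. This is a schematic under assumptions: the five-feature lesion descriptors and the three-class labels are synthetic stand-ins, and the default hyperparameters are not the study's tuned ones:

```python
# Sketch of the classification-stage model comparison: five classifiers
# evaluated with 5-fold CV on hypothetical five-feature lesion descriptors.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))             # five features per lesion area
y = rng.integers(0, 3, size=300)          # ICDAS classes: 0 / 1-2 / 3-6
X += y[:, None] * 0.5                     # make the toy classes separable

models = {
    "SVM": SVC(),
    "Random Forests": RandomForestClassifier(random_state=0),
    "K-Nearest Neighbors": KNeighborsClassifier(),
    "Gradient Boosted Tree": GradientBoostingClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
results = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```

Ranking models by mean cross-validated accuracy like this is what lets a study report a single "best" classifier, here the SVM.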


Author(s):  
Konstantinos Exarchos ◽  
Dimitrios Potonos ◽  
Agapi Aggelopoulou ◽  
Agni Sioutkou ◽  
Konstantinos Kostikas

PLoS ONE ◽  
2018 ◽  
Vol 13 (10) ◽  
pp. e0203044 ◽  
Author(s):  
C. Stönner ◽  
A. Edtbauer ◽  
B. Derstroff ◽  
E. Bourtsoukidis ◽  
T. Klüpfel ◽  
...  

2013 ◽  
Vol 27 (1) ◽  
pp. 138-148 ◽  
Author(s):  
Annette Beatrix Brühl ◽  
Sigrid Scherpiet ◽  
James Sulzer ◽  
Philipp Stämpfli ◽  
Erich Seifritz ◽  
...  

Mekatronika ◽  
2021 ◽  
Vol 3 (1) ◽  
pp. 27-31
Author(s):  
Ken-ji Ee ◽  
Ahmad Fakhri Bin Ab. Nasir ◽  
Anwar P. P. Abdul Majeed ◽  
Mohd Azraai Mohd Razman ◽  
Nur Hafieza Ismail

Animal classification systems automatically identify an animal's class (type) and are useful in many applications. Many types of learning models have recently been applied to this task. Nonetheless, it is worth noting that the extraction and classification of animal features are non-trivial, particularly in the deep learning approach, for a successful animal classification system. Transfer Learning (TL) has been demonstrated to be a powerful tool for extracting essential features; however, its employment in animal classification applications remains somewhat limited. The present study aims to determine a suitable TL-conventional classifier pipeline for animal classification. VGG16 and VGG19 were used to extract features, which were then coupled with either a k-Nearest Neighbour (k-NN) or Support Vector Machine (SVM) classifier. A total of 4000 images was gathered, spanning five classes: cows, goats, buffalos, dogs, and cats. The data were split 80:20 into training and test sets. The classifiers' hyperparameters were tuned via grid search with five-fold cross-validation. The study demonstrated that the best TL pipeline is VGG16 coupled with an optimised SVM, which yielded an average classification accuracy of 0.975. The findings of the present investigation could facilitate animal classification applications, e.g., monitoring animals in the wild.
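The classifier half of the TL pipeline, grid-searching an SVM with five-fold cross-validation on an 80:20 split, can be sketched as below. This is an illustration under assumptions: the features are random vectors standing in for a frozen VGG16's pooled output (512 dimensions matches VGG16's final convolutional channel count), and the hyperparameter grid is hypothetical, not the study's:

```python
# Sketch of the grid-searched SVM stage on features assumed to come from
# a frozen VGG16 backbone (replaced here by synthetic 512-d vectors).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 512))
y = rng.integers(0, 5, size=400)        # cows, goats, buffalos, dogs, cats
X += y[:, None] * 0.2                   # make the toy classes separable

# 80:20 train/test split, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Five-fold CV grid search over a small, hypothetical SVM grid.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}, cv=5)
grid.fit(X_tr, y_tr)
test_acc = grid.score(X_te, y_te)
```

Keeping the grid search inside the training split and scoring once on the held-out 20% mirrors the evaluation protocol the abstract describes.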

