Towards Semantic fMRI Neurofeedback: Navigating among Mental States using Real-time Representational Similarity Analysis

2020 ◽  
Author(s):  
Andrea G. Russo ◽  
Michael Lührs ◽  
Francesco Di Salle ◽  
Fabrizio Esposito ◽  
Rainer Goebel

Abstract
Objective: Real-time functional magnetic resonance imaging neurofeedback (rt-fMRI-NF) is a non-invasive MRI procedure that allows participants to learn to self-regulate their brain activity by performing mental tasks. A novel two-step rt-fMRI-NF procedure is proposed in which the feedback display is updated in real time based on high-level (semantic) representations of the experimental stimuli, obtained via real-time representational similarity analysis of multi-voxel patterns of brain activity.
Approach: In a localizer session, the stimuli become associated with anchor points in a two-dimensional representational space in which distances approximate between-pattern (dis)similarities. In the NF session, participants modulate their brain response, displayed as a movable point, to engage a specific neural representation. The developed method pipeline is verified in a proof-of-concept rt-fMRI-NF study at 7 Tesla using imagery of concrete objects. The dependence on noise is assessed more systematically on artificial fMRI data with a similar (simulated) spatio-temporal structure and variable (injected) signal and noise. A series of brain activity patterns from the ventral visual cortex is evaluated via on-line and off-line analyses, and the performance of the method is reported under different noise conditions.
Main results: The participant in the proof-of-concept study exhibited robust activation patterns in the localizer session and, in the NF session, managed to steer the neural representation of a stimulus towards the selected target. The off-line analyses validated the rt-fMRI-NF results, showing that the rapid convergence to the target representation is noise-dependent.
Significance: Our proof-of-concept study demonstrates the potential of semantic NF designs in which the participant navigates among different mental states. Compared to traditional NF designs (e.g. using a thermometer display to set the level of the neural signal), the proposed approach provides content-specific feedback to the participant and extra degrees of freedom to the experimenter, enabling real-time control of neural activity towards a target brain state without suggesting a specific mental strategy to the subject.
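To make the two-step procedure concrete, the sketch below (Python with NumPy, SciPy, and scikit-learn) shows one way the localizer patterns could be embedded into a two-dimensional representational space and how an incoming real-time pattern could be placed in it. The toy data, the correlation-distance RDM, the classical MDS embedding, and the inverse-distance-weighted placement of the current pattern are all assumptions for illustration; they are not the authors' actual real-time implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

def build_representational_space(anchor_patterns):
    """Embed localizer multi-voxel patterns (stimuli x voxels) into 2-D.

    Dissimilarity is 1 - Pearson correlation between patterns, as is common
    in RSA; the resulting 2-D coordinates serve as fixed anchor points.
    """
    rdm = squareform(pdist(anchor_patterns, metric="correlation"))
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(rdm)

def project_current_pattern(pattern, anchor_patterns, anchors_2d):
    """Map the current multi-voxel pattern to a point in the 2-D space.

    The movable feedback point is an inverse-distance-weighted average of
    the anchor coordinates, a simple stand-in for a proper out-of-sample
    MDS projection.
    """
    d = np.array([1.0 - np.corrcoef(pattern, a)[0, 1] for a in anchor_patterns])
    w = 1.0 / np.maximum(d, 1e-6)
    return (w[:, None] * anchors_2d).sum(axis=0) / w.sum()

# Toy usage: 4 localizer stimuli, 200 voxels, plus one "real-time" pattern.
rng = np.random.default_rng(0)
anchor_patterns = rng.standard_normal((4, 200))
anchors_2d = build_representational_space(anchor_patterns)
feedback_point = project_current_pattern(rng.standard_normal(200),
                                         anchor_patterns, anchors_2d)
print(anchors_2d, feedback_point)
```

In an actual NF run the feedback point would be recomputed for every new volume, so that the participant sees their current brain state move relative to the anchored stimulus representations.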

2013 ◽  
Vol 27 (1) ◽  
pp. 138-148 ◽  
Author(s):  
Annette Beatrix Brühl ◽  
Sigrid Scherpiet ◽  
James Sulzer ◽  
Philipp Stämpfli ◽  
Erich Seifritz ◽  
...  

2020 ◽  
Author(s):  
Eleonora De Filippi ◽  
Mara Wolter ◽  
Bruno Melo ◽  
Carlos J. Tierra-Criollo ◽  
Tiago Bortolini ◽  
...  

Abstract
During the last decades, neurofeedback training for emotional self-regulation has received significant attention from both the scientific and clinical communities. However, most studies have focused on broader emotional states such as "negative vs. positive", primarily due to our poor understanding of the functional anatomy of more complex emotions at the electrophysiological level. Our proof-of-concept study aims at investigating the feasibility of classifying two complex emotions that have been implicated in mental health, namely tenderness and anguish, using features extracted from the electroencephalogram (EEG) signal in healthy participants. Electrophysiological data were recorded from fourteen participants during a block-designed experiment consisting of emotional self-induction trials combined with a multimodal virtual scenario. For the within-subject classification, a linear Support Vector Machine was trained under two cross-validation schemes: 1) random cross-validation over the sliding windows of all trials; and 2) strategic cross-validation, assigning all the windows of one trial to the same fold. Spectral features, together with the frontal alpha asymmetry, were extracted using complex Morlet wavelet analysis. Classification with these features reached an average accuracy of 79.3% under random cross-validation and 73.3% under strategic cross-validation. We extracted a second set of features from the amplitude time-series correlation analysis, which significantly enhanced random cross-validation accuracy while showing performance similar to the spectral features under strategic cross-validation. These results suggest that complex emotions show distinct electrophysiological correlates, which paves the way for future EEG-based, real-time neurofeedback training of complex emotional states.
Significance statement
There is still little understanding of the correlates of high-order emotions (e.g., anguish and tenderness) in the physiological signals recorded with EEG. Most studies have investigated emotions using functional magnetic resonance imaging (fMRI), including its real-time application in neurofeedback training. However, for therapeutic application, EEG is a more suitable tool with regard to cost and practicability. Therefore, our proof-of-concept study aims at establishing a method for classifying complex emotions that can later be used for EEG-based neurofeedback on emotion regulation. We recorded EEG signals during a multimodal, near-immersive emotion-elicitation experiment. The results demonstrate that intra-individual classification of discrete emotions with features extracted from the EEG is feasible and may be implemented in real time to enable neurofeedback.
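The difference between the two cross-validation schemes can be illustrated with a short sketch (Python with scikit-learn). The feature matrix, labels, and trial grouping are toy stand-ins, and KFold/GroupKFold are used here as generic implementations of random versus trial-wise ("strategic") cross-validation; this is not the authors' analysis code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, KFold, GroupKFold

# Toy stand-ins: 20 trials x 10 sliding windows, 32 spectral features each.
rng = np.random.default_rng(0)
n_trials, n_windows, n_features = 20, 10, 32
X = rng.standard_normal((n_trials * n_windows, n_features))
y = np.repeat(rng.integers(0, 2, n_trials), n_windows)   # tenderness vs. anguish
trial_id = np.repeat(np.arange(n_trials), n_windows)     # trial label per window

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))

# 1) Random cross-validation: windows from the same trial can land in both
#    training and test folds, which tends to inflate accuracy.
acc_random = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0))

# 2) Strategic cross-validation: all windows of one trial stay in one fold.
acc_strategic = cross_val_score(clf, X, y, groups=trial_id, cv=GroupKFold(5))

print(acc_random.mean(), acc_strategic.mean())
```

Grouping windows by trial is the more conservative estimate, since it prevents temporally adjacent, highly correlated windows from leaking between training and test sets.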


2019 ◽  
Author(s):  
Lin Wang ◽  
Edward Wlotko ◽  
Edward Alexander ◽  
Lotte Schoot ◽  
Minjae Kim ◽  
...  

Abstract
It has been proposed that people can generate probabilistic predictions at multiple levels of representation during language comprehension. We used magnetoencephalography (MEG) and electroencephalography (EEG), in combination with representational similarity analysis (RSA), to seek neural evidence for the prediction of animacy features. In two studies, MEG and EEG activity was measured as human participants (both sexes) read three-sentence scenarios. Verbs in the final sentences constrained for either animate or inanimate semantic features of upcoming nouns, and the broader discourse context constrained for either a specific noun or for multiple nouns belonging to the same animacy category. We quantified the similarity between spatial patterns of brain activity following the verbs until just before the presentation of the nouns. The MEG and EEG datasets revealed converging evidence that the similarity between spatial patterns of neural activity following animate-constraining verbs was greater than following inanimate-constraining verbs. This effect could not be explained by lexical-semantic processing of the verbs themselves. We therefore suggest that it reflected the inherent difference in the semantic similarity structure of the predicted animate and inanimate nouns. Moreover, the effect was present regardless of whether a specific word could be predicted, providing strong evidence for the prediction of coarse-grained semantic features that goes beyond the prediction of individual words.
Significance statement
Language inputs unfold very quickly during real-time communication. By predicting ahead we can give our brains a "head start", so that language comprehension is faster and more efficient. While most contexts do not constrain strongly for a specific word, they do allow us to predict some upcoming information. For example, following the context "they cautioned the…", we can predict that the next word will be animate rather than inanimate (we can caution a person, but not an object). Here we used EEG and MEG techniques to show that the brain is able to use these contextual constraints to predict the animacy of upcoming words during sentence comprehension, and that these predictions are associated with specific spatial patterns of neural activity.
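The following sketch (Python with NumPy) illustrates one simple way to quantify the similarity between spatial patterns of neural activity across trials, as described above: for each time point, the sensor-space pattern of every pair of trials is correlated and the correlations are averaged, yielding one similarity time course per condition. The array shapes, the toy random data, and the pairwise-correlation measure are assumptions for illustration and may differ from the study's actual RSA pipeline.

```python
import numpy as np

def average_spatial_similarity(epochs):
    """Average pairwise Pearson correlation of spatial (sensor) patterns.

    epochs: array of shape (n_trials, n_sensors, n_times). At each time
    point, every pair of trials is correlated across sensors and the
    pairwise correlations are averaged.
    """
    n_trials, _, n_times = epochs.shape
    sims = np.zeros(n_times)
    for t in range(n_times):
        r = np.corrcoef(epochs[:, :, t])       # trials x trials correlation matrix
        iu = np.triu_indices(n_trials, k=1)    # unique trial pairs (upper triangle)
        sims[t] = r[iu].mean()
    return sims

# Toy contrast between conditions (animate- vs. inanimate-constraining verbs).
rng = np.random.default_rng(0)
animate = rng.standard_normal((40, 64, 100))    # trials x sensors x time points
inanimate = rng.standard_normal((40, 64, 100))
diff = average_spatial_similarity(animate) - average_spatial_similarity(inanimate)
print(diff.mean())
```

Under the logic of the study, higher between-trial similarity after animate-constraining verbs would reflect the tighter semantic similarity structure of the predicted animate nouns.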


2018 ◽  
Vol 44 (suppl_1) ◽  
pp. S174-S174
Author(s):  
Natasza Orlov ◽  
Vincent Giampietro ◽  
Owen O’Daly ◽  
Gareth Barker ◽  
Katya Rubia ◽  
...  

2015 ◽  
Vol 112 (32) ◽  
pp. 9978-9983 ◽  
Author(s):  
David Calligaris ◽  
Daniel R. Feldman ◽  
Isaiah Norton ◽  
Olutayo Olubiyi ◽  
Armen N. Changelian ◽  
...  

We present a proof-of-concept study designed to support the clinical development of mass spectrometry imaging (MSI) for the detection of pituitary tumors during surgery. We used matrix-assisted laser desorption/ionization (MALDI) MSI to analyze six nonpathological (NP) human pituitary glands and 45 hormone-secreting and nonsecreting (NS) human pituitary adenomas. We show that the distribution of pituitary hormones such as prolactin (PRL), growth hormone (GH), adrenocorticotropic hormone (ACTH), and thyroid-stimulating hormone (TSH) in both normal and tumor tissues can be assessed with this approach. The presence of most of the pituitary hormones was confirmed by MS/MS and pseudo-MS/MS methods, and subtyping of pituitary adenomas was performed using principal component analysis (PCA) and support vector machine (SVM) classification. Our proof-of-concept study demonstrates that MALDI MSI could be used to directly detect excessive hormonal production from functional pituitary adenomas and, more generally, to classify pituitary adenomas using statistical and machine-learning analyses. The tissue characterization can be completed in fewer than 30 min and could therefore be applied for the near-real-time detection and delineation of pituitary tumors to support intraoperative surgical decision-making.
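As a rough illustration of the statistical classification step (PCA followed by a linear SVM), the sketch below (Python with scikit-learn) classifies a toy ion-intensity matrix into hypothetical adenoma subtypes. The data dimensions, label coding, and cross-validation scheme are assumptions for illustration, not the published analysis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Toy stand-ins: spectra as rows (samples x m/z bins), adenoma subtype as label.
rng = np.random.default_rng(0)
X = rng.random((60, 500))        # ion-intensity matrix derived from MALDI MSI
y = rng.integers(0, 3, 60)       # hypothetical subtype labels (e.g. PRL, GH, ACTH)

# Dimensionality reduction with PCA, then a linear SVM on the components.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))
print(acc.mean())
```

Because the whole pipeline reduces to a projection and a linear decision rule once trained, it is compatible with the near-real-time, intraoperative setting the abstract describes.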


2013 ◽  
Vol 442 (1-2) ◽  
pp. 20-26 ◽  
Author(s):  
James D. Stephens ◽  
Brian R. Kowalczyk ◽  
Bruno C. Hancock ◽  
Goldi Kaul ◽  
Cetin Cetinkaya
