Multivoxel Pattern Analysis Reveals Dissociations Between Subjective Fear and Its Physiological Correlates

2019 ◽  
Author(s):  
Vincent Taschereau-Dumouchel ◽  
Mitsuo Kawato ◽  
Hakwan Lau

Abstract In studies of anxiety and other affective disorders, objectively measured physiological responses have commonly been used as a proxy for measuring subjective experiences associated with pathology. However, this commonly adopted ‘biosignal’ approach has recently been called into question on the grounds that subjective experiences and objective physiological responses may dissociate. We performed machine-learning-based analyses on functional magnetic resonance imaging (fMRI) data to assess this issue in the case of fear. Participants were presented with pictures of commonly feared animals in an fMRI experiment. Multivoxel brain activity decoders were trained to predict participants’ subjective fear ratings and their skin conductance reactivity, respectively. While subjective fear and objective physiological responses were correlated in general, the respective whole-brain multivoxel decoders for the two measures were not identical. Some key brain regions such as the amygdala and insula appear to be primarily involved in the prediction of physiological reactivity, while some regions previously associated with metacognition and conscious perception, including some areas in the prefrontal cortex, appear to be primarily predictive of the subjective experience of fear. The present findings support the recent call for caution in assuming a one-to-one mapping between subjective suffering and its putative biosignals, despite the clear advantage that the latter can be measured objectively and continuously in physiological terms.
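The core analysis described above, training a multivoxel decoder to predict trial-wise subjective ratings from brain activity patterns, can be sketched in a few lines. This is a minimal illustration on simulated data, not the authors' pipeline: the model choice (cross-validated ridge regression) and all dimensions here are assumptions for the sketch.

```python
# Minimal sketch of a multivoxel decoder for continuous ratings.
# Data are simulated; ridge regression with cross-validation is one
# common choice for predicting subjective ratings from voxel patterns.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_trials, n_voxels = 300, 100

# Simulated voxel patterns and a latent "fear" signal they partly encode
weights = rng.normal(size=n_voxels)
X = rng.normal(size=(n_trials, n_voxels))
fear_ratings = X @ weights / np.sqrt(n_voxels) + rng.normal(scale=0.5, size=n_trials)

# Cross-validated prediction of subjective ratings from brain patterns
predicted = cross_val_predict(Ridge(alpha=10.0), X, fear_ratings, cv=5)
r = np.corrcoef(predicted, fear_ratings)[0, 1]
print(f"decoding accuracy (Pearson r): {r:.2f}")
```

In the study, an analogous decoder would be trained separately for skin conductance reactivity, and the two decoders' weight maps compared across brain regions.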


2019 ◽  
Vol 25 (10) ◽  
pp. 2342-2354 ◽  
Author(s):  
Vincent Taschereau-Dumouchel ◽  
Mitsuo Kawato ◽  
Hakwan Lau

Abstract In studies of anxiety and other affective disorders, objectively measured physiological responses have commonly been used as a proxy for measuring subjective experiences associated with pathology. However, this commonly adopted “biosignal” approach has recently been called into question on the grounds that subjective experiences and objective physiological responses may dissociate. We performed machine-learning-based analyses on functional magnetic resonance imaging (fMRI) data to assess this issue in the case of fear. Although subjective fear and objective physiological responses were correlated in general, the respective whole-brain multivoxel decoders for the two measures were different. Some key brain regions such as the amygdala and insula appear to be primarily involved in the prediction of physiological reactivity, whereas some regions previously associated with metacognition and conscious perception, including some areas in the prefrontal cortex, appear to be primarily predictive of the subjective experience of fear. The present findings support the recent call for caution in assuming a one-to-one mapping between subjective suffering and its putative biosignals, despite the clear advantage that the latter can be measured objectively and continuously in physiological terms.



2021 ◽  
Author(s):  
Trung Quang Pham ◽  
Takaaki Yoshimoto ◽  
Haruki Niwa ◽  
Haruka K Takahashi ◽  
Ryutaro Uchiyama ◽  
...  

Abstract Humans, and now computers, can derive subjective valuations from sensory events, although the underlying transformation process is essentially unknown. In this study, we elucidated these neural mechanisms by comparing convolutional neural networks (CNNs) to their corresponding representations in humans. Specifically, we optimized CNNs to predict aesthetic valuations of paintings and examined the relationship between the CNN representations and brain activity via multivoxel pattern analysis. Primary visual cortex and higher association cortex activities were similar to computations in shallow and deeper CNN layers, respectively. The vision-to-value transformation is hence shown to be a hierarchical process consistent with the principal gradient that connects unimodal to transmodal brain regions (i.e., the default mode network). The activity of the frontal and parietal cortices was approximated by goal-driven CNNs. Consequently, representations of the hidden layers of CNNs can be understood and visualized by their correspondence with brain activity, facilitating parallels between artificial intelligence and neuroscience.
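One common way to relate a network layer to a brain region, as in the comparison described above, is representational similarity: compute a dissimilarity matrix over stimuli in each representation and rank-correlate the two. The sketch below uses simulated feature matrices; the study's actual CNNs, stimuli, and voxel data are not reproduced.

```python
# Representational-similarity sketch: compare a network layer to a
# brain region by correlating their stimulus dissimilarity structure.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_stimuli = 40

# A shared low-dimensional stimulus structure drives both representations
latent = rng.normal(size=(n_stimuli, 5))
layer_features = latent @ rng.normal(size=(5, 64)) + 0.1 * rng.normal(size=(n_stimuli, 64))
voxel_patterns = latent @ rng.normal(size=(5, 200)) + 0.1 * rng.normal(size=(n_stimuli, 200))

# Representational dissimilarity matrices (condensed form), then a rank
# correlation between them: a high value means the layer and the region
# carry similar stimulus geometry.
rdm_layer = pdist(layer_features, metric="correlation")
rdm_brain = pdist(voxel_patterns, metric="correlation")
rho, _ = spearmanr(rdm_layer, rdm_brain)
print(f"layer-region RDM similarity (Spearman rho): {rho:.2f}")
```

Repeating this comparison for each CNN layer against each cortical region yields the shallow-to-deep versus unimodal-to-transmodal correspondence the abstract reports.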



2015 ◽  
Vol 27 (10) ◽  
pp. 2000-2018 ◽  
Author(s):  
Marie St-Laurent ◽  
Hervé Abdi ◽  
Bradley R. Buchsbaum

According to the principle of reactivation, memory retrieval evokes patterns of brain activity that resemble those instantiated when an event was first experienced. Intuitively, one would expect neural reactivation to contribute to recollection (i.e., the vivid impression of reliving past events), but evidence of a direct relationship between the subjective quality of recollection and multiregional reactivation of item-specific neural patterns is lacking. The current study assessed this relationship using fMRI to measure brain activity as participants viewed and mentally replayed a set of short videos. We used multivoxel pattern analysis to train a classifier to identify individual videos based on brain activity evoked during perception and tested how accurately the classifier could distinguish among videos during mental replay. Classification accuracy correlated positively with memory vividness, indicating that the specificity of multivariate brain patterns observed during memory retrieval was related to the subjective quality of a memory. In addition, we identified a set of brain regions whose univariate activity during retrieval predicted both memory vividness and the strength of the classifier's prediction irrespective of the particular video that was retrieved. Our results establish distributed patterns of neural reactivation as a valid and objective marker of the quality of recollection.
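The train-on-perception, test-on-replay logic described above can be sketched directly. Everything here is simulated under the assumption that mental replay weakly reinstates each video's perceptual pattern; the original study's stimuli, preprocessing, and classifier are not reproduced.

```python
# Cross-condition classification sketch: train a classifier to identify
# which "video" was seen from perception-phase patterns, then test it on
# noisier replay-phase patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_videos, trials_per_video, n_voxels = 4, 30, 150

# Each video has a characteristic voxel pattern; replay reinstates it weakly
prototypes = rng.normal(size=(n_videos, n_voxels))
labels = np.repeat(np.arange(n_videos), trials_per_video)
perceive = prototypes[labels] + rng.normal(scale=1.0, size=(len(labels), n_voxels))
replay = 0.5 * prototypes[labels] + rng.normal(scale=1.0, size=(len(labels), n_voxels))

clf = LogisticRegression(max_iter=1000).fit(perceive, labels)
accuracy = clf.score(replay, labels)
print(f"replay classification accuracy: {accuracy:.2f} (chance = {1/n_videos:.2f})")
```

In the study, the analogue of this accuracy was computed per trial and correlated with participants' vividness ratings.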



2014 ◽  
Vol 26 (5) ◽  
pp. 955-969 ◽  
Author(s):  
Annelinde R. E. Vandenbroucke ◽  
Johannes J. Fahrenfort ◽  
Ilja G. Sligte ◽  
Victor A. F. Lamme

Every day, we experience a rich and complex visual world. Our brain constantly translates meaningless fragmented input into coherent objects and scenes. However, our attentional capabilities are limited, and we can only report the few items that we happen to attend to. So what happens to items that are not cognitively accessed? Do these remain fragmentary and meaningless? Or are they processed up to a level where perceptual inferences take place about image composition? To investigate this, we recorded brain activity using fMRI while participants viewed images containing a Kanizsa figure, an illusion in which an object is perceived by means of perceptual inference. Participants were presented with the Kanizsa figure and three matched nonillusory control figures while they were engaged in an attentionally demanding distractor task. After the task, one group of participants was unable to identify the Kanizsa figure in a forced-choice decision task; hence, they were “inattentionally blind.” A second group had no trouble identifying the Kanizsa figure. Interestingly, the neural signature that was unique to the processing of the Kanizsa figure was present in both groups. Moreover, within-subject multivoxel pattern analysis showed that the neural signature of unreported Kanizsa figures could be used to classify reported Kanizsa figures and that this cross-report classification worked better for the Kanizsa condition than for the control conditions. Together, these results suggest that stimuli that are not cognitively accessed are processed up to levels of perceptual interpretation.



2017 ◽  
Author(s):  
Ashley Prichard ◽  
Peter F. Cook ◽  
Mark Spivak ◽  
Raveena Chhibber ◽  
Gregory S. Berns

Abstract How do dogs understand human words? At a basic level, understanding would require the discrimination of words from non-words. To determine the mechanisms of such a discrimination, we trained 12 dogs to retrieve two objects based on object names, then probed the neural basis for these auditory discriminations using awake-fMRI. We compared the neural response to these trained words relative to “oddball” pseudowords the dogs had not heard before. Consistent with novelty detection, we found greater activation for pseudowords relative to trained words bilaterally in the parietotemporal cortex. To probe the neural basis for representations of trained words, searchlight multivoxel pattern analysis (MVPA) revealed that a subset of dogs had clusters of informative voxels that discriminated between the two trained words. These clusters included the left temporal cortex and amygdala, left caudate nucleus, and thalamus. These results demonstrate that dogs’ processing of human words utilizes basic processes like novelty detection, and for some dogs, may also include auditory and hedonic representations.
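A searchlight analysis like the one described above slides a small neighborhood across the brain and cross-validates a classifier inside each one, mapping where local patterns carry information. The toy sketch below uses a 1-D strip of simulated voxels; real searchlights use spheres in 3-D volumes (e.g., via nilearn), and the informative voxels here are planted by construction.

```python
# Toy 1-D "searchlight": slide a window across voxels and cross-validate
# a two-way classifier inside each window.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_voxels, radius = 80, 60, 3
labels = np.tile([0, 1], n_trials // 2)   # two trained "words"

X = rng.normal(size=(n_trials, n_voxels)).astype(float)
X[:, 20:30] += labels[:, None] * 1.0      # only voxels 20-29 carry word information

scores = []
for center in range(radius, n_voxels - radius):
    window = X[:, center - radius:center + radius + 1]
    acc = cross_val_score(LinearSVC(), window, labels, cv=5).mean()
    scores.append(acc)

best = int(np.argmax(scores)) + radius
print(f"most informative searchlight centered at voxel {best}")
```

Thresholding the resulting accuracy map (against chance, with correction) yields the informative clusters reported per dog.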



2018 ◽  
Author(s):  
Giulia V. Elli ◽  
Connor Lane ◽  
Marina Bedny

Abstract What is the neural organization of the mental lexicon? Previous research suggests that partially distinct cortical networks are active during verb and noun processing. Are these networks preferentially involved in representing the meanings of verbs as opposed to nouns? We used multivoxel pattern analysis (MVPA) to investigate whether brain regions that are more active during verb than noun processing are also more sensitive to distinctions among their preferred lexical class. Participants heard four types of verbs (light emission, sound emission, hand-related actions, mouth-related actions) and four types of nouns (birds, mammals, manmade places, natural places). As previously shown, the left posterior middle temporal gyrus (LMTG) and inferior frontal gyrus (LIFG) responded more to verbs, whereas areas in the inferior parietal lobule (LIP), precuneus (LPC), and inferior temporal (LIT) cortex responded more to nouns. MVPA revealed a double dissociation in semantic sensitivity: classification was more accurate among verbs than nouns in the LMTG, and among nouns than verbs in the LIP, LPC, and LIT. However, classification was similar for verbs and nouns in the LIFG, and above chance for the non-preferred category in all regions. These results suggest that the meanings of verbs and nouns are represented in partially non-overlapping networks.



eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Arvid Guterstam ◽  
Branden J Bio ◽  
Andrew I Wilterson ◽  
Michael Graziano

In a traditional view of social cognition, attention is equated with gaze, and people track other people’s attention by tracking their gaze. Here, we used fMRI to test whether the brain represents attention in a richer manner. People read stories describing an agent (either oneself or someone else) directing attention to an object in one of two ways: either internally directed (endogenous) or externally induced (exogenous). We used multivoxel pattern analysis to examine how brain areas within the theory-of-mind network encoded attention type and agent type. Brain activity patterns in the left temporo-parietal junction (TPJ) showed significant decoding of information about endogenous versus exogenous attention. The left TPJ, left superior temporal sulcus (STS), precuneus, and medial prefrontal cortex (MPFC) significantly decoded agent type (self versus other). These findings show that the brain constructs a rich model of one’s own and others’ attentional state, possibly aiding theory of mind.



2017 ◽  
Author(s):  
Felipe Pegado ◽  
Michelle H.A. Hendriks ◽  
Steffie Amelynck ◽  
Nicky Daniels ◽  
Jessica Bulthé ◽  
...  

Abstract Humans are highly skilled in social reasoning, e.g., inferring the thoughts of others. This mentalizing ability systematically recruits brain regions such as the temporo-parietal junction (TPJ), precuneus (PC), and medial prefrontal cortex (mPFC). Further, posterior mPFC is associated with allocentric mentalizing and conflict monitoring, while anterior mPFC is associated with self-related mentalizing. Here we extend this work to how we reason not just about what one person thinks but about an abstract shared social norm. We apply functional magnetic resonance imaging to investigate neural representations while participants judge the social congruency of emotional auditory stimuli in relation to visual scenes, according to how ‘most people’ would perceive it. Behaviorally, judging according to a social norm increased the similarity of response patterns among participants. Multivoxel pattern analysis revealed that social congruency information was not represented in visual and auditory areas, but was clear in most parts of the mentalizing network: TPJ, PC, and posterior (but not anterior) mPFC. Furthermore, interindividual variability in anterior mPFC representations was inversely related to the behavioral ability to adjust to the social norm. Our results suggest that social norm inference is associated with a distributed and partially individually specific representation of social congruency in the mentalizing network.
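The behavioral finding above, that judging by a social norm makes participants' response patterns more alike, rests on an inter-subject similarity score. A minimal sketch, with simulated responses and an assumed leave-one-out correlation measure (the study's exact metric is not reproduced here):

```python
# Inter-subject similarity sketch: score how much each participant's
# response pattern resembles the leave-one-out group average, under
# "own view" vs "social norm" instructions. Responses are simulated.
import numpy as np

rng = np.random.default_rng(4)
n_subjects, n_items = 20, 50

norm = rng.normal(size=n_items)                                      # shared norm judgment
own = norm + rng.normal(scale=2.0, size=(n_subjects, n_items))      # idiosyncratic responses
social = norm + rng.normal(scale=0.5, size=(n_subjects, n_items))   # norm-aligned responses

def mean_loo_similarity(responses):
    """Average correlation of each subject with the mean of the others."""
    sims = []
    for s in range(len(responses)):
        others = np.delete(responses, s, axis=0).mean(axis=0)
        sims.append(np.corrcoef(responses[s], others)[0, 1])
    return float(np.mean(sims))

print(f"own-view similarity:    {mean_loo_similarity(own):.2f}")
print(f"social-norm similarity: {mean_loo_similarity(social):.2f}")
```

Under the norm instruction, responses share more variance with the group, so the leave-one-out similarity rises.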



2020 ◽  
Author(s):  
Arvid Guterstam ◽  
Branden J Bio ◽  
Andrew I Wilterson ◽  
Michael SA Graziano

Abstract In a traditional view of social cognition, attention is equated with gaze, and people track attention by tracking other people’s gaze. Here we used fMRI to test whether the brain represents attention in a richer manner. People read stories describing an agent (either oneself or someone else) directing attention to an object in one of two ways: either internally directed (endogenous) or externally induced (exogenous). We used multivoxel pattern analysis to examine how brain areas within the theory-of-mind network encoded attention type and agent type. Brain activity patterns in the left temporo-parietal junction (TPJ) showed significant decoding of information about endogenous versus exogenous attention. The left TPJ, left superior temporal sulcus (STS), precuneus, and medial prefrontal cortex (MPFC) significantly decoded agent type (self versus other). These findings show that the brain constructs a rich model of one’s own and others’ attentional state, possibly aiding theory of mind.

Impact statement
This study used fMRI to show that the human brain encodes other people’s attention in enough richness to distinguish whether that attention was directed exogenously (stimulus-driven) or endogenously (internally driven).



2021 ◽  
Vol 15 ◽  
Author(s):  
Alexander Maÿe ◽  
Tiezhi Wang ◽  
Andreas K. Engel

Hyper-brain studies analyze the brain activity of two or more individuals during some form of interaction. Several studies found signs of inter-subject brain activity coordination, such as power and phase synchronization or information flow. This hyper-brain coordination is frequently studied in paradigms which induce rhythms or even synchronization, e.g., by mirroring movements, turn-based activity in card or economic games, or joint music making. It is therefore interesting to figure out to what extent coordinated brain activity may be induced by a rhythmicity in the task and/or the sensory feedback that the partners receive. We therefore studied the EEG brain activity of dyads in a task that required the smooth pursuit of a target and did not involve any extrinsic rhythms. Partners controlled orthogonal axes of the two-dimensional motion of an object that had to be kept on the target. Using several methods for analyzing hyper-brain coupling, we could not detect signs of coordinated brain activity. However, we found several brain regions in which the frequency-specific activity significantly correlated with the objective task performance, the subjective experience thereof, and the subjective experience of the collaboration. Activity in these regions has been linked to motor control, sensorimotor integration, executive control, and emotional processing. Our results suggest that neural correlates of intersubjectivity encompass large parts of brain areas that are considered to be involved in sensorimotor control without necessarily coordinating their activity across agents.
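One standard phase-synchronization measure in such hyper-brain analyses is the phase-locking value (PLV). The sketch below computes it on synthetic sine waves; real analyses band-pass filter each partner's EEG into a frequency band before extracting the instantaneous phase.

```python
# Minimal phase-locking value (PLV) sketch between two signals.
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two signals (1 = perfect locking)."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * phase_diff))))

rng = np.random.default_rng(5)
t = np.linspace(0, 2, 1000)

locked = np.sin(2 * np.pi * 10 * t)
shifted = np.sin(2 * np.pi * 10 * t + 0.8)   # constant phase lag: locked
# Random-walk phase: the lag drifts over time, so locking breaks down
drifting = np.sin(2 * np.pi * 10 * t + np.cumsum(rng.normal(scale=0.2, size=t.size)))

print(f"PLV, constant lag:   {plv(locked, shifted):.2f}")
print(f"PLV, drifting phase: {plv(locked, drifting):.2f}")
```

A PLV near 1 only requires a stable phase relationship, not a zero lag, which is why a constant offset still counts as locking.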


