Neural Representations Behind ‘Social Norm’ Inferences In Humans

2017 ◽  
Author(s):  
Felipe Pegado ◽  
Michelle H.A. Hendriks ◽  
Steffie Amelynck ◽  
Nicky Daniels ◽  
Jessica Bulthé ◽  
...  

Abstract Humans are highly skilled in social reasoning, e.g., inferring the thoughts of others. This mentalizing ability systematically recruits brain regions such as the Temporo-Parietal Junction (TPJ), Precuneus (PC), and medial Prefrontal Cortex (mPFC). Further, posterior mPFC is associated with allocentric mentalizing and conflict monitoring, while anterior mPFC is associated with self-related mentalizing. Here we extend this work to how we reason not just about what one person thinks but about the abstract shared social norm. We apply functional magnetic resonance imaging to investigate neural representations while participants judge the social congruency of emotional auditory stimuli in relation to visual scenes according to how ‘most people’ would perceive them. Behaviorally, judging according to a social norm increased the similarity of response patterns among participants. Multivoxel pattern analysis revealed that social congruency information was not represented in visual and auditory areas, but was clearly represented in most parts of the mentalizing network: TPJ, PC, and posterior (but not anterior) mPFC. Furthermore, interindividual variability in anterior mPFC representations was inversely related to the behavioral ability to adjust to the social norm. Our results suggest that social norm inference is associated with a distributed and partially individually specific representation of social congruency in the mentalizing network.


2021 ◽  
pp. 095679762110218
Author(s):  
Ryuhei Ueda ◽  
Nobuhito Abe

Having an intimate romantic relationship is an important aspect of life. Dopamine-rich reward regions, including the nucleus accumbens (NAcc), have been identified as neural correlates for both emotional bonding with the partner and interest in unfamiliar attractive nonpartners. Here, we aimed to disentangle the overlapping functions of the NAcc using multivoxel pattern analysis, which can decode the cognitive processes encoded in particular neural activity. During functional MRI scanning, 46 romantically involved men performed the social-incentive-delay task, in which a successful response resulted in the presentation of a dynamic and positive facial expression from their partner and unfamiliar women. Multivoxel pattern analysis revealed that the spatial patterns of NAcc activity could successfully discriminate between romantic partners and unfamiliar women during the period in which participants anticipated the target presentation. We speculate that neural activity patterns within the NAcc represent the relationship partner, which might be a key neural mechanism for committed romantic relationships.
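The multivoxel pattern analysis described above can be sketched as follows: a linear classifier is trained to discriminate two conditions (here standing in for partner vs. unfamiliar-face anticipation) from trial-wise voxel patterns, with leave-one-run-out cross-validation. This is an illustrative reconstruction on synthetic data, not the authors' pipeline; the region name, voxel count, run structure, and effect size are all assumptions.

```python
# Sketch of two-condition MVPA decoding with leave-one-run-out cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_runs, trials_per_run, n_voxels = 6, 20, 50

# Simulate trial-wise "NAcc" patterns: the two conditions share noise but
# differ by a weak, spatially distributed signal pattern.
signal = rng.normal(0, 1, n_voxels)          # condition-specific pattern
X, y, runs = [], [], []
for run in range(n_runs):
    for trial in range(trials_per_run):
        label = trial % 2                     # alternate conditions
        pattern = rng.normal(0, 1, n_voxels) + (0.8 * signal if label else 0)
        X.append(pattern); y.append(label); runs.append(run)
X, y, runs = np.array(X), np.array(y), np.array(runs)

# Leave-one-run-out CV keeps training and test trials in different runs,
# which avoids leakage from run-specific noise.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=runs, cv=LeaveOneGroupOut())
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

Above-chance cross-validated accuracy is what licenses the claim that the spatial pattern of activity carries condition information.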



2017 ◽  
Author(s):  
Ashley Prichard ◽  
Peter F. Cook ◽  
Mark Spivak ◽  
Raveena Chhibber ◽  
Gregory S. Berns

Abstract How do dogs understand human words? At a basic level, understanding would require the discrimination of words from non-words. To determine the mechanisms of such a discrimination, we trained 12 dogs to retrieve two objects based on object names, then probed the neural basis for these auditory discriminations using awake fMRI. We compared the neural response to these trained words relative to “oddball” pseudowords the dogs had not heard before. Consistent with novelty detection, we found greater activation for pseudowords relative to trained words bilaterally in the parietotemporal cortex. To probe the neural basis for representations of the trained words, we used searchlight multivoxel pattern analysis (MVPA), which revealed that a subset of dogs had clusters of informative voxels that discriminated between the two trained words. These clusters included the left temporal cortex and amygdala, left caudate nucleus, and thalamus. These results demonstrate that dogs’ processing of human words utilizes basic processes like novelty detection and, for some dogs, may also include auditory and hedonic representations.
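The searchlight idea can be sketched in simplified form: a small neighborhood is moved across the volume (here a one-dimensional window of voxels, rather than a 3-D sphere), and a classifier is cross-validated within each neighborhood to produce a map of where information lives. Everything here is synthetic; the window size and location of the informative cluster are assumptions for illustration.

```python
# Toy 1-D searchlight MVPA: cross-validated accuracy per voxel neighborhood.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels, radius = 80, 60, 3
y = np.tile([0, 1], n_trials // 2)            # two trained "words"

X = rng.normal(0, 1, (n_trials, n_voxels))
X[y == 1, 20:30] += 1.0                        # information only in voxels 20..29

acc_map = np.zeros(n_voxels)
for center in range(n_voxels):
    lo, hi = max(0, center - radius), min(n_voxels, center + radius + 1)
    scores = cross_val_score(LogisticRegression(max_iter=1000),
                             X[:, lo:hi], y, cv=5)
    acc_map[center] = scores.mean()

print("peak accuracy at voxel", int(acc_map.argmax()))
```

In a real analysis the resulting accuracy map is thresholded against chance to identify informative clusters, as in the per-dog clusters reported above.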



2021 ◽  
Author(s):  
Trung Quang Pham ◽  
Takaaki Yoshimoto ◽  
Haruki Niwa ◽  
Haruka K Takahashi ◽  
Ryutaro Uchiyama ◽  
...  

Abstract Humans, and now computers, can derive subjective valuations from sensory events, although the underlying transformation process remains essentially unknown. In this study, we investigated these neural mechanisms by comparing convolutional neural networks (CNNs) to their corresponding representations in humans. Specifically, we optimized CNNs to predict aesthetic valuations of paintings and examined the relationship between the CNN representations and brain activity via multivoxel pattern analysis. Primary visual cortex and higher association cortex activities were similar to computations in shallow and deeper CNN layers, respectively. The vision-to-value transformation is thus shown to be a hierarchical process, consistent with the principal gradient that connects unimodal to transmodal brain regions (i.e., the default mode network). The activity of the frontal and parietal cortices was approximated by a goal-driven CNN. Consequently, representations in the hidden layers of CNNs can be understood and visualized through their correspondence with brain activity, facilitating parallels between artificial intelligence and neuroscience.
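The layer-to-region comparison logic can be sketched with representational dissimilarity matrices (RDMs): an RDM from a "shallow" and a "deep" model representation is correlated with RDMs from simulated "early visual" and "association" regions. This is a schematic reconstruction on synthetic data; a real analysis would use actual CNN layer activations and fMRI voxel patterns.

```python
# RDM-based comparison of model layers to simulated brain regions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_stimuli, n_features = 40, 30
stimuli = rng.normal(0, 1, (n_stimuli, n_features))

shallow = stimuli                                    # "early layer": near-raw features
W = rng.normal(0, 1, (n_features, n_features))
deep = np.tanh(stimuli @ W)                          # saturating nonlinear transform

# Simulated regions inherit each code plus measurement noise.
v1 = shallow + 0.3 * rng.normal(0, 1, shallow.shape)
assoc = deep + 0.3 * rng.normal(0, 1, deep.shape)

def rdm(patterns):
    # Condition-by-condition dissimilarity (1 - correlation), condensed form.
    return pdist(patterns, metric="correlation")

def match(layer, region):
    return spearmanr(rdm(layer), rdm(region))[0]

print("shallow-V1:", round(match(shallow, v1), 2),
      " deep-assoc:", round(match(deep, assoc), 2))
```

The expected outcome is the hierarchical correspondence described above: the shallow representation best matches the early-visual RDM and the deep representation best matches the association-region RDM.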



2015 ◽  
Vol 27 (10) ◽  
pp. 2000-2018 ◽  
Author(s):  
Marie St-Laurent ◽  
Hervé Abdi ◽  
Bradley R. Buchsbaum

According to the principle of reactivation, memory retrieval evokes patterns of brain activity that resemble those instantiated when an event was first experienced. Intuitively, one would expect neural reactivation to contribute to recollection (i.e., the vivid impression of reliving past events), but evidence of a direct relationship between the subjective quality of recollection and multiregional reactivation of item-specific neural patterns is lacking. The current study assessed this relationship using fMRI to measure brain activity as participants viewed and mentally replayed a set of short videos. We used multivoxel pattern analysis to train a classifier to identify individual videos based on brain activity evoked during perception and tested how accurately the classifier could distinguish among videos during mental replay. Classification accuracy correlated positively with memory vividness, indicating that the specificity of multivariate brain patterns observed during memory retrieval was related to the subjective quality of a memory. In addition, we identified a set of brain regions whose univariate activity during retrieval predicted both memory vividness and the strength of the classifier's prediction irrespective of the particular video that was retrieved. Our results establish distributed patterns of neural reactivation as a valid and objective marker of the quality of recollection.



2018 ◽  
Author(s):  
Giulia V. Elli ◽  
Connor Lane ◽  
Marina Bedny

Abstract What is the neural organization of the mental lexicon? Previous research suggests that partially distinct cortical networks are active during verb and noun processing. Are these networks preferentially involved in representing the meanings of verbs as opposed to nouns? We used multivoxel pattern analysis (MVPA) to investigate whether brain regions that are more active during verb than noun processing are also more sensitive to distinctions among their preferred lexical class. Participants heard four types of verbs (light emission, sound emission, hand-related actions, mouth-related actions) and four types of nouns (birds, mammals, manmade places, natural places). As previously shown, the left posterior middle temporal gyrus (LMTG) and inferior frontal gyrus (LIFG) responded more to verbs, whereas areas in the inferior parietal lobule (LIP), precuneus (LPC), and inferior temporal (LIT) cortex responded more to nouns. MVPA revealed a double-dissociation in semantic sensitivity: classification was more accurate among verbs than nouns in the LMTG, and among nouns than verbs in the LIP, LPC, and LIT. However, classification was similar for verbs and nouns in the LIFG, and above chance for the non-preferred category in all regions. These results suggest that the meanings of verbs and nouns are represented in partially non-overlapping networks.



2019 ◽  
Author(s):  
Vincent Taschereau-Dumouchel ◽  
Mitsuo Kawato ◽  
Hakwan Lau

Abstract In studies of anxiety and other affective disorders, objectively measured physiological responses have commonly been used as a proxy for measuring subjective experiences associated with pathology. However, this commonly adopted ‘biosignal’ approach has recently been called into question on the grounds that subjective experiences and objective physiological responses may dissociate. We performed machine-learning-based analysis on functional magnetic resonance imaging (fMRI) data to assess this issue in the case of fear. Participants were presented with pictures of commonly feared animals in an fMRI experiment. Multivoxel brain activity decoders were trained to predict participants’ subjective fear ratings and their skin conductance reactivity, respectively. While subjective fear and objective physiological responses were correlated in general, the respective whole-brain multivoxel decoders for the two measures were not identical. Some key brain regions such as the amygdala and insula appear to be primarily involved in the prediction of physiological reactivity, while some regions previously associated with metacognition and conscious perception, including some areas in the prefrontal cortex, appear to be primarily predictive of the subjective experience of fear. The present findings are in support of the recent call for caution in assuming a one-to-one mapping between subjective sufferings and their putative biosignals, despite the clear advantages in the latter’s being objectively and continuously measurable in physiological terms.
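The dual-decoder logic can be sketched as follows: one multivoxel regression model predicts subjective ratings and another predicts physiological reactivity, and their weight maps are compared. In the simulation the two targets share a common drive but also depend on partially distinct voxel sets, so the targets correlate while the decoders diverge. The data, voxel layout, and Ridge decoder are illustrative assumptions, not the authors' method.

```python
# Two multivoxel decoders for correlated but dissociable targets.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_trials, n_voxels = 200, 60
X = rng.normal(0, 1, (n_trials, n_voxels))

# Shared drive plus measure-specific voxel contributions.
shared = X[:, :20].sum(axis=1)
fear = shared + 2.0 * X[:, 20:40].sum(axis=1) + rng.normal(0, 1, n_trials)
scr  = shared + 2.0 * X[:, 40:60].sum(axis=1) + rng.normal(0, 1, n_trials)

w_fear = Ridge(alpha=1.0).fit(X, fear).coef_
w_scr  = Ridge(alpha=1.0).fit(X, scr).coef_

# The targets correlate, but the weight maps only partially overlap.
target_r = np.corrcoef(fear, scr)[0, 1]
weight_r = np.corrcoef(w_fear, w_scr)[0, 1]
print(f"target correlation {target_r:.2f}, weight-map correlation {weight_r:.2f}")
```

The dissociation between a positive target correlation and a much weaker weight-map correlation is the formal sense in which subjective and physiological measures can share variance yet rely on different neural substrates.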



2019 ◽  
Vol 25 (10) ◽  
pp. 2342-2354 ◽  
Author(s):  
Vincent Taschereau-Dumouchel ◽  
Mitsuo Kawato ◽  
Hakwan Lau

Abstract In studies of anxiety and other affective disorders, objectively measured physiological responses have commonly been used as a proxy for measuring subjective experiences associated with pathology. However, this commonly adopted “biosignal” approach has recently been called into question on the grounds that subjective experiences and objective physiological responses may dissociate. We performed machine-learning-based analyses on functional magnetic resonance imaging (fMRI) data to assess this issue in the case of fear. Although subjective fear and objective physiological responses were correlated in general, the respective whole-brain multivoxel decoders for the two measures were different. Some key brain regions such as the amygdala and insula appear to be primarily involved in the prediction of physiological reactivity, whereas some regions previously associated with metacognition and conscious perception, including some areas in the prefrontal cortex, appear to be primarily predictive of the subjective experience of fear. The present findings are in support of the recent call for caution in assuming a one-to-one mapping between subjective sufferings and their putative biosignals, despite the clear advantages in the latter’s being objectively and continuously measurable in physiological terms.



2020 ◽  
Author(s):  
Miriam E. Weaverdyck ◽  
Mark Allen Thornton ◽  
Diana Tamir

Each individual experiences mental states in their own idiosyncratic way, yet perceivers are able to accurately understand a huge variety of states across unique individuals. How do they accomplish this feat? Do people think about their own anger in the same ways as another person’s? Is reading about someone’s anxiety the same as seeing it? Here, we test the hypothesis that a common conceptual core unites mental state representations across contexts. Across three studies, participants judged the mental states of multiple targets, including a generic other, the self, a socially close other, and a socially distant other. Participants viewed mental state stimuli in multiple modalities, including written scenarios and images. Using representational similarity analysis, we found that brain regions associated with social cognition expressed stable neural representations of mental states across both targets and modalities. This suggests that people use stable models of mental states across different people and contexts.
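The representational similarity logic can be sketched as follows: for each target (e.g., self vs. a socially distant other), a state-by-state representational dissimilarity matrix (RDM) is computed from voxel patterns, and RDMs are correlated across targets to test for a stable conceptual core. The patterns are synthetic, and the shared underlying state geometry is precisely the assumption being illustrated.

```python
# RDM stability across targets as a test for a common conceptual core.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_states, n_voxels = 15, 50

core = rng.normal(0, 1, (n_states, n_voxels))   # shared mental-state geometry
targets = {name: core + 0.4 * rng.normal(0, 1, core.shape)
           for name in ["self", "close_other", "distant_other"]}

# One state-by-state RDM (condensed 1 - correlation distances) per target.
rdms = {name: pdist(pat, metric="correlation") for name, pat in targets.items()}

r_self_distant = spearmanr(rdms["self"], rdms["distant_other"])[0]
print(f"RDM correlation, self vs. distant other: {r_self_distant:.2f}")
```

A high RDM correlation across targets (and, analogously, across stimulus modalities) is the evidence pattern behind the claim of stable mental-state models.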



2018 ◽  
Author(s):  
Jay Joseph Van Bavel

We review literature from several fields to describe common experimental tasks used to measure human cooperation, the theoretical models that have been used to characterize cooperative decision-making, and the brain regions implicated in cooperation. Building on work in neuroeconomics, we suggest that a value-based account may provide the most powerful understanding of the psychology and neuroscience of group cooperation. We also review the role of individual differences and social context in shaping the mental processes that underlie cooperation, and consider gaps in the literature and potential directions for future research on the social neuroscience of cooperation. We suggest that this multi-level approach provides a more comprehensive understanding of the mental and neural processes that underlie the decision to cooperate with others.


