Inter-subject representational similarity analysis reveals individual variations in affective experience when watching erotic movies

2019 ◽  
Author(s):  
Pin-Hao A. Chen ◽  
Eshin Jolly ◽  
Jin Hyun Cheong ◽  
Luke J. Chang

Abstract
We spend much of our life pursuing or avoiding affective experiences. However, surprisingly little is known about how these experiences are represented in the brain and whether they are shared across individuals. Here, we explore variations in the construction of an affective experience during a naturalistic viewing paradigm based on subjective preferences in sociosexual desire and self-control using intersubject representational similarity analysis (IS-RSA). We found that when watching erotic movies, intersubject variations in the sociosexual desire preferences of 26 heterosexual males were associated with similarly structured fluctuations in the cortico-striatal reward, default mode, and mentalizing networks. In contrast, variations in self-control preferences were associated with shared dynamics in the fronto-parietal executive control and cingulo-insula salience networks. Importantly, these results were specific to the affective experience, as we did not observe any relationship with variation in preferences when individuals watched neutral movies. Moreover, these results appear to require multivariate representations of preferences, as we did not observe any significant results using single summary scores. Our findings demonstrate that multidimensional variations in individual preferences can be used to uncover unique dimensions of an affective experience, and that IS-RSA can provide new insights into the neural processes underlying psychological experiences elicited through naturalistic experimental designs.
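The logic of IS-RSA can be sketched in a few lines: build one subject-by-subject similarity matrix from behavioural preference profiles and another from brain responses, then correlate their upper triangles. The sketch below uses synthetic data and illustrative names (`behaviour`, `brain`, `is_rsa`); it is not the authors' code, and a real analysis would add a permutation-based significance test (e.g. a Mantel test).

```python
import numpy as np

def is_rsa(behaviour, brain):
    """Correlate the upper triangles of the two intersubject
    similarity matrices (a minimal, unpermuted IS-RSA statistic)."""
    sim_beh = np.corrcoef(behaviour)    # subjects x subjects
    sim_brain = np.corrcoef(brain)
    iu = np.triu_indices_from(sim_beh, k=1)
    return np.corrcoef(sim_beh[iu], sim_brain[iu])[0, 1]

rng = np.random.default_rng(0)
n_sub = 26                                  # matches the study's sample size
w = rng.uniform(0, 1, size=(n_sub, 1))      # latent preference per subject
proto_beh = rng.standard_normal((2, 8))     # two prototype item profiles
proto_brain = rng.standard_normal((2, 50))  # two prototype response patterns

# Each subject's profile is a w-weighted mix of the prototypes plus noise,
# so subjects with similar w have similar behaviour AND similar brain data.
behaviour = w * proto_beh[0] + (1 - w) * proto_beh[1] + 0.1 * rng.standard_normal((n_sub, 8))
brain = w * proto_brain[0] + (1 - w) * proto_brain[1] + 0.1 * rng.standard_normal((n_sub, 50))

r = is_rsa(behaviour, brain)
print(round(r, 2))
```

Because both similarity matrices here are driven by the same latent preference `w`, the statistic comes out positive; with unrelated behaviour and brain data it would hover near zero.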

2019 ◽  
Author(s):  
Harry Farmer ◽  
Uri Hertz ◽  
Antonia Hamilton

Abstract
During our daily lives, we often learn about the similarity of the traits and preferences of others to our own and use that information during our social interactions. However, it is unclear how the brain represents similarity between the self and others. One possible mechanism is to track similarity to oneself regardless of the identity of the other (similarity account); an alternative is to track, for each confederate separately, how consistent their similarity to the self is with the choices they have made before (consistency account). Our study combined fMRI and computational modelling of reinforcement learning (RL) to investigate the neural processes that underlie learning about preference similarity. Participants chose which of two pieces of artwork they preferred and saw the choices of one confederate who usually shared their preference and another who usually did not. We modelled neural activation with RL models based on the similarity and consistency accounts. The data revealed more brain regions whose activity patterns fit the consistency account, specifically areas linked to reward and social cognition. Our findings suggest that impressions of other people are computed in a person-specific manner, under the assumption that each individual behaves consistently with their past choices.
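The distinction between the two accounts can be illustrated with a toy delta-rule (Rescorla-Wagner-style) learner. This is a hypothetical sketch of the contrast, not the authors' actual model: the identity-blind learner keeps a single estimate of agreement for all others, while the person-specific learner keeps one estimate per confederate.

```python
ALPHA = 0.3  # learning rate (illustrative value)

def update(v, outcome):
    """One delta-rule step: v <- v + alpha * (outcome - v)."""
    return v + ALPHA * (outcome - v)

# Interleaved toy trials: (confederate, 1 if they agreed with the participant).
trials = [("A", 1), ("B", 0), ("A", 1), ("B", 0),
          ("A", 0), ("B", 1), ("A", 1), ("B", 0)]

# Similarity account: one identity-blind estimate of agreement with others.
v_blind = 0.5
# Consistency account: a person-specific estimate per confederate.
v_person = {"A": 0.5, "B": 0.5}

for who, agreed in trials:
    v_blind = update(v_blind, agreed)
    v_person[who] = update(v_person[who], agreed)

# Only the person-specific learner separates the two confederates.
print(v_blind, v_person)
```

After these trials the person-specific values diverge (high for the mostly-agreeing confederate, low for the other), while the identity-blind value averages over both and stays near the middle.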


2016 ◽  
Author(s):  
Jörn Diedrichsen ◽  
Nikolaus Kriegeskorte

Abstract
Representational models specify how activity patterns in populations of neurons (or, more generally, in multivariate brain-activity measurements) relate to sensory stimuli, motor responses, or cognitive processes. In an experimental context, representational models can be defined as hypotheses about the distribution of activity profiles across experimental conditions. Currently, three different methods are being used to test such hypotheses: encoding analysis, pattern component modeling (PCM), and representational similarity analysis (RSA). Here we develop a common mathematical framework for understanding the relationship of these three methods, which share one core commonality: all three evaluate the second moment of the distribution of activity profiles, which determines the representational geometry, and thus how well any feature can be decoded from population activity with any readout mechanism capable of a linear transform. Using simulated data for three different experimental designs, we compare the power of the methods to adjudicate between competing representational models. PCM implements a likelihood-ratio test and therefore provides the most powerful test if its assumptions hold. However, the other two approaches – when conducted appropriately – can perform similarly. In encoding analysis, the linear model needs to be appropriately regularized, which effectively imposes a prior on the activity profiles. With such a prior, an encoding model specifies a well-defined distribution of activity profiles. In RSA, the unequal variances and statistical dependencies of the dissimilarity estimates need to be taken into account to reach near-optimal power in inference. The three methods render different aspects of the information explicit (e.g. single-response tuning in encoding analysis and population-response representational dissimilarity in RSA) and have specific advantages in terms of computational demands, ease of use, and extensibility.
The three methods are properly construed as complementary components of a single data-analytical toolkit for understanding neural representations on the basis of multivariate brain-activity data.

Author Summary
Modern neuroscience can measure activity of many neurons or the local blood oxygenation of many brain locations simultaneously. As the number of simultaneous measurements grows, we can better investigate how the brain represents and transforms information, to enable perception, cognition, and behavior. Recent studies go beyond showing that a brain region is involved in some function. They use representational models that specify how different perceptions, cognitions, and actions are encoded in brain-activity patterns. In this paper, we provide a general mathematical framework for such representational models, which clarifies the relationships between three different methods that are currently used in the neuroscience community. All three methods evaluate the same core feature of the data, but each has distinct advantages and disadvantages. Pattern component modeling (PCM) implements the most powerful test between models, and is analytically tractable and expandable. Representational similarity analysis (RSA) provides a highly useful summary statistic (the dissimilarity) and enables model comparison with weaker distributional assumptions. Finally, encoding models characterize individual responses and enable the study of their layout across cortex. We argue that these methods should be considered components of a larger toolkit for testing hypotheses about the way the brain represents information.
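The shared core of the three methods can be made concrete numerically: for activity patterns U (conditions x voxels), the second-moment matrix G = UUᵀ/P determines RSA's squared Euclidean dissimilarities via the identity d_ij = G_ii + G_jj - 2G_ij. The following check is an illustrative sketch, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.standard_normal((4, 100))  # 4 conditions x 100 voxels (activity patterns)
P = U.shape[1]

# Second-moment matrix of the activity profiles (conditions x conditions).
G = U @ U.T / P

# RSA's squared Euclidean dissimilarities follow directly from G:
# d_ij = G_ii + G_jj - 2 * G_ij
d_from_G = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G

# The same dissimilarities computed directly from the patterns.
diff = U[:, None, :] - U[None, :, :]
d_direct = (diff ** 2).sum(axis=-1) / P

print(np.allclose(d_from_G, d_direct))  # prints True
```

Because the dissimilarity matrix is a fixed function of G, any two models that imply the same second moment are indistinguishable to all three methods.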


2021 ◽  
Author(s):  
Huawei Xu ◽  
Ming Liu ◽  
Delong Zhang

Using deep neural networks (DNNs) as models to explore the biological brain is controversial, mainly because of the impenetrability of DNNs. Inspired by neural style transfer, we circumvented this problem by using deep features that were given a clear meaning: the representation of the semantic content of an image. Using encoding models and representational similarity analysis, we quantitatively showed that the deep features representing the semantic content of an image mainly modulated the activity of voxels in the early visual areas (V1, V2, and V3), and that these features were essentially depictive but also propositional. This result is broadly in line with the core viewpoint of grounded cognition, which holds that the representation of information in our brain is essentially depictive and can naturally implement symbolic functions.
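An encoding analysis of the kind described can be sketched as regularized (ridge) regression from stimulus features onto voxel responses, evaluated on held-out stimuli. All data, shapes, and names below are synthetic assumptions for illustration, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
n_stim, n_feat, n_vox = 80, 20, 30
X = rng.standard_normal((n_stim, n_feat))                    # deep-feature values per stimulus
W_true = rng.standard_normal((n_feat, n_vox))                # true feature-to-voxel weights
Y = X @ W_true + 0.5 * rng.standard_normal((n_stim, n_vox))  # simulated voxel responses

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y.
    The penalty lam acts as a prior on the activity profiles."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

W_hat = ridge_fit(X[:60], Y[:60])   # fit on 60 training stimuli
Y_pred = X[60:] @ W_hat             # predict 20 held-out stimuli

# Voxelwise prediction accuracy on the held-out stimuli.
r_vox = [np.corrcoef(Y_pred[:, v], Y[60:, v])[0, 1] for v in range(n_vox)]
mean_r = float(np.mean(r_vox))
print(round(mean_r, 2))
```

Voxels whose held-out responses are well predicted from the semantic features are the ones the features are said to "modulate"; in real data the feature matrix would come from a DNN layer rather than random numbers.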


2018 ◽  
Author(s):  
Maria Tsantani ◽  
Nikolaus Kriegeskorte ◽  
Carolyn McGettigan ◽  
Lúcia Garrido

Abstract
Face-selective and voice-selective brain regions have been shown to represent face-identity and voice-identity, respectively. Here we investigated whether there are modality-general person-identity representations in the brain that can be driven by either a face or a voice, and that invariantly represent naturalistically varying face and voice tokens of the same identity. According to two distinct models, such representations could exist either in multimodal brain regions (Campanella and Belin, 2007) or in face-selective brain regions via direct coupling between face- and voice-selective regions (von Kriegstein et al., 2005). To test the predictions of these two models, we used fMRI to measure brain activity patterns elicited by the faces and voices of familiar people in multimodal, face-selective and voice-selective brain regions. We used representational similarity analysis (RSA) to compare the representational geometries of face- and voice-elicited person-identities, and to investigate the degree to which pattern discriminants for pairs of identities generalise from one modality to the other. We found no matching geometries for faces and voices in any brain regions. However, we showed crossmodal generalisation of the pattern discriminants in the multimodal right posterior superior temporal sulcus (rpSTS), suggesting a modality-general person-identity representation in this region. Importantly, the rpSTS showed invariant representations of face- and voice-identities, in that discriminants were trained and tested on independent face videos (different viewpoint, lighting, background) and voice recordings (different vocalizations). Our findings support the Multimodal Processing Model, which proposes that face and voice information is integrated in multimodal brain regions.

Significance statement
It is possible to identify a familiar person either by looking at their face or by listening to their voice.
Using fMRI and representational similarity analysis (RSA), we show that the right posterior superior temporal sulcus (rpSTS), a multimodal brain region that responds to both faces and voices, contains representations that can distinguish between familiar people independently of whether we are looking at their face or listening to their voice. Crucially, these representations generalised across different face videos and voice recordings. Our findings suggest that identity information from visual and auditory processing systems is combined and integrated in the multimodal rpSTS region.
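The crossmodal generalisation test can be illustrated with a toy discriminant trained on simulated face-elicited patterns and tested on voice-elicited ones. The data-generating assumptions (a shared identity component plus modality-specific noise) and the nearest-centroid classifier are purely illustrative, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(3)
n_rep, n_vox = 10, 40

# Two identities, each with a modality-general pattern component.
id_patterns = rng.standard_normal((2, n_vox))

def sample(identity, n):
    """Simulated single-trial patterns: shared identity component
    plus modality-specific noise."""
    return id_patterns[identity] + 0.8 * rng.standard_normal((n, n_vox))

face_a, face_b = sample(0, n_rep), sample(1, n_rep)
voice_a, voice_b = sample(0, n_rep), sample(1, n_rep)

# Train a nearest-centroid discriminant on face patterns only...
centroids = np.stack([face_a.mean(axis=0), face_b.mean(axis=0)])

def classify(pattern):
    return int(np.argmin(((centroids - pattern) ** 2).sum(axis=1)))

# ...and test it on voice patterns: above-chance accuracy indicates
# a modality-general identity representation.
voices = np.concatenate([voice_a, voice_b])
labels = [0] * n_rep + [1] * n_rep
acc = np.mean([classify(p) == y for p, y in zip(voices, labels)])
print(acc)
```

If the simulated patterns had no shared identity component across modalities, the face-trained discriminant would classify voice patterns at chance (0.5).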


2014 ◽  
Author(s):  
Grainne Fitzsimons ◽  
Catherine Shea ◽  
Christy Zhou ◽  
Michelle vanDellen
Keyword(s):  
The Self ◽  

2010 ◽  
Author(s):  
Holly Miller ◽  
Kristina F. Pattison ◽  
Rebecca Rayburn-Reeves ◽  
C. Nathan DeWall ◽  
Thomas Zentall
Keyword(s):  
The Self ◽  
