Mapping neural activity patterns to contextualized fearful facial expressions onto callous-unemotional (CU) traits: intersubject representational similarity analysis reveals less variation among high-CU adolescents

2020 ◽  
Vol 3 ◽  
Author(s):  
Shawn A. Rhoads ◽  
Elise M. Cardinale ◽  
Katherine O’Connell ◽  
Amy L. Palmer ◽  
John W. VanMeter ◽  
...  

Abstract Callous-unemotional (CU) traits are early-emerging personality features characterized by deficits in empathy, concern for others, and remorse following social transgressions. One of the interpersonal deficits most consistently associated with CU traits is impaired behavioral and neurophysiological responsiveness to fearful facial expressions. However, the facial expression paradigms traditionally employed in neuroimaging are often ambiguous with respect to the nature of threat (i.e., is the perceiver the threat, or is something else in the environment?). In the present study, 30 adolescents with varying CU traits viewed fearful facial expressions cued to three different contexts (“afraid for you,” “afraid of you,” “afraid for self”) while undergoing functional magnetic resonance imaging (fMRI). Univariate analyses found that mean right amygdala activity during the “afraid for self” context was negatively associated with CU traits. With the goal of disentangling idiosyncratic stimulus-driven neural responses, we employed intersubject representational similarity analysis to link intersubject similarities in multivoxel neural response patterns to contextualized fearful expressions with differential intersubject models of CU traits. Among low-CU adolescents, neural response patterns while viewing fearful faces were most consistently similar early in the visual processing stream and among regions implicated in affective responding, but were more idiosyncratic as emotional face information moved up the cortical processing hierarchy. By contrast, high-CU adolescents’ neural response patterns consistently aligned along the entire cortical hierarchy (but diverged among low-CU youths). Observed patterns varied across contexts, suggesting that interpretations of fearful expressions depend to an extent on neural response patterns and are further shaped by levels of CU traits.
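A minimal sketch of the intersubject RSA logic described in this abstract, using simulated data: pairwise similarity of neural response patterns across subjects is compared with a pairwise model of CU-trait similarity and assessed with a permutation test. Array names, the trait model, and all parameters are illustrative assumptions, not the authors' pipeline.

```python
# Minimal intersubject RSA (IS-RSA) sketch -- illustrative only, not the
# authors' pipeline. Hypothetical inputs:
#   neural_patterns: (n_subjects, n_voxels) response pattern in one region
#                    for one fearful-face context
#   cu_scores:       (n_subjects,) callous-unemotional trait scores
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_subjects, n_voxels = 30, 200
neural_patterns = rng.standard_normal((n_subjects, n_voxels))
cu_scores = rng.uniform(0, 40, n_subjects)

# Intersubject neural similarity: pairwise Pearson correlation of patterns.
neural_sim = 1 - pdist(neural_patterns, metric="correlation")

# One candidate trait model ("nearest neighbor"): subjects with similar CU
# scores are predicted to show similar neural response patterns.
trait_sim = -pdist(cu_scores[:, None], metric="euclidean")

# Relate the two intersubject similarity structures with a rank correlation.
rho, _ = spearmanr(neural_sim, trait_sim)

# Mantel-style permutation test: shuffle subjects in the trait model.
null = np.empty(5000)
for i in range(null.size):
    perm = rng.permutation(n_subjects)
    null[i] = spearmanr(neural_sim,
                        -pdist(cu_scores[perm, None], metric="euclidean"))[0]
p_value = np.mean(null >= rho)
print(f"IS-RSA rho = {rho:.3f}, permutation p = {p_value:.3f}")
```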

2018 ◽  
Author(s):  
Dean Fido

Deficiency in long-chain omega-3 polyunsaturated fatty acids, in particular eicosapentaenoic acid (EPA), is implicated in aggression and callous-unemotional (CU) traits. A violence inhibition mechanism (VIM) has been proposed to regulate aggression through responding to expressions of distress. However, it remains unclear whether EPA intake is related to the VIM, and if so, whether this pathway can mediate relationships between EPA intake and deviant personality traits. The current investigation documents two independently sampled studies that tested relationships between EPA intake, personality (aggression, CU traits), and electrophysiological indices of the VIM (motor extinction cued by facial expressions of distress). In study one, 98 participants completed a food-frequency questionnaire, the Inventory of Callous-Unemotional Traits, and an aggression questionnaire. EPA intake was negatively correlated with physical aggression, even after controlling for age and sex. In study two, 47 participants completed the same measures in addition to having electroencephalography recorded during a novel paradigm assessing the distinct processing stages of the VIM. Stop-P300 (motor extinction) responses to facial expressions of distress mediated the relationship between EPA intake and physical aggression. For the first time, we provide evidence of an association between EPA intake and indices of distress-induced motor extinction proficiency. Findings are in line with a proposed role of EPA in regulating aggression, possibly through associations with networks involved in distress-cued executive control over behaviour. Results are discussed in terms of the potential benefit of nutritional supplementation in clinical and forensic arenas. Data and a pre-print of this manuscript are available here: https://osf.io/u3jdc/?view_only=b7f32cd1798344b5a6dffa2892253392
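The mediation reported above (EPA intake to Stop-P300 to physical aggression) follows the standard regression-based logic of an indirect effect with a bootstrap confidence interval. The sketch below illustrates that logic on simulated data; variable names, effect sizes, and the analysis choices are assumptions, not the study's code.

```python
# Schematic regression-based mediation (EPA intake -> Stop-P300 -> aggression).
# Illustrative only; data and variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 47
epa = rng.normal(size=n)                             # EPA intake (standardized)
stop_p300 = 0.5 * epa + rng.normal(size=n)           # mediator: Stop-P300 amplitude
aggression = -0.6 * stop_p300 + rng.normal(size=n)   # outcome: physical aggression

def ols_slope(x, y):
    """Slope of y ~ 1 + x via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(pred, med, out):
    # a-path: mediator ~ predictor; b-path: outcome ~ predictor + mediator.
    a = ols_slope(pred, med)
    X = np.column_stack([np.ones(len(pred)), pred, med])
    b = np.linalg.lstsq(X, out, rcond=None)[0][2]
    return a * b

# Percentile bootstrap confidence interval for the indirect (a*b) effect.
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(epa[idx], stop_p300[idx], aggression[idx])
ci = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(epa, stop_p300, aggression):.3f}, "
      f"95% bootstrap CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```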


Author(s):  
Virginia Carter Leno ◽  
Rachael Bedford ◽  
Susie Chandler ◽  
Pippa White ◽  
Isabel Yorke ◽  
...  

Abstract Research suggests an increased prevalence of callous-unemotional (CU) traits in children with autism spectrum disorder (ASD), and a similar impairment in fear recognition to that reported in non-ASD populations. However, past work has used measures not specifically designed to assess CU traits and has not examined whether the decreased attention to the eyes reported in non-ASD populations is also present in individuals with ASD. The current paper uses a measure specifically designed to assess CU traits to estimate their prevalence in a large community-based ASD sample. Parents of 189 adolescents with ASD completed questionnaires assessing CU traits, and emotional and behavioral problems. A subset of participants completed a novel emotion recognition task (n = 46). Accuracy, reaction time, total looking time, and number of fixations to the eyes and mouth were measured. Twenty-two percent of youth with ASD scored above a cut-off expected to identify the top 6% of CU scores. CU traits were associated with longer reaction times to identify fear and fewer fixations to the eyes relative to the mouth during the viewing of fearful faces. No associations were found with accuracy or total looking time. Results suggest the mechanisms that underpin CU traits may be similar between ASD and non-ASD populations.
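The eye-tracking measures above (fixations to the eyes relative to the mouth) are typically derived by classifying fixation coordinates against rectangular areas of interest. The sketch below is a hypothetical illustration of that step, with assumed AOI coordinates and simulated fixation data, not the study's analysis code.

```python
# Counting fixations inside rectangular areas of interest (AOIs) -- a
# hypothetical illustration of eye/mouth fixation measures.
import numpy as np

# AOIs as (x_min, y_min, x_max, y_max) in screen pixels (assumed layout).
AOIS = {"eyes": (300, 150, 700, 300), "mouth": (380, 450, 620, 560)}

def count_fixations(fix_xy, aoi):
    """Number of fixations whose coordinates fall inside the AOI."""
    x_min, y_min, x_max, y_max = aoi
    x, y = fix_xy[:, 0], fix_xy[:, 1]
    inside = (x >= x_min) & (x <= x_max) & (y >= y_min) & (y <= y_max)
    return int(inside.sum())

rng = np.random.default_rng(2)
fixations = rng.uniform([250, 100], [750, 600], size=(40, 2))  # one fake trial

eyes = count_fixations(fixations, AOIS["eyes"])
mouth = count_fixations(fixations, AOIS["mouth"])
# A relative measure such as "fixations to the eyes relative to the mouth"
# could then be eyes / (eyes + mouth).
print(eyes, mouth, eyes / max(eyes + mouth, 1))
```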


2019 ◽  
Vol 29 (10) ◽  
pp. 1441-1451 ◽  
Author(s):  
Melina Nicole Kyranides ◽  
Kostas A. Fanti ◽  
Maria Petridou ◽  
Eva R. Kimonis

Abstract Individuals with callous-unemotional (CU) traits show deficits in facial emotion recognition. According to preliminary research, this impairment may be due to attentional neglect of people's eyes when evaluating emotionally expressive faces. However, it is unknown whether this atypical processing pattern is unique to established variants of CU traits or modifiable with intervention. This study examined facial affect recognition and gaze patterns among individuals (N = 80; M age = 19.95, SD = 1.01 years; 50% female) with primary vs. secondary CU variants. These groups were identified based on repeated measurements of conduct problems, CU traits, and anxiety assessed in adolescence and adulthood. Accuracy and number of fixations on areas of interest (forehead, eyes, and mouth) while viewing six dynamic emotions were assessed. A visual probe was used to direct attention to various parts of the face. Individuals with primary and secondary CU traits were less accurate than controls in recognizing facial expressions across all emotions. Those identified in the low-anxious primary-CU group showed reduced overall fixations to fearful and painful facial expressions compared to those in the high-anxious secondary-CU group. This difference was not specific to a region of the face (i.e., eyes or mouth). Findings point to the importance of investigating both accuracy and eye gaze fixations, since individuals in the primary and secondary groups were only differentiated in the way they attended to specific facial expressions. These findings have implications for differentiated interventions focused on improving facial emotion recognition with regard to attending to and correctly identifying emotions.


2020 ◽  
Vol 45 (7) ◽  
pp. 601-608
Author(s):  
Fábio Silva ◽  
Nuno Gomes ◽  
Sebastian Korb ◽  
Gün R Semin

Abstract Exposure to body odors (chemosignals) collected under different emotional states (i.e., emotional chemosignals) can modulate our visual system, biasing visual perception. Recent research has suggested that exposure to fear body odors results in a generalized faster access to visual awareness of different emotional facial expressions (i.e., fear, happy, and neutral). In the present study, we aimed to replicate and extend these findings by exploring whether these effects are limited to the fear odor, introducing a second negative body odor, disgust. We compared the time that 3 different emotional facial expressions (i.e., fear, disgust, and neutral) took to reach visual awareness during a breaking continuous flash suppression paradigm, across 3 body odor conditions (i.e., fear, disgust, and neutral). We found that fear body odors do not trigger an overall faster access to visual awareness, but instead sped up access to awareness specifically for facial expressions of fear. Disgust odor, on the other hand, had no effects on awareness thresholds of facial expressions. These findings contrast with prior results, suggesting that the potential of fear body odors to induce visual processing adjustments is specific to fear cues. Furthermore, our results support a unique ability of fear body odors to induce such visual processing changes, compared with other negative emotional chemosignals (i.e., disgust). These conclusions raise interesting questions about how fear odor might interact with the visual processing stream and open avenues for future research.


2010 ◽  
Vol 22 (7) ◽  
pp. 1570-1582 ◽  
Author(s):  
Vaidehi S. Natu ◽  
Fang Jiang ◽  
Abhijit Narvekar ◽  
Shaiyan Keshvari ◽  
Volker Blanz ◽  
...  

We examined the neural response patterns for facial identity independent of viewpoint and for viewpoint independent of identity. Neural activation patterns for identity and viewpoint were collected in an fMRI experiment. Faces appeared in identity-constant blocks, with variable viewpoint, and in viewpoint-constant blocks, with variable identity. Pattern-based classifiers were used to discriminate neural response patterns for all possible pairs of identities and viewpoints. To increase the likelihood of detecting distinct neural activation patterns for identity, we tested maximally dissimilar “face”–“antiface” pairs and normal face pairs. Neural response patterns for four of six identity pairs, including the “face”–“antiface” pairs, were discriminated at levels above chance. A behavioral experiment showed accord between perceptual and neural discrimination, indicating that the classifier tapped a high-level visual identity code. Neural activity patterns across a broad span of ventral temporal (VT) cortex, including the fusiform gyrus and lateral occipital areas (LOC), were required for identity discrimination. For viewpoint, five of six viewpoint pairs were discriminated neurally. Viewpoint discrimination was most accurate with a broad span of VT cortex, but the neural and perceptual discrimination patterns differed. Less accurate discrimination of viewpoint, more consistent with human perception, was found in right posterior superior temporal sulcus, suggesting redundant viewpoint codes optimized for different functions. This study provides the first evidence that it is possible to dissociate neural activation patterns for identity and viewpoint.
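Pairwise pattern-based classification of the kind described above can be set up as a cross-validated two-class problem on multivoxel patterns. The sketch below uses simulated data and an assumed leave-one-run-out scheme; it illustrates the general technique, not the authors' implementation.

```python
# Pairwise pattern classification of face identity from multivoxel responses.
# Minimal sketch with simulated data; not the authors' implementation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(3)
n_runs, n_voxels = 8, 300

# Simulated multivoxel patterns: one block per identity (labels 0/1) per run.
X = rng.standard_normal((n_runs * 2, n_voxels))
y = np.tile([0, 1], n_runs)                 # identity labels
runs = np.repeat(np.arange(n_runs), 2)      # run labels for cross-validation

# Leave-one-run-out cross-validation: train on n-1 runs, test on the held-out
# run, so that training and test patterns come from independent data.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"mean pairwise classification accuracy: {scores.mean():.2f}")
# Above-chance accuracy (assessed with a permutation test in practice) would
# indicate discriminable neural response patterns for the identity pair.
```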


2021 ◽  
Vol 11 (10) ◽  
pp. 1342
Author(s):  
Luna C. Muñoz Centifanti ◽  
Timothy R. Stickle ◽  
Jamila Thomas ◽  
Amanda Falcón ◽  
Nicholas D. Thomson ◽  
...  

The ability to efficiently recognize the emotions on others’ faces is something that most of us take for granted. Children with callous-unemotional (CU) traits and impulsivity/conduct problems (ICP), such as attention-deficit hyperactivity disorder, have been previously described as being “fear blind”. This is also associated with looking less at the eye regions of fearful faces, which are highly diagnostic. Previous attempts to intervene in emotion recognition strategies have not had lasting effects on participants’ fear recognition abilities. Here we present both (a) additional evidence that there is a two-part causal chain, from personality traits to face recognition strategies using the eyes, then from strategies to rates of recognizing fear in others; and (b) a pilot intervention that had persistent effects for weeks after the end of instruction. Further, the intervention led to more change in those with the highest CU traits. This both clarifies the specific mechanisms linking personality to emotion recognition and shows that the process is fundamentally malleable. It is possible that such training could promote empathy and reduce the rates of antisocial behavior in specific populations in the future.


2016 ◽  
Author(s):  
Jörn Diedrichsen ◽  
Nikolaus Kriegeskorte

Abstract Representational models specify how activity patterns in populations of neurons (or, more generally, in multivariate brain-activity measurements) relate to sensory stimuli, motor responses, or cognitive processes. In an experimental context, representational models can be defined as hypotheses about the distribution of activity profiles across experimental conditions. Currently, three different methods are being used to test such hypotheses: encoding analysis, pattern component modeling (PCM), and representational similarity analysis (RSA). Here we develop a common mathematical framework for understanding the relationship of these three methods, which share one core commonality: all three evaluate the second moment of the distribution of activity profiles, which determines the representational geometry, and thus how well any feature can be decoded from population activity with any readout mechanism capable of a linear transform. Using simulated data for three different experimental designs, we compare the power of the methods to adjudicate between competing representational models. PCM implements a likelihood-ratio test and therefore provides the most powerful test if its assumptions hold. However, the other two approaches – when conducted appropriately – can perform similarly. In encoding analysis, the linear model needs to be appropriately regularized, which effectively imposes a prior on the activity profiles. With such a prior, an encoding model specifies a well-defined distribution of activity profiles. In RSA, the unequal variances and statistical dependencies of the dissimilarity estimates need to be taken into account to reach near-optimal power in inference. The three methods render different aspects of the information explicit (e.g., single-response tuning in encoding analysis and population-response representational dissimilarity in RSA) and have specific advantages in terms of computational demands, ease of use, and extensibility. The three methods are properly construed as complementary components of a single data-analytical toolkit for understanding neural representations on the basis of multivariate brain-activity data. Author Summary: Modern neuroscience can measure activity of many neurons or the local blood oxygenation of many brain locations simultaneously. As the number of simultaneous measurements grows, we can better investigate how the brain represents and transforms information, to enable perception, cognition, and behavior. Recent studies go beyond showing that a brain region is involved in some function. They use representational models that specify how different perceptions, cognitions, and actions are encoded in brain-activity patterns. In this paper, we provide a general mathematical framework for such representational models, which clarifies the relationships between three different methods that are currently used in the neuroscience community. All three methods evaluate the same core feature of the data, but each has distinct advantages and disadvantages. Pattern component modelling (PCM) implements the most powerful test between models, and is analytically tractable and expandable. Representational similarity analysis (RSA) provides a highly useful summary statistic (the dissimilarity) and enables model comparison with weaker distributional assumptions. Finally, encoding models characterize individual responses and enable the study of their layout across cortex.
We argue that these methods should be considered components of a larger toolkit for testing hypotheses about the way the brain represents information.
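The "second moment of the activity profiles" that all three methods evaluate can be written down directly: for condition-by-channel patterns U, the second-moment matrix is G = U U^T / n_channels, and the squared Euclidean dissimilarities used in RSA follow from it as d_ij = G_ii + G_jj - 2 G_ij. The sketch below checks this identity on toy data; it is an illustration of the shared quantity described in the abstract, not the paper's code.

```python
# The second moment of activity profiles and its relation to RSA
# dissimilarities -- a toy illustration, not the paper's code.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(4)
n_conditions, n_channels = 5, 100
U = rng.standard_normal((n_conditions, n_channels))  # true activity patterns

# Second-moment matrix of the activity profiles (conditions x conditions).
G = U @ U.T / n_channels

# Representational dissimilarities derived from G:
# d_ij = G_ii + G_jj - 2 * G_ij.
d_from_G = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G

# The same (normalized) squared Euclidean distances computed directly.
d_direct = squareform(pdist(U, metric="sqeuclidean")) / n_channels

assert np.allclose(d_from_G, d_direct)
print("RDM recovered from the second-moment matrix G")
```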


2018 ◽  
Author(s):  
Ming Bo Cai ◽  
Nicolas W. Schuck ◽  
Jonathan W. Pillow ◽  
Yael Niv

Abstract The activity of neural populations in the brains of humans and animals can exhibit vastly different spatial patterns when faced with different tasks or environmental stimuli. The degree of similarity between these neural activity patterns in response to different events is used to characterize the representational structure of cognitive states in a neural population. The dominant methods of investigating this similarity structure first estimate neural activity patterns from noisy neural imaging data using linear regression, and then examine the similarity between the estimated patterns. Here, we show that this approach introduces spurious bias structure in the resulting similarity matrix, in particular when applied to fMRI data. This problem is especially severe when the signal-to-noise ratio is low and in cases where experimental conditions cannot be fully randomized in a task. We propose Bayesian Representational Similarity Analysis (BRSA), an alternative method for computing representational similarity, in which we treat the covariance structure of neural activity patterns as a hyper-parameter in a generative model of the neural data. By marginalizing over the unknown activity patterns, we can directly estimate this covariance structure from imaging data. This method offers significant reductions in bias and allows estimation of neural representational similarity with previously unattained levels of precision at low signal-to-noise ratio. The probabilistic framework allows for jointly analyzing data from a group of participants. The method can also simultaneously estimate a signal-to-noise ratio map that shows where the learned representational structure is supported more strongly. Both this map and the learned covariance matrix can be used as a structured prior for maximum a posteriori estimation of neural activity patterns, which can be further used for fMRI decoding. We make our tool freely available in Brain Imaging Analysis Kit (BrainIAK). Author Summary: We show the severity of the bias introduced when performing representational similarity analysis (RSA) based on neural activity patterns estimated within imaging runs. Our Bayesian RSA method significantly reduces the bias and can learn a shared representational structure across multiple participants. We also demonstrate its extension as a new multi-class decoding tool.
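A small simulation makes the bias concrete: when patterns are estimated by within-run regression and then correlated, the structure of the design and noise leaks into the similarity matrix even when the true patterns are unrelated. The sketch below is illustrative only, with assumed design and noise parameters; it does not reproduce the paper's analyses or BrainIAK's BRSA implementation.

```python
# Spurious similarity structure in RSA from within-run pattern estimation --
# a toy simulation of the bias described above.
import numpy as np

rng = np.random.default_rng(5)
n_time, n_voxels, n_cond, n_sims = 120, 200, 4, 200

# A fixed, non-orthogonal design: boxcar regressors for conditions that
# follow one another in time, so their estimates are correlated.
X = np.zeros((n_time, n_cond))
for c in range(n_cond):
    X[c * 25: c * 25 + 40, c] = 1.0   # overlapping 40-sample boxcars

mean_corr = np.zeros((n_cond, n_cond))
for _ in range(n_sims):
    true_B = rng.standard_normal((n_cond, n_voxels))   # unrelated true patterns
    noise = 5.0 * rng.standard_normal((n_time, n_voxels))
    Y = X @ true_B + noise
    B_hat = np.linalg.lstsq(X, Y, rcond=None)[0]       # within-run GLM estimates
    mean_corr += np.corrcoef(B_hat) / n_sims

# Off-diagonal correlations are systematically non-zero (negative for
# temporally adjacent, hence collinear, conditions) even though the true
# patterns were generated independently.
print(np.round(mean_corr, 2))
```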


2018 ◽  
Author(s):  
Maria Tsantani ◽  
Nikolaus Kriegeskorte ◽  
Carolyn McGettigan ◽  
Lúcia Garrido

Abstract Face-selective and voice-selective brain regions have been shown to represent face-identity and voice-identity, respectively. Here we investigated whether there are modality-general person-identity representations in the brain that can be driven by either a face or a voice, and that invariantly represent naturalistically varying face and voice tokens of the same identity. According to two distinct models, such representations could exist either in multimodal brain regions (Campanella and Belin, 2007) or in face-selective brain regions via direct coupling between face- and voice-selective regions (von Kriegstein et al., 2005). To test the predictions of these two models, we used fMRI to measure brain activity patterns elicited by the faces and voices of familiar people in multimodal, face-selective and voice-selective brain regions. We used representational similarity analysis (RSA) to compare the representational geometries of face- and voice-elicited person-identities, and to investigate the degree to which pattern discriminants for pairs of identities generalise from one modality to the other. We found no matching geometries for faces and voices in any brain regions. However, we showed crossmodal generalisation of the pattern discriminants in the multimodal right posterior superior temporal sulcus (rpSTS), suggesting a modality-general person-identity representation in this region. Importantly, the rpSTS showed invariant representations of face- and voice-identities, in that discriminants were trained and tested on independent face videos (different viewpoint, lighting, background) and voice recordings (different vocalizations). Our findings support the Multimodal Processing Model, which proposes that face and voice information is integrated in multimodal brain regions. Significance Statement: It is possible to identify a familiar person either by looking at their face or by listening to their voice. Using fMRI and representational similarity analysis (RSA) we show that the right posterior superior temporal sulcus (rpSTS), a multimodal brain region that responds to both faces and voices, contains representations that can distinguish between familiar people independently of whether we are looking at their face or listening to their voice. Crucially, these representations generalised across different particular face videos and voice recordings. Our findings suggest that identity information from visual and auditory processing systems is combined and integrated in the multimodal rpSTS region.
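The crossmodal generalization test described above trains an identity discriminant on patterns from one modality and tests it on the other. The sketch below shows that logic on simulated data in which a shared identity signal is present in both modalities by construction; names, scales, and the classifier choice are assumptions, not the authors' pipeline.

```python
# Crossmodal generalization of identity discriminants: train on face-elicited
# patterns, test on voice-elicited patterns, and vice versa. Simulated data;
# a sketch of the logic only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_samples_per_id, n_voxels = 20, 150

# A shared identity signal common to both modalities, plus modality-specific
# noise -- the situation in which crossmodal generalization should succeed.
identity_signal = rng.standard_normal((2, n_voxels))

def simulate(noise_scale):
    X, y = [], []
    for identity in (0, 1):
        patterns = (identity_signal[identity]
                    + noise_scale * rng.standard_normal((n_samples_per_id, n_voxels)))
        X.append(patterns)
        y += [identity] * n_samples_per_id
    return np.vstack(X), np.array(y)

X_face, y_face = simulate(noise_scale=1.0)
X_voice, y_voice = simulate(noise_scale=1.0)

clf = SVC(kernel="linear")
acc_face_to_voice = clf.fit(X_face, y_face).score(X_voice, y_voice)
acc_voice_to_face = clf.fit(X_voice, y_voice).score(X_face, y_face)
print(f"face->voice: {acc_face_to_voice:.2f}, voice->face: {acc_voice_to_face:.2f}")
# Above-chance accuracy in both directions would indicate a modality-general
# identity representation in the region being tested (e.g., the rpSTS).
```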

