Stable neural representations of mental states across target people and stimulus modalities

2020 ◽  
Author(s):  
Miriam E. Weaverdyck ◽  
Mark Allen Thornton ◽  
Diana Tamir

Each individual experiences mental states in their own idiosyncratic way, yet perceivers are able to accurately understand a huge variety of states across unique individuals. How do they accomplish this feat? Do people think about their own anger in the same ways as another person’s? Is reading about someone’s anxiety the same as seeing it? Here, we test the hypothesis that a common conceptual core unites mental state representations across contexts. Across three studies, participants judged the mental states of multiple targets, including a generic other, the self, a socially close other, and a socially distant other. Participants viewed mental state stimuli in multiple modalities, including written scenarios and images. Using representational similarity analysis, we found that brain regions associated with social cognition expressed stable neural representations of mental states across both targets and modalities. This suggests that people use stable models of mental states across different people and contexts.
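A minimal sketch of the representational similarity logic described above, under assumed inputs: hypothetical arrays of per-state voxel patterns for two targets (or two modalities). Stability is read off the Spearman correlation between the two state-by-state dissimilarity matrices; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def neural_rdm(patterns):
    """Representational dissimilarity matrix (RDM): patterns is
    (n_states, n_voxels); returns the condensed vector of pairwise
    correlation distances between mental states."""
    return pdist(patterns, metric="correlation")

def rdm_stability(patterns_a, patterns_b):
    """Spearman correlation between two RDMs, e.g. the same mental
    states judged for the self vs. a distant other, or presented as
    text vs. images. Higher values indicate more stable mental state
    representations across contexts."""
    rho, _ = spearmanr(neural_rdm(patterns_a), neural_rdm(patterns_b))
    return rho

# Hypothetical example: 60 mental states x 500 voxels per target.
rng = np.random.default_rng(0)
self_patterns = rng.standard_normal((60, 500))
other_patterns = self_patterns + rng.standard_normal((60, 500))  # shared structure + noise
print(rdm_stability(self_patterns, other_patterns))
```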

2018 ◽  
Author(s):  
Hilary Richardson ◽  
Rebecca Saxe

When we watch movies, we consider the characters’ mental states in order to understand and predict the narrative. Recent work in fMRI uses movie-viewing paradigms to measure functional responses in brain regions recruited for such mental state reasoning (the Theory of Mind (“ToM”) network). Here, two groups of young children (n=30 3-4yo, n=26 6-7yo) viewed a short animated movie twice while undergoing fMRI. In older children, ToM brain regions were recruited earlier in time during the second presentation of the movie. This “narrative anticipation” effect is specific: there was no such effect in a control network of brain regions that responds just as robustly to the movie (the “Pain Matrix”). These results complement prior studies in adults suggesting that ToM brain regions play a role not just in inferring, but in actively predicting, other people's thoughts and feelings, and provide novel evidence that as children get older, their ToM brain regions increasingly make such predictions. This is a post-peer-review, pre-copyedit version of an article published in Developmental Science. The final authenticated version is available online at: https://doi.org/10.1111/desc.12863 .
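One way to quantify a "narrative anticipation" effect of this kind is to estimate how much earlier a ToM region's response rises during the second viewing than the first. The sketch below is an assumption rather than the authors' analysis: it uses a simple cross-correlation over lags between the two z-scored regional time courses, with placeholder names.

```python
import numpy as np

def anticipation_lag(view1, view2, max_lag=10):
    """Estimate, in TRs, how much earlier the region's response
    appears on the second viewing than on the first, by finding the
    lag that maximizes the correlation between the two z-scored
    time courses. Positive values mean the second viewing leads."""
    v1 = (view1 - view1.mean()) / view1.std()
    v2 = (view2 - view2.mean()) / view2.std()
    lags = list(range(-max_lag, max_lag + 1))
    corrs = [np.corrcoef(np.roll(v2, lag), v1)[0, 1] for lag in lags]
    return lags[int(np.argmax(corrs))]
```

Comparing this lag between the younger and older groups would then test whether anticipation grows with age.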


Author(s):  
Angelita Wong

Depression is often associated with profound impairments in social and interpersonal functioning. Negative interpersonal experiences may lead depressed individuals to withdraw from social interaction, which may in turn exacerbate the depressive state (Rippere, 1980). As a result, it is of theoretical and clinical importance to understand the mechanisms underlying these social deficits. Researchers have applied the theory-of-mind framework to better understand the impaired social functioning of depressed individuals. Theory of mind refers to the everyday ability to attribute mental states (i.e., beliefs, desires, emotions) to others in order to both understand and predict their behaviour (Wellman, 1990). Research has found that individuals with dysphoria (i.e., elevated scores on a measure of depression symptoms, but not necessarily a diagnosis of clinical depression) demonstrate enhanced mental state judgments (Harkness, Sabbagh, Jacobson, Chowdrey, & Chen, 2005). This study will examine the neural mechanisms that may underlie this phenomenon by testing whether brain activity differs between dysphoric and nondysphoric groups during mental state decoding. I will record electrophysiological data while participants judge mental states from pictures of eyes. Based on previous research (Sabbagh, Moulson, & Harkness, 2004), I anticipate that mental state decoding will be associated with activity in the right inferior frontal and right anterior temporal regions of the brain. Furthermore, I hypothesize that dysphoric individuals will show greater activation in these brain regions and make significantly more accurate judgments than nondysphoric individuals when making mental state judgments.


2021 ◽  
Author(s):  
Shaohan Jiang ◽  
Sidong Wang ◽  
Xiaohong Wan

Metacognition and mentalizing are both associated with meta-level mental state representations. Specifically, metacognition refers to monitoring one’s own cognitive processes, while mentalizing refers to monitoring others’ cognitive processes. However, this self-other dichotomy is insufficient to delineate the two high-level mental processes. We here used functional magnetic resonance imaging (fMRI) to systematically investigate the neural representations of different levels of decision uncertainty while monitoring different targets (the current self, the past self, and others) performing a perceptual decision-making task. Our results reveal diverse formats of intrinsic mental state representations of decision uncertainty in mentalizing, separate from the associations with external information. External information was commonly represented in the right inferior parietal lobe (IPL) across the mentalizing tasks. However, the meta-level mental states of decision uncertainty attributed to others were uniquely represented in the dorsomedial prefrontal cortex (dmPFC), rather than in the temporoparietal junction (TPJ), which equivalently represented the object-level mental states of decision inaccuracy attributed to others. Further, the object-level and meta-level mental states of decision uncertainty, when attributed to the past self, were represented in the precuneus and the lateral frontopolar cortex (lFPC), respectively. In contrast, the dorsal anterior cingulate cortex (dACC) consistently represented both decision uncertainty in metacognition and estimation uncertainty while monitoring the different mentalizing processes, but not the inferred decision uncertainty in mentalizing. Hence, our findings identify neural signatures that clearly delineate metacognition and mentalizing, and further imply distinct neural computations on the mental states of decision uncertainty during metacognition and mentalizing.
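As a rough analogue of asking whether a region "represents" a given level of decision uncertainty, one can test whether its multivoxel patterns predict trial-wise uncertainty out of sample. The sketch below is a hypothetical cross-validated ridge regression, not the authors' exact analysis; the array names are placeholders.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def uncertainty_information(roi_patterns, uncertainty):
    """Cross-validated prediction of trial-wise decision uncertainty
    from an ROI's multivoxel patterns (n_trials x n_voxels). A score
    reliably above zero suggests the region carries information about
    uncertainty for the monitored target (e.g., dmPFC vs. TPJ when
    uncertainty is attributed to another person)."""
    model = RidgeCV(alphas=np.logspace(-2, 3, 12))
    return cross_val_score(model, roi_patterns, uncertainty,
                           cv=5, scoring="r2").mean()
```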


2020 ◽  
Vol 1 (I) ◽  
pp. 1-26
Author(s):  
Chris Letheby

Can there be phenomenal consciousness without self-consciousness? Strong intuitions and prominent theories of consciousness say “no”: experience requires minimal self-awareness, or “subjectivity”. This “subjectivity principle” (SP) faces apparent counterexamples in the form of anomalous mental states claimed to lack self-consciousness entirely, such as “inserted thoughts” in schizophrenia and certain mental states in depersonalization disorder (DPD). However, Billon & Kriegel (2015) have defended SP by arguing (inter alia) that while some of these mental states may be totally selfless, those states are not phenomenally conscious and thus do not constitute genuine counterexamples to SP. I argue that this defence cannot work in relation to certain experiences of ego dissolution induced by potent fast-acting serotonergic psychedelics. These mental states jointly instantiate the two features whose co-instantiation by a single mental state SP prohibits: (a) phenomenal consciousness and (b) total lack of self-consciousness. One possible objection is that these mental states may lack “me-ness” and “mineness” but cannot lack “for-me-ness”, a special inner awareness of mental states by the self. In response I propose a dilemma. For-me-ness can be defined either as containing a genuinely experiential component or as not. On the first horn, for-me-ness is clearly absent (I argue) from my counterexamples. On the second horn, for-me-ness has been defined in a way that conflicts with the claims and methods of its proponents, and the claim that phenomenally conscious mental states can totally lack self-consciousness has been conceded. I conclude with some reflections on the intuitive plausibility of SP in light of evidence from altered states.


2021 ◽  
Vol 12 (1) ◽  
pp. 26-70
Author(s):  
H. Georg Schulze

Thinking machines must be able to use language effectively in communication with humans. This requires the ability to generate meaning and to transfer that meaning to a communication partner. Machines must also be able to decode meaning communicated via language. This work is about meaning in the context of building an artificial general intelligence system. It starts with an analysis of the Turing test and some of the main approaches to explaining meaning. It then considers the generation of meaning in the human mind and argues that meaning has a dual nature. The quantum component reflects the relationships between objects, and the orthogonal quale component reflects the value of these relationships to the self. Both components are necessary, simultaneously, for meaning to exist. This parallel existence permits the formulation of ‘meaning coordinates’ as ordered pairs of quantum and quale strengths. Meaning coordinates represent the contents of meaningful mental states. Spurred by a currently salient meaningful mental state in the speaker, language is used to induce a meaningful mental state in the hearer. Therefore, thinking machines must be able to produce and respond to meaningful mental states in ways similar to their functioning in humans. It is explained how quanta and qualia arise, how they generate meaningful mental states, how these states propagate to produce thought, how they are communicated and interpreted, and how they can be simulated to create thinking machines.
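The notion of 'meaning coordinates' as ordered pairs of quantum and quale strengths suggests a simple data structure. The sketch below is an illustrative reading of that proposal, with hypothetical class and field names; it is not drawn from the paper itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MeaningCoordinate:
    """Ordered pair standing for one meaningful mental state:
    `quantum` encodes the strength of the relationship between
    objects, `quale` the value of that relationship to the self."""
    quantum: float  # relational component
    quale: float    # self-relevant value component

    def is_meaningful(self) -> bool:
        # Per the dual-nature claim, meaning requires both
        # components to be present simultaneously.
        return self.quantum != 0.0 and self.quale != 0.0
```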


2009 ◽  
Vol 21 (7) ◽  
pp. 1396-1405 ◽  
Author(s):  
Liane Young ◽  
Rebecca Saxe

Human moral judgment depends critically on “theory of mind,” the capacity to represent the mental states of agents. Recent studies suggest that the right temporoparietal junction (RTPJ) and, to a lesser extent, the left TPJ (LTPJ), the precuneus (PC), and the medial pFC (MPFC) are robustly recruited when participants read explicit statements of an agent's beliefs and then judge the moral status of the agent's action. Real-world interactions, by contrast, often require social partners to infer each other's mental states. The current study uses fMRI to probe the role of these brain regions in supporting spontaneous mental state inference in the service of moral judgment. Participants read descriptions of a protagonist's action and then either (i) “moral” facts about the action's effect on another person or (ii) “nonmoral” facts about the situation. The RTPJ, PC, and MPFC were recruited selectively for moral over nonmoral facts, suggesting that processing moral stimuli elicits spontaneous mental state inference. In a second experiment, participants read the same scenarios, but explicit statements of belief preceded the facts: protagonists believed their actions would cause harm or not. The response in the RTPJ, PC, and LTPJ was again higher for moral facts but also distinguished between neutral and negative outcomes. Together, the results illuminate two aspects of theory of mind in moral judgment: (1) spontaneous belief inference and (2) stimulus-driven belief integration.
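The region-level selectivity described here reduces to a within-subject contrast: each participant contributes a mean ROI response (e.g., a GLM beta) to moral and to nonmoral facts, and the two are compared across participants. A minimal sketch, assuming hypothetical beta arrays:

```python
from scipy.stats import ttest_rel

def roi_selectivity(moral_betas, nonmoral_betas):
    """Paired t-test across participants on an ROI's mean response to
    'moral' facts about an action's effect on another person vs.
    'nonmoral' facts about the situation. A reliably positive
    difference mirrors the selectivity reported for RTPJ, PC, and MPFC."""
    t, p = ttest_rel(moral_betas, nonmoral_betas)
    return t, p
```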


2020 ◽  
Author(s):  
Mark Allen Thornton ◽  
Milena Rmus ◽  
Diana Tamir

People’s thoughts and feelings ebb and flow in predictable ways: surprise arises quickly, anticipation ramps up slowly, regret follows anger, love begets happiness, and so forth. Predicting these transitions between mental states can help people successfully navigate the social world. We hypothesize that the goal of predicting state dynamics shapes people’s mental state concepts. Across seven studies, when people observed more frequent transitions between a pair of novel mental states, they judged those states to be more conceptually similar to each other. In an eighth study, an artificial neural network trained to predict real human mental state dynamics spontaneously learned the same conceptual dimensions that people use to understand these states: the 3D Mind Model. Together, these results suggest that mental state dynamics explain the origins of mental state concepts.
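A toy version of the eighth study's logic: train a network with a low-dimensional bottleneck to predict the next mental state from the current one, then inspect the bottleneck as candidate conceptual dimensions. The sketch below uses random placeholder data and an architecture chosen for brevity; it is not the authors' model, and with real transition data the learned dimensions would be compared against rated valence, arousal, and social impact (the 3D Mind Model).

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical transition data: feature vectors for mental states at
# time t (inputs) and the states that tend to follow them (targets).
n_states, n_features, dim = 60, 30, 3
rng = np.random.default_rng(1)
states_t = torch.tensor(rng.standard_normal((n_states, n_features)), dtype=torch.float32)
states_t1 = torch.tensor(rng.standard_normal((n_states, n_features)), dtype=torch.float32)

# Predict the next state through a 3-dimensional bottleneck; the
# bottleneck weights are the conceptual dimensions the network learns.
model = nn.Sequential(nn.Linear(n_features, dim), nn.Linear(dim, n_features))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(states_t), states_t1)
    loss.backward()
    opt.step()

# Candidate conceptual dimensions, to be correlated with human ratings.
learned_dims = model[0].weight.detach().numpy()  # shape (3, n_features)
```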


2018 ◽  
Author(s):  
Mark Allen Thornton ◽  
Miriam E. Weaverdyck ◽  
Judith Mildner ◽  
Diana Tamir

One can never know the internal workings of another person – one can only infer others’ mental states based on external cues. In contrast, each person has direct access to the contents of their own mind. Here we test the hypothesis that this privileged access shapes the way people represent internal mental experiences, such that they represent their own mental states more distinctly than the states of others. Across four studies, participants considered their own and others’ mental states; analyses measured the distinctiveness of mental state representations. Two neuroimaging studies used representational similarity analyses to demonstrate that the social brain manifests more distinct activity patterns when thinking about one’s own states versus others’. Two behavioral studies support these findings. Further, they demonstrate that people differentiate between states less as social distance increases. Together these results suggest that we represent our own mind with greater granularity than the minds of others.
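Distinctiveness of a target's mental state representations can be summarized as the mean pairwise dissimilarity among that target's state-specific patterns; the self-advantage described above then corresponds to a larger value for the self than for others, with the gap tracking social distance. A minimal sketch, with hypothetical inputs:

```python
from scipy.spatial.distance import pdist

def distinctiveness(patterns):
    """Mean pairwise correlation distance between mental-state
    patterns (n_states x n_voxels, or n_states x n_ratings for the
    behavioral studies). Larger values mean the states are
    represented more distinctly from one another."""
    return pdist(patterns, metric="correlation").mean()

# The reported effect corresponds to
# distinctiveness(self_patterns) > distinctiveness(other_patterns),
# with distinctiveness shrinking as social distance from the target grows.
```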

