visual perspective taking
Recently Published Documents

TOTAL DOCUMENTS: 105 (five years: 35)
H-INDEX: 20 (five years: 3)

2022
Author(s): Kenji Ogawa, Yuiko Matsuyama

Visual perspective taking (VPT), particularly level 2 VPT (VPT2), which allows an individual to understand that the same object can be seen differently by others, is related to theory of mind (ToM), because both functions require a representation decoupled from one's own. Although previous neuroimaging studies have shown that VPT and ToM activate the temporo-parietal junction (TPJ), it remains unclear whether the two share common neural substrates. To clarify this point, the present study directly compared the TPJ activation patterns of individual participants performing VPT2 and ToM tasks, using functional magnetic resonance imaging in a within-subjects design. VPT2-induced activations were compared with those observed during a mental rotation control task, whereas ToM-related activity was identified with a standard ToM localizer using false-belief stories. A whole-brain analysis revealed that VPT2 and ToM activated overlapping areas in the posterior part of the TPJ. Comparing the activations induced by VPT2 and ToM in individual participants, we found that the peak voxels for ToM were located significantly more anteriorly and dorsally within the bilateral TPJ than those measured during the VPT2 task. We further confirmed, using independent localizer scans, that these activated areas were spatially distinct from the nearby extrastriate body area (EBA), visual motion area (MT+), and posterior superior temporal sulcus (pSTS). Our findings reveal that VPT2 and ToM have distinct, albeit partially overlapping, representations, indicating the functional heterogeneity of social cognition within the TPJ.
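The within-subject comparison of peak locations described above can be sketched as paired t-tests on the anterior-posterior (y) and dorsal-ventral (z) MNI coordinates of each participant's task peaks. The sketch below uses invented coordinate values purely for illustration; the variable names and data are assumptions, not the study's actual data or analysis pipeline.

```python
# Hedged sketch: within-subject comparison of ToM vs. VPT2 peak voxels.
# Coordinates below are fabricated for illustration only (MNI convention:
# larger y = more anterior, larger z = more dorsal).
import numpy as np
from scipy import stats

# One row per participant: peak (y, z) coordinate in the TPJ for each task.
tom_peaks  = np.array([[-48, 28], [-50, 30], [-46, 26], [-52, 32], [-47, 29]], dtype=float)
vpt2_peaks = np.array([[-57, 22], [-59, 20], [-53, 24], [-61, 18], [-54, 23]], dtype=float)

# Paired t-tests per axis: is the ToM peak more anterior (larger y) and
# more dorsal (larger z) than the VPT2 peak in the same participant?
t_y, p_y = stats.ttest_rel(tom_peaks[:, 0], vpt2_peaks[:, 0])
t_z, p_z = stats.ttest_rel(tom_peaks[:, 1], vpt2_peaks[:, 1])
print(f"anterior shift: t={t_y:.2f}, p={p_y:.4f}")
print(f"dorsal shift:   t={t_z:.2f}, p={p_z:.4f}")
```

A positive t with p below the chosen threshold would indicate that, across participants, the ToM peak sits reliably anterior and dorsal to the VPT2 peak, mirroring the pattern reported in the abstract.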


PLoS ONE, 2021, Vol 16 (12), pp. e0261063
Author(s): Sachiyo Ueda, Kazuya Nagamachi, Junya Nakamura, Maki Sugimoto, Masahiko Inami, ...

Visual perspective taking is the process of inferring how the world looks to another person. To clarify this process, we investigated whether employing a humanoid avatar as the viewpoint would facilitate an imagined perspective shift in a virtual environment, and which aspects of the avatar drive this facilitation effect. We used a task that involved reporting how an object looks, via a simple direction judgment, either from the avatar's position or from the position of an empty chair. We found that the humanoid avatar's presence improved task performance. Furthermore, the avatar's facilitation effect was observed only when the avatar was facing the visual stimulus to be judged; performance was worse when it faced backwards than when there was only an empty chair facing forwards. This suggests that the avatar does not simply attract spatial attention; rather, the avatar's posture is crucial for the facilitation effect. In addition, when the directions of the head and the torso were opposite (i.e., an impossible posture), the avatar's facilitation effect disappeared. Thus, visual perspective taking might not be facilitated by an avatar whose posture is biomechanically impossible, because we cannot embody it. Finally, even when the head of the possible-posture avatar was covered with a bucket, the facilitation effect was found for the forward-facing rather than the backward-facing avatar. That is, the head/gaze direction cue, and presumably the belief that the avatar can see the visual stimulus to be judged, was not required. These results suggest that explicit perspective taking is facilitated by embodiment of humanoid avatars.


2021, pp. 174702182110544
Author(s): Paola del Sette, Markus Bindemann, Heather J Ferguson

Studies of visual perspective-taking have shown that adults can rapidly and accurately compute their own and other people's viewpoints, but they experience difficulties when the two perspectives are inconsistent. We tested whether these egocentric (i.e., interference from one's own perspective) and altercentric (i.e., interference from another person's perspective) biases persist in ecologically valid, complex environments. Participants (N=150) completed a dot-probe visual perspective-taking task in which they verified the number of discs in natural scenes containing real people, first only according to their own perspective and then judging both their own and another person's perspective. Results showed that the other person's perspective did not disrupt self-perspective judgements when the other perspective was not explicitly prompted. In contrast, egocentric and altercentric biases emerged when participants were prompted to switch between self and other perspectives. These findings suggest that altercentric visual perspective-taking can be activated spontaneously in complex real-world contexts, but is subject to both top-down and bottom-up influences, including explicit prompts and salient visual stimuli.
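The interference logic common to these dot-probe perspective tasks can be illustrated with a minimal sketch: altercentric interference is the reaction-time cost on self-perspective trials when the other person's view disagrees, and egocentric interference is the corresponding cost on other-perspective trials. All trial data and names below are invented for illustration, not taken from the study.

```python
# Hedged sketch of the consistency-effect computation in dot-probe
# perspective-taking tasks. Trial tuples are fabricated:
# (perspective judged, perspectives consistent?, reaction time in ms).
import numpy as np

trials = [
    ("self", True, 620), ("self", False, 690), ("self", True, 605),
    ("self", False, 710), ("other", True, 640), ("other", False, 730),
    ("other", True, 655), ("other", False, 715),
]

def interference(persp):
    """Mean RT cost of inconsistent vs. consistent trials for one perspective."""
    rts_inconsistent = [rt for p, cons, rt in trials if p == persp and not cons]
    rts_consistent = [rt for p, cons, rt in trials if p == persp and cons]
    return float(np.mean(rts_inconsistent) - np.mean(rts_consistent))

# Altercentric bias: the other's viewpoint slows self-perspective judgements.
# Egocentric bias: one's own viewpoint slows other-perspective judgements.
print("altercentric interference:", interference("self"), "ms")
print("egocentric interference:", interference("other"), "ms")
```

In the actual design, the key contrast is that these costs appear only when participants are explicitly prompted to switch between self and other perspectives.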


2021
Author(s): Cherie Strikwerda-Brown, Rebekah Ahmed, Olivier Piguet, Muireann Irish

The behavioural variant of frontotemporal dementia (bvFTD) is characterised by pronounced alterations in social functioning, including the understanding of others' thoughts and feelings via theory of mind. The emergence of such impairments in other social disorders, such as autism and schizophrenia, has been suggested to reflect an inability to imagine the other person's visual perspective of the world. To our knowledge, this hypothesis is yet to be explored in bvFTD. Here, we sought to examine the capacity for different forms of perspective taking, including visual perspective taking and theory of mind, in bvFTD, and to establish their inter-relationships and underlying neural correlates. Fifteen bvFTD patients and 15 healthy controls completed a comprehensive battery of perspective taking measures, comprising Level 1 ('what') and Level 2 ('how') visual perspective taking tasks, a cartoon task capturing theory of mind, and a questionnaire assessing perspective taking in daily life. Compared with controls, bvFTD patients displayed significant impairments across all perspective taking measures. These perspective taking impairments, however, were not correlated with one another in bvFTD. Moreover, controlling for visual perspective taking performance did not ameliorate the deficits in theory of mind or real-world perspective taking. Region-of-interest voxel-based morphometry analyses suggested distinct neural correlates for visual perspective taking (inferior frontal gyrus) versus theory of mind (medial prefrontal cortex, precuneus), which appeared to partially overlap with those implicated in real-world perspective taking (inferior frontal gyrus, precuneus, temporoparietal junction). Despite pervasive impairments in all aspects of perspective taking in bvFTD, our findings suggest that these deficits may reflect distinct underlying processes. Future studies manipulating discrete aspects of the tasks will help to clarify the neurocognitive mechanisms of, and relationships between, the different forms of perspective taking in bvFTD, along with their real-world implications.


2021
Author(s): Paula Rubio-Fernandez, Madeleine Long, Vishakha Shukla, Vrinda Bhatia, Pawan Sinha

In the Dot task, children and adults involuntarily compute an avatar's visual perspective, which has been interpreted as evidence of automatic Theory of Mind. We conducted three experiments in India, testing newly sighted children (N=5; all girls), neurotypical children (ages 5-10; N=90; 38 girls), and adults (N=30; 18 women) in a highly simplified version of the Dot task. No evidence of automatic perspective-taking was observed, although all groups showed perspective-taking costs. A newly sighted child and the youngest children in our sample also showed an egocentric bias, which disappeared by age 10. Responding to recent work on what Theory of Mind tasks actually measure, we conclude that the standard Dot task relies so heavily on executive control that the alleged evidence of automatic Theory of Mind might simply reflect perspective-switching costs.


NeuroImage, 2021, pp. 118462
Author(s): Yuan-Wei Yao, Vivien Chopurian, Lei Zhang, Claus Lamm, Hauke R. Heekeren

Author(s): Jing Zhai, Jiushu Xie, Jiahan Chen, Yujie Huang, Yuchao Ma, ...
