Exploring the Effects of Visual Perspective on the ERP Components of Empathy for Pain

2019
Author(s):
Carl Michael Orquiola Galang
Sukhvinder S. Obhi
Michael Jenkins

Previous neurophysiological research suggests that several event-related potential (ERP) components are associated with empathy for pain: an early affective component (N2) and two late cognitive components (P3/LPP). The current study investigated whether and how the visual perspective from which a painful event is observed affects these ERP components. Participants viewed images of hands in pain vs. not in pain from either a first-person or a third-person perspective. We found that visual perspective influences both the early and late components. For the early component (N2), there was a larger mean amplitude during observation of pain vs. no-pain exclusively when images were shown from a first-person perspective. We suggest that this effect may be driven by misattributing the on-screen hand to oneself. For the late component (P3), we found a larger effect of pain on mean amplitudes in response to third-person relative to first-person images. We speculate that the P3 may reflect a later process that enables effective recognition of others’ pain in the absence of misattribution. We discuss our results in relation to self- vs. other-related processing by questioning whether these ERP components truly index empathy (an other-directed process) or a simple misattribution of another’s pain as one’s own (a self-directed process).
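
As a rough illustration of the kind of windowed mean-amplitude contrast this abstract describes, the following Python sketch computes a subject-level N2 mean amplitude and runs a paired pain vs. no-pain comparison for the first-person condition. The time window (200–300 ms), sampling rate, electrode choice, and data are illustrative assumptions, not values taken from the study.

```python
# Illustrative sketch (not the authors' pipeline): subject-level mean ERP
# amplitude in an assumed N2 window (200-300 ms) at one fronto-central site,
# contrasting pain vs. no-pain images shown from the first-person perspective.
import numpy as np
from scipy import stats

def window_mean(erp, times, t_start, t_end):
    """erp: (n_subjects, n_times) condition-average voltages; times in seconds."""
    mask = (times >= t_start) & (times < t_end)
    return erp[:, mask].mean(axis=1)              # one mean amplitude per subject

rng = np.random.default_rng(0)                    # hypothetical data: 30 subjects,
times = np.linspace(-0.2, 0.8, 501)               # -200..800 ms epochs at 500 Hz
erp_pain_1pp    = rng.normal(size=(30, times.size))
erp_no_pain_1pp = rng.normal(size=(30, times.size))

n2_pain    = window_mean(erp_pain_1pp,    times, 0.200, 0.300)
n2_no_pain = window_mean(erp_no_pain_1pp, times, 0.200, 0.300)

t, p = stats.ttest_rel(n2_pain, n2_no_pain)       # paired contrast across subjects
print(f"N2, pain vs. no-pain (first-person view): t = {t:.2f}, p = {p:.3f}")
```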

2021
Author(s):
Sima Ebrahimian
Bradley Mattan
Mazaher Rezaei

Abstract Background: Lack of empathy is one of the main characteristics of narcissists. However, it is not clear whether there is a similar deficit in other facets of mentalizing, such as perspective-taking. Method: In this study, we measured visual perspective-taking for different targets (e.g., first-person self, third-person self-avatar, and third-person stranger avatar), focusing on separate groups of individuals with high and low self-reported narcissistic traits. Results: Participants reporting high narcissism scores showed higher accuracy in a third-person perspective-taking task than did their low-narcissism counterparts. However, when the first-person perspective was incongruent with the third-person perspective (first person vs. self-tagged avatar), their response accuracy decreased. Conclusions: This discrepancy between the two types of perspective-taking suggests that highly narcissistic individuals readily identify and empathize with a single object (person, avatar, character, etc.), so their perspective-taking is disrupted when they must identify with more than one object that represents their self-attributed perspective.
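
The design implied here is a 2 x 2 mixed comparison: group (high vs. low narcissism) between subjects and perspective congruence within subjects. The sketch below, with invented accuracy values, shows one simple way to test the group effect and the group-by-congruence interaction; it is not the authors' analysis.

```python
# Minimal sketch (assumed design, not the authors' analysis code): a 2x2 mixed
# design with group (high vs. low narcissism, between subjects) and perspective
# congruence (congruent vs. incongruent, within subjects). The interaction is
# tested on per-subject difference scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30                                           # hypothetical subjects per group
acc = {
    ("high", "congruent"):   rng.normal(0.90, 0.05, n),
    ("high", "incongruent"): rng.normal(0.78, 0.08, n),
    ("low",  "congruent"):   rng.normal(0.84, 0.05, n),
    ("low",  "incongruent"): rng.normal(0.80, 0.08, n),
}

# Group difference in third-person (congruent) accuracy.
t_main, p_main = stats.ttest_ind(acc[("high", "congruent")],
                                 acc[("low", "congruent")])

# Group x congruence interaction via congruent-minus-incongruent costs.
cost_high = acc[("high", "congruent")] - acc[("high", "incongruent")]
cost_low  = acc[("low", "congruent")]  - acc[("low", "incongruent")]
t_int, p_int = stats.ttest_ind(cost_high, cost_low)

print(f"group effect (congruent): t = {t_main:.2f}, p = {p_main:.3f}")
print(f"group x congruence:       t = {t_int:.2f}, p = {p_int:.3f}")
```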


2021
Author(s):
Sahba Besharati
Paul Jenkinson
Michael Kopelman
Mark Solms
Valentina Moro
...

In recent decades, the research traditions of (first-person) embodied cognition and of (third-person) social cognition have approached the study of self-awareness with relative independence. However, neurological disorders of self-awareness offer a unifying perspective from which to empirically investigate the contributions of embodiment and social cognition to self-awareness. This study focused on a neuropsychological disorder of bodily self-awareness following right-hemisphere damage, namely anosognosia for hemiplegia (AHP). A previous neuropsychological study has shown that AHP patients, relative to neurological controls, have a specific deficit in third-person, allocentric inferences in a story-based mentalisation task. However, no study has tested directly whether verbal awareness of motor deficits is influenced by either perspective-taking or centrism, or whether such deficits in social cognition are correlated with damage to anatomical areas previously linked to mentalising, including the supramarginal and superior temporal gyri and related limbic white matter connections. Accordingly, two novel experiments were conducted with right-hemisphere stroke patients with (n = 17) and without AHP (n = 17), targeting either the patients’ own motor abilities (egocentric, Experiment 1) or those of another, stooge patient (Experiment 2), questioned from a first- or third-person (allocentric in Experiment 2) perspective. In both experiments, neurological controls showed no significant difference between perspectives, suggesting that perspective-taking deficits are not a general consequence of right-hemisphere damage. More specifically, Experiment 1 found that AHP patients were more aware of their own motor paralysis when asked from a third-person than from a first-person perspective, using both group-level and individual-level analyses. In Experiment 2, AHP patients were less accurate than controls in making allocentric, third-person perspective judgements about the stooge patient, although only with a trend towards significance and with no within-group difference between perspectives. Deficits in egocentric and allocentric third-person perspective-taking were associated with lesions in the middle frontal gyrus and the superior temporal and supramarginal gyri, with white matter disconnections more predominant in allocentric deficits. This study confirms previous clinical and empirical investigations of the selectivity of first-person motor awareness deficits in anosognosia for hemiplegia and experimentally demonstrates for the first time that verbal egocentric 3PP-taking can positively influence 1PP body awareness.
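
For orientation only, the sketch below shows how the two key contrasts described here might be set up: a within-group comparison of AHP patients' awareness scores under first- vs. third-person questioning, and a between-group comparison against neurological controls. The scores, sample values, and the choice of nonparametric tests are assumptions for illustration, not the study's actual statistics.

```python
# Hedged illustration (not the study's actual statistics): comparing verbal
# awareness scores for the same AHP patients questioned from a first-person
# vs. a third-person perspective, plus a between-group check against controls.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical awareness scores (higher = more aware), 17 patients per group.
ahp_1pp = rng.integers(0, 3, 17).astype(float)
ahp_3pp = ahp_1pp + rng.integers(0, 2, 17)       # assume some 3PP improvement
controls_3pp = rng.integers(2, 5, 17).astype(float)

# Within-group: does the third-person framing raise awareness in AHP?
stat_w, p_w = stats.wilcoxon(ahp_1pp, ahp_3pp)

# Between-group: AHP vs. neurological controls on third-person judgements.
stat_u, p_u = stats.mannwhitneyu(ahp_3pp, controls_3pp, alternative="two-sided")

print(f"AHP 1PP vs 3PP (Wilcoxon): W = {stat_w:.1f}, p = {p_w:.3f}")
print(f"AHP vs controls (Mann-Whitney U): U = {stat_u:.1f}, p = {p_u:.3f}")
```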


2014
Vol 7 (1)
pp. 3-29
Author(s):
Jordan Zlatev

Abstract Mimetic schemas, unlike the popular cognitive-linguistic notion of image schemas, have been characterized in earlier work as explicitly representational, bodily structures arising from imitation of culture-specific practical actions (Zlatev 2005, 2007a, 2007b). We analyzed the gestures of three Swedish and three Thai children at the ages of 18, 22 and 26 months in episodes of natural interaction with caregivers and siblings in order to test the hypothesis that iconic gestures emerge as mimetic schemas. In accordance with this hypothesis, we predicted that the children's first iconic gestures would be (a) intermediately specific, (b) culture-typical, (c) falling into a set of recurrent types, (d) predominantly enacted from a first-person perspective (1pp) rather than performed from a third-person perspective (3pp), with (e) 3pp gestures being more dependent on direct imitation than 1pp gestures and (f) more often co-occurring with speech. All specific predictions but the last were confirmed, and differences were found between the children's iconic gestures on the one side and their deictic and emblematic gestures on the other. Thus, the study both confirms earlier conjectures that mimetic schemas “ground” both gesture and speech and implies the need to qualify these proposals, limiting the link between mimetic schemas and gestures to the iconic category.
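
A coding study of this kind ultimately reduces to tallies of gesture tokens by type and enacting perspective. The toy sketch below, with invented tokens, shows one way prediction (d), that iconic gestures are predominantly enacted from 1pp, could be checked; it is not the authors' procedure or data.

```python
# Toy sketch of the kind of tally such a coding study implies (categories and
# counts are invented for illustration, not the paper's data): gesture tokens
# coded for type and enacting perspective, with a simple binomial check that
# first-person (1pp) iconic gestures outnumber third-person (3pp) ones.
from collections import Counter
from scipy.stats import binomtest   # requires scipy >= 1.7

# (child, gesture_type, perspective) for each coded gesture token.
tokens = [
    ("S1", "iconic", "1pp"), ("S1", "iconic", "1pp"), ("S1", "deictic", None),
    ("T1", "iconic", "3pp"), ("T1", "iconic", "1pp"), ("T1", "emblem", None),
    ("S2", "iconic", "1pp"), ("T2", "iconic", "1pp"), ("T2", "deictic", None),
]

iconic = Counter(p for _, g, p in tokens if g == "iconic")
k, n = iconic["1pp"], iconic["1pp"] + iconic["3pp"]
result = binomtest(k, n, p=0.5, alternative="greater")
print(f"iconic gestures: {k}/{n} enacted from 1pp, p = {result.pvalue:.3f}")
```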


2020
Vol 10 (1)
pp. 55
Author(s):
Alexey Tumialis
Alexey Smirnov
Kirill Fadeev
Tatiana Alikovskaia
Pavel Khoroshikh
...

The perspective from which one perceives one’s own action affects its speed and accuracy. In the present study, we investigated changes in accuracy and kinematics when subjects threw darts from the first-person perspective and from third-person perspectives with varying angles of view. To model the third-person perspective, subjects viewed themselves and the scene through a virtual reality head-mounted display (VR HMD), supplied by a video feed from a camera located above and behind the subjects at 0, 20 or 40 degrees to the right. The 28 subjects wore a motion-capture suit to register right-hand displacement, velocity and acceleration, as well as torso rotation, during the dart throws. The results indicated that mean accuracy shifted in the direction opposite to the change in camera location along the vertical axis and in the congruent direction along the horizontal axis. Kinematic data revealed a smaller angle of torso rotation to the left in all third-person perspective conditions before and during the throw. Movement amplitude, speed and acceleration were lower in the third-person conditions than in the first-person condition, both before the peak velocity of the hand toward the target and after the peak velocity, while lowering the hand. Moreover, the hand movement angle was smaller in the third-person conditions with 20- and 40-degree angles of view than in the first-person condition just preceding the time of peak velocity, and this difference between conditions predicted the changes in mean throwing accuracy. Thus, the results of this study reveal that the subjects’ localization contributed to the transformation of the motor program.
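
The kinematic measures mentioned here (hand velocity, peak velocity toward the target, pre- and post-peak phases) can be derived from motion-capture positions along the lines of the sketch below. The sampling rate, target direction, and data are placeholders, not the authors' processing pipeline.

```python
# Simplified sketch (assumed data layout, not the authors' processing code):
# deriving hand speed from motion-capture positions, locating the peak velocity
# toward the target, and splitting the throw into pre- and post-peak phases.
import numpy as np

fs = 120.0                                   # assumed capture rate, Hz
rng = np.random.default_rng(3)
hand_xyz = np.cumsum(rng.normal(size=(240, 3)) * 0.01, axis=0)  # 2 s of positions, m

velocity = np.gradient(hand_xyz, 1.0 / fs, axis=0)    # m/s per axis
speed = np.linalg.norm(velocity, axis=1)

target_dir = np.array([0.0, 1.0, 0.0])                # assumed unit vector to target
v_toward_target = velocity @ target_dir               # signed velocity toward target

i_peak = int(np.argmax(v_toward_target))              # sample of peak velocity
pre, post = speed[:i_peak + 1], speed[i_peak:]        # pre- and post-peak phases

print(f"peak velocity toward target: {v_toward_target[i_peak]:.2f} m/s "
      f"at t = {i_peak / fs:.2f} s")
print(f"mean speed pre-peak: {pre.mean():.2f} m/s, post-peak: {post.mean():.2f} m/s")
```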


10.2196/18888
2020
Vol 8 (3)
pp. e18888
Author(s):
Susanne M van der Veen
Alexander Stamenkovic
Megan E Applegate
Samuel T Leitkam
Christopher R France
...

Background Visual representation of oneself is likely to affect movement patterns. Prior work in virtual dodgeball showed that greater excursion of the ankles, knees, hips, spine, and shoulders occurs when the avatar is presented in the first-person perspective compared to the third-person perspective. However, the mode of presentation differed between the two conditions, such that a head-mounted display was used to present the avatar in the first-person perspective, but a 3D television (3DTV) display was used to present the avatar in the third-person perspective. Thus, it is unknown whether changes in joint excursions are driven by the visual display (head-mounted display versus 3DTV) or by avatar perspective during virtual gameplay. Objective This study aimed to determine the influence of avatar perspective on joint excursion in healthy individuals playing virtual dodgeball using a head-mounted display. Methods Participants (n=29, 15 male, 14 female) performed full-body movements to intercept launched virtual targets presented in a game of virtual dodgeball using a head-mounted display. Two avatar perspectives were compared during each session of gameplay. A first-person perspective was created by placing the center of the displayed content at the bridge of the participant’s nose, while a third-person perspective was created by placing the camera view at the participant’s eye level but set 1 m behind the participant avatar. During gameplay, virtual dodgeballs were launched at a consistent velocity of 30 m/s to one of nine locations determined by a combination of three intended impact heights and three directions (left, center, or right) based on subject anthropometrics. Joint kinematics and angular excursions of the ankles, knees, hips, lumbar spine, elbows, and shoulders were assessed. Results Changes in joint excursion from the initial posture to interception of the virtual dodgeball were averaged across trials. Separate repeated-measures ANOVAs revealed greater excursions of the ankle (P=.010), knee (P=.001), hip (P=.0014), spine (P=.001), and shoulder (P=.001) joints while playing virtual dodgeball in the first-person versus the third-person perspective. In line with expectations, there was also a significant effect of impact height on joint excursions. Conclusions As clinicians develop treatment strategies in virtual reality to shape motion in orthopedic populations, it is important to be aware that changes in avatar perspective can significantly influence motor behavior. These data are important for the development of virtual reality assessment and treatment tools that are becoming increasingly practical for home- and clinic-based rehabilitation.
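
As a hedged illustration of the analysis structure described in the Results (per-subject joint excursions compared across avatar perspectives with repeated-measures ANOVA), the following sketch uses invented knee-excursion values and a single within-subject factor; it is not the study's code or data.

```python
# Illustrative analysis sketch (variable names and data are hypothetical, not
# the study's): per-trial joint excursion (angle at ball interception minus
# angle at initial posture), averaged per subject and condition, then a
# repeated-measures ANOVA with perspective as the within-subject factor.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(4)
subjects = [f"S{i:02d}" for i in range(1, 30)]            # 29 participants
rows = []
for subj in subjects:
    for perspective in ("first_person", "third_person"):
        base = 40 if perspective == "first_person" else 34   # assumed mean knee excursion, deg
        excursion = rng.normal(base, 6, size=20).mean()      # mean over 20 trials
        rows.append({"subject": subj, "perspective": perspective,
                     "knee_excursion_deg": excursion})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA: knee excursion ~ perspective (within subjects).
res = AnovaRM(df, depvar="knee_excursion_deg", subject="subject",
              within=["perspective"]).fit()
print(res.anova_table)
```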


2018
Author(s):
A.W. de Borst
M.V. Sanchez-Vives
M. Slater
B. de Gelder

Abstract Peripersonal space is the area directly surrounding the body, which supports object manipulation and social interaction but is also critical for threat detection. In the monkey, ventral premotor and intraparietal cortex support the initiation of defensive behavior. However, the brain network that underlies threat detection in human peripersonal space still awaits investigation. We combined fMRI measurements with a preceding virtual reality training from either a first- or a third-person perspective to manipulate whether an approaching human threat was perceived as directed at oneself or at another. We found that the first-person perspective increased body ownership and identification with the virtual victim. When threat was perceived as directed towards oneself, synchronization of brain activity in the human peripersonal brain network was enhanced and connectivity increased from premotor and intraparietal cortex towards the superior parietal lobe. When this threat was nearby, synchronization also occurred in emotion-processing regions. Priming with the third-person perspective reduced synchronization of brain activity in the peripersonal space network and increased top-down modulation of visual areas. In conclusion, our results showed that after first-person perspective training, peripersonal space is remapped onto the virtual victim, causing the fronto-parietal network to predict intrusive actions towards the body and emotion-processing regions to signal nearby threat.
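
The "synchronization of brain activity" reported here is commonly quantified with inter-subject correlation of ROI time courses. The sketch below shows a leave-one-subject-out version on made-up data, as a schematic of the measure rather than the study's fMRI pipeline.

```python
# Schematic example (made-up data, not the study's fMRI pipeline): leave-one-
# subject-out inter-subject correlation as a simple index of how synchronized
# an ROI's time course is across participants within one training condition.
import numpy as np

rng = np.random.default_rng(5)
n_subjects, n_timepoints = 20, 300
shared = rng.normal(size=n_timepoints)                      # stimulus-driven signal
roi_ts = shared + rng.normal(scale=2.0, size=(n_subjects, n_timepoints))

def leave_one_out_isc(ts):
    """Correlate each subject's ROI time course with the mean of the others."""
    iscs = []
    for s in range(ts.shape[0]):
        others = np.delete(ts, s, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(ts[s], others)[0, 1])
    return np.array(iscs)

isc = leave_one_out_isc(roi_ts)
print(f"mean leave-one-out ISC in the ROI: {isc.mean():.2f}")
```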


2015
Vol 63 (4)
Author(s):
Richard Moran

Abstract In philosophy it is widely recognized that a person’s first-person perspective on his own thought and action is importantly different from the third-person perspective we may have on the thoughts and actions of other people. In daily life it is natural to ask someone what he is doing or what he thinks about something, on the assumption that he knows what he is doing or what he is thinking. Some philosophers, however, argue that it is impossible to speak of knowledge in this context because the idea of knowledge requires a kind of distance between subject and object, a distance that is not present in the first-person context. I argue that this denial of self-knowledge is a paradoxical conclusion that we can resist, while retaining what is distinctive about the first-person.

