Contributions of Body-Orientation to Mental Ball Dropping Task During Out-of-Body Experiences

2022 ◽  
Vol 15 ◽  
Author(s):  
Ege Tekgün ◽  
Burak Erdeniz

Out-of-body experiences (OBEs) provide fascinating insights into our understanding of bodily self-consciousness and the workings of the brain. Studies of individuals with brain lesions report that OBEs are generally characterized by the experience of being located outside one's physical body (i.e., a disembodied feeling) (Blanke and Arzy, 2005). Based on this characterization, it has been shown that virtual OBEs can be created in immersive virtual environments (Ehrsson, 2007; Ionta et al., 2011b; Bourdin et al., 2017). However, the extent to which body orientation influences virtual OBEs is not well understood. Thus, in the present study, 30 participants (within-group design) experienced a full-body ownership illusion (synchronous visuo-tactile stimulation only) induced with a gender-matched full-body virtual avatar seen from the first-person perspective (1PP). At the beginning of the experiment, participants performed a mental ball dropping (MBD) task, seen from the location of their virtual avatar, to provide a baseline measurement. A full-body ownership illusion was then induced in all participants (embodiment phase), followed by the virtual OBE illusion phase (disembodiment phase), in which the viewpoint was switched from the first-person to a third-person perspective (3PP) and the participants' disembodied viewpoint was gradually raised to 14 m above the virtual avatar, from which altitude they repeated the MBD task. This procedure was conducted twice for each participant, with the order of the supine and standing body positions randomized. Results of the MBD task showed longer MBD durations in the supine condition than in the standing condition. Furthermore, although the subjective reports confirmed previous findings on virtual OBEs, no significant difference between the two postures was found for body ownership. 
Taken together, the findings of the current study make further contributions to our understanding of both the vestibular system and time perception during OBEs.

2021 ◽  
Author(s):  
Kazuki Yamamoto ◽  
Takashi Nakao

The sense of body ownership, i.e., the feeling that “my body belongs to me,” has been examined using both the rubber hand illusion (RHI) and the full-body illusion (FBI). A study that examined the relationship between the RHI and depersonalization, a symptom in which people experience a diminished sense of body ownership, found that the degree of illusion was higher in people with a high depersonalization tendency. However, other reports have suggested that people with depersonalization disorder have difficulty feeling a sense of body ownership. These observations suggest that negative body recognition in people with depersonalization may make them less likely to feel a sense of body ownership, but this possibility had not yet been examined. In this study, by manipulating top-down recognition (e.g., instructing participants to recognize a fake body as their own), we clarified the cause of the reduced sense of body ownership in people with a high depersonalization tendency. The FBI procedure was conducted in a virtual reality environment using an avatar as a fake body. The avatar was presented from a third-person perspective, and visuo-tactile stimuli were presented to create the illusion. To measure the degree of illusion, we recorded skin conductance responses to a fear stimulus presented after the visuo-tactile stimulation. The degree of depersonalization was measured using the Japanese version of the Cambridge Depersonalization Scale. To manipulate top-down recognition of the avatar, we provided self-association instructions before the presentation of the visuo-tactile stimuli. The results showed that depersonalization tendency was negatively correlated with the degree of illusion in the self-association condition (rho = -.589, p < .01) and positively correlated in the non-association instruction condition (rho = .552, p < .01). 
This indicates that although people with a high depersonalization tendency are more likely to feel a sense of body ownership through the integration of visual-tactile stimuli, top-down recognition of the body as one’s own leads to a decrease in the sense of body ownership.
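The reported rho values imply Spearman rank correlations between depersonalization scores and the illusion measure. As a rough illustration, the statistic can be computed as follows; all numbers below are hypothetical stand-ins (not the study's data), and the no-ties shortcut formula is assumed.

```python
# A minimal Spearman rank-correlation sketch, the statistic implied by the
# reported rho values. The CDS and SCR numbers below are hypothetical
# illustrations of a negative relationship, not the study's data.

def spearman_rho(x, y):
    """Spearman's rho via the no-ties shortcut: 1 - 6*sum(d^2)/(n*(n^2-1))."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for pos, i in enumerate(order, start=1):
            r[i] = pos  # rank 1 = smallest value
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

cds = [12, 45, 23, 67, 34, 50, 8, 29]           # hypothetical CDS scores
scr = [0.9, 0.4, 0.7, 0.1, 0.6, 0.3, 1.0, 0.5]  # hypothetical illusion index
rho = spearman_rho(cds, scr)
print(f"rho = {rho:.3f}")  # strongly negative for these made-up data
```

Because the computation uses only ranks, it tolerates the skewed distributions typical of skin-conductance and questionnaire data.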


2016 ◽  
Vol 28 (11) ◽  
pp. 1760-1771 ◽  
Author(s):  
Giulia Bucchioni ◽  
Carlotta Fossataro ◽  
Andrea Cavallo ◽  
Harold Mouras ◽  
Marco Neppi-Modona ◽  
...  

Recent studies show that motor responses similar to those occurring during one's own pain (the freezing effect) also occur when observing pain in others. This finding has been interpreted as the physiological basis of empathy. Alternatively, it may represent the physiological counterpart of an embodiment phenomenon related to the sense of body ownership. We compared the empathy and ownership hypotheses by manipulating the perspective from which the observed hand model receiving pain was seen: a first-person perspective, in which embodiment occurs, or a third-person perspective, in which we usually perceive others. Motor-evoked potentials (MEPs) elicited by TMS over M1 were recorded from the first dorsal interosseous muscle while participants observed video clips showing (a) a needle penetrating or (b) a Q-tip touching a hand model, presented in either first-person or third-person perspective. We found that a pain-specific inhibition of MEP amplitude (a significantly greater MEP reduction in the “pain” compared with the “touch” condition) occurred only in the first-person perspective and was related to the strength of self-reported embodiment. We interpret this corticospinal modulation according to an “affective” conception of body ownership, suggesting that the body I feel as my own is the body I care more about.


2021 ◽  
Vol 2 ◽  
Author(s):  
Yusuke Matsuda ◽  
Junya Nakamura ◽  
Tomohiro Amemiya ◽  
Yasushi Ikei ◽  
Michiteru Kitazaki

Walking is a fundamental physical activity in humans. Various virtual walking systems have been developed using treadmills or leg-support devices. We propose a virtual walking system for seated users that requires no limb action, combining optic flow, foot vibrations simulating footsteps, and a walking avatar. We investigated whether a full-body or hands-and-feet-only walking avatar, viewed from either the first-person (experiment 1) or third-person (experiment 2) perspective, can convey the sensation of walking in a virtual environment through optic flow and foot vibrations. The viewing direction of the virtual camera and the head of the full-body avatar were linked to the user's actual head motion. We found that the full-body avatar with the first-person perspective enhanced the sensations of walking, leg action, and telepresence, with either synchronous or asynchronous foot vibrations. Although the hands-and-feet-only avatar with the first-person perspective enhanced the walking sensation and telepresence compared with the no-avatar condition, its effect was less prominent than that of the full-body avatar. In contrast, the full-body avatar with the third-person perspective did not enhance the sensations of walking and leg action; rather, it impaired the sensations of self-motion and telepresence. Synchronous or rhythmic foot vibrations enhanced the sensations of self-motion, walking, leg action, and telepresence, irrespective of the avatar condition. These results suggest that a full-body or hands-and-feet avatar is effective for creating virtual walking experiences from the first-person perspective, but not from the third-person perspective, and that foot vibrations simulating footsteps are effective regardless of the avatar condition.


2020 ◽  
Vol 7 (12) ◽  
pp. 201911
Author(s):  
Arvid Guterstam ◽  
Dennis E. O. Larsson ◽  
Joanna Szczotka ◽  
H. Henrik Ehrsson

Previous research has shown that it is possible to use multisensory stimulation to induce the perceptual illusion of owning supernumerary limbs, such as two right arms. However, it remains unclear whether the coherent feeling of owning a full body can be duplicated in the same manner and whether such a dual full-body illusion could be used to split the unitary sense of self-location into two. Here, we examined whether healthy human participants can experience simultaneous ownership of two full bodies, located either close in parallel or in two separate spatial locations. A previously described full-body illusion, based on visuo-tactile stimulation of an artificial body viewed from the first-person perspective (1PP) via head-mounted displays, was adapted to a dual-body setting and quantified in five experiments using questionnaires, a behavioural self-location task and threat-evoked skin conductance responses. The results of experiments 1–3 showed that synchronous visuo-tactile stimulation of two bodies viewed from the 1PP lying in parallel next to each other induced a significant illusion of dual full-body ownership. In experiment 4, we failed to find support for our working hypothesis that splitting the visual scene into two, so that each of the two illusory bodies was placed in a distinct spatial environment, would lead to dual self-location. In a final exploratory experiment (no. 5), we found preliminary support for an illusion of dual self-location and dual body ownership by using dynamic changes between the 1PPs of two artificial bodies and/or a common third-person perspective in the ceiling of the testing room. These findings suggest that healthy people, under certain conditions of multisensory perceptual ambiguity, may experience dual body ownership and dual self-location. They further suggest that the coherent sense of the bodily self located at a single place in space is the result of an active and dynamic perceptual integration process.


2021 ◽  
Vol 5 (1) ◽  
pp. 13-20
Author(s):  
I Gde Agung Sri Sidhimantra ◽  
Darlis Herumurti

Advances in technology have made it easier to gain access to virtual worlds, and more and more applications and games are being targeted towards them. With this growing popularity, however, cybersickness has become increasingly common as well. This study aims to evaluate the factors affecting cybersickness in a Virtual Reality (VR) environment. Several factors contribute to cybersickness in VR, such as duration, field of view, speed, habituation, and the user's susceptibility. These factors act differently in the first-person perspective (1PP) and the third-person perspective (3PP). To measure cybersickness, the Virtual Reality Sickness Questionnaire (VRSQ) was used. The experiment was conducted with the following settings. The participants were 20 males and 4 females who had never used VR before. They performed four tasks using short games (2 game types (action and adventure) × 2 perspectives (1PP and 3PP)). A Latin square design was used to minimize order effects, and the questionnaire was administered after each treatment. Paired (dependent) t-tests were performed to check for differences in the oculomotor, disorientation, and total VRSQ scores. There was a significant difference between 1PP and 3PP in both games. It is therefore recommended to use the third-person perspective to reduce cybersickness in VR environments.
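The paired t-test used here compares each participant's score with themselves across conditions. A minimal sketch of that computation, using entirely made-up VRSQ totals (the paper reports only that the 1PP/3PP difference was significant, not these values):

```python
# Sketch of a paired (dependent) t-test on hypothetical VRSQ total scores
# for the same participants under 1PP vs. 3PP. All numbers are invented
# for illustration; they are not the study's data.
import math
import statistics

vrsq_1pp = [45.0, 52.5, 38.0, 60.0, 41.5, 55.0, 47.0, 39.5]  # hypothetical
vrsq_3pp = [30.0, 41.0, 29.5, 48.0, 33.0, 44.5, 36.0, 31.0]  # hypothetical

# Paired t-test: t = mean(d) / (sd(d) / sqrt(n)), where d holds the
# per-participant differences; pairing removes between-subject variance.
diffs = [a - b for a, b in zip(vrsq_1pp, vrsq_3pp)]
n = len(diffs)
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
print(f"t({n - 1}) = {t:.2f}")
```

The resulting t statistic is compared against the t distribution with n - 1 degrees of freedom (here 7) to obtain the p-value; in practice a library routine such as SciPy's `scipy.stats.ttest_rel` does both steps.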


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Caleb Liang ◽  
Wen-Hsiang Lin ◽  
Tai-Yuan Chang ◽  
Chi-Hong Chen ◽  
Chen-Wei Wu ◽  
...  

Body ownership concerns what it is like to feel a body part or a full body as mine, and has become a prominent area of study. We propose that there is a closely related type of bodily self-consciousness largely neglected by researchers—experiential ownership. It refers to the sense that I am the one who is having a conscious experience. Are body ownership and experiential ownership actually the same phenomenon or are they genuinely different? In our experiments, the participant watched a rubber hand or someone else’s body from the first-person perspective and was touched either synchronously or asynchronously. The main findings: (1) The sense of body ownership was hindered in the asynchronous conditions of both the body-part and the full-body experiments. However, a strong sense of experiential ownership was observed in those conditions. (2) We found the opposite when the participants’ responses were measured after tactile stimulations had ceased for 5 s. In the synchronous conditions of another set of body-part and full-body experiments, only experiential ownership was blocked but not body ownership. These results demonstrate for the first time the double dissociation between body ownership and experiential ownership. Experiential ownership is indeed a distinct type of bodily self-consciousness.


2021 ◽  
Author(s):  
Sahba Besharati ◽  
Paul Jenkinson ◽  
Michael Kopelman ◽  
Mark Solms ◽  
Valentina Moro ◽  
...  

In recent decades, the research traditions of (first-person) embodied cognition and (third-person) social cognition have approached the study of self-awareness with relative independence. However, neurological disorders of self-awareness offer a unifying perspective from which to empirically investigate the contributions of embodiment and social cognition to self-awareness. This study focused on a neuropsychological disorder of bodily self-awareness following right-hemisphere damage, namely anosognosia for hemiplegia (AHP). A previous neuropsychological study showed that AHP patients, relative to neurological controls, have a specific deficit in third-person, allocentric inferences in a story-based mentalisation task. However, no study has tested directly whether verbal awareness of motor deficits is influenced by perspective-taking or centrism, or whether such deficits in social cognition correlate with damage to anatomical areas previously linked to mentalising, including the supramarginal and superior temporal gyri and related limbic white-matter connections. Accordingly, two novel experiments were conducted with right-hemisphere stroke patients with (n = 17) and without (n = 17) AHP, targeting either their own motor abilities (egocentric; experiment 1) or those of another, stooge patient (allocentric; experiment 2), from a first- or third-person perspective. In both experiments, neurological controls showed no significant difference between perspectives, suggesting that perspective-taking deficits are not a general consequence of right-hemisphere damage. More specifically, experiment 1 found that AHP patients were more aware of their own motor paralysis when asked from a third-person rather than a first-person perspective, at both the group and individual levels of analysis. 
In experiment 2, AHP patients were less accurate than controls in making allocentric, third-person perspective judgements about the stooge patient, although this showed only a trend towards significance and no within-group difference between perspectives. Deficits in egocentric and allocentric third-person perspective-taking were associated with lesions in the middle frontal gyrus and the superior temporal and supramarginal gyri, with white-matter disconnections more predominant in allocentric deficits. This study confirms previous clinical and empirical investigations of the selectivity of first-person motor awareness deficits in anosognosia for hemiplegia and demonstrates experimentally, for the first time, that verbal egocentric 3PP-taking can positively influence 1PP body awareness.


2019 ◽  
Author(s):  
Carl Michael Orquiola Galang ◽  
Sukhvinder S. Obhi ◽  
Michael Jenkins

Previous neurophysiological research suggests that there are event-related potential (ERP) components associated with empathy for pain: an early affective component (N2) and two late cognitive components (P3/LPP). The current study investigated whether and how the visual perspective from which a painful event is observed affects these ERP components. Participants viewed images of hands in pain vs. not in pain from a first-person or third-person perspective. We found that visual perspective influences both the early and late components. For the early component (N2), there was a larger mean amplitude during observation of pain vs. no pain exclusively when images were shown from a first-person perspective. We suggest that this effect may be driven by misattributing the on-screen hand to oneself. For the late component (P3), we found a larger effect of pain on mean amplitudes in response to third-person relative to first-person images. We speculate that the P3 may reflect a later process that enables effective recognition of others’ pain in the absence of misattribution. We discuss our results in relation to self- vs. other-related processing, questioning whether these ERP components truly index empathy (an other-directed process) or a simple misattribution of another’s pain as one’s own (a self-directed process).


2014 ◽  
Vol 7 (1) ◽  
pp. 3-29 ◽  
Author(s):  
Jordan Zlatev

Mimetic schemas, unlike the popular cognitive-linguistic notion of image schemas, have been characterized in earlier work as explicitly representational, bodily structures arising from imitation of culture-specific practical actions (Zlatev 2005, 2007a, 2007b). We analyzed the gestures of three Swedish and three Thai children at the ages of 18, 22 and 26 months in episodes of natural interaction with caregivers and siblings in order to test the hypothesis that iconic gestures emerge as mimetic schemas. In accordance with this hypothesis, we predicted that the children's first iconic gestures would be (a) intermediately specific, (b) culture-typical, (c) falling into a set of recurrent types, (d) predominantly enacted from a first-person perspective (1pp) rather than performed from a third-person perspective (3pp), with (e) 3pp gestures being more dependent on direct imitation than 1pp gestures, and (f) more often co-occurring with speech. All specific predictions but the last were confirmed, and differences were found between the children's iconic gestures on the one hand and their deictic and emblematic gestures on the other. Thus, the study both confirms earlier conjectures that mimetic schemas “ground” gesture and speech and implies the need to qualify these proposals, limiting the link between mimetic schemas and gestures to the iconic category.

