The Redundant Signals Effect and the Full Body Illusion: not Multisensory, but Unisensory Tactile Stimuli Are Affected by the Illusion

2021 ◽  
pp. 1-33
Author(s):  
Lieke M. J. Swinkels ◽  
Harm Veling ◽  
Hein T. van Schie

Abstract: During a full body illusion (FBI), participants experience a change in self-location towards a body that they see in front of them from a third-person perspective and experience touch to originate from this body. Multisensory integration is thought to underlie this illusion. In the present study we tested the redundant signals effect (RSE) as a new objective measure of the illusion that was designed to directly tap into the multisensory integration underlying the illusion. The illusion was induced by an experimenter who stroked and tapped the participant’s shoulder and underarm, while participants perceived the touch on the virtual body in front of them via a head-mounted display. Participants performed a speeded detection task, responding to visual stimuli on the virtual body, to tactile stimuli on the real body, and to combined (multisensory) visual and tactile stimuli. Analysis of the RSE with a race model inequality test indicated that multisensory integration took place in both the synchronous and the asynchronous condition. This surprising finding suggests that simultaneous bodily stimuli from different (visual and tactile) modalities will be transiently integrated into a multisensory representation even when no illusion is induced. Furthermore, this finding suggests that the RSE is not a suitable objective measure of body illusions. Interestingly, however, responses to the unisensory tactile stimuli in the speeded detection task were found to be slower and had a larger variance in the asynchronous condition than in the synchronous condition. The implications of this finding for the literature on body representations are discussed.
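
The race model inequality test used here (Miller's bound) checks whether multisensory reaction times are faster than any race between unisensory channels could produce. The following is a minimal sketch with hypothetical reaction-time samples, not the authors' analysis code:

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of a reaction-time sample, evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_visual, rt_tactile, rt_multi,
                         quantiles=np.linspace(0.05, 0.95, 19)):
    """Miller's (1982) race model inequality: F_VT(t) <= F_V(t) + F_T(t).

    Returns the maximum violation across the tested quantiles. Positive
    values mean the multisensory CDF exceeds the race model bound, i.e.
    evidence for multisensory integration rather than mere statistical
    facilitation by the faster of two independent channels.
    """
    # Evaluate all CDFs at the quantiles of the multisensory RT distribution
    t = np.quantile(rt_multi, quantiles)
    bound = np.minimum(ecdf(rt_visual, t) + ecdf(rt_tactile, t), 1.0)
    return float(np.max(ecdf(rt_multi, t) - bound))
```

A violation statistic greater than zero at some quantile is the pattern the abstract reports as evidence that integration took place; in practice the violation is tested for significance across participants.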

2021 ◽  
Author(s):  
Kazuki Yamamoto ◽  
Takashi Nakao

The sense of body ownership, i.e., the feeling that “my body belongs to me,” has been examined using both the rubber hand illusion (RHI) and the full body illusion (FBI). In a study that examined the relationship between the RHI and depersonalization, a symptom in which people experience a diminished sense of body ownership, the degree of illusion was higher in people with a high depersonalization tendency. However, other reports have suggested that people with depersonalization disorder have difficulty feeling a sense of body ownership. These observations suggest that negative body recognition in people with depersonalization may make them less likely to feel a sense of body ownership, but this possibility has not yet been tested directly. In this study, by manipulating top-down recognition (e.g., instructing participants to recognize a fake body as their own), we investigated the cause of the reduced sense of body ownership in people with a high depersonalization tendency. The FBI procedure was conducted in a virtual reality environment using an avatar as a fake body. The avatar was presented from a third-person perspective, and visual-tactile stimuli were presented to create the illusion. To assess the degree of illusion, we measured skin conductance responses to a fear stimulus presented after the visual-tactile stimulation. The degree of depersonalization was measured using the Japanese version of the Cambridge Depersonalization Scale. To manipulate top-down recognition of the avatar, we provided self-association instructions before the presentation of the visual-tactile stimuli. The results showed that participants with a high depersonalization tendency had a lower degree of illusion in the self-association condition (rho = -.589, p < .01) and a higher one in the non-association instruction condition (rho = .552, p < .01). This indicates that although people with a high depersonalization tendency are more likely to feel a sense of body ownership through the integration of visual-tactile stimuli, top-down recognition of the body as one’s own leads to a decrease in the sense of body ownership.


2021 ◽  
Vol 2 ◽  
Author(s):  
Collin Turbyne ◽  
Abe Goedhart ◽  
Pelle de Koning ◽  
Frederike Schirmbeck ◽  
Damiaan Denys

Background: Body image (BI) disturbances have been identified in both clinical and non-clinical populations. Virtual reality (VR) has recently been used as a tool for modulating BI disturbances by eliciting a full body illusion (FBI). This meta-analysis is the first to collate evidence on the effectiveness of an FBI in reducing BI disturbances in both clinical and non-clinical populations.
Methods: We performed a literature search in MEDLINE (PubMed), EMBASE, PsycINFO, and Web of Science with the keywords and synonyms for “virtual reality” and “body image” to identify studies published up to September 2020. We included studies that (1) created an FBI with a modified body shape or size and (2) reported BI disturbance outcomes both before and directly after the FBI. FBI was defined as a head-mounted display (HMD)-based simulation of embodying a virtual body from an egocentric perspective in an immersive 3D computer-generated environment.
Results: Of the 398 unique studies identified, 13 were included after full-text screening. Four of these studies were eligible for a meta-analysis of BI distortion after inducing a small virtual body FBI in healthy females. Significant post-intervention results were found for estimations of shoulder width, hip width, and abdomen width, with the largest reductions in size being the estimation of shoulder circumference (SMD = −1.3; 95% CI: −2.2 to −0.4; p = 0.004) and hip circumference (SMD = −1.0; 95% CI: −1.6 to −0.4; p = 0.004). Mixed results were found in non-aggregated studies of large virtual body FBIs, in terms of both estimated body size and BI dissatisfaction, and of small virtual body FBIs in terms of BI dissatisfaction.
Conclusions: The findings presented in this paper suggest that participants’ BIs conformed to both increased and reduced virtual body sizes. However, because of the paucity of research in this field, the extent of the clinical utility of FBIs remains unclear. In light of these limitations, we provide implications for future research on the clinical utility of FBIs for modulating BI-related outcomes.
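
The standardized mean differences with confidence intervals reported above can be illustrated with a small sketch. This is not the authors' analysis; the variance formula assumes a pre-post correlation of 0.5 (a common default when the correlation is unreported), and all numbers in the usage example are hypothetical:

```python
import math

def smd_pre_post(mean_pre, mean_post, sd_pre, sd_post, n):
    """Standardized mean difference (Cohen's d) for a pre/post comparison,
    using the pooled SD, with an approximate 95% CI.

    Assumes a pre-post correlation of 0.5, which makes the variance of d
    reduce to 1/n + d^2/(2n)."""
    sd_pooled = math.sqrt((sd_pre**2 + sd_post**2) / 2)
    d = (mean_post - mean_pre) / sd_pooled
    se = math.sqrt(1 / n + d**2 / (2 * n))
    return d, (d - 1.96 * se, d + 1.96 * se)

def pool_fixed_effect(effects):
    """Inverse-variance fixed-effect pooling of (smd, se) pairs."""
    weights = [1 / se**2 for _, se in effects]
    pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
```

For example, a drop in estimated shoulder width from a mean of 40 cm (SD 5) to 36 cm (SD 5) in 20 participants gives d = −0.8; pooling several such effects weights each by the inverse of its squared standard error, as in the aggregated results above.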


2020 ◽  
Vol 10 (1) ◽  
pp. 55
Author(s):  
Alexey Tumialis ◽  
Alexey Smirnov ◽  
Kirill Fadeev ◽  
Tatiana Alikovskaia ◽  
Pavel Khoroshikh ◽  
...  

The perspective from which one perceives one’s own action affects its speed and accuracy. In the present study, we investigated the change in accuracy and kinematics when subjects threw darts from the first-person perspective and from third-person perspectives with varying angles of view. To model the third-person perspective, subjects viewed themselves and the scene through a virtual reality head-mounted display (VR HMD). The scene was supplied by a video feed from a camera located above and behind the subjects, at 0, 20, and 40 degrees to the right. Twenty-eight subjects wore a motion capture suit to register right-hand displacement, velocity, and acceleration, as well as torso rotation, during the dart throws. The results indicated that mean accuracy shifted in the direction opposite to the change of camera location along the vertical axis and in the congruent direction along the horizontal axis. Kinematic data revealed a smaller angle of torso rotation to the left in all third-person perspective conditions before and during the throw. The amplitude, speed, and acceleration in the third-person conditions were lower than in the first-person view condition, both before the peak velocity of the hand in the direction toward the target and after the peak velocity while lowering the hand. Moreover, the hand movement angle was smaller in the third-person perspective conditions with 20- and 40-degree angles of view than in the first-person perspective condition just preceding the time of peak velocity, and the difference between conditions predicted the changes in mean accuracy of the throws. Thus, the results of this study revealed that the subject’s perceived localization contributed to the transformation of the motor program.


10.2196/18888 ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. e18888
Author(s):  
Susanne M van der Veen ◽  
Alexander Stamenkovic ◽  
Megan E Applegate ◽  
Samuel T Leitkam ◽  
Christopher R France ◽  
...  

Background: Visual representation of oneself is likely to affect movement patterns. Prior work in virtual dodgeball showed that greater excursion of the ankles, knees, hips, spine, and shoulders occurs when the avatar is presented in the first-person perspective compared to the third-person perspective. However, the mode of presentation differed between the two conditions: a head-mounted display was used to present the avatar in the first-person perspective, but a 3D television (3DTV) display was used to present the avatar in the third-person perspective. Thus, it is unknown whether changes in joint excursions are driven by the visual display (head-mounted display versus 3DTV) or by avatar perspective during virtual gameplay.
Objective: This study aimed to determine the influence of avatar perspective on joint excursion in healthy individuals playing virtual dodgeball using a head-mounted display.
Methods: Participants (n=29, 15 male, 14 female) performed full-body movements to intercept launched virtual targets in a game of virtual dodgeball using a head-mounted display. Two avatar perspectives were compared during each session of gameplay. A first-person perspective was created by placing the center of the displayed content at the bridge of the participant’s nose, while a third-person perspective was created by placing the camera view at the participant’s eye level but 1 m behind the participant avatar. During gameplay, virtual dodgeballs were launched at a consistent velocity of 30 m/s to one of nine locations determined by a combination of three intended impact heights and three directions (left, center, or right) based on subject anthropometrics. Joint kinematics and angular excursions of the ankles, knees, hips, lumbar spine, elbows, and shoulders were assessed.
Results: The changes in joint excursion from initial posture to interception of the virtual dodgeball were averaged across trials. Separate repeated-measures ANOVAs revealed greater excursions of the ankle (P=.010), knee (P=.001), hip (P=.0014), spine (P=.001), and shoulder (P=.001) joints while playing virtual dodgeball in the first-person versus the third-person perspective. In line with expectations, there was a significant effect of impact height on joint excursions.
Conclusions: As clinicians develop treatment strategies in virtual reality to shape motion in orthopedic populations, it is important to be aware that changes in avatar perspective can significantly influence motor behavior. These data are important for the development of virtual reality assessment and treatment tools, which are becoming increasingly practical for home- and clinic-based rehabilitation.


2021 ◽  
Author(s):  
Yuki Ueyama ◽  
Masanori Harada

Abstract: The first-person perspective (1PP) and third-person perspective (3PP) have both been adopted in video games. The 1PP can induce a strong sense of immersion, whereas the 3PP allows players to perceive distances easily. Virtual reality (VR) technologies have also adopted both perspectives to facilitate skill acquisition. However, how 1PP and 3PP views affect motor skills in the real world, as opposed to in games and virtual environments, remains unclear. This study examined the effects of the 1PP and 3PP on real-world dart-throwing accuracy after head-mounted display (HMD)-based practice tasks involving either the 1PP or 3PP. The 1PP group showed poorer dart-throwing performance, whereas the 3PP task had no effect on performance. Furthermore, while the effect of the 1PP task persisted for some time, that of the 3PP task disappeared immediately. Therefore, the effects of 1PP VR practice tasks on motor control transfer more readily to the real world than do those of 3PP tasks.


2021 ◽  
Vol 2 ◽  
Author(s):  
Yusuke Matsuda ◽  
Junya Nakamura ◽  
Tomohiro Amemiya ◽  
Yasushi Ikei ◽  
Michiteru Kitazaki

Walking is a fundamental physical activity in humans. Various virtual walking systems have been developed using treadmills or leg-support devices. Using optic flow, foot vibrations simulating footsteps, and a walking avatar, we propose a virtual walking system that does not require limb action from seated users. We aimed to investigate whether a full-body or hands-and-feet-only walking avatar, viewed from either the first-person (experiment 1) or third-person (experiment 2) perspective, can convey the sensation of walking in a virtual environment through optic flow and foot vibrations. The viewing direction of the virtual camera and the head of the full-body avatar were linked to the user’s actual head motion. We found that the full-body avatar with the first-person perspective enhanced the sensations of walking, leg action, and telepresence, with either synchronous or asynchronous foot vibrations. Although the hands-and-feet-only avatar with the first-person perspective enhanced the walking sensation and telepresence compared with the no-avatar condition, its effect was less pronounced than that of the full-body avatar. However, the full-body avatar with the third-person perspective did not enhance the sensations of walking and leg action; rather, it impaired the sensations of self-motion and telepresence. Synchronous or rhythmic foot vibrations enhanced the sensations of self-motion, walking, leg action, and telepresence, irrespective of the avatar condition. These results suggest that a full-body or hands-and-feet avatar is effective for creating virtual walking experiences from the first-person perspective, but not the third-person perspective, and that foot vibrations simulating footsteps are effective regardless of the avatar condition.


2021 ◽  
Vol 2 ◽  
Author(s):  
Anna I. Bellido Rivas ◽  
Xavi Navarro ◽  
Domna Banakou ◽  
Ramon Oliva ◽  
Veronica Orvalho ◽  
...  

Virtual reality can be used to embody people in different types of body, so that when they look toward themselves or in a mirror they see a life-sized virtual body instead of their own, one that moves with their own movements. This typically gives rise to the illusion of body ownership over the virtual body. Previous research has focused on embodiment in humanoid bodies, albeit with various distortions such as an extra limb or asymmetry, or with a body of a different race or gender. Here we show that body ownership also occurs over a virtual body that looks like a cartoon rabbit, at the same level as embodiment in a human body. Furthermore, we explore the impact of embodiment on performance as a public speaker in front of a small audience. Forty-five participants with public speaking anxiety were recruited. They were randomly partitioned into three groups of 15: embodied as a Human, embodied as the Cartoon rabbit, or viewing the rabbit from a third-person perspective (3PP). In each condition they gave two talks to a small audience of the same type as their virtual body. Several days later, as a test condition, they returned to give a talk to an audience of human characters while embodied as a human. Overall, taking existing levels of trait anxiety into account, anxiety reduced the most in the Human condition, the least in the Cartoon condition, and did not change in the 3PP condition. We show that embodiment in a cartoon character leads to high levels of body ownership given a first-person perspective and synchronous real and virtual body movements. We also show that the embodiment influences outcomes on the public speaking task.


2020 ◽  
Vol 7 (12) ◽  
pp. 201911
Author(s):  
Arvid Guterstam ◽  
Dennis E. O. Larsson ◽  
Joanna Szczotka ◽  
H. Henrik Ehrsson

Previous research has shown that it is possible to use multisensory stimulation to induce the perceptual illusion of owning supernumerary limbs, such as two right arms. However, it remains unclear whether the coherent feeling of owning a full body may be duplicated in the same manner, and whether such a dual full-body illusion could be used to split the unitary sense of self-location into two. Here, we examined whether healthy human participants can experience simultaneous ownership of two full bodies, located either close together in parallel or in two separate spatial locations. A previously described full-body illusion, based on visuo-tactile stimulation of an artificial body viewed from the first-person perspective (1PP) via head-mounted displays, was adapted to a dual-body setting and quantified in five experiments using questionnaires, a behavioural self-location task, and threat-evoked skin conductance responses. The results of experiments 1–3 showed that synchronous visuo-tactile stimulation of two bodies viewed from the 1PP, lying in parallel next to each other, induced a significant illusion of dual full-body ownership. In experiment 4, we failed to find support for our working hypothesis that splitting the visual scene into two, so that each of the two illusory bodies was placed in a distinct spatial environment, would lead to dual self-location. In a final exploratory experiment (no. 5), we found preliminary support for an illusion of dual self-location and dual body ownership by using dynamic changes between the 1PPs of the two artificial bodies and/or a common third-person perspective from the ceiling of the testing room. These findings suggest that healthy people, under certain conditions of multisensory perceptual ambiguity, may experience dual body ownership and dual self-location, and that the coherent sense of a bodily self located at a single place in space is the result of an active and dynamic perceptual integration process.


2021 ◽  
Author(s):  
Miriam Albat ◽  
Jasmin Hautmann ◽  
Christoph Kayser ◽  
Josefine Molinski ◽  
Soner Ülkü

Abstract: Faster reaction times for the detection of multisensory compared to unisensory stimuli are considered a hallmark of multisensory integration. While this multisensory redundant signals effect (RSE) has been reproduced many times, it has also been repeatedly criticized for confounding multisensory integration with general task-switching effects. When unisensory and multisensory conditions are presented in random order, some trials repeat the same sensory-motor association (e.g. an auditory trial followed by an auditory trial), while others switch this association (e.g. an auditory trial followed by a visual trial). This switch may slow down unisensory reaction times and inflate the observed RSE. Following this line of reasoning, we used an audio-visual detection task and quantified the RSE for trials drawn from pure unisensory blocks and for trials from mixed blocks involving a repeat or switch of modalities. The RSE was largest for switch trials and smallest for unisensory trials. In fact, during unisensory blocks the multisensory reaction times did not differ from the predictions of the race model, speaking against a genuine multisensory benefit. These results confirm that the observed multisensory RSE can easily be confounded by task-switching costs, and suggest that the true benefit of multisensory stimuli for reaction speed may often be overestimated.
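
The repeat/switch partitioning described above can be sketched as follows. This is a hypothetical illustration, not the authors' pipeline; for simplicity, unisensory trials that follow a multisensory trial are counted as switches here:

```python
import numpy as np

def rse_by_transition(modalities, rts):
    """Partition unisensory trials by whether they repeat or switch the
    modality of the previous trial, then compute the redundant signals
    effect (mean unisensory RT minus mean multisensory RT) per subset.

    modalities: per-trial labels, 'A' (auditory), 'V' (visual), or 'AV'
                (multisensory); rts: one reaction time per trial.
    """
    repeat, switch, multi = [], [], []
    for i, (mod, rt) in enumerate(zip(modalities, rts)):
        if mod == "AV":
            multi.append(rt)
        elif i > 0:
            # A unisensory trial repeats if the preceding trial had the
            # same modality; otherwise (including after 'AV') it switches.
            (repeat if mod == modalities[i - 1] else switch).append(rt)
    mean_multi = np.mean(multi)
    return {
        "rse_repeat": np.mean(repeat) - mean_multi,
        "rse_switch": np.mean(switch) - mean_multi,
    }
```

If task switching slows unisensory responses, `rse_switch` will exceed `rse_repeat` even without genuine integration, which is the inflation the abstract describes.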


Autism ◽  
2019 ◽  
Vol 23 (8) ◽  
pp. 2055-2067 ◽  
Author(s):  
Cari-lène Mul ◽  
Flavia Cardini ◽  
Steven D Stagg ◽  
Shabnam Sadeghi Esfahlani ◽  
Dimitrios Kiourtsoglou ◽  
...  

There is some evidence that disordered self-processing in autism spectrum disorders is linked to the social impairments characteristic of the condition. To investigate whether bodily self-consciousness is altered in autism spectrum disorders as a result of multisensory processing differences, we tested responses to the full body illusion and measured peripersonal space in 22 adults with autism spectrum disorders and 29 neurotypical adults. In the full body illusion set-up, participants wore a head-mounted display showing a view of their ‘virtual body’ being stroked synchronously or asynchronously with respect to felt stroking on their back. After stroking, we measured the drift in perceived self-location and self-identification with the virtual body. To assess the peripersonal space boundary we employed an audiotactile reaction time task. The results showed that participants with autism spectrum disorders are markedly less susceptible to the full body illusion, not demonstrating the illusory self-identification and self-location drift. Strength of self-identification was negatively correlated with severity of autistic traits and contributed positively to empathy scores. The results also demonstrated a significantly smaller peripersonal space, with a sharper (steeper) boundary, in autism spectrum disorders participants. These results suggest that bodily self-consciousness is altered in participants with autism spectrum disorders due to differences in multisensory integration, and this may be linked to deficits in social functioning.

