How Third-Person Experiences in Immersive Virtual Reality Influence Memory


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Heather Iriye ◽  
Peggy L. St. Jacques

We typically experience the world from a first-person perspective (1PP) but can sometimes experience events from a third-person perspective (3PP), much as an observer might see us. Little is known about how visual perspective influences the formation of memories for events. We developed an immersive virtual reality paradigm to examine how visual perspective during encoding influences memories. Across two studies, participants explored immersive virtual environments from first-person and third-person avatar perspectives while wearing an Oculus Rift headset. Memory was tested immediately (Study One and Study Two) and following a one-week delay (Study Two). We assessed the accuracy of visual memory using cued recall questions, and spatial memory by asking participants to draw maps of the layout of each environment (Study One and Study Two). Additional phenomenological ratings were included to assess visual perspective during remembering (Study Two). There were no differences in the accuracy of visual information across the two studies, but 3PP experiences increased spatial memory accuracy compared to 1PP experiences, owing to their wider camera field of view. Our results also demonstrate that 3PP experiences create 3PP memories, as reflected by an increase in subjective ratings of observer-like perspectives during remembering. In sum, visual perspective during memory formation influences the accuracy of spatial but not visual information, as well as the vantage point of memories during remembering.
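To make the spatial-memory measure concrete: the maps participants drew can be scored against the true layout of each environment. The sketch below shows one plausible scoring approach (mean landmark placement error after removing translation and scale); it is an illustration under assumed data, not the authors' reported procedure.

```python
import numpy as np

def normalize(points):
    """Center a set of 2D landmark points and scale them to unit size."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)
    return pts / np.linalg.norm(pts)

def map_error(drawn, true):
    """Mean Euclidean distance between corresponding landmarks after
    removing translation and scale, so only the relative layout counts."""
    d, t = normalize(drawn), normalize(true)
    return float(np.linalg.norm(d - t, axis=1).mean())

# Example: three drawn landmark positions vs. the true layout.
drawn = [(0.1, 0.2), (0.9, 0.1), (0.5, 0.8)]
true = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
print(map_error(drawn, true))  # lower = more accurate spatial memory
```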


2019 ◽  
Author(s):  
Carl Michael Orquiola Galang ◽  
Sukhvinder S. Obhi ◽  
Michael Jenkins

Previous neurophysiological research suggests that event-related potential (ERP) components are associated with empathy for pain: an early affective component (N2) and two late cognitive components (P3/LPP). The current study investigated whether and how the visual perspective from which a painful event is observed affects these ERP components. Participants viewed images of hands in pain vs. not in pain from a first-person or third-person perspective. We found that visual perspective influences both the early and late components. In the early component (N2), there was a larger mean amplitude during observation of pain vs. no pain exclusively when images were shown from a first-person perspective. We suggest that this effect may be driven by misattributing the on-screen hand to oneself. For the late component (P3), we found a larger effect of pain on mean amplitudes in response to third-person relative to first-person images. We speculate that the P3 may reflect a later process that enables effective recognition of others' pain in the absence of misattribution. We discuss our results in relation to self- vs. other-related processing by questioning whether these ERP components truly index empathy (an other-directed process) or a simple misattribution of another's pain as one's own (a self-directed process).
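The N2 and P3/LPP effects above are differences in mean ERP amplitude within post-stimulus time windows. A minimal sketch of that computation on epoched EEG data follows; the sampling rate and analysis windows are illustrative, not those reported in the study.

```python
import numpy as np

def mean_amplitude(epochs, fs, t0_idx, window_ms):
    """Mean voltage in a post-stimulus window.

    epochs: (n_trials, n_channels, n_samples) time-locked EEG segments.
    fs: sampling rate in Hz; t0_idx: sample index of stimulus onset.
    window_ms: (start, stop) window relative to onset, in milliseconds.
    """
    start = t0_idx + int(window_ms[0] * fs / 1000)
    stop = t0_idx + int(window_ms[1] * fs / 1000)
    return epochs[:, :, start:stop].mean(axis=-1)  # (n_trials, n_channels)

# Demo with simulated data; the windows below are typical for N2 and
# P3/LPP components but are assumptions, not the study's exact windows.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(40, 32, 600))  # 40 trials, 32 channels, 600 samples
n2 = mean_amplitude(epochs, fs=500, t0_idx=100, window_ms=(200, 300))
p3 = mean_amplitude(epochs, fs=500, t0_idx=100, window_ms=(300, 600))
```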


10.2196/18888 ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. e18888
Author(s):  
Susanne M van der Veen ◽  
Alexander Stamenkovic ◽  
Megan E Applegate ◽  
Samuel T Leitkam ◽  
Christopher R France ◽  
...  

Background: Visual representation of oneself is likely to affect movement patterns. Prior work in virtual dodgeball showed that greater excursion of the ankles, knees, hips, spine, and shoulders occurs when the avatar is presented in the first-person perspective compared to the third-person perspective. However, the mode of presentation differed between the two conditions: a head-mounted display was used to present the avatar in the first-person perspective, whereas a 3D television (3DTV) display was used to present the avatar in the third-person perspective. Thus, it is unknown whether changes in joint excursions are driven by the visual display (head-mounted display versus 3DTV) or by avatar perspective during virtual gameplay.

Objective: This study aimed to determine the influence of avatar perspective on joint excursion in healthy individuals playing virtual dodgeball using a head-mounted display.

Methods: Participants (n=29, 15 male, 14 female) performed full-body movements to intercept launched virtual targets in a game of virtual dodgeball using a head-mounted display. Two avatar perspectives were compared during each session of gameplay. A first-person perspective was created by placing the center of the displayed content at the bridge of the participant's nose, while a third-person perspective was created by placing the camera view at the participant's eye level but set 1 m behind the participant avatar. During gameplay, virtual dodgeballs were launched at a consistent velocity of 30 m/s to one of nine locations determined by a combination of three intended impact heights and three directions (left, center, or right) based on subject anthropometrics. Joint kinematics and angular excursions of the ankles, knees, hips, lumbar spine, elbows, and shoulders were assessed.

Results: The changes in joint excursions from initial posture to the interception of the virtual dodgeball were averaged across trials. Separate repeated-measures ANOVAs revealed greater excursions of the ankle (P=.010), knee (P=.001), hip (P=.0014), spine (P=.001), and shoulder (P=.001) joints while playing virtual dodgeball in the first-person versus the third-person perspective. Consistent with expectations, there was a significant effect of impact height on joint excursions.

Conclusions: As clinicians develop treatment strategies in virtual reality to shape motion in orthopedic populations, it is important to be aware that changes in avatar perspective can significantly influence motor behavior. These data are important for the development of virtual reality assessment and treatment tools that are becoming increasingly practical for home- and clinic-based rehabilitation.
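The Methods specify the perspective manipulation and the launch design precisely enough to sketch. The snippet below illustrates the two camera placements (display centered at the nose bridge vs. eye level, 1 m behind the avatar) and the nine-location target grid; the numeric heights and lateral offsets are placeholders, since the study scaled targets to each subject's anthropometrics.

```python
import numpy as np

def camera_position(eye_pos, forward, perspective):
    """First person: display centered at the bridge of the nose (approximated
    here by eye position). Third person: camera at eye level, 1 m behind the
    avatar, as described in the Methods."""
    eye = np.asarray(eye_pos, dtype=float)
    fwd = np.asarray(forward, dtype=float)  # unit vector the avatar faces
    if perspective == "first":
        return eye
    return eye - 1.0 * fwd  # third person: same height, 1 m behind

def launch_targets(heights, lateral_offsets):
    """Nine impact locations: 3 intended heights x 3 directions."""
    return [(dx, h) for h in heights for dx in lateral_offsets]

# Illustrative values only; the study derived target locations from each
# subject's anthropometrics.
targets = launch_targets(heights=[0.8, 1.2, 1.6],           # m above the floor
                         lateral_offsets=[-0.4, 0.0, 0.4])  # left/center/right
BALL_SPEED = 30.0  # m/s, the constant launch velocity reported in the study
```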


2021 ◽  
Vol 2 ◽  
Author(s):  
Collin Turbyne ◽  
Pelle de Koning ◽  
Dirk Smit ◽  
Damiaan Denys

Background: Virtual reality (VR) has previously been shown to mitigate acute pain. However, the critical parameters underlying its clinical efficacy, including the visual perspective taken, remain unknown. This study attempted to deconstruct these parameters by investigating whether affective and physiological responses to painful stimuli differ between a first-person and a third-person perspective in virtual reality.

Methods: Two conditions were compared in a repeated-measures, within-subject design with 17 healthy participants: a first-person perspective (i.e., participants experienced their bodies from an anatomical and egocentric perspective) and a third-person perspective (i.e., participants experienced their bodies from an anatomical perspective from across the room). Participants received noxious electrical stimulation at pseudorandom intervals and anatomical locations during both conditions. Physiological stress responses were measured by means of electrocardiography (ECG) and impedance cardiography (ICG). Subjective scores measuring tension, pain, anger, and fear were reported after every block sequence.

Results: There were no significant differences in physiological stress responses between conditions. However, participants reported significantly higher tension during the third-person condition.

Conclusion: Relative to a third-person perspective, there are no distinct physiological benefits to inducing a first-person perspective to mitigate physiological stress responses to acute pain in healthy individuals. However, there may be additional clinical benefits in specific clinical populations that have been shown to benefit from relaxation techniques. Further research is needed to refine the clinical utility of different perspectives during virtual reality immersions that serve as a non-pharmacological analgesic during acute pain.
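As an illustration of the stimulation protocol, the sketch below generates a pseudorandom schedule of onset times and anatomical sites. The gap bounds and site labels are assumptions; the abstract reports pseudorandom intervals and locations but not the exact parameters.

```python
import random

# Hypothetical site labels; the abstract mentions varying anatomical
# locations but does not list them.
LOCATIONS = ["left_forearm", "right_forearm", "left_calf", "right_calf"]

def build_schedule(n_stimuli, min_gap=8.0, max_gap=15.0, seed=0):
    """Pseudorandom onset times (s) and anatomical sites for noxious stimuli.
    Gap bounds are illustrative; exact timing parameters are not reported."""
    rng = random.Random(seed)
    t, schedule = 0.0, []
    for _ in range(n_stimuli):
        t += rng.uniform(min_gap, max_gap)          # jittered inter-stimulus gap
        schedule.append((round(t, 2), rng.choice(LOCATIONS)))
    return schedule

print(build_schedule(5))
```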


2021 ◽  
Author(s):  
Sima Ebrahimian ◽  
Bradley Mattan ◽  
Mazaher Rezaei

Background: Lack of empathy is one of the main characteristics of narcissists. However, it is not clear whether there is a similar deficit in other facets of mentalizing, such as perspective-taking.

Method: In this study, we measured visual perspective-taking ascribed to different targets (e.g., first-person self, third-person self-avatar, and third-person stranger avatar). Our study focused on separate groups of individuals with high and low self-reported narcissistic traits.

Results: Participants reporting high narcissism scores showed higher accuracy in a third-person perspective-taking task than did their low-narcissism counterparts. However, when the first-person perspective was incongruent with the third-person perspective (first person vs. self-tagged avatar), the accuracy of their responses decreased.

Conclusions: The discrepancy between the two types of perspective-taking in people with high narcissism may indicate that narcissistic individuals identify or empathize fully with a single object (person, avatar, character, etc.), so that their perspective-taking is disrupted when they must identify with more than one object representing their self-attributed perspectives.
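The key result is accuracy broken down by group (high vs. low narcissism) and congruency between the first-person and third-person perspectives. A minimal sketch of that tabulation, on hypothetical trial records:

```python
from collections import defaultdict

def accuracy_by_condition(trials):
    """trials: dicts with keys 'group' ('high'/'low' narcissism),
    'congruent' (bool), and 'correct' (0/1). Returns proportion correct
    per (group, congruency) cell."""
    counts = defaultdict(lambda: [0, 0])  # cell -> [n_correct, n_total]
    for t in trials:
        key = (t["group"], "congruent" if t["congruent"] else "incongruent")
        counts[key][0] += t["correct"]
        counts[key][1] += 1
    return {k: c / n for k, (c, n) in counts.items()}

# Toy data mirroring the reported pattern: high-narcissism participants do
# well on congruent third-person trials but drop on incongruent ones.
trials = [
    {"group": "high", "congruent": True, "correct": 1},
    {"group": "high", "congruent": False, "correct": 0},
    {"group": "low", "congruent": True, "correct": 1},
    {"group": "low", "congruent": False, "correct": 1},
]
print(accuracy_by_condition(trials))
```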


2022 ◽  
pp. 82-97
Author(s):  
Maxime Ros ◽  
Lorenz S. Neuwirth

The advancement of virtual reality (VR) technology for educational instruction and curricular (re)design has become a highly attractive and newly demanding area of both the technology and healthcare industries. However, the quickly evolving field is still learning about each of the associated VR technologies: whether they are evidence-based, and how they are validated to decrease cognitive load and in turn increase student/learner comprehension. Likewise, the instructional (re)design of the content that the student/learner is exposed to in VR, whether it is immersive, and whether it promotes memorable content and experiences can all influence learning outcomes. Here, the Revinax® Handbook content library, displayed in an immersive virtual reality application in first-person point-of-view (IVRA-FPV), is contrasted with third-person point-of-view (IVRA-TPV) presented through VR headsets to an individual and through computer displays to many individuals; augmented reality (AR) is also evaluated as an emerging advancement in the field.


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Kazuto Nakashima ◽  
Yumi Iwashita ◽  
Ryo Kurazume

Automatic analysis of our daily lives and activities through a first-person lifelog camera provides us with opportunities to improve our life rhythms or to support our limited visual memories. Notably, to express these visual experiences, the task of generating captions from first-person lifelog images has been actively studied in recent years. First-person images approximate what users actually see, but their visual cues alone are not enough to express the user's context, since the images are limited by the user's intentions. Our challenge is to generate lifelog captions using a meta-perspective called "fourth-person vision", a novel concept that complementarily exploits visual information from the first-, second-, and third-person perspectives. First, we assume human–robot symbiotic scenarios that provide a second-person perspective from a camera mounted on the robot and a third-person perspective from a camera fixed in the symbiotic room. To validate our approach in this scenario, we collect perspective-aware lifelog videos and corresponding caption annotations. Subsequently, we propose a multi-perspective image captioning model composed of an image-wise salient region encoder, an attention module that adaptively fuses the salient regions, and a caption decoder that generates scene descriptions. We demonstrate that our proposed model based on the fourth-person concept can greatly improve captioning performance over single- and double-perspective models.
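The proposed model has three named parts: a salient region encoder, an adaptive attention-based fusion module, and a caption decoder. The sketch below shows that overall structure in PyTorch; the layer sizes, the soft-attention formulation, and the single-layer LSTM decoder are illustrative choices, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class PerspectiveFusionCaptioner(nn.Module):
    """Sketch of a multi-perspective captioner: salient region features from
    the first-, second-, and third-person views are fused by soft attention,
    then decoded into a caption."""

    def __init__(self, feat_dim=2048, hidden=512, vocab=10000):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden)   # region encoder head
        self.attn = nn.Linear(hidden, 1)          # scores each salient region
        self.embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.LSTM(hidden * 2, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, regions, captions):
        # regions: (B, R, feat_dim) salient regions pooled across all three
        # perspectives; captions: (B, T) token ids for teacher forcing.
        h = torch.tanh(self.proj(regions))        # (B, R, hidden)
        w = torch.softmax(self.attn(h), dim=1)    # adaptive fusion weights
        ctx = (w * h).sum(dim=1)                  # (B, hidden) fused context
        emb = self.embed(captions)                # (B, T, hidden)
        ctx_seq = ctx.unsqueeze(1).expand(-1, emb.size(1), -1)
        dec, _ = self.decoder(torch.cat([emb, ctx_seq], dim=-1))
        return self.out(dec)                      # (B, T, vocab) logits
```

A forward pass with `regions` of shape (batch, n_regions, 2048) and token ids of shape (batch, T) returns per-step vocabulary logits suitable for cross-entropy training.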


Author(s):  
Amanda J. Haskins ◽  
Jeff Mentch ◽  
Thomas L. Botch ◽  
Caroline E. Robertson

Vision is an active process. Humans actively sample their sensory environment via saccades, head turns, and body movements. Yet, little is known about active visual processing in real-world environments. Here, we exploited recent advances in immersive virtual reality (VR) and in-headset eye-tracking to show that active viewing conditions impact how humans process complex, real-world scenes. Specifically, we used quantitative, model-based analyses to compare which visual features participants prioritize over others while encoding a novel environment in two experimental conditions: active and passive. In the active condition, participants used head-mounted VR displays to explore 360° scenes from a first-person perspective via self-directed motion (saccades and head turns). In the passive condition, 360° scenes were passively displayed to participants within the VR headset while they were head-restricted. Our results show that signatures of top-down attentional guidance increase in active viewing conditions: active viewers disproportionately allocate their attention to semantically relevant scene features, as compared with passive viewers. We also observed increased signatures of exploratory behavior in eye movements, such as quicker, more entropic fixations during active as compared with passive viewing conditions. These results have broad implications for studies of visual cognition, suggesting that active viewing influences every aspect of gaze behavior – from the way we move our eyes to what we choose to attend to – as we construct a sense of place in a real-world environment.

Significance Statement: Eye-tracking in immersive virtual reality offers an unprecedented opportunity to study human gaze behavior under naturalistic viewing conditions without sacrificing experimental control. Here, we advanced this new technique to show how humans deploy attention as they encode a diverse set of 360°, real-world scenes, actively explored from a first-person perspective using head turns and saccades. Our results build on classic studies in psychology, showing that active, as compared with passive, viewing conditions fundamentally alter perceptual processing. Specifically, active viewing conditions increase information-seeking behavior in humans, producing faster, more entropic fixations, which are disproportionately deployed to scene areas that are rich in semantic meaning. In addition, our results offer key benchmark measurements of gaze behavior in 360°, naturalistic environments.
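One of the reported signatures, "more entropic fixations", can be made concrete as Shannon entropy of fixation positions over a spatial grid. The sketch below shows that computation; it is one plausible operationalization, not necessarily the measure used in the paper.

```python
import numpy as np

def fixation_entropy(fix_x, fix_y, n_bins=16):
    """Shannon entropy (bits) of fixation positions over a spatial grid.

    Higher values mean fixations are spread more widely across the scene.
    fix_x, fix_y: fixation coordinates normalized to [0, 1].
    """
    hist, _, _ = np.histogram2d(fix_x, fix_y, bins=n_bins,
                                range=[[0, 1], [0, 1]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]  # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())

# Demo: widely spread ("active") vs. tightly clustered ("passive") fixations.
rng = np.random.default_rng(1)
active = rng.uniform(0, 1, size=(2, 200))
passive = rng.normal(0.5, 0.05, size=(2, 200)).clip(0, 1)
print(fixation_entropy(*active), fixation_entropy(*passive))
```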

