Virtual, Real or Mixed: How Surrounding Objects Influence the Sense of Embodiment in Optical See-Through Experiences?

2021 ◽  
Vol 2 ◽  
Author(s):  
Adélaïde Genay ◽  
Anatole Lécuyer ◽  
Martin Hachet

This paper studies the sense of embodiment of virtual avatars in Mixed Reality (MR) environments visualized with an Optical See-Through display. We investigated whether the content of the surrounding environment could impact the user’s perception of their avatar, when embodied from a first-person perspective. To do so, we conducted a user study comparing the sense of embodiment toward virtual robot hands in three environment contexts containing increasing amounts of virtual content: real content only, mixed virtual/real content, and virtual content only. Taken together, our results suggest that users tend to accept virtual hands as their own more easily when the environment contains both virtual and real objects (mixed context), allowing them to better merge the two “worlds”. We discuss these results and raise research questions for future work to consider.

Author(s):  
Martijn Kors ◽  
Gabriele Ferri ◽  
Erik D. van der Spek ◽  
Cas Ketel ◽  
Ben Schouten

Persuasive games are designed for a variety of objectives, from marketing to healthcare and activism. Some of the more socially aware ones cast players as members of disenfranchised minorities, prompting them to see what those groups see. In parallel, designers have started to leverage system-immersion to enable players to temporarily feel like another person, to sense what they sense. From these converging perspectives, we hypothesize a still-uncharted space of opportunities at the crossroads of games, empathy, persuasion, and system-immersion. We explored this space by designing A Breathtaking Journey, a mixed-reality game providing a first-person perspective of a refugee’s journey. A qualitative study was conducted to tease out empathy-arousing characteristics, provide insights on empathic experiences, and contribute three design opportunities: visceral engagement, reflective moments, and affective appeals.


Author(s):  
Steve Beitzel ◽  
Josiah Dykstra ◽  
Paul Toliver ◽  
Jason Youzwak

We investigate the feasibility of using Microsoft HoloLens, a mixed reality device, to visually analyze network capture data and locate anomalies. We developed MINER, a prototype application to visualize details from network packet captures as 3D stereogram charts. MINER employs a novel approach to time-series visualization that extends the time dimension across two axes, thereby taking advantage of the immersive 3D space available via the HoloLens. Users navigate the application through eye gaze and hand gestures to view summary and detailed bar graphs. Callouts display additional detail based on the user’s immediate gaze. In a user study, volunteers used MINER to locate network attacks in a dataset from the 2013 VAST Challenge. We compared the time and effort with a similar test using traditional tools on a desktop computer. Our findings suggest that network anomaly analysis with the HoloLens achieved comparable effectiveness, efficiency, and satisfaction. We describe user metrics and feedback collected from these experiments, lessons learned, and suggested future work.
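The core visualization idea described above — extending time across two axes — can be sketched in a few lines. This is a hypothetical illustration of the concept, not the authors' MINER code: per-second samples are folded so that one time axis (e.g., minutes) runs along x and the time within that unit (e.g., seconds) runs along z, with the measured value as bar height.

```python
# Hypothetical sketch of folding a time-series across two spatial axes
# (an assumption about the technique, not the MINER implementation).

def fold_time_axis(samples, inner_period=60):
    """Map a flat list of per-second samples to (outer, inner, value) bars,
    e.g. (minute index, second-within-minute, packets/sec)."""
    bars = []
    for t, value in enumerate(samples):
        outer = t // inner_period   # e.g. minute index (x axis)
        inner = t % inner_period    # e.g. second within the minute (z axis)
        bars.append((outer, inner, value))
    return bars

# 130 seconds of traffic -> bars spread over three "minute" rows
bars = fold_time_axis([1.0] * 130)
print(bars[-1])  # (2, 9, 1.0)
```

Folding this way lets a long capture fit into a compact 3D grid, so periodic anomalies line up visually along one axis.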


2010 ◽  
Vol 19 (6) ◽  
pp. 499-512 ◽  
Author(s):  
Carlos Andujar

The world-in-miniature metaphor (WIM) allows users to select, manipulate, and navigate efficiently in virtual environments. In addition to the first-person perspective offered by typical virtual reality (VR) applications, the WIM offers a second dynamic viewpoint through a hand-held miniature copy of the environment. In this paper we explore different strategies to allow the user to interact with the miniature replica at multiple levels of scale. Unlike competing approaches, we support complex indoor environments by explicitly handling occlusion. We discuss algorithms for selecting the part of the scene to be included in the replica, and for providing a clear view of the region of interest. Key elements of our approach include an algorithm to recompute the active region from a subdivision of the scene into cells, and a view-dependent algorithm to cull occluding geometry. Our cutaway algorithm is based on a small set of slicing planes roughly oriented along the main occluding surfaces, along with depth-based revealing for nonplanar geometry. We present the results of a user study showing that our technique clearly outperforms competing approaches on spatial tasks performed in densely occluded scenes.
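The plane-based cutaway described above can be illustrated with a minimal sketch (an assumption about the general technique, not the paper's implementation): geometry lying on the viewer's side of any slicing plane is culled so that the region of interest remains visible.

```python
# Hedged sketch of slicing-plane cutaway culling. Plane normals are assumed
# to point toward the viewer, so positive signed distance means "occluding".

def signed_distance(point, plane):
    """plane = ((nx, ny, nz), d) encoding normal·x + d = 0."""
    (nx, ny, nz), d = plane
    x, y, z = point
    return nx * x + ny * y + nz * z + d

def cull_occluders(points, slicing_planes):
    """Keep only points on or behind every slicing plane (non-occluding)."""
    return [p for p in points
            if all(signed_distance(p, pl) <= 0.0 for pl in slicing_planes)]

# One plane z = 0 with normal +z: geometry with z > 0 (toward the viewer)
# is cut away, revealing what lies behind.
plane = ((0.0, 0.0, 1.0), 0.0)
kept = cull_occluders([(0, 0, -1.0), (0, 0, 2.0)], [plane])
print(kept)  # [(0, 0, -1.0)]
```

In a real renderer this test would run per fragment or per triangle, but the plane test itself is the same.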


2021 ◽  
Vol 2 ◽  
Author(s):  
David R. Labbe ◽  
Kean Kouakoua ◽  
Rachid Aissaoui ◽  
Sylvie Nadeau ◽  
Cyril Duclos

When immersed in virtual reality, users who view their body as a co-located virtual avatar that reflects their movements generally develop a sense of embodiment whereby they perceive the virtual body to be their own. One aspect of the sense of embodiment is the feeling of agency over the avatar, i.e., the feeling that one is producing the movements of the avatar. In contexts such as physical rehabilitation, telepresence, and gaming, it may be useful to induce a strong sense of agency in users who cannot produce movements or for whom it is not practical to do so. Being able to feel agency over a walking avatar without having to produce walking movements could be especially valuable. Muscle vibrations have been shown to produce the proprioceptive perception of movements, without any movement on the part of the user. The objectives of the current study were to: (1) determine if the addition of lower-limb muscle vibrations with gait-like patterns to a walking avatar can increase the illusory perception of walking in healthy individuals who are standing still; (2) compare the effects of the complexity of the vibration patterns and of their synchronicity on the sense of agency and on the illusory perception of walking. Thirty participants viewed a walking avatar from a first-person perspective, either without muscle vibrations or with one of four different patterns of vibrations. These five conditions were presented pairwise in a two-alternative forced-choice paradigm, and then individually, after which participants answered an embodiment questionnaire. The displacement of the participants’ center of pressure was measured throughout the experiment. The results show that all patterns of proprioceptive stimulation increased the sense of agency to a similar degree. However, the condition in which the proprioceptive feedback was realistic and temporally aligned with the avatar’s leg movements led to significantly larger anteroposterior sway of the center of pressure.
The frequency of this sway matched the cadence of the avatar’s gait. Thus, congruent and realistic proprioceptive stimulation increases the feeling of agency, the illusory perception of walking and the motor responses of the participants when viewing a walking avatar from a first-person perspective.
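Matching the sway frequency to the avatar's cadence, as reported above, amounts to estimating the dominant frequency of the anteroposterior center-of-pressure signal. The sketch below is an illustrative analysis (not the authors' code), using a simple zero-crossing count on a mean-removed signal; a real pipeline would more likely use a spectral estimate.

```python
# Illustrative frequency estimate for a center-of-pressure (CoP) signal:
# zero crossings of the mean-removed signal, divided by two per unit time.
import math

def dominant_frequency(signal, sample_rate_hz):
    mean = sum(signal) / len(signal)
    centered = [s - mean for s in signal]
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    duration_s = len(signal) / sample_rate_hz
    return crossings / (2.0 * duration_s)

# Simulated 1 Hz anteroposterior sway sampled at 100 Hz for 10 s,
# standing in for a CoP oscillation entrained to a ~1 stride/s cadence.
fs = 100
sway = [math.sin(2 * math.pi * 1.0 * t / fs) for t in range(fs * 10)]
print(dominant_frequency(sway, fs))  # close to the simulated 1 Hz
```

Comparing this estimate against the avatar's stride frequency is one simple way to quantify the entrainment the study describes.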


2021 ◽  
Vol 11 (4) ◽  
pp. 521
Author(s):  
Jonathan Erez ◽  
Marie-Eve Gagnon ◽  
Adrian M. Owen

Investigating human consciousness based on brain activity alone is a key challenge in cognitive neuroscience. One of its central facets, the ability to form autobiographical memories, has been investigated through several fMRI studies that have revealed a pattern of activity across a network of frontal, parietal, and medial temporal lobe regions when participants view personal photographs, as opposed to when they view photographs from someone else’s life. Here, our goal was to attempt to decode when participants were re-experiencing an entire event, captured on video from a first-person perspective, relative to a very similar event experienced by someone else. Participants were asked to sit passively in a wheelchair while a researcher pushed them around a local mall. A small wearable camera was mounted on each participant, in order to capture autobiographical videos of the visit from a first-person perspective. One week later, participants were scanned while they passively viewed different categories of videos; some were autobiographical, while others were not. A machine-learning model was able to successfully classify the video categories above chance, both within and across participants, suggesting that there is a shared mechanism differentiating autobiographical experiences from non-autobiographical ones. Moreover, the classifier brain maps revealed that the fronto-parietal network, mid-temporal regions and extrastriate cortex were critical for differentiating between autobiographical and non-autobiographical memories. We argue that this novel paradigm captures the true nature of autobiographical memories, and is well suited to patients (e.g., with brain injuries) who may be unable to respond reliably to traditional experimental stimuli.
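The decoding result described above — classifying autobiographical versus non-autobiographical videos above chance with cross-validation — can be illustrated with a toy sketch. This is not the study's fMRI pipeline; it is a minimal nearest-centroid classifier with leave-one-out cross-validation over made-up "activity pattern" vectors.

```python
# Toy leave-one-out nearest-centroid decoding (an illustration of the
# cross-validated classification idea, not the authors' analysis).

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def loocv_accuracy(data):
    """data: list of (features, label) pairs."""
    correct = 0
    for i, (x, label) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        # distance from the held-out pattern to each class centroid
        dists = {lab: dist2(x, centroid([v for v, l in rest if l == lab]))
                 for lab in {l for _, l in rest}}
        if min(dists, key=dists.get) == label:
            correct += 1
    return correct / len(data)

autobio = [([1.0 + 0.1 * i, 0.0], "autobio") for i in range(5)]
other = [([0.0, 1.0 + 0.1 * i], "other") for i in range(5)]
print(loocv_accuracy(autobio + other))  # 1.0 on this separable toy data
```

With real fMRI patterns the classes overlap, so accuracy is tested statistically against the 50% chance level rather than expected to be perfect.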


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Doerte Kuhrt ◽  
Natalie R. St. John ◽  
Jacob L. S. Bellmund ◽  
Raphael Kaplan ◽  
Christian F. Doeller

Advances in virtual reality (VR) technology have greatly benefited spatial navigation research. By presenting space in a controlled manner, changing aspects of the environment one at a time or manipulating the gain from different sensory inputs, the mechanisms underlying spatial behaviour can be investigated. In parallel, a growing body of evidence suggests that the processes involved in spatial navigation extend to non-spatial domains. Here, we leverage VR technology advances to test whether participants can navigate abstract knowledge. We designed a two-dimensional quantity space—presented using a head-mounted display—to test if participants can navigate abstract knowledge using a first-person perspective navigation paradigm. To investigate the effect of physical movement, we divided participants into two groups: one walking and rotating on a motion platform, the other group using a gamepad to move through the abstract space. We found that both groups learned to navigate using a first-person perspective and formed accurate representations of the abstract space. Interestingly, navigation in the quantity space resembled behavioural patterns observed in navigation studies using environments with natural visuospatial cues. Notably, both groups demonstrated similar patterns of learning. Taken together, these results imply that both self-movement and remote exploration can be used to learn the relational mapping between abstract stimuli.
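The paradigm above treats each stimulus as a point in a two-dimensional quantity space, so navigation accuracy reduces to a distance measure. The sketch below is a hedged illustration of that idea (the dimension names and scoring are assumptions, not the authors' task code).

```python
# Hypothetical scoring for navigation in a 2D abstract quantity space:
# error is the Euclidean distance between the participant's final
# position and the target quantities.
import math

def navigation_error(position, target):
    return math.dist(position, target)

# target given as two normalized quantities (e.g. size, amount);
# position is where the participant stopped after first-person navigation
print(round(navigation_error((0.4, 0.7), (0.5, 0.5)), 3))  # 0.224
```

Comparing such error scores between the motion-platform and gamepad groups is one simple way to quantify whether physical self-movement changes learning of the abstract space.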


Philosophies ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. 5
Author(s):  
S. J. Blodgett-Ford

The phenomenon and ethics of “voting” will be explored in the context of human enhancements. “Voting” will be examined for enhanced humans with moderate and extreme enhancements. Existing patterns of discrimination in voting around the globe could continue substantially “as is” for those with moderate enhancements. For extreme enhancements, voting rights could be challenged if the very humanity of the enhanced was in doubt. Humans who were not enhanced could also be disenfranchised if certain enhancements become prevalent. Voting will be examined using a theory of engagement articulated by Professor Sophie Loidolt that emphasizes the importance of legitimization and justification by “facing the appeal of the other” to determine what is “right” from a phenomenological first-person perspective. Seeking inspiration from the Universal Declaration of Human Rights (UDHR) of 1948, voting rights and responsibilities will be re-framed from a foundational working hypothesis that all enhanced and non-enhanced humans should have a right to vote directly. Representative voting will be considered as an admittedly imperfect alternative or additional option. The framework in which voting occurs, as well as the processes, temporal cadence, and role of voting, requires participation from as diverse a group of humans as possible. Voting rights delivered by fiat to enhanced or non-enhanced humans who were excluded from participation in the design and ratification of the governance structure are not legitimate. Applying and extending Loidolt’s framework, we must recognize the urgency that demands the impossible, with openness to that universality in progress (or universality to come) that keeps being constituted from the outside.


2020 ◽  
Vol 4 (4) ◽  
pp. 78
Author(s):  
Andoni Rivera Pinto ◽  
Johan Kildal ◽  
Elena Lazkano

In the context of industrial production, a worker who wants to program a robot using the hand-guidance technique requires that the robot be available for programming and not in operation. This means that production with that robot is stopped during that time. A way around this constraint is to perform the same manual-guidance steps on a holographic representation of the robot’s digital twin, using augmented reality technologies. However, this approach is limited by the lack of tangibility of the visual holograms that the user tries to grab. We present an interface in which some of this tangibility is provided through ultrasound-based mid-air haptic actuation. We report a user study evaluating the impact that the presence of such haptic feedback has on a pick-and-place task involving the wrist of a holographic robot arm, which we found to be beneficial.

