A User Study on View-sharing Techniques for One-to-Many Mixed Reality Collaborations

Author(s):  
Geonsun Lee ◽  
HyeongYeop Kang ◽  
JongMin Lee ◽  
JungHyun Han
2020 ◽  
Vol 4 (4) ◽  
pp. 78


Author(s):
Andoni Rivera Pinto ◽  
Johan Kildal ◽  
Elena Lazkano

In the context of industrial production, a worker who wants to program a robot using the hand-guidance technique needs the robot to be available for programming and not in operation, which means that production with that robot is stopped during that time. A way around this constraint is to perform the same manual-guidance steps on a holographic representation of the robot's digital twin, using augmented reality technologies. However, this approach suffers from the lack of tangibility of the visual holograms that the user tries to grab. We present an interface in which some of that tangibility is provided through ultrasound-based mid-air haptic actuation. We report a user study evaluating the impact of such haptic feedback on a pick-and-place task involving the wrist of a holographic robot arm, and we found the feedback to be beneficial.
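
The abstract does not include implementation details, but the core loop is easy to picture: track the user's hand, and when it comes close enough to the holographic wrist, drive the ultrasound array's focal point to that position. A minimal sketch under that assumption (the `array` driver interface, `Vec3` type, and grab radius are all hypothetical, not the authors' implementation):

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def update_haptics(array, wrist_pose: Vec3, hand_pose: Vec3, grab_radius=0.05):
    """Drive a mid-air ultrasound focal point to the holographic wrist
    while the user's hand is close enough to 'grab' it.

    'array' stands in for a hypothetical ultrasound-array driver exposing
    set_focal_point / disable; the 5 cm grab radius is illustrative.
    """
    dx = hand_pose.x - wrist_pose.x
    dy = hand_pose.y - wrist_pose.y
    dz = hand_pose.z - wrist_pose.z
    dist = (dx * dx + dy * dy + dz * dz) ** 0.5
    if dist < grab_radius:
        # Render tactile feedback at the virtual wrist position.
        array.set_focal_point((wrist_pose.x, wrist_pose.y, wrist_pose.z))
    else:
        # No contact: stop emitting.
        array.disable()
```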


Author(s):  
Steve Beitzel ◽  
Josiah Dykstra ◽  
Paul Toliver ◽  
Jason Youzwak

We investigate the feasibility of using Microsoft HoloLens, a mixed reality device, to visually analyze network capture data and locate anomalies. We developed MINER, a prototype application that visualizes details from network packet captures as 3D stereogram charts. MINER employs a novel approach to time-series visualization that extends the time dimension across two axes, thereby taking advantage of the immersive 3D space available via the HoloLens. Users navigate the application through eye gaze and hand gestures to view summary and detailed bar graphs. Callouts display additional detail based on the user's immediate gaze. In a user study, volunteers used MINER to locate network attacks in a dataset from the 2013 VAST Challenge. We compared the time and effort with those of a similar test using traditional tools on a desktop computer. Our findings suggest that network anomaly analysis with the HoloLens achieved effectiveness, efficiency, and satisfaction comparable to the desktop tools. We describe the user metrics and feedback collected from these experiments, lessons learned, and suggested future work.
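
The abstract does not specify the exact mapping, but "extending the time dimension across two axes" can be pictured as folding a 1D time index into a grid, with a minor time unit along one spatial axis and a major unit along the other, and bar height encoding the value. A minimal sketch of such a fold (the `fold_time_series` helper and the minute/hour split are illustrative assumptions, not MINER's actual layout):

```python
import numpy as np

def fold_time_series(values, minor_steps):
    """Fold a 1D time series into a 2D grid so time spans two axes.

    values      -- 1D array of samples ordered by time
    minor_steps -- samples per row (the minor time axis, e.g. minutes)

    Returns (xs, zs, heights): each sample's position on the two time
    axes and its bar height for a 3D bar chart.
    """
    idx = np.arange(len(values))
    xs = idx % minor_steps        # position along the minor time axis
    zs = idx // minor_steps       # position along the major time axis
    heights = np.asarray(values, dtype=float)
    return xs, zs, heights

# Example: 48 hours of per-minute packet counts folded into a
# 60-column (minutes) by 48-row (hours) field of 3D bars.
counts = np.random.poisson(lam=20, size=48 * 60)
xs, zs, hs = fold_time_series(counts, minor_steps=60)
```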


2021 ◽  
Author(s):  
Hye Jin Kim

Telepresence systems enable people to feel present in a remote space while their bodies remain in their local space. To enhance telepresence, the remote environment needs to be captured and visualised in an immersive way. For instance, 360-degree videos (360-videos) shown on head-mounted displays (HMDs) provide high-fidelity telepresence in a remote place. Mixed reality (MR) in 360-videos enables interaction with virtual objects blended into the captured remote environment, but it allows telepresence only for a single user wearing an HMD. For this reason, it is limited when multiple users want to experience telepresence together and collaborate naturally within a teleported space.

This thesis presents TeleGate, a novel multi-user teleportation platform for remote collaboration in an MR space. TeleGate provides "semi-teleportation" into the MR space using large-scale displays, acting as a bridge between the local physical communication space and the remote collaboration space created by MR with captured 360-videos. The proposed platform enables multi-user semi-teleportation for collaborative tasks in the remote MR collaboration (MRC) space while allowing natural communication between collaborators in the same local physical space.

We implemented a working prototype of TeleGate and conducted a user study to evaluate the concept of semi-teleportation, measuring spatial presence and social presence while participants performed remote collaborative tasks in the MRC space. We also explored different control mechanisms within the platform in the remote MR collaboration scenario.

In conclusion, TeleGate enabled multiple co-located users to semi-teleport together using large-scale displays for remote collaboration in MR 360-videos.


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4006
Author(s):  
Razeen Hussain ◽  
Manuela Chessa ◽  
Fabio Solari

Cybersickness is one of the major roadblocks to the widespread adoption of mixed reality devices. Prolonged exposure to these devices, especially virtual reality devices, can cause users to feel discomfort and nausea, spoiling the immersive experience. Incorporating spatial blur into stereoscopic 3D stimuli has been shown to reduce cybersickness. In this paper, we develop a technique, inspired by the human physiological system, for incorporating spatial blur into VR systems. The technique makes use of concepts from foveated imaging and depth-of-field. It can be applied to any eye-tracker-equipped VR system as a post-processing step that provides an artifact-free scene. We verify the usefulness of the proposed system by conducting a user study on cybersickness evaluation, using a custom-built rollercoaster VR environment developed in Unity and an HTC Vive Pro Eye headset. The Simulator Sickness Questionnaire was used to measure induced sickness, while gaze and heart-rate data were recorded for quantitative analysis. The experimental analysis highlighted the aptness of our foveated depth-of-field effect at reducing cybersickness in virtual environments, lowering sickness scores by approximately 66%.
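
The paper applies the blur as a post-processing shader; as a rough illustration of the idea, a per-pixel blur radius could combine an eccentricity (foveation) term around the gaze point with a depth-of-field term around the fixated depth. A numpy sketch under that assumption (all gains, the fovea size, and the additive combination are illustrative, not the paper's values):

```python
import numpy as np

def blur_radius_map(depth, gaze_px, gaze_depth,
                    fovea_px=80.0, ecc_gain=0.02, dof_gain=1.5):
    """Per-pixel blur radius combining foveation and depth-of-field.

    depth      -- HxW array of scene depth per pixel (metres)
    gaze_px    -- (row, col) of the tracked gaze point in the image
    gaze_depth -- depth at the gaze point, i.e. the focal distance
    fovea_px   -- radius (pixels) kept sharp around the gaze point
    ecc_gain   -- blur added per pixel of eccentricity beyond the fovea
    dof_gain   -- blur added per metre of depth away from the focus
    """
    h, w = depth.shape
    rows, cols = np.mgrid[0:h, 0:w]
    # Eccentricity term: sharp inside the fovea, blur grows outside it.
    ecc = np.hypot(rows - gaze_px[0], cols - gaze_px[1])
    foveal_blur = np.maximum(0.0, ecc - fovea_px) * ecc_gain
    # Depth-of-field term: blur grows with distance from the fixated depth.
    dof_blur = np.abs(depth - gaze_depth) * dof_gain
    return foveal_blur + dof_blur
```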


2021 ◽  
Vol 2 ◽  
Author(s):  
Gonzalo Suárez ◽  
Sungchul Jung ◽  
Robert W. Lindeman

This article reports on a study evaluating the effectiveness of virtual human (VH) role-players as leadership-training tools in two computer-generated environments, virtual reality (VR) and mixed reality (MR), compared to a traditional training method, real human (RH) role-players in a real-world (RW) environment. We developed an experimental training platform to assess the three conditions: RH role-players in RW (RH-RW), VH role-players in VR (VH-VR), and VH role-players in MR (VH-MR), during two practice-type opportunities, namely a pre-session and a post-session. We conducted a user study in which 30 participants played the role of leaders interacting with either RHs or VHs before and after receiving a leadership-training session. We then investigated (1) whether VH role-players were as effective as RH role-players during the pre- and post-sessions, and (2) the impact that the human type (RH, VH) in conjunction with the environment type (RW, VR, MR) had on the outcomes. We also collected user-reaction and learning data from the overall training experience. The results showed a consistent increase in performance from pre- to post-session in all three conditions, but no significant difference between VHs and RHs. Interestingly, the VH-MR condition had a stronger influence on performance and task engagement than the VH-VR and RH-RW conditions. Based on our findings, we conclude that VH role-players can be as effective as RH role-players in supporting the practice of leadership skills, with VH-MR potentially being the best method given its effectiveness.


2021 ◽  
Vol 5 (ISS) ◽  
pp. 1-23
Author(s):  
Jim Smiley ◽  
Benjamin Lee ◽  
Siddhant Tandon ◽  
Maxime Cordeil ◽  
Lonni Besançon ◽  
...  

Tangible controls, especially sliders and rotary knobs, have been explored in a wide range of interactive applications for desktop and immersive environments. Studies have shown that they support greater precision and provide proprioceptive benefits, such as support for eyes-free interaction. However, such controls tend to be designed expressly for specific applications. We draw inspiration from a bespoke controller for immersive data visualisation, but decompose this design into a simple, wireless, composable unit featuring two actuated sliders and a rotary encoder. Through these controller units, we explore the interaction opportunities around actuated sliders, supporting precise selection, infinite scrolling, adaptive data representations, and rich haptic feedback, all within a mode-less interaction space. We demonstrate the controllers' use for simple, ad hoc desktop interaction, before moving on to more complex, multi-dimensional interactions in VR and AR. We show that the flexibility and composability of these actuated controllers provides an emergent design space that covers the range of interactive dynamics for visual analysis. In a user study involving pairs performing collaborative visual analysis tasks in mixed reality, our participants were able to easily compose rich visualisations, generate insights, and discuss their findings.
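
One plausible reading of "infinite scrolling" on an actuated slider is that the logical value accumulates relative knob motion, and the motor recenters the knob whenever it nears a physical end stop, so travel never runs out. A minimal sketch of that behaviour (the class name, thresholds, and travel range are illustrative assumptions, not the authors' implementation):

```python
class InfiniteSlider:
    """Unbounded logical value over a physically bounded, actuated slider."""

    def __init__(self, travel=1.0, margin=0.1):
        self.travel = travel          # physical travel range [0, travel]
        self.margin = margin          # recenter when within this margin
        self.physical = travel / 2.0  # current knob position
        self.value = 0.0              # unbounded logical value

    def on_move(self, new_pos):
        """Called with the new physical knob position after user motion."""
        # Accumulate only the relative motion into the logical value.
        self.value += new_pos - self.physical
        self.physical = new_pos
        # Near an end stop: silently recenter so travel never runs out.
        if new_pos < self.margin or new_pos > self.travel - self.margin:
            self.recenter()

    def recenter(self):
        """Drive the motorised fader back to mid-travel (no value change)."""
        # In hardware this would command the actuator to move the knob.
        self.physical = self.travel / 2.0
```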


2022 ◽  
Vol 29 (2) ◽  
pp. 1-39
Author(s):  
Mark McGill ◽  
Stephen Brewster ◽  
Daniel Pires De Sa Medeiros ◽  
Sidney Bovet ◽  
Mario Gutierrez ◽  
...  

This article discusses the Keyboard Augmentation Toolkit (KAT), which supports the creation of virtual keyboards that can be used both for standalone input (e.g., for mid-air text entry) and to augment physically tracked keyboards/surfaces in mixed reality. In a user study, we first examine the impact and pitfalls of visualising shortcuts on a tracked physical keyboard, exploring the utility of virtual per-keycap displays. Supported by this and other recent developments in XR keyboard research, we then describe the design, development, and evaluation-by-demonstration of KAT. KAT simplifies the creation of virtual keyboards (optionally bound to a tracked physical keyboard) that support: enhanced display, with 2D/3D per-key content that conforms to the virtual key bounds; enhanced interactivity, supporting extensible per-key states such as tap, dwell, touch, and swipe; flexible keyboard mappings that can encapsulate groups of interaction and display elements, e.g., enabling application-dependent interactions; and flexible layouts, allowing the virtual keyboard to merge with and augment a physical keyboard, or switch to an alternate layout (e.g., mid-air) based on need. Through these features, KAT will assist researchers in the prototyping, creation, and replication of XR keyboard experiences, fundamentally altering the keyboard's form and function.
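
The abstract names KAT's concepts (per-key display content, extensible per-key states, groups of bindings) without showing its API. A hypothetical sketch of how such per-key bindings might be modelled (all names and fields here are invented for illustration and are not KAT's actual interface):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class KeyBinding:
    """One virtual key: display content plus per-state handlers.

    The state names 'tap' and 'dwell' mirror the per-key states the
    abstract lists; everything else is hypothetical.
    """
    label: str                                   # 2D/3D content shown on the keycap
    handlers: Dict[str, Callable[[], None]] = field(default_factory=dict)

@dataclass
class KeyboardMapping:
    """A named group of key bindings, e.g. one application's shortcut set."""
    name: str
    keys: Dict[str, KeyBinding] = field(default_factory=dict)

# A hypothetical application-dependent mapping: show a shortcut label on a
# tracked physical keyboard and react to dwell as well as tap.
editing = KeyboardMapping(name="editor-shortcuts")
editing.keys["c"] = KeyBinding(
    label="Copy",
    handlers={"tap": lambda: print("copy"),
              "dwell": lambda: print("show copy preview")},
)
```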


Author(s):  
João Cartucho ◽  
David Shapira ◽  
Hutan Ashrafian ◽  
Stamatia Giannarou

Abstract

Purpose: In the last decade, there has been a great effort to bring mixed reality (MR) into the operating room to assist surgeons intraoperatively. However, progress towards this goal is still at an early stage. The aim of this paper is to propose an MR visualisation platform which projects multiple imaging modalities to assist intraoperative surgical guidance.

Methodology: In this work, an MR visualisation platform has been developed for the Microsoft HoloLens. The platform contains three visualisation components, namely a 3D organ model, volumetric data, and tissue morphology captured with intraoperative imaging modalities. Furthermore, a set of novel interactive functionalities has been designed, including scrolling through volumetric data and adjusting the virtual objects' transparency. A pilot user study has been conducted to evaluate the usability of the proposed platform in the operating room. The participants were allowed to interact with the visualisation components and test the different functionalities, and each surgeon answered a questionnaire on the usability of the platform and provided feedback and suggestions.

Results: The analysis of the surgeons' scores showed that the 3D model is the most popular MR visualisation component and that neurosurgery is the most relevant speciality for this platform. The majority of the surgeons found the proposed visualisation platform intuitive and would use it in their operating rooms for intraoperative surgical guidance. Our platform has several promising potential clinical applications, including vascular neurosurgery.

Conclusion: The presented pilot study verified the potential of the proposed visualisation platform and its usability in the operating room. Our future work will focus on enhancing the platform by incorporating the surgeons' suggestions and conducting extensive evaluation with a large group of surgeons.
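
The two interactive functionalities named in the methodology are straightforward to sketch: stepping through a stack of volumetric slices and clamping a transparency value. A minimal illustration (function names and parameters are assumptions, not the platform's actual code):

```python
import numpy as np

def scroll_slice(volume, axis, index, delta):
    """Step through a 3D volume one slice at a time, clamped to bounds.

    volume -- 3D array of intensities (e.g., CT/MRI voxels)
    axis   -- which anatomical axis to slice along (0, 1, or 2)
    index  -- current slice index
    delta  -- signed number of steps from a scroll gesture
    """
    index = int(np.clip(index + delta, 0, volume.shape[axis] - 1))
    return index, np.take(volume, index, axis=axis)

def set_transparency(alpha, delta):
    """Adjust a virtual object's transparency, clamped to [0, 1]."""
    return float(np.clip(alpha + delta, 0.0, 1.0))
```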

