A User Study on MR Remote Collaboration Using Live 360 Video

Author(s):  
Gun A. Lee ◽  
Theophilus Teo ◽  
Seungwon Kim ◽  
Mark Billinghurst
2021 ◽  
Author(s):  
Hye Jin Kim

Telepresence systems enable people to feel present in a remote space while their bodies remain in their local space. To enhance telepresence, the remote environment needs to be captured and visualised in an immersive way. For instance, 360-degree videos (360-videos) shown on head-mounted displays (HMDs) provide high-fidelity telepresence in a remote place. Mixed reality (MR) in 360-videos enables interaction with virtual objects blended into the captured remote environment, but it allows telepresence only for a single user wearing an HMD. For this reason, it is limited when multiple users want to experience telepresence together and collaborate naturally within a teleported space.

This thesis presents TeleGate, a novel multi-user teleportation platform for remote collaboration in an MR space. TeleGate provides "semi-teleportation" into the MR space using large-scale displays, acting as a bridge between the local physical communication space and the remote collaboration space created by MR with captured 360-videos. The proposed platform enables multi-user semi-teleportation for performing collaborative tasks in the remote MR collaboration (MRC) space while allowing natural communication between collaborators in the same local physical space.

We implemented a working prototype of TeleGate and conducted a user study to evaluate the concept of semi-teleportation. We measured spatial presence and social presence while participants performed remote collaborative tasks in the MRC space, and we also explored different control mechanisms within the platform in the remote MR collaboration scenario.

In conclusion, TeleGate enabled multiple co-located users to semi-teleport together using large-scale displays for remote collaboration in MR 360-videos.
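A hint of the projection such a platform needs (our assumption; the abstract itself does not detail the rendering): a large-scale display showing a captured 360-video has to map each shared view direction into the equirectangular video frame. A minimal sketch:

```python
# Minimal sketch (an assumption, not TeleGate's implementation) of sampling an
# equirectangular 360-video frame: map a unit view direction to (u, v) texture
# coordinates in [0, 1], which a display renderer would use to look up pixels.
import numpy as np

def equirect_uv(direction):
    """Map a unit view direction (x, y, z) to equirectangular (u, v)."""
    x, y, z = direction / np.linalg.norm(direction)
    u = np.arctan2(x, -z) / (2 * np.pi) + 0.5  # longitude -> horizontal coord
    v = 0.5 - np.arcsin(y) / np.pi             # latitude  -> vertical coord
    return u, v

# Looking straight ahead (-Z) lands in the center of the frame.
print(equirect_uv(np.array([0.0, 0.0, -1.0])))  # -> (0.5, 0.5)
```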


2020 ◽  
Vol 32 (2) ◽  
pp. 153-169
Author(s):  
Peng Wang ◽  
Xiaoliang Bai ◽  
Mark Billinghurst ◽  
Shusheng Zhang ◽  
Weiping He ◽  
...  

Abstract This paper investigates the effect of using augmented reality (AR) annotations and two different gaze visualizations, head pointer (HP) and eye gaze (EG), in an AR system for remote collaboration on physical tasks. First, we developed a spatial AR remote collaboration platform that supports sharing the remote expert's HP or EG cues. The prototype system was then evaluated in a user study comparing three conditions for sharing non-verbal cues, (1) a cursor pointer (CP), (2) HP and (3) EG, with respect to task performance, workload assessment and user experience. We found a clear difference between the three conditions in performance time, but no significant difference between the HP and EG conditions. For perceived collaboration quality, the HP/EG interfaces were rated statistically significantly higher than the CP interface, while there was no significant difference in workload assessment between the three conditions. We used low-cost head tracking for the HP cue and found that it served as an effective referential pointer. This implies that in some circumstances HP can be a good proxy for EG in remote collaboration. Head pointing is more accessible and cheaper than dedicated eye-tracking hardware and paves the way for multi-modal interaction based on HP and gesture in AR remote collaboration.
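To illustrate why low-cost head tracking can serve as a referential pointer, here is a minimal sketch (our own, not the paper's code) of an HP cue: cast a ray along the head's forward axis, intersect it with a planar task surface, and share the hit point with the remote collaborator. The pose and plane representations are assumptions.

```python
# Minimal head-pointer (HP) sketch: ray from the head pose along its forward
# axis, intersected with a task plane. The hit point is the shared cue.
import numpy as np

def head_pointer_hit(head_pos, head_rot, plane_point, plane_normal):
    """head_pos: (3,) position; head_rot: (3,3) rotation matrix;
    plane_point/plane_normal define the task surface. Returns hit or None."""
    forward = head_rot @ np.array([0.0, 0.0, -1.0])  # assume -Z is "forward"
    denom = float(np.dot(plane_normal, forward))
    if abs(denom) < 1e-6:
        return None  # gaze parallel to the surface: no intersection
    t = float(np.dot(plane_normal, plane_point - head_pos)) / denom
    return head_pos + t * forward if t > 0 else None  # None if behind viewer

# Head at the origin, looking straight at a wall 2 m away along -Z.
print(head_pointer_hit(np.zeros(3), np.eye(3),
                       np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0])))
```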


2008 ◽  
Vol 66 (5) ◽  
pp. 318-332 ◽  
Author(s):  
Jaka Sodnik ◽  
Christina Dicke ◽  
Sašo Tomažič ◽  
Mark Billinghurst

2021 ◽  
Author(s):  
Marius Fechter ◽  
Benjamin Schleich ◽  
Sandro Wartzack

Abstract Virtual and augmented reality allow the use of natural user interfaces, such as realistic finger interaction, even for purposes that were previously dominated by the WIMP paradigm. This new form of interaction is particularly suitable for applications involving manipulation tasks in 3D space, such as CAD assembly modeling. The objective of this paper is to evaluate the suitability of natural interaction for CAD assembly modeling in virtual reality. An advantage of natural interaction over conventional operation by computer mouse would indicate development potential for the user interfaces of current CAD applications. Our approach is based on two main elements. First, a novel natural user interface for realistic finger interaction enables the user to interact with virtual objects as if they were physical ones. Second, an algorithm automatically detects constraints between CAD components based solely on their geometry and spatial location. To assess the usability of the natural CAD assembly modeling approach in comparison with the assembly procedure in current WIMP-operated CAD software, we present a comparative user study. Results show that the VR method with natural finger interaction significantly outperforms the desktop-based CAD application in terms of efficiency and ease of use.
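To make the second element concrete, below is a minimal sketch (an illustration, not the paper's algorithm) of geometry-based constraint detection: two cylindrical faces whose axes are nearly parallel and nearly coincident suggest a concentric mating constraint, as between a pin and a hole. The tolerances and axis representation are assumptions.

```python
# Minimal sketch of detecting a "concentric" constraint from geometry alone.
import numpy as np

ANGLE_TOL = np.deg2rad(2.0)  # assumed angular tolerance between axes
DIST_TOL = 0.5               # assumed max distance between axes (mm)

def detect_concentric(axis_a, point_a, axis_b, point_b):
    """Each cylinder is given as a unit axis direction and a point on the axis.
    Returns True if the two axes nearly coincide."""
    axis_a = axis_a / np.linalg.norm(axis_a)
    axis_b = axis_b / np.linalg.norm(axis_b)
    # Axes must be parallel or anti-parallel within tolerance.
    angle = np.arccos(min(1.0, abs(float(np.dot(axis_a, axis_b)))))
    if angle > ANGLE_TOL:
        return False
    # Perpendicular distance from point_b to the line (point_a, axis_a).
    offset = point_b - point_a
    perp = offset - np.dot(offset, axis_a) * axis_a
    return float(np.linalg.norm(perp)) < DIST_TOL

# A pin and a hole offset only along their shared axis are concentric.
print(detect_concentric(np.array([0., 0., 1.]), np.zeros(3),
                        np.array([0., 0., 1.]), np.array([0., 0., 10.])))  # True
```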


Author(s):  
Robin Horst ◽  
Ramtin Naraghi-Taghi-Off ◽  
Linda Rau ◽  
Ralf Dörner

Abstract Every Virtual Reality (VR) experience has to end at some point. While concepts already exist for designing transitions that take users into a virtual world, their return to the physical world should be considered as well, since it is part of the overall VR experience. We call the latter outro-transitions. In contrast to the offboarding of VR experiences, which takes place after taking off the VR hardware (e.g., HMDs), outro-transitions are still part of the immersive experience. Such transitions occur more frequently when VR is experienced periodically and only for short times. One example where transition techniques are necessary is an auditorium where the audience has individual VR headsets available, for example in a presentation using PowerPoint slides together with brief VR experiences sprinkled between the slides. The audience must put on and take off HMDs frequently, every time they switch from common presentation media to VR and back. In such a one-to-many VR scenario, it is challenging for presenters to manage the process of multiple people coming back from the virtual to the physical world at once. Direct communication may be constrained while VR users are wearing an HMD, so presenters need a tool to signal VR users to stop the VR session and switch back to the slide presentation. Virtual visual cues can help presenters or other external entities (e.g., automated/scripted events) to request that VR users end a VR session. Such transitions become part of the overall experience of the audience and thus must be considered. This paper explores visual cues serving as outro-transitions from a virtual world back to the physical world and their utility in enabling presenters to request that VR users end a VR session. We propose and investigate eight transition techniques, focusing on their usage in short consecutive VR experiences and including both established and novel techniques. The transition techniques are evaluated in a user study to draw conclusions on the effects of outro-transitions on the overall experience and presence of participants. We also take into account how long an outro-transition may take and how comfortable our participants perceived the proposed techniques to be. The study points out that participants preferred non-interactive outro-transitions over interactive ones, except for a transition that allowed VR users to communicate with presenters. Furthermore, we explore the presenter-VR user relation within a presentation scenario that uses short VR experiences. The study indicates that involving presenters who can stop a VR session was not merely tolerated but preferred by our participants.


2021 ◽  
Vol 11 (13) ◽  
pp. 6047
Author(s):  
Soheil Rezaee ◽  
Abolghasem Sadeghi-Niaraki ◽  
Maryam Shakeri ◽  
Soo-Mi Choi

A lack of required data resources is one of the challenges of bringing Augmented Reality (AR) services to users, whereas the amount of spatial information produced by people is increasing daily. This research aims to design a personalized AR-based tourist system that retrieves big data according to users' demographic contexts in order to enrich the AR data source in tourism. The research is conducted in two main steps. First, the type of tourist attraction a user is interested in is predicted from the user's demographic contexts, which include age, gender, and education level, using a machine learning method. Second, the right data for the user are extracted from the big data by considering time, distance, popularity, and the neighborhood of the tourist places, using the VIKOR and SWARA decision-making methods. The results show about 6% better performance of the decision tree in predicting the type of tourist attraction compared to the SVM method. In addition, the user study of the system shows overall participant satisfaction of about 55% in terms of ease of use and about 56% in terms of the system's usefulness.
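A minimal sketch of the first step described above, using toy, made-up data: a decision tree that predicts a preferred attraction type from age, gender, and education level. The feature encoding and labels are our assumptions for illustration, not the paper's dataset.

```python
# Minimal sketch: predict a tourist-attraction type from demographic context.
from sklearn.tree import DecisionTreeClassifier

# Toy rows: [age, gender (0/1), education level (0-2)] -> preferred attraction.
X = [[23, 0, 1], [34, 1, 2], [61, 0, 0], [29, 1, 2], [45, 0, 1], [19, 1, 0]]
y = ["museum", "park", "historic site", "park", "museum", "park"]

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(model.predict([[30, 1, 2]]))  # e.g. ['park'] for a 30-year-old graduate
```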


Author(s):  
Bernardo Breve ◽  
Stefano Cirillo ◽  
Mariano Cuofano ◽  
Domenico Desiato

Abstract Gestural expressiveness plays a fundamental role in interaction with people, environments, animals, things, and so on. Several emerging application domains could therefore exploit the interpretation of movements to support their critical design processes. To this end, new forms of expressing people's perceptions, as in the case of music, could help their interpretation. In this paper, we investigate the user perception associated with the interpretation of sounds, highlighting how sounds can be exploited to help users adapt to a specific environment. We present a novel algorithm for mapping human movements into MIDI music. The algorithm has been implemented in a system that integrates a module for real-time tracking of movements with a sample-based synthesizer that uses different types of filters to modulate frequencies. The system has been evaluated through a user study in which several users participated in a room experience, yielding significant results about their perceptions with respect to the environment they were immersed in.
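As a rough illustration of such a movement-to-MIDI mapping (our assumption; the abstract does not specify the rules), the sketch below maps normalized hand height to a pitch on a pentatonic scale and movement speed to note velocity.

```python
# Minimal sketch: map tracked movement features to a MIDI (note, velocity) pair.
SCALE = [60, 62, 64, 67, 69]  # C-major pentatonic MIDI note numbers

def movement_to_midi(height_norm, speed_norm):
    """height_norm, speed_norm in [0, 1]: height picks pitch, speed sets velocity."""
    note = SCALE[min(int(height_norm * len(SCALE)), len(SCALE) - 1)]
    velocity = max(1, min(127, int(speed_norm * 127)))
    return note, velocity

print(movement_to_midi(0.8, 0.5))  # -> (69, 63): a high, medium-loud note
```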

