Real-time visual representations for mobile mixed reality remote collaboration

Author(s):  
Lei Gao ◽  
Huidong Bai ◽  
Weiping He ◽  
Mark Billinghurst ◽  
Robert W. Lindeman
2021 ◽  
Author(s):  
Stephen Thompson

This thesis presents a novel system for enabling remote collaboration within a mixed reality environment. With the growing availability of virtual and augmented reality headsets, interest in improving remote collaboration has increased. Systems have been proposed that use 3D geometry or 360° video to provide remotely collaborating users with a view of the local, real-world environment. However, many such systems support only limited interaction in the local environment and couple the views of all users rather than simulating face-to-face interaction, or place the remote user in a virtual environment, losing visual realism. The presented system enables a user in a remote location to join a local user and collaborate on a task. Video from an omni-directional camera is streamed to the remote user in real time to provide a live view of the local space. The 360° video is also used to provide believable lighting when compositing virtual objects into the real world. Remote users are displayed to local users as an abstracted avatar that conveys basic body gestures and social presence. Voice chat is provided for verbal communication. The system has been evaluated for technical performance and user experience. The evaluation found that the system's performance was suitable for real-time collaboration. Remote and local users reported similar satisfaction with the system, experiencing high levels of presence, social presence and tele-presence. Shared cinematic experiences and remote presentations are suggested as possible applications to guide further development of the system.
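As a rough illustration of how a live 360° frame can drive believable lighting for composited virtual objects, the sketch below estimates an ambient colour and a dominant light direction from an equirectangular frame. This is a minimal sketch under assumed conventions, not the thesis implementation; the function name, weighting and coordinate conventions are illustrative.

```python
# Minimal sketch (assumed, not the thesis implementation): derive simple
# image-based lighting from one equirectangular 360-degree video frame.
import numpy as np

def estimate_lighting(frame: np.ndarray):
    """Estimate an ambient colour and a dominant light direction from an
    equirectangular frame (h x w x 3, float RGB in [0, 1])."""
    h, w, _ = frame.shape
    # Row latitude: top row ~ +90 degrees. Weight rows by cos(latitude) so
    # over-represented polar pixels do not dominate the averages.
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    weights = np.cos(lat)[:, None]                              # (h, 1)
    ambient = (frame * weights[..., None]).sum(axis=(0, 1)) / (weights.sum() * w)

    # Dominant light: direction of the brightest solid-angle-weighted pixel.
    luma = frame @ np.array([0.2126, 0.7152, 0.0722])           # (h, w)
    y, x = np.unravel_index(np.argmax(luma * weights), luma.shape)
    lon = (x + 0.5) / w * 2 * np.pi - np.pi
    direction = np.array([np.cos(lat[y]) * np.sin(lon),
                          np.sin(lat[y]),
                          np.cos(lat[y]) * np.cos(lon)])
    return ambient, direction
```

The ambient term can tint the virtual object's shading and the dominant direction can drive a single directional light, which is one plausible way to make composited content track the local scene's illumination.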


Impact ◽  
2020 ◽  
Vol 2020 (2) ◽  
pp. 9-11
Author(s):  
Tomohiro Fukuda

Mixed reality (MR) is rapidly becoming a vital tool, not just in gaming, but also in education, medicine, construction and environmental management. The term refers to systems in which computer-generated content is superimposed on objects in a real-world environment across one or more sensory modalities. Although most of us have heard of MR in computer games, it also has applications in military and aviation training, as well as tourism, healthcare and more. It also has potential in architecture and design, where proposed buildings can be superimposed on existing locations to render 3D visualisations of plans. However, one major challenge that remains in MR development is real-time occlusion: hiding 3D virtual objects behind real objects. Dr Tomohiro Fukuda, based at the Division of Sustainable Energy and Environmental Engineering, Graduate School of Engineering at Osaka University in Japan, is an expert in this field. Researchers led by Dr Fukuda are tackling the occlusion problem by developing an MR system that achieves real-time occlusion for outdoor landscape-design simulation, using deep-learning-based semantic segmentation. This methodology can be used to automatically estimate the visual environment before and after construction projects.
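To make the occlusion step concrete: once a segmentation network has produced a per-pixel mask of real foreground objects, virtual content can be drawn only where it is not hidden. The sketch below is an assumed, minimal illustration of that compositing step, not Fukuda's system; all names and parameters are hypothetical.

```python
# Minimal sketch (assumed) of occlusion-aware compositing: virtual pixels
# are suppressed wherever a segmentation mask marks a real occluder.
import numpy as np

def composite_with_occlusion(real_frame, virtual_layer, virtual_alpha, occluder_mask):
    """real_frame, virtual_layer: (H, W, 3) float RGB in [0, 1].
    virtual_alpha: (H, W) coverage of the rendered virtual object.
    occluder_mask: (H, W) 1.0 where a segmentation network labels a real
    foreground object (e.g. a tree or person) that should hide virtual content."""
    alpha = virtual_alpha * (1.0 - occluder_mask)   # hide occluded virtual pixels
    return virtual_layer * alpha[..., None] + real_frame * (1.0 - alpha[..., None])
```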


2021 ◽  
Vol 20 (3) ◽  
pp. 1-22
Author(s):  
David Langerman ◽  
Alan George

High-resolution, low-latency computer-vision applications are ubiquitous in today's mixed-reality devices. These innovations provide a platform that can leverage improving depth sensors and embedded accelerators to enable higher-resolution, lower-latency processing of 3D scenes using depth-upsampling algorithms. This research demonstrates that filter-based upsampling algorithms are feasible for mixed-reality applications using low-power hardware accelerators. The authors parallelized and evaluated a depth-upsampling algorithm on two different devices: a reconfigurable-logic FPGA embedded within a low-power SoC, and a fixed-logic embedded graphics processing unit. They demonstrate that both accelerators can meet the real-time requirement of 11 ms latency for mixed-reality applications.
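For context, a classic filter-based depth-upsampling method is joint bilateral upsampling, which refines a low-resolution depth map using edge information from a high-resolution guidance image. The sketch below is a deliberately naive reference version; the function, parameters and sampling conventions are illustrative assumptions, not the authors' code.

```python
# Naive reference sketch (assumed) of joint bilateral depth upsampling:
# each high-res output pixel is a weighted average of nearby low-res depth
# samples, with weights from spatial distance and guidance-image similarity.
import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, radius=2,
                             sigma_spatial=1.0, sigma_range=0.1):
    """depth_lo: (h, w) low-res depth; guide_hi: (H, W) high-res greyscale
    guidance image in [0, 1], with H = s*h and W = s*w for integer s."""
    H, W = guide_hi.shape
    h, w = depth_lo.shape
    s = H // h
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y // s, x // s            # centre in the low-res grid
            acc = norm = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ly, lx = cy + dy, cx + dx
                    if not (0 <= ly < h and 0 <= lx < w):
                        continue
                    # Spatial weight in low-res coordinates.
                    ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_spatial ** 2))
                    # Range weight from the high-res guidance image.
                    gy = min(ly * s + s // 2, H - 1)
                    gx = min(lx * s + s // 2, W - 1)
                    dg = guide_hi[y, x] - guide_hi[gy, gx]
                    wr = np.exp(-(dg * dg) / (2 * sigma_range ** 2))
                    acc += ws * wr * depth_lo[ly, lx]
                    norm += ws * wr
            out[y, x] = acc / norm
    return out
```

Each output pixel is computed independently, which is what makes this filter family amenable to the pixel-parallel FPGA and embedded-GPU implementations the paper evaluates.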


Author(s):  
Panagiotis Antoniou ◽  
George Arfaras ◽  
Niki Pandria ◽  
George Ntakakis ◽  
Emmanuil Bambatsikos ◽  
...  

2017 ◽  
Vol 2 (3) ◽  
pp. 103
Author(s):  
Uwe Rieger

With the current exponential growth in spatial data technology and mixed reality display devices, we are experiencing an increasing overlap of the physical and digital worlds. Beyond making data spatially visible, the attempt is to connect digital information with physical properties. Over the past years a number of research institutions have laid the groundwork for these developments. In contemporary architectural design, the dominant applications of data technology are graphical presentation, form finding and digital fabrication.

The arc/sec Lab for Digital Spatial Operations at the University of Auckland takes a further step. The Lab explores concepts for a new condition of buildings and urban patterns in which digital information is connected with spatial appearance and linked to material properties. The approach focuses on the step beyond digital re-presentation and digital fabrication, where data is reconnected to multi-sensory human perception and physical skills. The work at the Lab is conducted in a cross-disciplinary design environment and is based on experiential investigations. The arc/sec Lab utilizes large-scale interactive installations as the driving vehicle for exploring and communicating new dimensions in architectural space. The experiments aim to make data "touchable" and to demonstrate real-time responsive environments. In parallel, they are the starting point both for developing practice-oriented applications and for speculating on how our cities and buildings might change in the future.

The article gives an overview of the current experiments being undertaken at the arc/sec Lab. It discusses how digital technologies allow for innovation between the disciplines by introducing real-time adaptive behaviours to our built environment, and it speculates on the type of spaces we can construct when digital matter is used as a new dynamic building material.


2019 ◽  
Vol 5 ◽  
Author(s):  
Konstantinos Kotis

ARTIST is a research approach introducing novel methods for real-time, multi-entity interaction between human and non-human entities, to create reusable and optimized Mixed Reality (MR) experiences with low effort, working towards a Shared MR Experiences Ecosystem (SMRE2). As a result, ARTIST delivers high-quality MR experiences, facilitating interaction between a variety of entities in a virtual and symbiotic way within a mega, virtual and fully experiential world. Specifically, ARTIST aims to develop novel methods for low-effort (code-free) implementation and deployment of open and reusable MR content, applications and tools, introducing the novel concept of an Experience as a Trajectory (EaaT). In addition, ARTIST will provide tools for tracking, monitoring and analysing user behaviour and users' interaction with the environment and with one another, towards optimizing MR experiences by recommending their reconfiguration, either dynamically (at run time) or statically (at development time). Finally, it will provide tools for synthesizing experiences into new mega yet still reconfigurable EaaTs, enhancing them using semantically integrated data and information available in disparate, heterogeneous resources.
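One plausible reading of the EaaT concept, purely for illustration, is an MR experience modelled as a reconfigurable sequence of interaction steps that can be composed into larger, still editable trajectories. The types and fields below are assumptions for exposition, not the ARTIST project's API.

```python
# Purely illustrative sketch of "Experience as a Trajectory" (EaaT):
# an experience as a composable, reconfigurable sequence of steps.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    entity: str        # human or non-human entity involved (assumed field)
    action: str        # interaction performed at this point (assumed field)
    duration_s: float  # nominal duration of the step (assumed field)

@dataclass
class EaaT:
    name: str
    steps: List[Step] = field(default_factory=list)

    def compose(self, other: "EaaT") -> "EaaT":
        """Synthesize two trajectories into a larger, still editable one."""
        return EaaT(f"{self.name}+{other.name}", self.steps + other.steps)
```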

