Real-time mixed-reality telepresence via 3D reconstruction with HoloLens and commodity depth sensors

2019 ◽  
Vol 13 (03) ◽  
pp. 311-328
Author(s):  
Michał Joachimczak ◽  
Juan Liu ◽  
Hiroshi Ando

We study how mixed reality (MR) telepresence can enhance long-distance human interaction and how altering the 3D representation of a remote person can be used to modulate stress and anxiety during social interactions. To do so, we developed an MR telepresence system employing commodity depth sensors and Microsoft’s HoloLens. A textured, polygonal 3D model of a person was reconstructed in real time and transmitted over the network for rendering at a remote location on the HoloLens. In this study, we used a mock job interview paradigm to induce stress in human subjects interacting with an interviewer presented as an MR hologram. Participants were exposed to three different types of real-time reconstructed virtual holograms of the interviewer: a natural-sized 3D reconstruction (NR), a miniature 3D reconstruction (SR) and a 2D-display representation (LCD). Participants reported their subjective experience through questionnaires, while their biophysical responses were recorded. We found that the size of the 3D representation of the remote interviewer had a significant effect on participants’ stress levels and their sense of presence. The questionnaire data showed that the NR condition induced more stress and presence than the SR condition and differed significantly from the LCD condition. We also found consistent patterns in the biophysical data.
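As a rough illustration of the capture-and-stream pipeline described above, the sketch below back-projects a depth frame from a commodity depth sensor into a vertex buffer and sends it, together with the colour texture, over a TCP socket. The intrinsics, function names and wire format are assumptions for illustration; the paper does not publish its implementation.

```python
import socket
import struct
import numpy as np

def depth_to_vertices(depth_mm, fx=525.0, fy=525.0, cx=None, cy=None):
    """Back-project a depth image (millimetres) into an Nx3 vertex buffer
    using assumed pinhole intrinsics; real intrinsics come from the sensor."""
    h, w = depth_mm.shape
    cx = w / 2.0 if cx is None else cx
    cy = h / 2.0 if cy is None else cy
    ys, xs = np.mgrid[0:h, 0:w]
    z = depth_mm.astype(np.float32) / 1000.0           # millimetres -> metres
    x = (xs - cx) * z / fx
    y = (ys - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def send_frame(sock, vertices, color_rgb):
    """Length-prefixed binary frame: float32 vertices followed by the RGB texture."""
    payload = vertices.astype(np.float32).tobytes() + color_rgb.tobytes()
    sock.sendall(struct.pack("!I", len(payload)) + payload)
```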


2021 ◽  
Vol 20 (3) ◽  
pp. 1-22
Author(s):  
David Langerman ◽  
Alan George

High-resolution, low-latency applications in computer vision are ubiquitous in today’s world of mixed-reality devices. These innovations provide a platform that can leverage improving depth-sensor and embedded-accelerator technology to enable higher-resolution, lower-latency processing of 3D scenes using depth-upsampling algorithms. This research demonstrates that filter-based upsampling algorithms are feasible for mixed-reality applications using low-power hardware accelerators. The authors parallelized and evaluated a depth-upsampling algorithm on two different devices: a reconfigurable-logic FPGA embedded within a low-power SoC, and a fixed-logic embedded graphics processing unit. They demonstrate that both accelerators can meet the real-time requirement of 11 ms latency for mixed-reality applications.
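For readers unfamiliar with filter-based depth upsampling, the sketch below implements plain joint bilateral upsampling in NumPy, one common member of that family: a low-resolution depth map is upsampled under the guidance of a high-resolution colour (here grayscale) image. The article does not state which filter was parallelized, so this illustrates only the class of algorithms, not the authors’ accelerated implementation.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, radius=2, sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-res depth map using a high-res grayscale guide in [0, 1]."""
    H, W = guide_hr.shape
    h, w = depth_lr.shape
    scale_y, scale_x = h / H, w / W
    out = np.zeros((H, W), dtype=np.float32)
    for y in range(H):
        for x in range(W):
            ly, lx = y * scale_y, x * scale_x          # position in low-res coordinates
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    sy, sx = int(round(ly)) + dy, int(round(lx)) + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        # Spatial weight, measured in low-res pixel units.
                        ws = np.exp(-((sy - ly) ** 2 + (sx - lx) ** 2) / (2 * sigma_s ** 2))
                        # Range weight from the high-res guide image.
                        gy = min(int(sy / scale_y), H - 1)
                        gx = min(int(sx / scale_x), W - 1)
                        wr = np.exp(-((guide_hr[y, x] - guide_hr[gy, gx]) ** 2) / (2 * sigma_r ** 2))
                        num += ws * wr * depth_lr[sy, sx]
                        den += ws * wr
            out[y, x] = num / den if den > 0 else 0.0
    return out
```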


2020 ◽  
Vol 10 (2) ◽  
Author(s):  
Fazliaty Edora Fadzli ◽  
Ajune Wanis Ismail

Mixed Reality (MR) is a technology that brings virtual elements into the real-world environment, aiming to enrich reality by immersing virtual content in real-world space, and it has improved steadily as display technologies have advanced. In an MR collaborative interface, local and remote users work together on a collaborative task while experiencing an immersive environment in the cooperative application. User telepresence is an immersive form of telepresence in which a reconstruction of a human appears in real life. To date, producing full telepresence of a life-size human body can require high network transmission bandwidth. This paper therefore explores a robust real-time 3D reconstruction method for MR telepresence. It discusses previous work on reconstructing a full-body human and existing research that has proposed reconstruction methods for telepresence. Besides the 3D reconstruction method, the paper also presents our recent findings on an MR framework for transporting a full-body human from a local location to a remote location. MR telepresence is discussed, along with the robust 3D reconstruction method implemented with a user-telepresence feature in which the user experiences an accurate 3D representation of a remote person. The paper ends with the discussion and results: MR telepresence with a robust 3D reconstruction method for realizing user telepresence.
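To make the bandwidth concern concrete, the back-of-the-envelope calculation below estimates the bit rate of streaming an uncompressed textured full-body reconstruction. The vertex count, texture size and frame rate are assumed values for illustration only, not figures from the paper.

```python
# Assumed per-frame payload of an uncompressed textured reconstruction.
vertices = 50_000                  # full-body mesh vertices per frame (assumption)
bytes_per_vertex = 3 * 4           # x, y, z as 32-bit floats
texture_bytes = 1024 * 1024 * 3    # 1024x1024 RGB texture (assumption)
fps = 30                           # frame rate (assumption)

frame_bytes = vertices * bytes_per_vertex + texture_bytes
mbps = frame_bytes * fps * 8 / 1e6
print(f"~{mbps:.0f} Mbit/s uncompressed")   # roughly 900 Mbit/s with these assumptions
```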


Impact ◽  
2020 ◽  
Vol 2020 (2) ◽  
pp. 9-11
Author(s):  
Tomohiro Fukuda

Mixed reality (MR) is rapidly becoming a vital tool, not just in gaming, but also in education, medicine, construction and environmental management. The term refers to systems in which computer-generated content is superimposed over objects in a real-world environment across one or more sensory modalities. Although most of us have heard of the use of MR in computer games, it also has applications in military and aviation training, as well as tourism, healthcare and more. In addition, it has potential in architecture and design, where buildings can be superimposed on existing locations to render plans as 3D visualisations. However, one major challenge that remains in MR development is real-time occlusion: hiding 3D virtual objects behind real objects. Dr Tomohiro Fukuda, based at the Division of Sustainable Energy and Environmental Engineering, Graduate School of Engineering at Osaka University in Japan, is an expert in this field. Researchers led by Dr Fukuda are tackling the issue of occlusion in MR. They are currently developing an MR system that achieves real-time occlusion for outdoor landscape-design simulation by harnessing deep learning, specifically a semantic segmentation technique. This methodology can be used to automatically estimate the visual environment before and after construction projects.
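A minimal sketch of how a semantic-segmentation mask can drive occlusion in an MR composite is given below: pixels labelled as real-world occluders keep the camera image, while everything else shows the rendered virtual model. The class list and mask source are assumptions for illustration; this is not a reproduction of Fukuda's system.

```python
import numpy as np

def composite_with_occlusion(camera_rgb, virtual_rgb, seg_mask, occluder_classes):
    """camera_rgb, virtual_rgb: HxWx3 uint8 images; seg_mask: HxW integer class labels."""
    # True where the segmentation says a real object should stay in front.
    occluder = np.isin(seg_mask, occluder_classes)
    out = virtual_rgb.copy()
    out[occluder] = camera_rgb[occluder]   # real pixels occlude the virtual content
    return out
```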


2016 ◽  
Vol 153 ◽  
pp. 37-54 ◽  
Author(s):  
Antonio Agudo ◽  
Francesc Moreno-Noguer ◽  
Begoña Calvo ◽  
J.M.M. Montiel
