A Robust Real-Time 3D Reconstruction Method for Mixed Reality Telepresence

2020 ◽  
Vol 10 (2) ◽  
Author(s):  
Fazliaty Edora Fadzli ◽  
Ajune Wanis Ismail

Mixed Reality (MR) is a technology that brings virtual elements into the real-world environment. MR aims to augment reality by immersing virtual content in real-world space, and it has improved steadily as display technologies have advanced. In the context of MR collaborative interfaces, local and remote users work together on a collaborative task while experiencing an immersive environment within the cooperative application. User telepresence is an immersive form of telepresence in which a reconstruction of a human appears in real life. To date, producing full telepresence of a life-size human body may require high internet transmission bandwidth. Therefore, this paper explores a robust real-time 3D reconstruction method for MR telepresence. It reviews previous work on reconstruction methods for the full human body and existing research that has proposed reconstruction methods for telepresence. Besides the 3D reconstruction method, this paper also presents our recent findings on an MR framework for transporting a full-body human from a local location to a remote location. MR telepresence is discussed, along with the robust 3D reconstruction method implemented with a user telepresence feature, in which the user experiences an accurate 3D representation of a remote person. The paper ends with the discussion and results: MR telepresence with a robust 3D reconstruction method to realize user telepresence.

2019 ◽  
Vol 2019 (1) ◽  
pp. 237-242
Author(s):  
Siyuan Chen ◽  
Minchen Wei

Color appearance models have been extensively studied for characterizing and predicting the perceived color appearance of physical color stimuli under different viewing conditions. These stimuli are either surface colors reflecting illumination or self-luminous emitting radiations. With the rapid development of augmented reality (AR) and mixed reality (MR), it is critically important to understand how the color appearance of objects produced by AR and MR is perceived, especially when these objects are overlaid on the real world. In this study, nine lighting conditions, with different correlated color temperature (CCT) levels and light levels, were created in a real-world environment. Under each lighting condition, human observers adjusted the color appearance of a virtual stimulus, which was overlaid on a real-world luminous environment, until it appeared the whitest. It was found that the CCT and light level of the real-world environment significantly affected the color appearance of the white stimulus, especially when the light level was high. Moreover, a lower degree of chromatic adaptation was found for viewing the virtual stimulus that was overlaid on the real world.
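The lower degree of chromatic adaptation reported here can be made concrete with a von Kries-style adaptation transform, in which a degree-of-adaptation factor D blends between no adaptation and complete adaptation. The sketch below is illustrative only: it assumes the CAT02 matrix from CIECAM02 and standard illuminant white points, and is not the model used in the study.

```python
import numpy as np

# CAT02 matrix from CIECAM02: maps XYZ to sharpened LMS cone responses
M_CAT02 = np.array([
    [ 0.7328,  0.4296, -0.1624],
    [-0.7036,  1.6975,  0.0061],
    [ 0.0030,  0.0136,  0.9834],
])

def von_kries_adapt(xyz, white_src, white_dst, D=1.0):
    """Adapt a stimulus from a source white point to a destination white.

    D is the degree of adaptation (0 = none, 1 = complete). The study
    above found a lower D for virtual stimuli overlaid on the real world,
    i.e. observers did not fully adapt to the ambient illumination.
    """
    lms = M_CAT02 @ np.asarray(xyz, float)
    lms_src = M_CAT02 @ np.asarray(white_src, float)
    lms_dst = M_CAT02 @ np.asarray(white_dst, float)
    # Blend complete adaptation with no adaptation according to D
    gain = D * (lms_dst / lms_src) + (1.0 - D)
    return np.linalg.solve(M_CAT02, gain * lms)

# Illustrative white points (CIE D65 daylight and illuminant A, Y = 100)
d65 = np.array([95.047, 100.0, 108.883])
ill_a = np.array([109.85, 100.0, 35.585])
adapted = von_kries_adapt(d65, d65, ill_a, D=1.0)
```

With D = 1, adapting the source white itself yields the destination white exactly; with D = 0 the stimulus is returned unchanged, so intermediate D values model partial adaptation of the kind the study observed.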


Impact ◽  
2020 ◽  
Vol 2020 (2) ◽  
pp. 9-11
Author(s):  
Tomohiro Fukuda

Mixed reality (MR) is rapidly becoming a vital tool, not just in gaming, but also in education, medicine, construction and environmental management. The term refers to systems in which computer-generated content is superimposed over objects in a real-world environment across one or more sensory modalities. Although most of us have heard of the use of MR in computer games, it also has applications in military and aviation training, as well as tourism, healthcare and more. In addition, it has the potential for use in architecture and design, where buildings can be superimposed on existing locations to render 3D representations of plans. However, one major challenge that remains in MR development is the issue of real-time occlusion: hiding 3D virtual objects behind real objects. Dr Tomohiro Fukuda, who is based at the Division of Sustainable Energy and Environmental Engineering, Graduate School of Engineering at Osaka University in Japan, is an expert in this field. His team is tackling the issue of occlusion in MR by developing an MR system that achieves real-time occlusion, harnessing deep learning for outdoor landscape design simulation via a semantic segmentation technique. This methodology can be used to automatically estimate the visual environment before and after construction projects.
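Once a segmentation network has labeled the real occluders in a camera frame, the occlusion step itself reduces to masked alpha compositing: virtual pixels are made transparent wherever a real object sits in front of them. A minimal sketch, not the authors' implementation (the function name, array shapes, and the zero-alpha-under-mask approach are assumptions):

```python
import numpy as np

def composite_with_occlusion(real_rgb, virtual_rgba, occluder_mask):
    """Overlay a rendered virtual layer on a camera frame, hiding virtual
    pixels wherever the segmentation network labeled a real occluder.

    real_rgb      : (H, W, 3) uint8 camera frame
    virtual_rgba  : (H, W, 4) uint8 rendered virtual content with alpha
    occluder_mask : (H, W) bool, True where a real object is in front
    """
    alpha = virtual_rgba[..., 3:4].astype(float) / 255.0
    # Occluded virtual pixels become fully transparent
    alpha = np.where(occluder_mask[..., None], 0.0, alpha)
    out = real_rgb * (1.0 - alpha) + virtual_rgba[..., :3] * alpha
    return out.astype(np.uint8)

# Toy 2x2 frame: opaque virtual layer, a real occluder covers the top row
real = np.zeros((2, 2, 3), np.uint8)           # black camera frame
virt = np.full((2, 2, 4), 255, np.uint8)       # white, fully opaque layer
mask = np.array([[True, True], [False, False]])
out = composite_with_occlusion(real, virt, mask)
```

In the toy frame, the top row shows the real (black) pixels because the occluder hides the virtual layer there, while the bottom row shows the virtual (white) pixels.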


2018 ◽  
Vol 8 (7) ◽  
pp. 1169 ◽  
Author(s):  
Ki-Baek Lee ◽  
Young-Joo Kim ◽  
Young-Dae Hong

This paper proposes a novel search method for a swarm of quadcopter drones. In the proposed method, inspired by swarm phenomena in nature, drones effectively look for the search target by investigating evidence from their surroundings and communicating with each other. The position update mechanism is implemented using the particle swarm optimization (PSO) algorithm, a well-known swarm-based optimization algorithm, as the swarm intelligence, together with a dynamic model of the drones that takes the real-world environment into account. In addition, the mechanism is processed in real time along with the movements of the drones. The effectiveness of the proposed method was verified through repeated test simulations, including a benchmark function optimization problem and air pollutant search problems. The results show that the proposed method is highly practical, accurate, and robust.
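The position update mechanism described above can be sketched with a textbook PSO loop, in which each "drone" is pulled toward its own best-known position and the swarm's global best. The coefficients below are common defaults rather than the paper's tuned values, and the drones' dynamic model is omitted:

```python
import random

def pso_search(fitness, dim, n_drones=10, iters=100, seed=0):
    """Minimal particle swarm optimization: each 'drone' keeps its personal
    best position and is steered toward the swarm's global best."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_drones)]
    vel = [[0.0] * dim for _ in range(n_drones)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_drones), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_drones):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Sphere benchmark function: global minimum of 0 at the origin
best, best_f = pso_search(lambda p: sum(x * x for x in p), dim=2)
```

On the sphere benchmark, the swarm converges to near the origin within the 100 iterations; in the drone setting, the fitness would instead come from sensed evidence (e.g. pollutant concentration) rather than a closed-form function.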


Author(s):  
Aatish Chandak ◽  
Arjun Aravind ◽  
Nithin Kamath

Autonomous navigation of a robot in a real-world environment is an area of interest for current researchers. Although a variety of models have been developed, problems remain with regard to the integration of sensors for navigation in an outdoor environment, such as moving obstacles and sensor and component accuracy. This paper details an attempt to develop an autonomous robot prototype using only ultrasonic sensors for sensing the environment, with GPS/GSM and a digital compass for position and localization. An algorithm for navigation based on reactive behaviour is presented. Once the robot has navigated to its final location, based on remote access by the owner, it surveys the geographical region and uploads real-time images to the owner using an API developed for the Raspberry Pi's kernel.
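A reactive navigation behaviour of the kind described maps the current sensor readings directly to a motion command, with no map or planning. The rule below is an illustrative sketch assuming three ultrasonic range readings; the threshold and command names are not taken from the prototype:

```python
def reactive_step(front, left, right, threshold=30.0):
    """One step of a simple reactive controller driven by three ultrasonic
    range readings (in cm): drive forward when the path is clear, otherwise
    turn toward the more open side, and back up if boxed in."""
    if front > threshold:
        return "forward"
    if left > threshold or right > threshold:
        return "turn_left" if left >= right else "turn_right"
    return "reverse"
```

Because the command depends only on the latest readings, the controller reacts immediately to moving obstacles, at the cost of being unable to reason about dead ends; the GPS/GSM and compass would supply the goal-directed heading between reactive steps.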


Author(s):  
Xiongfeng Peng ◽  
Liaoyuan Zeng ◽  
Wenyi Wang ◽  
Zhili Liu ◽  
Yifeng Yang ◽  
...  

Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker ◽  
Sven Bilen ◽  
Janis Terpenny ◽  
Chimay Anumba

Immersive virtual reality systems have the potential to transform the manner in which designers create prototypes and collaborate in teams. Using technologies such as the Oculus Rift or the HTC Vive, a designer can attain a sense of "presence" and "immersion" typically not experienced by traditional CAD-based platforms. However, one of the fundamental challenges of creating a high-quality immersive virtual reality experience is creating the immersive virtual reality environment itself. Typically, designers spend a considerable amount of time manually designing virtual models that replicate physical, real-world artifacts. While it is possible to import standard 3D models into these immersive virtual reality environments, such models are typically generic in nature and do not represent the designer's intent. To mitigate these challenges, the authors of this work propose the real-time translation of physical objects into an immersive virtual reality environment using readily available RGB-D sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems, such as the Microsoft Kinect, has enabled the rapid 3D reconstruction of physical environments. The authors present a methodology that employs 3D mesh reconstruction algorithms and real-time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed methodology.
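Streaming reconstructed mesh data between machines over standard TCP connections requires message framing, because TCP delivers an unstructured byte stream with no message boundaries. A minimal length-prefixed framing sketch (illustrative only; the paper does not specify its wire format):

```python
import socket
import struct

def send_frame(sock, payload: bytes):
    """Prefix each message with its 4-byte big-endian length so the
    receiver knows where one mesh update ends and the next begins."""
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n: int) -> bytes:
    """Read exactly n bytes, looping because recv may return fewer."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Round-trip a toy 'vertex buffer' (3 vertices, 3 floats each) locally
a, b = socket.socketpair()
vertices = struct.pack("!9f", 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0)
send_frame(a, vertices)
received = recv_frame(b)
a.close(); b.close()
```

The loop in `recv_exact` matters in practice: a large mesh update will arrive split across many `recv` calls, so reading until the declared length is reached is what keeps real-time updates intact.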


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Ziang Lei

3D reconstruction techniques for animated images and animation techniques for faces are important research topics in computer graphics-related fields. Traditional 3D reconstruction techniques for animated images mainly rely on expensive 3D scanning equipment and a great deal of time-consuming manual postprocessing, and they require the scanned subject to hold a fixed pose for a considerable period. In recent years, the growth of large-scale computing power in computer hardware, especially distributed computing, has made a real-time and efficient solution possible. In this paper, we propose a 3D reconstruction method for multivisual animated images based on Poisson's equation theory. Calibration theory is used to calibrate the multivisual animated images and obtain the internal and external parameters of the camera calibration module. Feature points are then extracted from the animated images of each viewpoint using a corner point detection operator, matched and corrected using the least square median method, and the 3D reconstruction of the multivisual animated images is completed. The experimental results show that the proposed method can obtain 3D reconstruction results of multivisual animation images quickly and accurately, with good real-time performance and reliability.
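The corner-point detection step can be illustrated with a Harris-style corner response computed from image gradients, a common choice of corner detection operator (the paper does not state which operator it uses, so this is an assumption):

```python
import numpy as np

def box3(a):
    """Sum each pixel's 3x3 neighborhood (zero-padded at the borders)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_response(img, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    windowed second-moment matrix of the image gradients. Large positive
    R marks corner-like feature points; negative R marks edges."""
    iy, ix = np.gradient(img.astype(float))
    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    return (sxx * syy - sxy ** 2) - k * (sxx + syy) ** 2

# Bright square on a dark background: response is positive at its corners,
# negative along its straight edges, and zero in flat regions
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

In a full pipeline, the peaks of R selected per viewpoint would feed the matching and correction stage before triangulation.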


2017 ◽  
Vol 29 (4) ◽  
pp. 649-659 ◽  
Author(s):  
Ryohsuke Mitsudome ◽  
◽  
Hisashi Date ◽  
Azumi Suzuki ◽  
Takashi Tsubouchi ◽  
...  

In order for a robot to provide service in a real-world environment, it has to navigate safely and recognize its surroundings. We participated in the Tsukuba Challenge to develop a robot with robust navigation and accurate object recognition capabilities. For navigation, we introduced ROS packages, and the robot was able to navigate without major collisions throughout the challenge. For object recognition, we used both a laser scanner and a camera to recognize a person in specific clothing, in real time and with high accuracy. In this paper, we evaluate the accuracy of the recognition and discuss how it can be improved.
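One simple way to combine a laser scanner with a camera for this kind of task is to match laser clusters to camera detections by bearing: the laser provides an accurate range, the camera provides the appearance label. The sketch below is an illustrative assumption, not the authors' pipeline (the data layout and the "target" label are invented for the example):

```python
import math

def fuse_detections(laser_clusters, camera_hits, max_angle_diff=0.1):
    """Match laser clusters (x, y in the robot frame, metres) to camera
    detections (bearing in radians, label) by comparing bearings. A cluster
    confirmed by a 'target' camera hit is reported as (range, bearing)."""
    matches = []
    for x, y in laser_clusters:
        bearing = math.atan2(y, x)
        for cam_bearing, label in camera_hits:
            if label == "target" and abs(bearing - cam_bearing) < max_angle_diff:
                matches.append((math.hypot(x, y), bearing))
    return matches

# One cluster dead ahead at 3 m confirmed by the camera; one off to the side
clusters = [(3.0, 0.0), (1.0, 2.0)]
hits = [(0.0, "target"), (0.5, "other")]
found = fuse_detections(clusters, hits)
```

Requiring agreement from both sensors is what suppresses false positives: a person-shaped laser cluster without the right clothing label, or a clothing match with no laser return at that bearing, is ignored.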

