BridgedReality: A Toolkit Connecting Physical and Virtual Spaces through Live Holographic Point Cloud Interaction

2021 ◽  
Author(s):  
Mark Armstrong ◽  
Lawrence Quest ◽  
Yun Suen Pai ◽  
Kai Kunze ◽  
Kouta Minamizawa
Author(s):  
Zhou Zhang ◽  
Mingshao Zhang ◽  
Yizhe Chang ◽  
Sven K. Esche ◽  
Constantin Chassapis

A virtual space (VS) is an indispensable component of a virtual environment (VE) in virtual reality (VR). Usually, it is created with general tools and skills that are independent of the users’ specific applications and intents. Creating a VS by surveying the real world with traditional measuring tools or by modeling virtual features in CAD software involves many steps and is therefore time-consuming and complicated. This renders the construction of VEs difficult, impairs their flexibility, and hampers their widespread use. In this paper, an efficient method for creating VSs with a handheld camera is introduced. In this approach, the camera is used as a measuring tool that scans the real scene and captures the corresponding surface information. This information is then used to generate a virtual 3D model through a series of data-processing steps. First, the camera’s pose is tracked in order to locate the points of the scene’s surface; these surface points form a point cloud. Then, the point cloud is meshed and the mesh elements are textured automatically one by one. Unfortunately, the virtual 3D model resulting from this procedure represents an impenetrable solid, so collision detection would prevent avatars from entering the VS. Therefore, an approach for eliminating this restriction is proposed here. Finally, a game-based virtual laboratory (GBVL) for an undergraduate mechanical engineering class was developed to demonstrate the feasibility of the proposed methodology. The model format used in Garry’s Mod (GMod), in which the GBVL was implemented, is also found in other VEs, so the method proposed here can be straightforwardly generalized to other VE implementations.
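The scanning step described above can be sketched minimally: given a tracked camera pose and a depth map, each pixel is back-projected into the world frame, and the resulting points accumulate into the surface point cloud. This is only an illustrative NumPy sketch under standard pinhole-camera assumptions, not the paper's implementation; the function name `backproject` and the toy inputs are hypothetical.

```python
import numpy as np

def backproject(depth, K, R, t):
    """Back-project a depth map into a world-frame point cloud.

    depth : (H, W) array of depths along the camera z-axis
    K     : (3, 3) camera intrinsic matrix
    R, t  : camera-to-world rotation (3, 3) and translation (3,)
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates (u, v, 1), one row per pixel
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T          # normalized camera rays
    cam_pts = rays * depth.reshape(-1, 1)    # scale by depth -> camera frame
    world_pts = cam_pts @ R.T + t            # rigid transform -> world frame
    return world_pts

# Toy example: identity intrinsics and pose, constant depth of 2 units
K = np.eye(3)
depth = np.full((2, 2), 2.0)
cloud = backproject(depth, K, np.eye(3), np.zeros(3))
```

In a full pipeline of the kind the abstract outlines, clouds from many tracked poses would be concatenated, then meshed and textured.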


2016 ◽  
Vol 136 (8) ◽  
pp. 1078-1084
Author(s):  
Shoichi Takei ◽  
Shuichi Akizuki ◽  
Manabu Hashimoto

Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated into a ULS must be small and lightweight, which reduces the density of the collected scanning points and complicates registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images with laser point clouds that converts the problem of registering point cloud data with image data into one of matching feature points between two images. First, a point cloud is selected and used to produce an intensity image. Next, corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. Experimental results show that the proposed method achieves higher registration accuracy and faster fusion than existing approaches, demonstrating its effectiveness.
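The first step, producing an intensity image from a point cloud, can be sketched as a simple rasterization: bin the planimetric coordinates into a grid and keep the mean return intensity per cell. This NumPy sketch uses assumed conventions (top-down orthographic projection, mean-per-cell aggregation) and is not the authors' code.

```python
import numpy as np

def intensity_image(points, intensities, res=1.0):
    """Rasterize a LiDAR point cloud into a top-down intensity image.

    points      : (N, 2) planimetric (x, y) coordinates
    intensities : (N,) return-intensity values
    res         : grid cell size, in the same units as points
    """
    # Shift to the origin and bin each point into a grid cell
    ij = np.floor((points - points.min(axis=0)) / res).astype(int)
    H, W = ij.max(axis=0) + 1
    sums = np.zeros((H, W))
    counts = np.zeros((H, W))
    np.add.at(sums, (ij[:, 0], ij[:, 1]), intensities)
    np.add.at(counts, (ij[:, 0], ij[:, 1]), 1)
    # Mean intensity per occupied cell; empty cells stay 0
    return np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)

pts = np.array([[0.2, 0.3], [0.8, 0.1], [1.5, 1.5]])
inten = np.array([10.0, 30.0, 50.0])
img = intensity_image(pts, inten, res=1.0)
```

Feature points (e.g., SIFT-like descriptors) could then be detected in this image and matched against the optical image to solve for the exterior orientation parameters, as the abstract describes.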


2020 ◽  
Vol 28 (7) ◽  
pp. 1618-1625
Author(s):  
Fu-qun ZHAO

2014 ◽  
Vol 24 (3) ◽  
pp. 651-662
Author(s):  
Feng ZENG ◽  
Tong YANG ◽  
Shan YAO

2018 ◽  
Vol 30 (4) ◽  
pp. 642
Author(s):  
Guichao Lin ◽  
Yunchao Tang ◽  
Xiangjun Zou ◽  
Qing Zhang ◽  
Xiaojie Shi ◽  
...  
