Experiencing interior environments: New approaches for the immersive display of large-scale point cloud data

Author(s):  
Ross Tredinnick ◽  
Markus Broecker ◽  
Kevin Ponto
2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Baoyun Guo ◽  
Jiawen Wang ◽  
Xiaobin Jiang ◽  
Cailin Li ◽  
Benya Su ◽  
...  

Due to the memory limitations and limited computing power of consumer-level computers, suitable methods are needed to achieve 3D surface reconstruction of large-scale point cloud data. A method based on the divide-and-conquer approach is proposed. First, a kd-tree index was created for the point cloud data. Then, a multicore parallel Delaunay triangulation algorithm was used to triangulate the point cloud data in the leaf nodes. Finally, the complete 3D mesh model was obtained by constrained Delaunay tetrahedralization based on a piecewise linear system and graph cut. The proposed method performed surface reconstruction of the point cloud on a multicore parallel computing architecture, in which memory release and reallocation were implemented to reduce memory occupation and improve running efficiency while ensuring the quality of the triangular mesh. The proposed algorithm was compared with two classical surface reconstruction algorithms on multiple groups of point cloud data, and an applicability experiment was carried out; the results verify the effectiveness and practicability of the proposed approach.
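The divide-and-conquer pipeline described above (kd-tree partitioning followed by per-leaf Delaunay construction) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names are invented, the kd-tree is a simple recursive median split, and the final stitching step (constrained Delaunay tetrahedralization with graph cut) is omitted.

```python
import numpy as np
from scipy.spatial import Delaunay  # assumed available for per-leaf triangulation

def kd_leaves(points, idx, depth=0, leaf_size=64):
    """Recursively median-split point indices along alternating axes
    until each leaf holds at most leaf_size points."""
    if len(idx) <= leaf_size:
        return [idx]
    axis = depth % points.shape[1]
    order = idx[np.argsort(points[idx, axis])]
    mid = len(order) // 2
    return (kd_leaves(points, order[:mid], depth + 1, leaf_size)
            + kd_leaves(points, order[mid:], depth + 1, leaf_size))

def per_leaf_delaunay(points, leaf_size=64):
    """Tetrahedralize each kd-tree leaf independently; the leaves are
    mutually independent, so this step parallelizes across cores.
    A real pipeline would then stitch leaf meshes together."""
    leaves = kd_leaves(points, np.arange(len(points)), leaf_size=leaf_size)
    return [(idx, Delaunay(points[idx])) for idx in leaves]
```

Because each leaf is processed independently, the per-leaf step maps naturally onto a multicore worker pool, and leaf buffers can be freed as soon as their local mesh is emitted, which is the memory release/reallocation idea mentioned in the abstract.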


Author(s):  
K. Liu ◽  
J. Boehm

Point cloud data plays a significant role in various geospatial applications, as it conveys plentiful information that can be used for different types of analysis. Semantic analysis, an important example, aims to label points as belonging to different categories; in machine learning, this problem is called classification. In addition, processing point data is becoming more and more challenging due to the growing data volume. In this paper, we address point data classification in a big data context. The popular cluster computing framework Apache Spark is used throughout the experiments, and the promising results suggest a great potential of Apache Spark for large-scale point data processing.
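The reason point classification scales well on a cluster framework like Spark is that labeling is embarrassingly parallel: each partition of points can be classified independently. The sketch below illustrates that pattern in plain numpy with a nearest-centroid classifier standing in for a trained model; the function names are hypothetical and no Spark API is used, but `classify_partitioned` mirrors what a `mapPartitions`-style job would do per partition.

```python
import numpy as np

def classify_chunk(chunk, centroids):
    """Label each point by its nearest class centroid (a stand-in for
    applying a trained classifier inside one partition)."""
    d = np.linalg.norm(chunk[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def classify_partitioned(points, centroids, n_chunks=8):
    """Split the cloud into independent chunks and classify each one
    separately, mimicking per-partition work in a cluster job."""
    chunks = np.array_split(points, n_chunks)
    return np.concatenate([classify_chunk(c, centroids) for c in chunks])
```

Because no chunk depends on another, the same logic distributes across executors with no coordination beyond the final concatenation of labels.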


2019 ◽  
Vol 8 (8) ◽  
pp. 343 ◽  
Author(s):  
Li ◽  
Hasegawa ◽  
Nii ◽  
Tanaka

Digital archiving of three-dimensional cultural heritage assets has increased the demand for visualization of large-scale point clouds of cultural heritage assets acquired by laser scanning. We proposed a fused transparent visualization method that visualizes a point cloud of a cultural heritage asset in its environment using a photographic image as the background. We also proposed lightness adjustment and color enhancement methods to deal with the reduced visibility caused by the fused visualization. We applied the proposed method to a laser-scanned point cloud of a highly valued festival float with complex inner and outer structures. Experimental results demonstrate that the proposed method enables high-quality transparent visualization of the cultural asset in its surrounding environment.
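The core of a fused transparent visualization of this kind is compositing semi-transparent point colors over the photographic background, plus a correction for the visibility loss that blending introduces. The sketch below is a simplified per-pixel model under assumed conventions (RGB in [0, 1], a uniform alpha, and a plain multiplicative lightness gain); the actual paper's lightness adjustment and color enhancement are more elaborate.

```python
import numpy as np

def fuse_transparent(point_rgb, alpha, background_rgb, lightness_gain=1.0):
    """Alpha-blend rendered point colors over a photographic background,
    then apply a simple lightness gain to counter the visibility drop
    caused by transparency. All colors are arrays of RGB in [0, 1]."""
    blended = alpha * point_rgb + (1.0 - alpha) * background_rgb
    return np.clip(blended * lightness_gain, 0.0, 1.0)
```

With alpha near 0 the background photograph dominates and the asset becomes hard to see, which is exactly the regime where a lightness or color boost on the point contribution is needed.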


2021 ◽  
Vol 13 (13) ◽  
pp. 2476
Author(s):  
Hiroshi Masuda ◽  
Yuichiro Hiraoka ◽  
Kazuto Saito ◽  
Shinsuke Eto ◽  
Michinari Matsushita ◽  
...  

With the use of terrestrial laser scanning (TLS) in forest stands, surveys can now obtain dense point cloud data. However, the data size, i.e., the number of points, often reaches billions or more, exceeding the random access memory (RAM) limits of common computers, and processing times often become unacceptably long. Thus, in this paper, we present a new method for efficiently extracting stem traits from huge point cloud data obtained by TLS, without subdividing or downsampling the point clouds. In this method, each point cloud is converted into a wireframe model by connecting neighboring points on the same continuous surface, and three-dimensional points on stems are resampled as cross-sectional points of the wireframe model in an out-of-core manner. Since the data size of the section points is much smaller than that of the original point clouds, stem traits can be calculated from the section points on a common computer. With this method, traits of 1381 tree stems were calculated from 3.6 billion points in ~20 min on a common computer. To evaluate the accuracy of the method, eight targeted trees were cut down and sliced at 1-m intervals; actual stem traits were then compared to those calculated from the point clouds. The experimental results showed that the efficiency and accuracy of the proposed method are sufficient for practical use in various fields, including forest management and forest research.
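Once stem cross-sections have been resampled at fixed heights, a standard way to derive a trait such as stem diameter is to fit a circle to each thin horizontal slice of points. The sketch below shows that step only, assuming z-up coordinates in meters; the function names are illustrative and the circle fit is the classic algebraic (Kåsa) least-squares fit, not necessarily what the paper uses.

```python
import numpy as np

def slice_points(points, z_center, thickness=0.05):
    """Select the xy-coordinates of points lying within a thin
    horizontal slice centered at height z_center."""
    mask = np.abs(points[:, 2] - z_center) < thickness / 2.0
    return points[mask, :2]

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + c for (cx, cy, c),
    where c = r^2 - cx^2 - cy^2; returns (cx, cy, r)."""
    A = np.column_stack([2.0 * xy[:, 0], 2.0 * xy[:, 1], np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)
```

Because each slice holds only a few hundred points regardless of the full cloud's size, this per-section computation stays cheap even when the original scan contains billions of points, which is the point of the out-of-core resampling.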


2021 ◽  
Vol 87 (7) ◽  
pp. 479-484
Author(s):  
Yu Hou ◽  
Ruifeng Zhai ◽  
Xueyan Li ◽  
Junfeng Song ◽  
Xuehan Ma ◽  
...  

Three-dimensional reconstruction from a single image has excellent future prospects, and the use of neural networks for three-dimensional reconstruction has achieved remarkable results. However, most current point-cloud-based three-dimensional reconstruction networks are trained on synthetic data sets and do not generalize well. Based on the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) data set of large-scale scenes, this article proposes a method for processing real data sets. The data set produced in this work can better train our network model and realize point cloud reconstruction from a single picture of the real world. Finally, the constructed point clouds correspond well to the underlying three-dimensional shapes, and the proposed method overcomes, to a certain extent, the uneven point distribution characteristic of light detection and ranging scans.
