Fast Loading Algorithm for Large Scale Point Cloud Data File

2013 ◽  
Vol 8 (8) ◽  
pp. 930-937
Author(s):  
Jiansheng Zhang


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Baoyun Guo ◽  
Jiawen Wang ◽  
Xiaobin Jiang ◽  
Cailin Li ◽  
Benya Su ◽  
...  

Due to the memory limitations and limited computing power of consumer-level computers, suitable methods are needed to achieve 3D surface reconstruction of large-scale point cloud data. A method based on a divide-and-conquer approach is proposed. Firstly, a kd-tree index was created for the point cloud data. Then, a multicore parallel Delaunay triangulation algorithm was used to triangulate the points in the leaf nodes. Finally, the complete 3D mesh model was obtained by constrained Delaunay tetrahedralization based on a piecewise linear system and graph cuts. The proposed method performed surface reconstruction on the point cloud in a multicore parallel computing architecture, in which memory release and reallocation were implemented to reduce memory occupation and improve running efficiency while ensuring the quality of the triangular mesh. The proposed algorithm was compared with two classical surface reconstruction algorithms on multiple groups of point cloud data, and an applicability experiment was carried out; the results verify the effectiveness and practicability of the proposed approach.
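The partition-then-triangulate idea can be sketched as follows. This is a minimal single-machine illustration, not the paper's implementation: `split_leaves` stands in for the kd-tree index (median split along the widest axis), SciPy's `Delaunay` stands in for the per-leaf triangulation, and the constrained merge/graph-cut stage is omitted entirely. All names and the leaf-size value are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def split_leaves(idx, pts, leaf_size=100):
    """Recursively median-split the index set along the widest axis,
    kd-tree style, until each leaf holds at most `leaf_size` points."""
    if len(idx) <= leaf_size:
        return [idx]
    axis = np.argmax(np.ptp(pts[idx], axis=0))      # widest spatial extent
    order = idx[np.argsort(pts[idx, axis])]          # sort leaf points on that axis
    mid = len(order) // 2
    return split_leaves(order[:mid], pts, leaf_size) + \
           split_leaves(order[mid:], pts, leaf_size)

rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))                        # toy point cloud

leaves = split_leaves(np.arange(len(cloud)), cloud)  # kd-tree leaf index sets
# Per-leaf Delaunay tetrahedralization; in the paper this step runs on
# multiple cores, and leaf memory is released after each leaf is meshed.
meshes = [Delaunay(cloud[leaf]) for leaf in leaves]
```

Because each leaf is triangulated independently, the leaves can be processed in parallel and freed as soon as their local mesh is written out, which is what keeps peak memory bounded.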


Author(s):  
K. Liu ◽  
J. Boehm

Point cloud data plays a significant role in various geospatial applications, as it conveys plentiful information that can be used for different types of analysis. Semantic analysis, one important type, aims to label points as belonging to different categories; in machine learning, this problem is called classification. In addition, processing point data is becoming more and more challenging due to the growing data volume. In this paper, we address point data classification in a big data context. The popular cluster computing framework Apache Spark is used throughout the experiments, and the promising results suggest a great potential of Apache Spark for large-scale point data processing.
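To make the classification task concrete, here is a deliberately tiny single-machine sketch: synthetic ground and canopy points are labelled by a height threshold. The data, the threshold, and the rule are all invented for illustration; the paper's actual pipeline runs real classifiers on Apache Spark, where a per-point function like `classify_by_height` would be applied via an RDD `map` over distributed partitions.

```python
import numpy as np

# Synthetic labelled cloud: ground points near z = 0, canopy points near z = 5.
rng = np.random.default_rng(1)
ground = np.c_[rng.random((200, 2)) * 10, rng.normal(0.0, 0.05, 200)]
canopy = np.c_[rng.random((200, 2)) * 10, rng.normal(5.0, 1.0, 200)]
pts = np.vstack([ground, canopy])
labels = np.r_[np.zeros(200, int), np.ones(200, int)]  # 0 = ground, 1 = canopy

def classify_by_height(points, threshold=1.0):
    """Toy per-point classifier: anything above `threshold` is non-ground."""
    return (points[:, 2] > threshold).astype(int)

pred = classify_by_height(pts)
accuracy = (pred == labels).mean()
```

The point-wise structure is what makes the problem map cleanly onto a cluster framework: each partition of points can be classified independently, so the work scales out with the number of executors.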


2019 ◽  
Vol 8 (8) ◽  
pp. 343 ◽  
Author(s):  
Li ◽  
Hasegawa ◽  
Nii ◽  
Tanaka

Digital archiving of three-dimensional cultural heritage assets has increased the demand for visualization of large-scale point clouds of cultural heritage assets acquired by laser scanning. We proposed a fused transparent visualization method that visualizes a point cloud of a cultural heritage asset in its environment using a photographic image as the background. We also proposed lightness adjustment and color enhancement methods to deal with the reduced visibility caused by the fused visualization. We applied the proposed method to a laser-scanned point cloud of a highly valued festival float with complex inner and outer structures. Experimental results demonstrate that the proposed method enables high-quality transparent visualization of the cultural asset in its surrounding environment.
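The fusion step can be sketched as simple alpha compositing with a lightness boost. This is a minimal stand-in, not the paper's method: the abstract does not give the actual lightness-adjustment or color-enhancement formulas, so the blend equation, the `lightness_gain` value, and all names here are assumptions.

```python
import numpy as np

def fuse_transparent(rendered, alpha, background, lightness_gain=1.2):
    """Alpha-blend a semi-transparent point-cloud rendering over a background
    photo, then boost lightness where points are present to offset the
    visibility loss that transparency introduces (illustrative formula)."""
    a = alpha[..., None]                                  # broadcast over RGB
    blended = a * rendered + (1 - a) * background         # standard "over" blend
    boosted = np.clip(blended * lightness_gain, 0.0, 1.0) # simple lightness gain
    return np.where(a > 0, boosted, background)           # leave pure background alone

# Toy 2x2 images in [0, 1]: a grey rendering over a darker background photo.
rendered = np.full((2, 2, 3), 0.5)
background = np.full((2, 2, 3), 0.2)
alpha = np.array([[1.0, 0.0], [0.5, 0.0]])               # per-pixel point opacity
fused = fuse_transparent(rendered, alpha, background)
```

Restricting the boost to pixels where points are actually drawn keeps the photographic background unmodified, which is the behaviour a fused visualization needs.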



2021 ◽  
Vol 13 (13) ◽  
pp. 2476
Author(s):  
Hiroshi Masuda ◽  
Yuichiro Hiraoka ◽  
Kazuto Saito ◽  
Shinsuke Eto ◽  
Michinari Matsushita ◽  
...  

With the use of terrestrial laser scanning (TLS) in forest stands, surveyors can now obtain dense point cloud data. However, the data size, i.e., the number of points, often reaches the billions or more, exceeding the random access memory (RAM) limits of common computers. Moreover, the processing time often extends beyond acceptable lengths. Thus, in this paper, we present a new method for efficiently extracting stem traits from huge point cloud data obtained by TLS, without subdividing or downsampling the point clouds. In this method, each point cloud is converted into a wireframe model by connecting neighboring points on the same continuous surface, and three-dimensional points on stems are resampled as cross-sectional points of the wireframe model in an out-of-core manner. Since the data size of the section points is much smaller than that of the original point clouds, stem traits can be calculated from the section points on a common computer. With the proposed method, stem traits for 1381 trees were calculated from 3.6 billion points in ~20 min on a common computer. To evaluate the accuracy of the method, eight target trees were cut down and sliced at 1-m intervals; actual stem traits were then compared to those calculated from the point clouds. The experimental results showed that the efficiency and accuracy of the proposed method are sufficient for practical use in various fields, including forest management and forest research.
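Once stem cross-sections have been resampled, a trait such as diameter can be recovered by fitting a circle to each section's points. The abstract does not specify the estimator used, so the following is a hedged sketch using the standard Kåsa least-squares circle fit (solving the linear system x² + y² = 2a·x + 2b·y + c for center (a, b) and radius √(c + a² + b²)); the synthetic 0.15 m stem section is invented test data.

```python
import numpy as np

def fit_circle(xy):
    """Kåsa least-squares circle fit. Rearranging (x-a)^2 + (y-b)^2 = r^2
    gives the linear model x^2 + y^2 = 2a*x + 2b*y + (r^2 - a^2 - b^2)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.c_[2 * x, 2 * y, np.ones(len(x))]
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a**2 + b**2)
    return a, b, radius

# Synthetic cross-section: noisy scan points on a 0.15 m radius stem at (1, 2).
rng = np.random.default_rng(2)
t = rng.uniform(0, 2 * np.pi, 200)
section = np.c_[1 + 0.15 * np.cos(t), 2 + 0.15 * np.sin(t)]
section += rng.normal(0, 0.002, (200, 2))            # ~2 mm scanner noise

cx, cy, r = fit_circle(section)                      # stem center and radius
```

Because each section is only a few hundred points, thousands of such fits run in negligible time, which is consistent with computing stem traits for an entire stand on a common computer.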


CivilEng ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 214-235
Author(s):  
Avar Almukhtar ◽  
Zaid O. Saeed ◽  
Henry Abanda ◽  
Joseph H.M. Tah

The urgent need to improve performance in the construction industry has led to the adoption of many innovative technologies. 3D laser scanners are amongst the leading technologies being used to capture and process assets or construction project data for use in various applications. Due to its nascent nature, many questions are still unanswered about 3D laser scanning, which in turn contribute to the slow adoption of the technology. Some of these include the role of 3D laser scanners in capturing and processing raw construction project data. How accurate are 3D laser scanners and the point cloud data they produce? How does laser scanning fit with other wider emerging technologies such as building information modeling (BIM)? This study adopts a proof-of-concept approach, which in addition to answering the aforementioned questions, illustrates the application of the technology in practice. The study finds that the quality of the data, commonly referred to as point cloud data, is still a major issue, as it depends on the distance between the target object and the 3D laser scanner's station. Additionally, the quality of the data is still very dependent on data file sizes and the computational power of the processing machine. Lastly, the connection between laser scanning and BIM approaches is still weak, as what can be done with a point cloud data model in a BIM environment is still very limited. The aforementioned findings reinforce existing views on the use of 3D laser scanners in capturing and processing construction project data.

