A PARALLEL ALGORITHM FOR LOCAL POINT DENSITY INDEX COMPUTATION OF LARGE POINT CLOUDS

Author(s):  
A. V. Vo ◽  
C. N. Lokugam Hewage ◽  
N. A. Le Khac ◽  
M. Bertolotto ◽  
D. Laefer

Abstract. Point density is an important property that dictates the usability of a point cloud data set. This paper introduces an efficient, scalable, parallel algorithm for computing the local point density index, a sophisticated point cloud density metric. Computing the local point density index is non-trivial because it requires a neighbour search for each individual point in the potentially large input point cloud. Most existing algorithms and software are incapable of computing point density at scale. The algorithm introduced in this paper therefore aims to provide the computational efficiency and scalability needed to compute this metric for large, modern point clouds such as those collected in national or regional scans. The proposed algorithm is composed of two stages. In stage 1, a point-level, parallel processing step partitions an unstructured input point cloud into partially overlapping, buffered tiles. A buffer is provided around each tile so that the data partitioning does not introduce spatial discontinuity into the final results. In stage 2, the buffered tiles are distributed to different processors for computing the local point density index in parallel. That tile-level parallel processing step is performed using a conventional algorithm with an R-tree data structure. While straightforward, the proposed algorithm is efficient and particularly suitable for processing large point clouds. Experiments conducted using a 1.4 billion point data set acquired over part of Dublin, Ireland demonstrated an efficiency factor of up to 14.8/16; that is, the computational time was reduced by 14.8 times when the number of processes (i.e. executors) increased by 16 times. Computing the local point density index for the 1.4 billion point data set took just over 5 minutes with 16 executors and 8 cores per executor, a reduction in computational time of nearly 70 times compared to the 6 hours required without parallelism.
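For illustration, the following is a minimal sketch of the two-stage idea, assuming a radius-based local point density index (neighbours within radius r divided by the disk area) and a regular grid of buffered tiles; the tile size, the radius, and the parallel backend (Python multiprocessing rather than a cluster framework) are illustrative assumptions, not the paper's actual parameters or infrastructure.

```python
# Sketch only: radius-based density, grid tiling and multiprocessing are assumed choices.
import numpy as np
from scipy.spatial import cKDTree
from multiprocessing import Pool

RADIUS = 1.0   # neighbourhood radius (assumed)
TILE = 100.0   # tile edge length (assumed)

def make_buffered_tiles(xyz, tile=TILE, buffer=RADIUS):
    """Stage 1: assign each point to every tile whose buffered extent contains it."""
    tiles = {}
    for p in xyz:
        kx_lo = int(np.floor((p[0] - buffer) / tile))
        kx_hi = int(np.floor((p[0] + buffer) / tile))
        ky_lo = int(np.floor((p[1] - buffer) / tile))
        ky_hi = int(np.floor((p[1] + buffer) / tile))
        for kx in range(kx_lo, kx_hi + 1):
            for ky in range(ky_lo, ky_hi + 1):
                tiles.setdefault((kx, ky), []).append(p)
    return {k: np.asarray(v) for k, v in tiles.items()}

def density_for_tile(args):
    """Stage 2: conventional neighbour search inside one buffered tile."""
    key, pts, tile, r = args
    kd = cKDTree(pts[:, :2])                       # planar index; an R-tree would also work
    counts = np.array([len(n) for n in kd.query_ball_point(pts[:, :2], r)])
    ldi = counts / (np.pi * r ** 2)                # points per unit area
    x0, y0 = key[0] * tile, key[1] * tile          # keep results only for core (non-buffer) points
    core = ((pts[:, 0] >= x0) & (pts[:, 0] < x0 + tile) &
            (pts[:, 1] >= y0) & (pts[:, 1] < y0 + tile))
    return pts[core], ldi[core]

if __name__ == "__main__":
    xyz = np.random.rand(10000, 3) * 300.0         # stand-in for a real point cloud
    jobs = [(k, v, TILE, RADIUS) for k, v in make_buffered_tiles(xyz).items()]
    with Pool() as pool:                           # tiles processed by separate workers
        results = pool.map(density_for_tile, jobs)
```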

Author(s):  
P. Hu ◽  
Y. Liu ◽  
M. Tian ◽  
M. Hou

Abstract. Plane segmentation from point clouds is an important step in producing various types of geo-information related to human activities. In this paper, we present a new approach that accurately segments planar primitives simultaneously by transforming the task into a best-matching problem between over-segmented super-voxels and 3D plane models. The super-voxels and their adjacency graph are first derived from the input point cloud as small over-segmented patches. Initial 3D plane models are then enriched by fitting the centroids of randomly sampled super-voxels and by translating the grouped planar super-voxels according to structured scene priors (e.g. orthogonality, parallelism), while the adjacency graph is updated along with the planar clustering. To solve the final super-voxel-to-plane assignment, an energy minimization framework is constructed from the candidate planes, the initial super-voxels, and the improved adjacency graph, and optimized to segment multiple consistent planar surfaces in the scene simultaneously. The proposed algorithms are implemented, and three types of point clouds differing in characteristics (e.g. point density, complexity) are tested to validate the efficiency and effectiveness of our segmentation method.
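As a rough illustration of the assignment step, the sketch below treats super-voxel-to-plane labelling as a discrete energy minimisation with a data term (mean point-to-plane distance of a super-voxel to a candidate plane) plus a Potts smoothness term over the adjacency graph; the paper's exact energy and optimiser are not reproduced here, and iterated conditional modes (ICM) with an assumed weight stands in for a stronger solver such as graph cuts.

```python
# Sketch only: the energy terms, the weight and the ICM optimiser are assumptions.
import numpy as np

def point_to_plane(points, plane):
    """Unsigned distances of points (N, 3) to plane (n, d) with |n| = 1."""
    n, d = plane
    return np.abs(points @ n + d)

def assign_supervoxels(supervoxels, planes, adjacency, lam=0.5, iters=10):
    """supervoxels: list of (N_i, 3) arrays; planes: list of (normal, offset) pairs;
    adjacency: iterable of index pairs (i, j). Returns one plane label per super-voxel."""
    data = np.array([[point_to_plane(sv, pl).mean() for pl in planes]
                     for sv in supervoxels])        # (n_supervoxels, n_planes) data costs
    labels = data.argmin(axis=1)                    # start from the best data term
    neighbours = {i: [] for i in range(len(supervoxels))}
    for i, j in adjacency:
        neighbours[i].append(j)
        neighbours[j].append(i)
    for _ in range(iters):                          # ICM sweeps over the adjacency graph
        changed = False
        for i in range(len(supervoxels)):
            smooth = np.array([sum(l != labels[j] for j in neighbours[i])
                               for l in range(len(planes))])
            best = int((data[i] + lam * smooth).argmin())
            if best != labels[i]:
                labels[i], changed = best, True
        if not changed:
            break
    return labels
```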


2021 ◽  
Vol 11 (5) ◽  
pp. 2268
Author(s):  
Erika Straková ◽  
Dalibor Lukáš ◽  
Zdenko Bobovský ◽  
Tomáš Kot ◽  
Milan Mihola ◽  
...  

While repairing industrial machines or vehicles, recognition of components is a critical and time-consuming task for a human. In this paper, we propose to automate this task. We start with Principal Component Analysis (PCA), which fits the scanned point cloud with an ellipsoid by computing the eigenvalues and eigenvectors of a 3-by-3 covariance matrix. If there is a dominant eigenvalue, the point cloud is decomposed into two clusters to which the PCA is applied recursively. If the matching is not unique, we continue by distinguishing among several candidates. We decompose the point cloud into planar and cylindrical primitives and assign mutual features such as distance or angle to them. Finally, we refine the matching by comparing the matrices of mutual features of the primitives. This is a more computationally demanding but very robust method. We demonstrate the efficiency and robustness of the proposed methodology on a collection of 29 real scans and a database of 389 STL (Standard Triangle Language) models. As many as 27 scans are uniquely matched to their counterparts in the database, while in the remaining two cases there is only one additional candidate besides the correct model. The overall computational time is about 10 min in MATLAB.
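The first step can be illustrated with a short sketch: PCA of the scanned point cloud via the eigen-decomposition of its 3-by-3 covariance matrix, with a recursive split along the principal axis when one eigenvalue dominates; the dominance ratio used below is an assumed value, not the one from the paper.

```python
# Sketch only: the dominance ratio of 3.0 is an assumed threshold.
import numpy as np

def pca_ellipsoid(points):
    """Return centroid, eigenvalues (descending) and eigenvectors (columns) of a point cloud."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)          # 3-by-3 covariance matrix
    vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    order = vals.argsort()[::-1]
    return centroid, vals[order], vecs[:, order]

def split_if_dominant(points, ratio=3.0):
    """Split into two clusters along the principal axis if its eigenvalue dominates."""
    centroid, vals, vecs = pca_ellipsoid(points)
    if vals[0] > ratio * vals[1]:                # dominant eigenvalue -> elongated cloud
        t = (points - centroid) @ vecs[:, 0]     # coordinates along the principal axis
        return points[t < 0], points[t >= 0]     # the two clusters for recursive PCA
    return (points,)
```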


2021 ◽  
Vol 13 (13) ◽  
pp. 2494
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

T-splines have recently been introduced to represent objects of arbitrary shape using a smaller number of control points than conventional non-uniform rational B-splines (NURBS) or B-spline representations in computer-aided design, computer graphics and reverse engineering. They are flexible in representing complex surface shapes and economical in terms of parameters as they enable local refinement. This property is a great advantage when dense, scattered and noisy point clouds are approximated using least squares fitting, such as those from a terrestrial laser scanner (TLS). Unfortunately, when assessing the goodness of fit of the surface approximation with a real dataset, only a noisy point cloud can be approximated: (i) a low root mean squared error (RMSE) can be linked with overfitting, i.e., a fitting of the noise, and should correspondingly be avoided, and (ii) a high RMSE is synonymous with a lack of detail. To address the challenge of judging the approximation, the reference surface should be entirely known: this can be achieved by printing a mathematically defined T-spline reference surface in three dimensions (3D) and modeling the artefacts induced by the 3D printing. Once scanned under different configurations, it is possible to assess the goodness of fit of the approximation for a noisy and potentially gappy point cloud and compare it with the traditional but less flexible NURBS. The advantages of T-spline local refinement open the door for further applications within a geodetic context, such as rigorous statistical testing of deformation. Two different scans from a slightly deformed object were approximated; we found that more than 40% of the computational time could be saved, without affecting the goodness of fit of the surface approximation, by using the same mesh for the two epochs.
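The distinction drawn here can be summarised in a small sketch: the RMSE of the fitted surface against the noisy scan rewards overfitting, whereas the RMSE against the known (printed) reference surface does not; the surface evaluations below are abstracted as callables and are purely illustrative.

```python
# Sketch only: surfaces are abstract callables, not an actual T-spline evaluation.
import numpy as np

def rmse(residuals):
    """Root mean squared error of a residual vector."""
    return np.sqrt(np.mean(np.asarray(residuals) ** 2))

def assess_fit(fitted_surface, reference_surface, scan_uv, scan_z):
    """fitted_surface / reference_surface: callables mapping (N, 2) parameters to heights.
    scan_uv: (N, 2) parameter locations of the scanned points; scan_z: (N,) noisy heights."""
    rmse_vs_scan = rmse(fitted_surface(scan_uv) - scan_z)                           # low when noise is fitted
    rmse_vs_reference = rmse(fitted_surface(scan_uv) - reference_surface(scan_uv))  # true fit quality
    return rmse_vs_scan, rmse_vs_reference
```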


Signals ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 159-173
Author(s):  
Simone Fontana ◽  
Domenico Giorgio Sorrenti

Probabilistic Point Clouds Registration (PPCR) is an algorithm that, in its multi-iteration version, outperformed state-of-the-art algorithms for local point cloud registration. However, its performance has been tested using a fixed, high number of iterations. To be of practical use, we think the algorithm should decide by itself when to stop, on the one hand to avoid an excessive number of iterations that waste computational time, and on the other to avoid settling for a sub-optimal registration. In this work, we compare different termination criteria on several datasets and show that the chosen criterion produces results comparable to those obtained using a very large number of iterations, while saving computational time.
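One plausible form of such a termination criterion is sketched below: stop when the relative decrease of the registration cost falls below a tolerance or a hard iteration cap is reached; the tolerance, the cap, and the `register_once` callable standing in for one PPCR iteration are all assumptions, not the criterion actually selected in the paper.

```python
# Sketch only: tolerance, cap and the per-iteration callable are assumptions.
def run_until_converged(register_once, source, target, tol=1e-4, max_iters=100):
    """register_once(source, target) -> (updated_source, cost) performs one iteration."""
    prev_cost = float("inf")
    for i in range(max_iters):
        source, cost = register_once(source, target)   # one registration iteration
        if prev_cost - cost < tol * prev_cost:          # relative improvement too small: stop
            break
        prev_cost = cost
    return source, cost, i + 1
```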


Author(s):  
M. Lemmens

Abstract. A knowledge-based system exploits the knowledge that a human expert uses for completing a complex task through a database containing decision rules and an inference engine. Knowledge-based systems were already proposed for automated image classification in the early nineties. Lack of success caused the initial interest and enthusiasm to fade, the same fate that struck neural networks at the time. Today the latter enjoy a steady revival. This paper aims at demonstrating that a knowledge-based approach to the automated classification of mobile laser scanning point clouds has promising prospects. An initial experiment exploiting only two features, height and reflectance value, resulted in an overall accuracy of 79% for the Paris-rue-Madame point cloud benchmark data set.
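In the spirit of the two-feature experiment, a knowledge-based classifier can be as simple as a handful of decision rules on height above ground and reflectance, as in the sketch below; the thresholds and class names are illustrative assumptions and not the rules used on the Paris-rue-Madame benchmark.

```python
# Sketch only: rules, thresholds and class names are assumed, not the paper's.
def classify_point(height_above_ground, reflectance):
    """Toy knowledge base: hand-written decision rules on two point features."""
    if height_above_ground < 0.2:
        return "road marking" if reflectance > 0.5 else "ground"
    if height_above_ground < 2.5:
        return "vehicle or street furniture"
    return "facade" if reflectance < 0.5 else "signage or pole"
```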


2020 ◽  
pp. 002029402091986
Author(s):  
Xiaocui Yuan ◽  
Huawei Chen ◽  
Baoling Liu

Clustering analysis is one of the most important techniques in point cloud processing, such as registration, segmentation, and outlier detection. However, most existing clustering algorithms exhibit low computational efficiency and a high demand for computational resources, especially for large data sets. Moreover, clusters and outliers are sometimes inseparable, especially for point clouds contaminated by outliers: most cluster-based algorithms can identify cluster outliers well but not sparse outliers. We develop a novel clustering method, called spatial neighborhood connected region labeling. The method defines a spatial connectivity criterion, finds point connections based on that criterion within the k-nearest neighborhood region, and assigns connected points to the same cluster. Our method can accurately and quickly classify datasets using only one parameter, k. Compared with k-means, hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN), our method provides better accuracy with less computational time for data clustering. For outlier detection in point clouds, our method can identify not only cluster outliers but also sparse outliers. More accurate detection results are achieved compared to state-of-the-art outlier detection methods, such as local outlier factor and DBSCAN.
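A minimal sketch of connected-region labelling over a k-nearest-neighbour graph is given below: points satisfying a connectivity criterion within each other's k-neighbourhoods receive the same cluster label via a breadth-first flood fill; the global distance cut used as the connectivity criterion here is an assumed simplification of the paper's criterion, which requires only k.

```python
# Sketch only: the distance-based connectivity cut is an assumed stand-in.
import numpy as np
from scipy.spatial import cKDTree
from collections import deque

def label_connected_regions(points, k=8):
    kd = cKDTree(points)
    dists, nbrs = kd.query(points, k=k + 1)        # first neighbour is the point itself
    dists, nbrs = dists[:, 1:], nbrs[:, 1:]
    cut = dists.mean() + 2.0 * dists.std()         # connectivity criterion (assumed)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        queue = deque([seed])                      # breadth-first flood fill over the kNN graph
        while queue:
            i = queue.popleft()
            for j, d in zip(nbrs[i], dists[i]):
                if labels[j] == -1 and d <= cut:
                    labels[j] = current
                    queue.append(j)
        current += 1
    return labels                                   # very small clusters behave like outliers
```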


2022 ◽  
Vol 14 (2) ◽  
pp. 367
Author(s):  
Zhen Zheng ◽  
Bingting Zha ◽  
Yu Zhou ◽  
Jinbo Huang ◽  
Youshi Xuchen ◽  
...  

This paper proposes a single-stage, adaptive, multi-scale noise filtering algorithm for point clouds based on feature information, addressing the difficulty that current laser point cloud noise filtering algorithms have in quickly completing single-stage adaptive filtering of multi-scale noise. Feature information for each point of the point cloud is obtained using an efficient k-dimensional (k-d) tree data structure and amended normal vector estimation methods, and an adaptive threshold is used to divide the point cloud into large-scale noise, a feature-rich region, and a flat region to reduce computational time. The large-scale noise is removed directly, while the feature-rich and flat regions are filtered via an improved bilateral filtering algorithm and a weighted average filtering algorithm based on grey relational analysis, respectively. Simulation results show that the proposed algorithm performs better than the state-of-the-art comparison algorithms. It was thus verified that the algorithm proposed in this paper can quickly and adaptively (i) filter out large-scale noise, (ii) smooth small-scale noise, and (iii) effectively maintain the geometric features of the point cloud. The developed algorithm provides a direction for filtering pre-processing methods applicable to 3D measurement, remote sensing, and target recognition based on point clouds.
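For the feature-rich region, a plain point-cloud bilateral filter (not the paper's improved variant) can be sketched as follows: each point is displaced along its normal by a weighted average of its neighbours' normal-direction offsets, with Gaussian weights on both the spatial distance and the offset; the neighbourhood size and both sigmas are assumed values, and normals are taken as given.

```python
# Sketch only: a standard bilateral filter, with assumed k, sigma_s and sigma_r.
import numpy as np
from scipy.spatial import cKDTree

def bilateral_filter(points, normals, k=16, sigma_s=0.1, sigma_r=0.05):
    kd = cKDTree(points)
    dists, nbrs = kd.query(points, k=k + 1)            # k-d tree neighbour search
    filtered = points.copy()
    for i in range(len(points)):
        n = normals[i]
        diffs = points[nbrs[i, 1:]] - points[i]
        d_spatial = dists[i, 1:]                       # distance to each neighbour
        d_normal = diffs @ n                           # neighbour offset along the normal
        w = (np.exp(-d_spatial ** 2 / (2 * sigma_s ** 2)) *
             np.exp(-d_normal ** 2 / (2 * sigma_r ** 2)))
        if w.sum() > 0:
            filtered[i] = points[i] + n * (w @ d_normal) / w.sum()
    return filtered
```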


Author(s):  
Z. Lari ◽  
K. Al-Durgham ◽  
A. Habib

Terrestrial laser scanning (TLS) systems have been established as a leading tool for the acquisition of high density three-dimensional point clouds from physical objects. The point clouds collected by these systems can be utilized for a wide spectrum of object extraction, modelling, and monitoring applications. Pole-like features are among the most important objects that can be extracted from TLS data, especially data acquired in urban areas and industrial sites. However, these features cannot be completely extracted and modelled using a single TLS scan due to significant local point density variations and occlusions caused by other objects. Therefore, multiple TLS scans from different perspectives should be integrated through a registration procedure to provide complete coverage of the pole-like features in a scene. To date, different segmentation approaches have been proposed for the extraction of pole-like features from either single or multiple-registered TLS scans. These approaches do not consider the internal characteristics of a TLS point cloud (local point density variations and noise level in the data) and usually suffer from computational inefficiency. To overcome these problems, two recently developed PCA-based parameter-domain and spatial-domain approaches for the segmentation of pole-like features are introduced in this paper. Moreover, the performance of the proposed segmentation approaches for the extraction of pole-like features from single or multiple-registered TLS scans is investigated. The alignment of the utilized TLS scans is implemented using an Iterative Closest Projected Point (ICPP) registration procedure. Qualitative and quantitative evaluation of the pole-like features extracted from single and multiple-registered TLS scans, using both of the proposed segmentation approaches, is conducted to verify that more complete pole-like features are extracted from multiple-registered TLS scans.
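A common PCA-based cue for pole-like points is shown in the sketch below: the eigenvalues of each point's local covariance matrix give a linearity measure, and a strongly linear neighbourhood with a near-vertical principal direction flags the point as pole-like; the neighbourhood size and thresholds are assumed values, and the sketch does not reproduce the paper's parameter-domain or spatial-domain approaches.

```python
# Sketch only: a generic linearity/verticality cue with assumed thresholds.
import numpy as np
from scipy.spatial import cKDTree

def pole_like_mask(points, k=20, lin_thresh=0.8, vert_thresh=0.9):
    kd = cKDTree(points)
    _, nbrs = kd.query(points, k=k)
    mask = np.zeros(len(points), dtype=bool)
    for i in range(len(points)):
        nb = points[nbrs[i]]
        vals, vecs = np.linalg.eigh(np.cov((nb - nb.mean(axis=0)).T))   # ascending eigenvalues
        l_small, l_mid, l_large = vals
        linearity = (l_large - l_mid) / l_large if l_large > 0 else 0.0
        verticality = abs(vecs[2, 2])              # |z-component| of the principal direction
        mask[i] = linearity > lin_thresh and verticality > vert_thresh
    return mask
```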


Author(s):  
P. Tutzauer ◽  
N. Haala

This paper aims at façade reconstruction for the subsequent enrichment of LOD2 building models. We use point clouds from dense image matching with imagery from both Mobile Mapping systems and oblique airborne cameras. The interpretation of façade structures is based on a geometric reconstruction. For this purpose, a pre-segmentation of the point cloud into façade points and non-façade points is necessary. We present an approach for point clouds with limited geometric accuracy, where a geometric segmentation might fail. Our contribution is a radiometric segmentation approach: via local point features based on a clustering in hue space, the point cloud is segmented into façade points and non-façade points. This way, the initial geometric reconstruction step can be bypassed, and point clouds with limited accuracy can still serve as input for the façade reconstruction and modelling approach.
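The radiometric segmentation idea can be sketched briefly: convert per-point RGB to hue, cluster in hue space, and keep the cluster taken to correspond to the façade; using k-means with two clusters and picking the dominant one is an assumed simplification of the clustering used in the paper.

```python
# Sketch only: two-cluster k-means and the "dominant cluster = facade" rule are assumptions.
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.cluster import KMeans

def segment_facade_by_hue(points_xyz, points_rgb, n_clusters=2):
    """points_rgb: (N, 3) 8-bit colours per point; returns (facade_points, other_points)."""
    hue = rgb_to_hsv(points_rgb / 255.0)[:, 0]           # per-point hue in [0, 1]
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(hue.reshape(-1, 1))
    facade_label = np.bincount(labels).argmax()          # assume the facade is the dominant cluster
    mask = labels == facade_label
    return points_xyz[mask], points_xyz[~mask]
```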


Author(s):  
G. Tran ◽  
D. Nguyen ◽  
M. Milenkovic ◽  
N. Pfeifer

Full-waveform (FWF) LiDAR (Light Detection and Ranging) systems have the advantage over conventional airborne discrete-return laser scanner systems of recording the entire backscattered signal of each emitted laser pulse. FWF systems can provide point clouds that contain extra attributes such as amplitude and echo width. In this study, an FWF data set collected in 2010 over Eisenstadt, a city in the eastern part of Austria, was used to classify four main classes (buildings, trees, water bodies, and ground) by employing a decision tree. Point density, echo ratio, echo width, normalised digital surface model and point cloud roughness are the main inputs for classification. The accuracy of the final results, in terms of correctness and completeness measures, was assessed by comparing the classified output to a knowledge-based labelling of the points. Completeness and correctness between 90% and 97% were reached, depending on the class. While such results and methods have been presented before, we additionally investigate the transferability of the classification method (features, thresholds …) to another urban FWF LiDAR point cloud. Our conclusion is that, of the features used, only echo width requires new thresholds; a data-driven adaptation of thresholds is suggested.
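The classification setup can be sketched as a per-point feature matrix fed to a decision tree, followed by per-class completeness and correctness measures; the scikit-learn tree below stands in for the paper's decision tree, and the feature order and class list are assumptions drawn from the abstract.

```python
# Sketch only: a generic decision tree on the abstract's five features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["point_density", "echo_ratio", "echo_width", "ndsm", "roughness"]
CLASSES = ["building", "tree", "water", "ground"]

def train_classifier(X, y, max_depth=4):
    """X: (N, 5) feature matrix in FEATURES order; y: (N,) indices into CLASSES."""
    clf = DecisionTreeClassifier(max_depth=max_depth)
    return clf.fit(X, y)

def completeness_correctness(y_true, y_pred, cls):
    """Per-class completeness (recall) and correctness (precision)."""
    tp = np.sum((y_pred == cls) & (y_true == cls))
    completeness = tp / max(np.sum(y_true == cls), 1)
    correctness = tp / max(np.sum(y_pred == cls), 1)
    return completeness, correctness
```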

