A Termination Criterion for Probabilistic Point Clouds Registration

Signals ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 159-173
Author(s):  
Simone Fontana ◽  
Domenico Giorgio Sorrenti

Probabilistic Point Clouds Registration (PPCR) is an algorithm that, in its multi-iteration version, outperformed state-of-the-art algorithms for local point cloud registration. However, its performance has so far been tested using a fixed, high number of iterations. To be of practical use, we argue that the algorithm should decide by itself when to stop: on the one hand, to avoid an excessive number of iterations and wasted computational time; on the other, to avoid a sub-optimal registration. In this work, we compare different termination criteria on several datasets and show that the chosen one produces results comparable to those obtained using a very large number of iterations, while saving computational time.
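
A minimal sketch of one such stopping rule, in Python: iterate the registration step until the relative improvement of the mean squared alignment error falls below a tolerance. The callback, tolerance name and default values are illustrative assumptions, not the criterion chosen in the paper.

```python
import numpy as np

def register_with_termination(run_iteration, source, target,
                              max_iter=500, rel_tol=1e-4):
    """Run a multi-iteration registration until the relative MSE
    improvement drops below rel_tol (hypothetical criterion)."""
    prev_mse = np.inf
    for i in range(max_iter):
        # One PPCR-style iteration: returns the re-aligned source cloud
        # and the current mean squared alignment error.
        source, mse = run_iteration(source, target)
        if np.isfinite(prev_mse) and (prev_mse - mse) / prev_mse < rel_tol:
            return source, i + 1   # converged: more iterations waste time
        prev_mse = mse
    return source, max_iter        # fallback: iteration cap reached
```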

2019 ◽  
Vol 11 (21) ◽  
pp. 2465 ◽  
Author(s):  
Jakub Walczak ◽  
Tadeusz Poreda ◽  
Adam Wojciechowski

Point cloud segmentation for planar surface detection is a relevant problem in automatic laser scan analysis. It is widely exploited for many industrial remote sensing tasks, such as LIDAR city scanning, building inventories, or object reconstruction. Many current methods rely on robustly calculated covariance and centroid for plane model estimation, or on global energy optimization. This is coupled with point cloud division strategies based on uniform or regular space subdivision. These approaches result in many redundant divisions, plane maladjustments caused by outliers, and an excessive number of processing iterations. In this paper, a new robust method for point cloud segmentation is presented, based on histogram-driven hierarchical space division inspired by the kd-tree. The proposed partition method produces results with a smaller oversegmentation rate. Moreover, state-of-the-art partitions often lead to nodes of low cardinality, which results in the rejection of many points; in the proposed method, the point rejection rate was reduced. Point cloud subdivision is followed by resilient plane estimation, using the Mahalanobis distance with respect to seven cardinal points established from the eigenvectors of the covariance matrix of the considered point cluster. The proposed method shows high robustness and yields good quality metrics, much faster than the FAST-MCD approach. The overall results indicate improvements in plane precision, plane recall, and under- and over-segmentation rates with respect to the reference benchmark methods. Plane precision for the S3DIS dataset increased on average by 2.6 pp and plane recall by 3 pp, while the over- and under-segmentation rates fell by 3.2 pp and 4.3 pp, respectively.
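
As an illustration, the following Python sketch shows one plausible reading of the plane-estimation step; the construction of the seven cardinal points and the chi-square cutoff are assumptions, not the authors' exact procedure.

```python
import numpy as np

def robust_plane(points, chi2_thresh=7.81):
    """Estimate a plane from a point cluster, rejecting outliers by
    Mahalanobis distance to seven cardinal points (assumed construction)."""
    c = points.mean(axis=0)
    cov = np.cov(points.T)
    w, v = np.linalg.eigh(cov)   # ascending eigenvalues
    # Assumed cardinal points: the centroid plus the centroid shifted by
    # +/- sqrt(eigenvalue) along each eigenvector of the covariance matrix.
    cardinal = np.vstack([c] + [c + s * np.sqrt(w[i]) * v[:, i]
                                for i in range(3) for s in (1.0, -1.0)])
    inv_cov = np.linalg.inv(cov + 1e-9 * np.eye(3))
    diffs = points[:, None, :] - cardinal[None, :, :]         # (n, 7, 3)
    m2 = np.einsum('npj,jk,npk->np', diffs, inv_cov, diffs).min(axis=1)
    inliers = points[m2 < chi2_thresh]     # 95% chi-square cutoff, 3 dof
    # Plane through the inliers: the normal is the eigenvector belonging
    # to the smallest eigenvalue of their covariance matrix.
    ci = inliers.mean(axis=0)
    _, vi = np.linalg.eigh(np.cov(inliers.T))
    return ci, vi[:, 0]
```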


Author(s):  
A. V. Vo ◽  
C. N. Lokugam Hewage ◽  
N. A. Le Khac ◽  
M. Bertolotto ◽  
D. Laefer

Abstract. Point density is an important property that dictates the usability of a point cloud data set. This paper introduces an efficient, scalable, parallel algorithm for computing the local point density index, a sophisticated point cloud density metric. Computing the local point density index is non-trivial, because it involves a neighbour search for each individual point in a potentially large input point cloud. Most existing algorithms and software are incapable of computing point density at scale. Therefore, the algorithm introduced in this paper aims to provide both the computational efficiency and the scalability needed for large, modern point clouds such as those collected in national or regional scans. The proposed algorithm is composed of two stages. In stage 1, a point-level, parallel processing step partitions an unstructured input point cloud into partially overlapping, buffered tiles. A buffer is provided around each tile so that the data partitioning does not introduce spatial discontinuity into the final results. In stage 2, the buffered tiles are distributed to different processors for computing the local point density index in parallel. That tile-level parallel processing step is performed using a conventional algorithm with an R-tree data structure. While straightforward, the proposed algorithm is efficient and particularly suitable for processing large point clouds. Experiments conducted using a 1.4 billion point data set acquired over part of Dublin, Ireland demonstrated an efficiency factor of up to 14.8/16. More specifically, the computational time was reduced by 14.8 times when the number of processes (i.e. executors) increased by 16 times. Computing the local point density index for the 1.4 billion point data set took just over 5 minutes with 16 executors and 8 cores per executor. The reduction in computational time was nearly 70 times compared to the 6 hours required without parallelism.
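
The two-stage structure can be sketched in Python as follows. SciPy's KD-tree stands in for the R-tree mentioned in the paper, and the use of 2D horizontal coordinates and a neighbours-per-unit-area density definition are assumptions; the tile loop runs sequentially here, whereas in the real system each tile would go to a separate executor.

```python
import numpy as np
from scipy.spatial import cKDTree

def buffered_tiles(xy, tile_size, buffer):
    """Stage 1 (sketch): partition points into tiles, each padded by a
    buffer so edge points still see their full neighbourhood."""
    keys = np.floor(xy / tile_size).astype(int)
    for key in {tuple(k) for k in keys}:
        lo = np.array(key) * tile_size - buffer
        hi = (np.array(key) + 1) * tile_size + buffer
        in_buf = np.all((xy >= lo) & (xy < hi), axis=1)
        core = np.all(keys == key, axis=1)   # points this tile is responsible for
        yield np.flatnonzero(in_buf), np.flatnonzero(core)

def local_density(xy, radius=1.0, tile_size=100.0):
    """Stage 2 (sketch): per-tile neighbour counts; tiles are independent
    and could be processed in parallel."""
    density = np.empty(len(xy))
    for buf_idx, core_idx in buffered_tiles(xy, tile_size, radius):
        tree = cKDTree(xy[buf_idx])          # KD-tree instead of the paper's R-tree
        counts = tree.query_ball_point(xy[core_idx], r=radius, return_length=True)
        density[core_idx] = counts / (np.pi * radius**2)  # points per unit area
    return density
```

Setting the buffer width equal to the search radius guarantees that every core point's neighbourhood lies entirely inside its buffered tile, which is why the partitioning introduces no spatial discontinuity.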


2021 ◽  
Vol 12 (25) ◽  
pp. 73
Author(s):  
Francesca Matrone ◽  
Massimo Martini

The growing availability of three-dimensional (3D) data, such as point clouds coming from Light Detection and Ranging (LiDAR), Mobile Mapping Systems (MMSs) or Unmanned Aerial Vehicles (UAVs), provides the opportunity to rapidly generate 3D models to support the restoration, conservation, and safeguarding of cultural heritage (CH). The so-called scan-to-BIM process can benefit from such data, which can themselves be a source for further analyses of the archaeological and built heritage. There are several ways to exploit this type of data, such as Historic Building Information Modelling (HBIM), mesh creation, rasterisation, classification, and semantic segmentation. The latter, applied to point clouds, is a trending topic not only in the CH domain but also in other fields like autonomous navigation, medicine and retail. In these sectors, semantic segmentation has mainly been tackled with artificial intelligence techniques; in particular, machine learning (ML) algorithms, and their deep learning (DL) subset, are increasingly applied and have established a solid state of the art over the last half-decade. However, applications of DL techniques to heritage point clouds are still scarce; we therefore propose to tackle this framework within the built heritage field. Starting from previous tests with the Dynamic Graph Convolutional Neural Network (DGCNN), this contribution pays close attention to: i) the investigation of fine-tuned models, used as a transfer learning technique; ii) the combination of external classifiers, such as Random Forest (RF), with the artificial neural network; and iii) the evaluation of data augmentation results for the domain-specific ArCH dataset. Finally, after considering the main advantages and criticalities, we discuss the possibility for non-programming or domain experts to profit from this methodology as well.

Highlights:

- Semantic segmentation of built heritage point clouds through deep neural networks can provide performances comparable to those of more consolidated state-of-the-art ML classifiers.
- Transfer learning approaches, such as fine-tuning, can considerably reduce computational time for CH domain-specific datasets, as well as improve metrics for some challenging categories (i.e., windows or mouldings).
- Data augmentation techniques do not significantly improve overall performance.
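
As an illustration of the fine-tuning idea, the following PyTorch sketch freezes a pretrained feature extractor and retrains only a new classification head; the attribute names `.features` and `.head` are hypothetical placeholders, not taken from the authors' DGCNN code.

```python
import torch
import torch.nn as nn

def fine_tune(model: nn.Module, num_classes: int, lr: float = 1e-3):
    """Transfer-learning sketch: keep source-domain weights frozen and
    retrain only a fresh head for the target classes (e.g. ArCH labels)."""
    for p in model.features.parameters():    # hypothetical attribute name
        p.requires_grad = False              # freeze the pretrained extractor
    in_dim = model.head.in_features          # hypothetical attribute name
    model.head = nn.Linear(in_dim, num_classes)  # new, trainable head
    # Optimize only the parameters that still require gradients.
    return torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr)
```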


2021 ◽  
Vol 11 (5) ◽  
pp. 2268
Author(s):  
Erika Straková ◽  
Dalibor Lukáš ◽  
Zdenko Bobovský ◽  
Tomáš Kot ◽  
Milan Mihola ◽  
...  

While repairing industrial machines or vehicles, recognizing components is a critical and time-consuming task for a human. In this paper, we propose to automate this task. We start with a Principal Component Analysis (PCA), which fits the scanned point cloud with an ellipsoid by computing the eigenvalues and eigenvectors of a 3-by-3 covariance matrix. In case there is a dominant eigenvalue, the point cloud is decomposed into two clusters, to which the PCA is applied recursively. In case the matching is not unique, we continue to distinguish among several candidates: we decompose the point cloud into planar and cylindrical primitives and assign mutual features, such as distance or angle, to them. Finally, we refine the matching by comparing the matrices of mutual features of the primitives. This is a more computationally demanding but very robust method. We demonstrate the efficiency and robustness of the proposed methodology on a collection of 29 real scans and a database of 389 STL (Standard Triangle Language) models. As many as 27 scans are uniquely matched to their counterparts in the database, while in the remaining two cases there is only one additional candidate besides the correct model. The overall computational time is about 10 min in MATLAB.
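
A compact Python sketch of the recursive PCA step described above; the dominance threshold and minimum cluster size are assumptions.

```python
import numpy as np

def pca_split(points, dominance=2.0, min_points=10):
    """Fit an ellipsoid via the 3x3 covariance matrix; if one eigenvalue
    dominates, split the cloud along that axis and recurse."""
    c = points.mean(axis=0)
    w, v = np.linalg.eigh(np.cov(points.T))   # ascending eigenvalues
    if len(points) >= min_points and w[2] > dominance * w[1]:
        t = (points - c) @ v[:, 2]            # coordinate along dominant axis
        parts = points[t < 0], points[t >= 0]
        if all(len(p) > 3 for p in parts):    # avoid degenerate clusters
            return (pca_split(parts[0], dominance, min_points)
                    + pca_split(parts[1], dominance, min_points))
    return [(c, w, v)]                        # one ellipsoid descriptor
```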


2021 ◽  
Vol 13 (13) ◽  
pp. 2494
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

T-splines have recently been introduced to represent objects of arbitrary shape using fewer control points than conventional non-uniform rational B-splines (NURBS) or B-spline representations in computer-aided design, computer graphics and reverse engineering. They are flexible in representing complex surface shapes and economical in terms of parameters, as they enable local refinement. This property is a great advantage when dense, scattered and noisy point clouds, such as those from a terrestrial laser scanner (TLS), are approximated using least-squares fitting. Unfortunately, when assessing the goodness of fit of the surface approximation on a real dataset, only a noisy point cloud is available as reference: (i) a low root mean squared error (RMSE) may indicate overfitting, i.e., a fitting of the noise, and should be avoided, while (ii) a high RMSE is synonymous with a lack of detail. To address the challenge of judging the approximation, the reference surface should be entirely known. This can be achieved by printing a mathematically defined T-splines reference surface in three dimensions (3D) and modelling the artefacts induced by the 3D printing. Once the object is scanned under different configurations, it is possible to assess the goodness of fit of the approximation for a noisy and potentially gappy point cloud and to compare it with the traditional but less flexible NURBS. The advantages of T-splines local refinement open the door for further applications within a geodetic context, such as rigorous statistical testing of deformation. Two scans from a slightly deformed object were approximated; we found that more than 40% of the computational time could be saved, without affecting the goodness of fit of the surface approximation, by using the same mesh for the two epochs.
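
The evaluation idea can be sketched in Python: because the printed reference surface is mathematically known, the fit can be scored against the truth rather than against the noisy scan. The (u, v) parameterisation and function interfaces are assumptions.

```python
import numpy as np

def reference_rmse(fitted, reference, grid):
    """Score a fitted surface against the known reference on a parameter
    grid. `fitted` and `reference` map (u, v) samples to 3D points."""
    err = np.linalg.norm(fitted(grid) - reference(grid), axis=-1)
    return np.sqrt(np.mean(err**2))   # low value = genuine fit, not noise
```

Comparing this reference RMSE with the RMSE against the raw scan then separates overfitting (low scan RMSE, high reference RMSE) from a lack of detail (both high).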


Proceedings ◽  
2018 ◽  
Vol 2 (18) ◽  
pp. 1193
Author(s):  
Roi Santos ◽  
Xose Pardo ◽  
Xose Fdez-Vidal

The increasing use of autonomous UAVs inside buildings and around human-made structures demands new, accurate and comprehensive representations of their operating environments. Most 3D scene abstraction methods use invariant feature point matching; nevertheless, some sparse 3D point clouds do not concisely represent the structure of the environment. Likewise, line clouds constructed from short, redundant segments with inaccurate directions limit the understanding of scenes such as environments with poor texture, or whose texture resembles a repetitive pattern. The presented approach is based on observation and representation models using straight line segments, which resemble the boundaries of an urban indoor or outdoor environment. The goal of this work is a full method based on line matching that provides a complementary approach to state-of-the-art methods when facing 3D scene representation of poorly textured environments for future autonomous UAVs.
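
A hedged Python sketch of a pairwise score for matching straight 3D segments, of the kind such a line-matching pipeline needs; the angular and distance thresholds are assumptions, not the authors' model.

```python
import numpy as np

def segment_similarity(a0, a1, b0, b1,
                       max_angle=np.deg2rad(5.0), max_gap=0.1):
    """Score two 3D segments (given by their endpoints) by direction
    agreement and midpoint distance; 0 means 'not a candidate match'."""
    da = (a1 - a0) / np.linalg.norm(a1 - a0)
    db = (b1 - b0) / np.linalg.norm(b1 - b0)
    angle = np.arccos(np.clip(abs(da @ db), 0.0, 1.0))  # orientation-agnostic
    gap = np.linalg.norm((a0 + a1) / 2 - (b0 + b1) / 2)
    if angle > max_angle or gap > max_gap:
        return 0.0
    return 1.0 - 0.5 * (angle / max_angle + gap / max_gap)
```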


Author(s):  
J. Gehrung ◽  
M. Hebel ◽  
M. Arens ◽  
U. Stilla

Abstract. Change detection is an important tool for processing multiple epochs of mobile LiDAR data in an efficient manner, since it copes with an otherwise time-consuming operation by focusing on regions of interest. State-of-the-art approaches usually either do not handle the case of incomplete observations or are computationally expensive. We present a novel method based on a combination of point clouds and voxels that is able to handle said case, while being computationally less expensive than comparable approaches. Furthermore, our method is able to identify special classes of changes, such as partially moved, fully moved and deformed objects, in addition to the appeared and disappeared objects recognized by conventional approaches. The performance of our method is evaluated using the publicly available TUM City Campus datasets, showing an overall accuracy of 88%.
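
A minimal sketch of the voxel side of such a method: occupancy grids from two epochs flag appeared and disappeared regions. The voxel size is an assumption, and the paper's finer change classes (partially moved, deformed) require additional point-level analysis not shown here.

```python
import numpy as np

def voxel_change(points_t0, points_t1, voxel=0.5):
    """Occupancy-based change detection between two epochs: voxels
    occupied in only one epoch mark appeared/disappeared regions."""
    def occupied(pts):
        return {tuple(k) for k in np.floor(pts / voxel).astype(int)}
    occ0, occ1 = occupied(points_t0), occupied(points_t1)
    return occ1 - occ0, occ0 - occ1   # appeared, disappeared voxel keys
```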


Author(s):  
S. Rhee ◽  
T. Kim

3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of the UAV images. In this paper, we aim to apply image matching to generate local point clouds over a pair or group of images, and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object space-based matching technique and an image space-based matching technique, and compared the performance of the two. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space; for each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) for image matching, defining the local match region in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. Experiments confirmed that, through image matching and global optimization, 3D point clouds were generated successfully. However, the results also revealed some limitations: in the case of image space-based matching, we observed some blanks in the 3D point clouds; in the case of object space-based matching, we observed more blunders than with image-based matching, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is ongoing; we will further test our approach with more precise orientation parameters.
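
The object space-based matching can be sketched in Python as follows. The projection functions are assumed to come from the orientation parameters, the fixed horizontal position is implicit in them, and the patch size is illustrative.

```python
import numpy as np

def best_height(heights, proj_left, proj_right, left_img, right_img, patch=7):
    """For one horizontal position, test candidate heights and keep the
    one whose projected image patches correlate best (sketch)."""
    half = patch // 2

    def window(img, xy):             # grey-level patch around a pixel
        c, r = int(round(xy[0])), int(round(xy[1]))
        return img[r - half:r + half + 1, c - half:c + half + 1].astype(float)

    def ncc(a, b):                   # normalized cross-correlation
        a, b = a - a.mean(), b - b.mean()
        return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    scores = [ncc(window(left_img, proj_left(z)),
                  window(right_img, proj_right(z))) for z in heights]
    return heights[int(np.argmax(scores))]
```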


Author(s):  
Jairo R. Montoya-Torres ◽  
Libardo S. Gómez-Vizcaíno ◽  
Elyn L. Solano-Charris ◽  
Carlos D. Paternina-Arboleda

This paper examines the problem of jobshop scheduling with either makespan minimization or total tardiness minimization, both of which are known to be NP-hard. The authors propose a meta-heuristic procedure inspired by bacterial phototaxis. This procedure, called Global Bacteria Optimization (GBO), emulates the reaction of some organisms (bacteria) to light stimulation. Computational experiments are performed using well-known instances from the literature. Results show that the algorithm equals and even outperforms previous state-of-the-art procedures in terms of solution quality, while requiring very little computational time.
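
For concreteness, a Python sketch of the makespan objective that GBO minimizes in the first variant; the metaheuristic itself is not reproduced here, and the schedule encoding is an assumption.

```python
def makespan(schedule, durations):
    """Evaluate a job-shop schedule. `schedule` lists operations in
    processing order as (job, machine) pairs; `durations` maps
    (job, machine) to processing time."""
    job_ready, machine_ready = {}, {}
    end = 0.0
    for job, machine in schedule:
        # An operation starts when both its job and its machine are free.
        start = max(job_ready.get(job, 0.0), machine_ready.get(machine, 0.0))
        finish = start + durations[(job, machine)]
        job_ready[job] = machine_ready[machine] = finish
        end = max(end, finish)
    return end   # the quantity minimized in the makespan variant
```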


Author(s):  
Xiang Kong ◽  
Qizhe Xie ◽  
Zihang Dai ◽  
Eduard Hovy

Mixture of Softmaxes (MoS) has been shown to be effective at addressing the expressiveness limitation of Softmax-based models. Despite this known advantage, MoS is held back in practice by its large memory consumption and computational time, due to the need to compute multiple Softmaxes. In this work, we set out to unleash the power of MoS in practical applications by investigating improved word coding schemes, which can effectively reduce the vocabulary size and hence relieve the memory and computation burden. We show that both BPE and our proposed Hybrid-LightRNN lead to improved encoding mechanisms that can halve the time and memory consumption of MoS without performance loss. With MoS, we achieve an improvement of 1.5 BLEU on the IWSLT 2014 German-to-English corpus and an improvement of 0.76 CIDEr on image captioning. Moreover, on the larger WMT 2014 machine translation dataset, our MoS-boosted Transformer yields 29.6 BLEU for English-to-German and 42.1 BLEU for English-to-French, outperforming the single-Softmax Transformer by 0.9 and 0.4 BLEU respectively, and achieving the state-of-the-art result on the WMT 2014 English-to-German task.
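
A compact PyTorch sketch of a MoS output layer: K context projections, each with its own softmax over the vocabulary, combined by input-dependent mixture weights. Dimensions and the number of mixtures are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfSoftmaxes(nn.Module):
    """p(y|h) = sum_k pi_k(h) * softmax(W * tanh(U_k h)), computed for
    all K components at once."""
    def __init__(self, d_model, vocab_size, n_mix=4):
        super().__init__()
        self.n_mix = n_mix
        self.prior = nn.Linear(d_model, n_mix)             # mixture weights
        self.latent = nn.Linear(d_model, n_mix * d_model)  # per-component context
        self.decoder = nn.Linear(d_model, vocab_size)      # shared output projection

    def forward(self, h):                                  # h: (batch, d_model)
        pi = F.softmax(self.prior(h), dim=-1)              # (batch, K)
        ctx = torch.tanh(self.latent(h)).view(-1, self.n_mix, h.size(-1))
        probs = F.softmax(self.decoder(ctx), dim=-1)       # (batch, K, vocab)
        return (pi.unsqueeze(-1) * probs).sum(dim=1)       # (batch, vocab)
```

The cost of computing K softmaxes per token is exactly the overhead the abstract targets: shrinking the vocabulary with BPE or Hybrid-LightRNN shrinks the `vocab_size` dimension, and with it both memory and time.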

