input point — Recently Published Documents

Total documents: 51 (five years: 22)
H-index: 8 (five years: 1)
2021 · Vol 40 (5) · pp. 1-14
Author(s): Gal Metzer, Rana Hanocka, Raja Giryes, Daniel Cohen-Or

We introduce a novel technique for neural point cloud consolidation that learns from only the input point cloud. Unlike other point up-sampling methods, which analyze shapes via local patches, in this work we learn from global subsets. We repeatedly self-sample the input point cloud with global subsets that are used to train a deep neural network. Specifically, we define source and target subsets according to the desired consolidation criteria (e.g., generating sharp points or points in sparse regions). The network learns a mapping from source to target subsets and thereby implicitly learns to consolidate the point cloud. During inference, the network is fed random subsets of points from the input, which it displaces to synthesize a consolidated point set. We leverage the inductive bias of neural networks to eliminate noise and outliers, a notoriously difficult problem in point cloud consolidation. The shared weights of the network are optimized over the entire shape, learning non-local statistics and exploiting the recurrence of local-scale geometries. Specifically, the network encodes the distribution of the underlying shape surface within a fixed set of local kernels, which yields the best explanation of the underlying surface. We demonstrate the ability to consolidate point sets from a variety of shapes, while eliminating outliers and noise.
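As a rough sketch of the self-sampling idea (not the authors' code: the network, loss, and training loop are omitted), the following draws a source subset uniformly and a target subset biased toward sparse regions, i.e. the kind of (source, target) pair the network would be trained on. The k-NN density proxy and the toy cloud are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_density(points, k=8):
    # Brute-force k-NN density proxy: 1 / mean distance to the k nearest neighbours.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    return 1.0 / d[:, 1:k + 1].mean(axis=1)

def self_sample(points, n_subset, rng):
    """Draw a (source, target) subset pair: the source is uniform over the
    cloud, the target is biased toward sparse regions (one possible
    consolidation criterion)."""
    w = 1.0 / knn_density(points)
    w /= w.sum()
    src = points[rng.choice(len(points), n_subset, replace=False)]
    tgt = points[rng.choice(len(points), n_subset, replace=False, p=w)]
    return src, tgt

# Toy cloud: a dense cluster near the origin plus sparse points around it.
dense = rng.normal(0.0, 0.05, size=(400, 3))
sparse = rng.uniform(-1.0, 1.0, size=(100, 3))
cloud = np.concatenate([dense, sparse])

src, tgt = self_sample(cloud, 64, rng)
```

With weights proportional to inverse density, the target subset concentrates on the sparse regions the consolidation is meant to fill in.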


Author(s): A. V. Vo, C. N. Lokugam Hewage, N. A. Le Khac, M. Bertolotto, D. Laefer

Abstract. Point density is an important property that dictates the usability of a point cloud data set. This paper introduces an efficient, scalable, parallel algorithm for computing the local point density index, a sophisticated point cloud density metric. Computing the local point density index is non-trivial because it requires a neighbour search for each individual point in a potentially large input point cloud. Most existing algorithms and software are incapable of computing point density at scale. The algorithm introduced in this paper therefore aims to provide both the computational efficiency and the scalability needed for large, modern point clouds such as those collected in national or regional scans. The proposed algorithm is composed of two stages. In stage 1, a point-level parallel processing step partitions an unstructured input point cloud into partially overlapping, buffered tiles. A buffer is provided around each tile so that the data partitioning does not introduce spatial discontinuity into the final results. In stage 2, the buffered tiles are distributed to different processors, which compute the local point density index in parallel. This tile-level parallel processing step uses a conventional algorithm with an R-tree data structure. While straightforward, the proposed algorithm is efficient and particularly suitable for processing large point clouds. Experiments conducted on a 1.4 billion point data set acquired over part of Dublin, Ireland demonstrated an efficiency factor of up to 14.8/16: the computational time was reduced by a factor of 14.8 when the number of processes (i.e., executors) was increased by a factor of 16. Computing the local point density index for the 1.4 billion point data set took just over 5 minutes with 16 executors and 8 cores per executor, a reduction in computational time of nearly 70 times compared to the 6 hours required without parallelism.
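The two-stage scheme can be sketched as follows (a simplified 2-D illustration, not the authors' implementation: a brute-force search stands in for the R-tree, and the sequential tile loop stands in for the executor-level parallelism). Because each buffered tile is self-contained, the per-tile results match a global computation exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.uniform(0.0, 10.0, size=(2000, 2))  # toy 2-D cloud on a 10x10 area
R = 0.5     # neighbour-search radius of the density index
TILE = 5.0  # tile edge length; the buffer width must be at least R

def density_index(core, full, r):
    # Local point density index: neighbours within r, per unit circle area.
    d = np.linalg.norm(core[:, None, :] - full[None, :, :], axis=-1)
    return ((d <= r).sum(axis=1) - 1) / (np.pi * r * r)  # -1: the point itself

# Stage 1: partition into partially overlapping, buffered tiles.  The buffer
# guarantees every neighbour within R of a core point is present in the tile.
tiles = []
for x0 in np.arange(0.0, 10.0, TILE):
    for y0 in np.arange(0.0, 10.0, TILE):
        core = ((points[:, 0] >= x0) & (points[:, 0] < x0 + TILE) &
                (points[:, 1] >= y0) & (points[:, 1] < y0 + TILE))
        buf = ((points[:, 0] >= x0 - R) & (points[:, 0] < x0 + TILE + R) &
               (points[:, 1] >= y0 - R) & (points[:, 1] < y0 + TILE + R))
        tiles.append((np.where(core)[0], points[core], points[buf]))

# Stage 2: tiles are independent, so this loop parallelises trivially
# (one buffered tile per executor in the paper's setting).
density = np.empty(len(points))
for idx, core_pts, buf_pts in tiles:
    density[idx] = density_index(core_pts, buf_pts, R)
```

The buffer is what removes the spatial discontinuity: without it, core points near a tile edge would miss neighbours stored in the adjacent tile.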


2021 · Vol 13 (19) · pp. 3844
Author(s): Mengchi Ai, Zhixin Li, Jie Shan

Indoor structures are composed of ceilings, walls and floors that need to be modeled for a variety of applications. This paper proposes an approach to reconstructing models of indoor structures in complex environments. First, semantic pre-processing, including segmentation and occlusion construction, is applied to segment the input point clouds into semantic patches of structural primitives with uniform density. Then, a primitive-extraction method with boundary detection is introduced to approximate both the mathematical surface and the boundary of each patch. Finally, constraint-based model reconstruction is applied to obtain the final, topologically consistent structural model. Under this framework, both geometric and structural constraints are considered in a holistic manner to assure topological regularity. Experiments were carried out on both synthetic and real-world datasets. The proposed method achieved an overall reconstruction accuracy of approximately 4.60 cm root mean square error (RMSE) and 94.10% Intersection over Union (IoU) with respect to the input point cloud. The approach can be applied to the structural reconstruction of various complex indoor environments.
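As an illustration of the primitive-extraction step (a generic least-squares plane fit, not the paper's specific method; the synthetic patch and noise level are assumptions), the following fits a planar primitive to a noisy wall patch and reports the point-to-plane RMSE used as the accuracy measure above:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_plane(patch):
    """Least-squares plane through a patch: returns a unit normal n and offset
    d with n . p + d ~ 0 for points p on the plane (smallest singular vector
    of the centred patch)."""
    centroid = patch.mean(axis=0)
    _, _, vt = np.linalg.svd(patch - centroid)
    n = vt[-1]
    return n, -n @ centroid

# Synthetic wall patch: the z = 0 plane with 1 cm Gaussian noise (metres).
xy = rng.uniform(0.0, 2.0, size=(500, 2))
patch = np.column_stack([xy, rng.normal(0.0, 0.01, size=500)])

n, d = fit_plane(patch)
rmse = np.sqrt(np.mean((patch @ n + d) ** 2))  # point-to-plane RMSE
```

The recovered normal direction is what downstream constraint-based reconstruction would snap to (e.g., forcing walls vertical and ceilings horizontal).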


2021 · Vol 13 (18) · pp. 3665
Author(s): Jaehoon Jung, Jaebin Lee, Christopher E. Parrish

A current hindrance to the scientific use of available bathymetric lidar point clouds is the frequent lack of accurate and thorough segmentation of seafloor points. Furthermore, scientific end-users typically lack access to waveforms, trajectories, and other upstream data, and have neither the time nor the expertise to perform extensive manual point cloud editing. To address these needs, this study develops and tests a novel clustering approach to seafloor segmentation that uses only georeferenced point clouds. The proposed approach makes no assumptions regarding the statistical distribution of points in the input point cloud. Instead, it organizes the point cloud into an inverse histogram and finds the gap that best separates the seafloor using the proposed peak-detection method. The approach is evaluated with datasets acquired in Florida with a Riegl VQ-880-G bathymetric lidar system. The parameters are optimized through a sensitivity analysis with a point-wise comparison between the extracted seafloor and ground truth. With optimized parameters, the proposed approach achieved F1-scores of 98.14–98.77%, outperforming three popular existing methods. Further, we compared the seafloor points with Reson 8125 MBES hydrographic survey data. The results indicate that seafloor points were detected successfully, with vertical errors of −0.190 ± 0.132 m and −0.185 ± 0.119 m (μ ± σ) for the two test datasets.
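The histogram-gap idea can be illustrated as follows (a simplified stand-in for the paper's inverse-histogram and peak-detection method; the bimodal synthetic elevation column and the bin size are assumptions): find the widest run of empty elevation bins and split the column there, labelling everything below the gap as seafloor.

```python
import numpy as np

rng = np.random.default_rng(3)

def widest_gap(z, bin_size=0.1):
    """Return the midpoint of the widest run of empty histogram bins -- a
    simple proxy for the gap separating the seafloor mode from the rest."""
    hist, edges = np.histogram(z, bins=np.arange(z.min(), z.max() + bin_size, bin_size))
    best_len, best_mid, run_start = 0, None, None
    for i, h in enumerate(hist):
        if h == 0 and run_start is None:
            run_start = i                      # an empty run begins
        elif h > 0 and run_start is not None:
            if i - run_start > best_len:       # widest empty run so far
                best_len, best_mid = i - run_start, 0.5 * (edges[run_start] + edges[i])
            run_start = None
    return best_mid

# Synthetic sounding column: seafloor returns near -10 m, everything else
# (surface and water-column returns) spread between -2 m and 0 m.
z = np.concatenate([rng.normal(-10.0, 0.05, 300), rng.uniform(-2.0, 0.0, 700)])
thr = widest_gap(z)
seafloor = z[z < thr]   # points below the gap are labelled seafloor
```

No distributional assumption is needed: the split depends only on where the elevation histogram is empty, not on the shape of either mode.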


2021
Author(s): Ricardo de Queiroz, Diogo Garcia, Tomas Borges

<div>We present a method to super-resolve voxelized point clouds that have been down-sampled by a fractional factor, using look-up tables (LUTs) constructed from self-similarities within the cloud's own down-sampled neighborhoods. Given a down-sampled point cloud geometry Vd and its corresponding fractional down-sampling factor s, the proposed method determines the set of positions that may have generated Vd and estimates which of these positions were indeed occupied (super-resolution). Assuming that the geometry of a point cloud is approximately self-similar at different scales, LUTs relating down-sampled neighborhood configurations to children occupancy configurations can be estimated by further down-sampling the input point cloud to Vd2, taking into account the irregular children distribution that fractional down-sampling produces. For completeness, we also interpolate texture by averaging colors from adjacent neighbors. We present extensive test results on different point clouds, showing the effectiveness of the proposed method against baseline methods.</div>
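A heavily simplified sketch of the LUT idea on a 2-D binary grid with an integer factor of 2 (the paper handles fractional factors on 3-D voxel grids; the disk shape, the 3x3 neighbourhood key, and the fill-all fallback for unseen patterns are assumptions). Only Vd is "given": the LUT is learned from the further down-sampled pair (Vd2, Vd) and then applied to super-resolve Vd itself.

```python
import numpy as np

def downsample(grid):
    # A coarse cell is occupied iff any of its 2x2 children is occupied.
    h, w = grid.shape
    return grid.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def key(grid, i, j):
    # 3x3 occupancy neighbourhood around (i, j), zero-padded at the borders.
    return np.pad(grid, 1)[i:i + 3, j:j + 3].tobytes()

def build_lut(coarse, fine):
    """LUT: coarse 3x3 neighbourhood -> majority-voted 2x2 child pattern,
    learned from a (coarse, fine) pair where fine has twice the resolution."""
    acc = {}
    for i in range(coarse.shape[0]):
        for j in range(coarse.shape[1]):
            if coarse[i, j]:
                s, n = acc.get(key(coarse, i, j), (np.zeros((2, 2)), 0))
                child = fine[2 * i:2 * i + 2, 2 * j:2 * j + 2]
                acc[key(coarse, i, j)] = (s + child, n + 1)
    return {k: (s / n) >= 0.5 for k, (s, n) in acc.items()}

def super_resolve(coarse, lut):
    out = np.zeros((2 * coarse.shape[0], 2 * coarse.shape[1]), dtype=bool)
    for i in range(coarse.shape[0]):
        for j in range(coarse.shape[1]):
            if coarse[i, j]:
                child = lut.get(key(coarse, i, j))
                out[2 * i:2 * i + 2, 2 * j:2 * j + 2] = (
                    np.ones((2, 2), bool) if child is None else child)
    return out

yy, xx = np.mgrid[:32, :32]
V = (xx - 15.5) ** 2 + (yy - 15.5) ** 2 <= 144   # ground truth (a disk)
Vd = downsample(V)          # the "given" down-sampled geometry
Vd2 = downsample(Vd)        # down-sampled once more, only to learn the LUT
Vhat = super_resolve(Vd, build_lut(Vd2, Vd))
```

The self-similarity assumption is that boundary patterns seen at the Vd2-to-Vd scale recur at the Vd-to-V scale, so the learned child patterns transfer.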



Author(s): Y. Cao, M. Scaioni

Abstract. In recent research, fully supervised Deep Learning (DL) techniques and large amounts of pointwise labels have been employed to train segmentation networks applied to buildings' point clouds. However, finely labelled buildings' point clouds are hard to find, and manually annotating pointwise labels is time-consuming and expensive. Consequently, the application of fully supervised DL to semantic segmentation of buildings' point clouds at the LoD3 level is severely limited. To address this issue, we propose a novel label-efficient DL network that obtains per-point semantic labels of LoD3 buildings' point clouds with limited supervision. It consists of two steps. The first step (Autoencoder, AE) is composed of a Dynamic Graph Convolutional Neural Network-based encoder and a folding-based decoder, designed to extract discriminative global and local features from input point clouds by reconstructing them without any labels. The second step is semantic segmentation: by supplying a small amount of task-specific supervision, a segmentation network semantically segments the encoded features acquired from the pre-trained AE. We evaluate our approach on the ArCH dataset. Compared to fully supervised DL methods, our model achieves state-of-the-art results on the unseen scenes while using only 10% of the labelled training data required by the fully supervised methods.
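As a loose analogy for the two-step pipeline (not the paper's DGCNN encoder or folding decoder), the following relies on the classical fact that a linear autoencoder trained to reconstruct its input recovers the PCA subspace: features are learned without any labels, then a simple classifier head is fitted on just 10% of the labels. The synthetic 10-D "classes" are an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Step 1, label-free pre-training: a linear autoencoder converges to the PCA
# subspace, so an SVD of the unlabelled data serves as the trained encoder.
def fit_encoder(x, k):
    mu = x.mean(axis=0)
    _, _, vt = np.linalg.svd(x - mu, full_matrices=False)
    return lambda p: (p - mu) @ vt[:k].T

# Two synthetic "classes" of 10-D points, separable along one direction.
offset = np.array([3.0] + [0.0] * 9)
x = np.vstack([rng.normal(size=(300, 10)) + offset,
               rng.normal(size=(300, 10)) - offset])
y = np.repeat([0, 1], 300)

encode = fit_encoder(x, k=2)   # no labels used in this step
z = encode(x)

# Step 2, label-efficient "segmentation head": a nearest-centroid classifier
# fitted on only 10% of the labels, applied to the encoded features.
few = rng.choice(len(x), len(x) // 10, replace=False)
cents = np.stack([z[few][y[few] == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(z[:, None, :] - cents[None], axis=-1), axis=1)
acc = (pred == y).mean()
```

The point of the analogy: because the representation is learned from all the unlabelled data, the supervised head needs only a small labelled fraction.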


2021 · Vol 13 (8) · pp. 1520
Author(s): Emon Kumar Dey, Fayez Tarsha Kurdi, Mohammad Awrangjeb, Bela Stantic

Existing approaches that extract buildings from point cloud data do not select an appropriate neighbourhood for estimating the normal at each individual point, yet their success depends on correct estimation of the normal vector. In most cases, a fixed neighbourhood is selected without considering the geometric structure of the object or the distribution of the input point cloud. Considering the object structure and the heterogeneous distribution of the point cloud, this paper therefore proposes a new, effective approach for selecting a minimal neighbourhood, which can vary for each input point. For each point, a minimal number of neighbouring points is selected iteratively. At each iteration, the neighbourhood decision is made adaptively, based on the standard deviation of the selected points about a fitted 3D line. The selected minimal neighbourhood makes the calculation of the normal vector accurate. The direction of the normal vector is then used to identify inside-fold feature points. In addition, the Euclidean distance from a point to the mean of its neighbouring points is used to decide whether it is a boundary point. In terms of accuracy, the experimental results confirm the competitive performance of the proposed neighbourhood selection over state-of-the-art methods. On our generated ground truth data, the proposed fold and boundary point extraction techniques achieve F1-scores above 90%.
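A minimal sketch of the adaptive neighbourhood idea (the stopping tolerance, initial size, upper bound, and brute-force nearest-neighbour search are assumptions, not the paper's parameters): neighbours are added nearest-first until the residual standard deviation about a fitted 3-D line exceeds a threshold.

```python
import numpy as np

rng = np.random.default_rng(5)

def line_fit_std(pts):
    # Residual standard deviation about the best-fit 3-D line (the first
    # principal axis): only the two minor singular values contribute.
    s = np.linalg.svd(pts - pts.mean(axis=0), compute_uv=False)
    return np.sqrt((s[1] ** 2 + s[2] ** 2) / len(pts))

def minimal_neighbourhood(points, idx, k0=5, k_max=30, tol=0.05):
    """Grow the neighbourhood of points[idx] nearest-first, stopping as soon
    as the residual of a fitted 3-D line exceeds tol."""
    order = np.argsort(np.linalg.norm(points - points[idx], axis=1))
    k = k0
    while k < k_max and line_fit_std(points[order[:k + 1]]) <= tol:
        k += 1
    return order[:k + 1]  # order[0] is the point itself

# On a near-linear structure the neighbourhood keeps growing; in an
# unstructured blob it stops at the minimal size almost immediately.
t = np.linspace(0.0, 1.0, 60)
line_pts = np.column_stack([t, np.zeros(60), np.zeros(60)]) + 0.001 * rng.normal(size=(60, 3))
blob_pts = rng.normal(size=(100, 3))

nb_line = minimal_neighbourhood(line_pts, 30)
nb_blob = minimal_neighbourhood(blob_pts, 0)
```

Because the stopping test is per point, the selected neighbourhood size adapts to the local geometry rather than being fixed in advance.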


2021 · Vol 15 (1) · pp. 345-358
Author(s): Fouazou Lontouo Perez Broon, Thinh Dang, Emmanuel Fouotsa, Dustin Moody

Abstract. Elliptic curves are typically defined by Weierstrass equations. Given a kernel, the well-known Vélu formulas show how to explicitly write down an isogeny between Weierstrass curves. However, it is not clear how to do the same on other forms of elliptic curves without isomorphisms mapping to and from the Weierstrass form. Previous papers have given isogeny formulas for the (twisted) Edwards, Huff, and Montgomery forms of elliptic curves. Continuing this line of work, this paper derives explicit formulas for isogenies between elliptic curves in (twisted) Hessian form. In addition, we count the number of base-field operations needed to compute the formulas. In comparison with other isogeny formulas, our formulas for twisted Hessian curves have the lowest cost for processing the kernel, and our X-affine formula has the lowest cost for processing an input point in affine coordinates.
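For reference, the curve forms discussed above are commonly written as follows (standard notation from the literature; the paper's own notation may differ):

```latex
% Short Weierstrass, Hessian, and twisted Hessian forms over a field K:
\begin{align*}
  E &: y^2 = x^3 + ax + b            && \text{(short Weierstrass)}\\
  H_d &: x^3 + y^3 + 1 = 3dxy        && \text{(Hessian)}\\
  H_{a,d} &: a x^3 + y^3 + 1 = dxy   && \text{(twisted Hessian)}
\end{align*}
```

A Hessian curve is the special case $a = 1$ (up to rescaling $d$) of the twisted form, which is why isogeny formulas are derived for the twisted family.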

