Fractional Super-Resolution of Voxelized Point Clouds

2021 ◽  
Author(s):  
Ricardo de Queiroz ◽  
Diogo Garcia ◽  
Tomas Borges

<div>We present a method to super-resolve voxelized point clouds down-sampled by a fractional factor, using look-up tables (LUTs) constructed from self-similarities within the cloud's own down-sampled neighborhoods. Given a down-sampled point cloud geometry Vd and its fractional down-sampling factor s, the proposed method determines the set of positions that may have generated Vd and estimates which of those positions were indeed occupied (super-resolution). Assuming that the geometry of a point cloud is approximately self-similar across scales, LUTs relating down-sampled neighborhood configurations to children occupancy configurations can be estimated by further down-sampling the input point cloud to Vd2, taking into account the irregular children distribution that fractional down-sampling produces. For completeness, we also interpolate texture by averaging the colors of adjacent neighbors. We present extensive test results over different point clouds, showing the effectiveness of the proposed method against baseline methods.</div>
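As a rough illustration of the geometry involved (not the authors' LUT construction), the sketch below down-samples integer voxel coordinates by a fractional factor s and enumerates, for each down-sampled voxel, the candidate full-resolution positions that may have generated it:

```python
import numpy as np

def fractional_downsample(V, s):
    """Down-sample integer voxel coordinates V (N x 3) by fractional factor s."""
    return np.unique(np.floor(V / s).astype(int), axis=0)

def candidate_children(Vd, s):
    """For each down-sampled voxel v, list the full-resolution positions u
    with floor(u / s) == v, i.e. the positions that may have generated v."""
    out = []
    for v in Vd:
        lo = np.ceil(v * s).astype(int)        # smallest u with floor(u/s) == v
        hi = np.ceil((v + 1) * s).astype(int)  # exclusive upper bound
        grid = np.stack(np.meshgrid(*[np.arange(l, h) for l, h in zip(lo, hi)],
                                    indexing="ij"), axis=-1).reshape(-1, 3)
        out.append(grid)
    return out
```

For s = 1.5, one down-sampled voxel can have eight candidate children while its neighbour has only one, which is exactly the irregular children distribution the method must account for.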


Author(s):  
P. Hu ◽  
Y. Liu ◽  
M. Tian ◽  
M. Hou

Abstract. Plane segmentation from point clouds is an important step in deriving various types of geo-information related to human activities. In this paper, we present a new approach that accurately segments planar primitives simultaneously by casting the task as a best-matching problem between over-segmented super-voxels and 3D plane models. The super-voxels and their adjacency graph are first derived from the input point cloud as over-segmented small patches. Initial 3D plane models are then enriched by fitting planes to the centroids of randomly sampled super-voxels and by refining the grouped planar super-voxels with structured-scene priors (e.g. orthogonality, parallelism), while the adjacency graph is updated along with the planar clustering. The final super-voxel-to-plane assignment is formulated as an energy minimization over the candidate planes, the initial super-voxels, and the improved adjacency graph, and is optimized to segment multiple consistent planar surfaces in the scene simultaneously. The proposed algorithms are implemented, and three types of point clouds differing in characteristics (e.g. point density, complexity) are tested to validate the efficiency and effectiveness of our segmentation method.
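A minimal sketch of two ingredients mentioned above, assuming super-voxels are given as point arrays: least-squares plane fitting via SVD, and a greedy nearest-plane assignment used here as a simplified stand-in for the paper's energy minimization:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points: returns (unit normal, centroid)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return vt[-1], c   # normal = right singular vector of smallest singular value

def assign_to_planes(supervoxel_centroids, planes):
    """Greedy stand-in for the paper's energy minimization: label each
    super-voxel with the candidate plane of minimum point-to-plane distance."""
    labels = []
    for p in supervoxel_centroids:
        d = [abs(np.dot(n, p - c)) for n, c in planes]
        labels.append(int(np.argmin(d)))
    return labels
```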


2018 ◽  
Vol 12 (3) ◽  
pp. 356-368 ◽  
Author(s):  
Nao Hidaka ◽  
Takashi Michikawa ◽  
Ali Motamedi ◽  
Nobuyoshi Yabuki ◽  
Tomohiro Fukuda ◽  
...  

This paper proposes a novel method for polygonizing scanned point cloud data of tunnels into feature-preserving polygons to be used for maintenance purposes. The proposed method extracts 2D cross-sections of the structures and polygonizes them with a lofting operation. To extract valid cross-sections from the input point cloud, center lines and planes orthogonal to them are used. The center lines of the point cloud are extracted using local symmetry analysis. In addition, this research segments a tunnel point cloud into lining concrete, road, and other facilities. The results of applying the proposed method to point clouds of three types of tunnels are demonstrated, and the advantages and limitations of the proposed method are discussed.
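The lofting operation can be sketched as follows, assuming each extracted cross-section is a closed polyline of K ordered 3D points; consecutive sections are stitched into triangle faces:

```python
def loft(sections):
    """Connect consecutive cross-sections (each a list of K 3-D points in
    matching order) into triangle faces, as in a lofting operation."""
    faces = []
    for s in range(len(sections) - 1):
        a, b = sections[s], sections[s + 1]
        K = len(a)
        for i in range(K):
            j = (i + 1) % K          # wrap around the closed cross-section
            faces.append((a[i], a[j], b[i]))
            faces.append((a[j], b[j], b[i]))
    return faces
```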


Author(s):  
A. V. Vo ◽  
C. N. Lokugam Hewage ◽  
N. A. Le Khac ◽  
M. Bertolotto ◽  
D. Laefer

Abstract. Point density is an important property that dictates the usability of a point cloud data set. This paper introduces an efficient, scalable, parallel algorithm for computing the local point density index, a sophisticated point cloud density metric. Computing the local point density index is non-trivial, because the computation involves a neighbour search for each individual point in the potentially large input point cloud. Most existing algorithms and software are incapable of computing point density at scale. The algorithm introduced in this paper therefore aims to provide the computational efficiency and scalability needed to consider this factor in large, modern point clouds such as those collected in national or regional scans. The proposed algorithm is composed of two stages. In stage 1, a point-level, parallel processing step partitions an unstructured input point cloud into partially overlapping, buffered tiles. A buffer is provided around each tile so that the data partitioning does not introduce spatial discontinuity into the final results. In stage 2, the buffered tiles are distributed to different processors for computing the local point density index in parallel. That tile-level parallel processing step uses a conventional algorithm with an R-tree data structure. While straightforward, the proposed algorithm is efficient and particularly suitable for processing large point clouds. Experiments conducted using a 1.4 billion point data set acquired over part of Dublin, Ireland demonstrated an efficiency factor of up to 14.8/16. More specifically, the computational time was reduced by 14.8 times when the number of processes (i.e. executors) increased by 16 times. Computing the local point density index for the 1.4 billion point data set took just over 5 minutes with 16 executors and 8 cores per executor. The reduction in computational time was nearly 70 times compared to the 6 hours required without parallelism.
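Stage 1 (buffered tiling) can be sketched as follows; the tile size, buffer width, and 2D (x, y) partitioning are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def buffered_tiles(points, tile_size, buffer):
    """Assign each point (row of an N x 2 array) to its home tile plus any
    tile whose buffered extent also contains it, so that per-tile neighbour
    searches near tile borders see no spatial discontinuity."""
    tiles = {}
    for p in points:
        lo = np.floor((p - buffer) / tile_size).astype(int)
        hi = np.floor((p + buffer) / tile_size).astype(int)
        for ix in range(lo[0], hi[0] + 1):
            for iy in range(lo[1], hi[1] + 1):
                tiles.setdefault((ix, iy), []).append(tuple(p))
    return tiles
```

A point deep inside a tile lands in that tile only; a point within the buffer distance of a border is duplicated into each adjacent tile.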


Author(s):  
K. Oda ◽  
S. Hattori ◽  
H. Saeki ◽  
T. Takayama ◽  
R. Honma

This paper proposes a quality-assessment method for point clouds created by SfM (Structure-from-Motion) software. SfM software has recently become popular for creating point clouds. Point clouds created by SfM software may appear correct, but in many cases the result does not have the correct scale, or does not have correct coordinates in the reference coordinate system, and in these cases it is hard to evaluate the quality of the point cloud. To evaluate this correctness, we propose using the difference between point clouds generated from different image sources. If the shapes of the point clouds from the different sources are correct, the two shapes should be almost the same. To compare two or more point cloud shapes, the iterative closest point (ICP) algorithm is applied: transformation parameters (rotation and translation) are iteratively calculated so as to minimize the sum of squared distances. This paper describes the evaluation procedure and some test results.
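A compact sketch of the ICP comparison described above, using brute-force nearest neighbours and the SVD-based (Kabsch) solution for the rigid transform; a production implementation would use a spatial index for the matching step:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP step: match each source point to its nearest destination
    point, then solve for the rigid transform (R, t) minimising the sum
    of squared distances (Kabsch / SVD solution)."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)   # brute-force NN
    matched = dst[d2.argmin(axis=1)]
    cs, cm = src.mean(0), matched.mean(0)
    H = (src - cs).T @ (matched - cm)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cm - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Align src to dst; returns the transformed src and the final RMS
    nearest-neighbour distance (the shape-difference measure)."""
    for _ in range(iters):
        R, t = icp_step(src, dst)
        src = src @ R.T + t
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    return src, float(np.sqrt(d2.min(axis=1).mean()))
```

The final RMS distance is the kind of residual that can flag scale or georeferencing problems between the two reconstructions.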


2021 ◽  
Vol 13 (18) ◽  
pp. 3665
Author(s):  
Jaehoon Jung ◽  
Jaebin Lee ◽  
Christopher E. Parrish

A current hindrance to the scientific use of available bathymetric lidar point clouds is the frequent lack of accurate and thorough segmentation of seafloor points. Furthermore, scientific end-users typically lack access to waveforms, trajectories, and other upstream data, and also do not have the time or expertise to perform extensive manual point cloud editing. To address these needs, this study seeks to develop and test a novel clustering approach to seafloor segmentation that solely uses georeferenced point clouds. The proposed approach does not make any assumptions regarding the statistical distribution of points in the input point cloud. Instead, the approach organizes the point cloud into an inverse histogram and finds a gap that best separates the seafloor using the proposed peak-detection method. The proposed approach is evaluated with datasets acquired in Florida with a Riegl VQ-880-G bathymetric lidar system. The parameters are optimized through a sensitivity analysis with a point-wise comparison between the extracted seafloor and ground truth. With optimized parameters, the proposed approach achieved F1-scores of 98.14–98.77%, which outperforms three popular existing methods. Further, we compared the seafloor points with Reson 8125 MBES hydrographic survey data. The results indicate that seafloor points were detected successfully, with vertical errors of −0.190 ± 0.132 m and −0.185 ± 0.119 m (μ ± σ) for the two test datasets.
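As a simplified stand-in for the proposed inverse-histogram peak detection (the exact formulation is in the paper), the sketch below histograms point elevations and splits the cloud at the widest run of empty bins, keeping the lower mode as the seafloor:

```python
import numpy as np

def seafloor_by_gap(z, bin_size=0.1):
    """Split elevations z at the widest empty gap in their histogram and
    return a boolean mask marking the lower (seafloor) mode."""
    edges = np.arange(z.min(), z.max() + bin_size, bin_size)
    counts, edges = np.histogram(z, bins=edges)
    empty = counts == 0
    # find the longest run of consecutive empty bins
    best_len, best_start, run, start = 0, None, 0, 0
    for i, e in enumerate(empty):
        if e:
            if run == 0:
                start = i
            run += 1
            if run > best_len:
                best_len, best_start = run, start
        else:
            run = 0
    if best_start is None:
        return z < np.median(z)          # no gap found: crude fallback
    cut = edges[best_start + best_len // 2]   # middle of the widest gap
    return z < cut
```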


Author(s):  
N. Hidaka ◽  
T. Michikawa ◽  
N. Yabuki ◽  
T. Fukuda ◽  
A. Motamedi

Existing civil structures must be maintained in order to ensure their expected lifelong serviceability, and careful rehabilitation and maintenance planning plays a significant role in that effort. Recently, construction information modelling (CIM) techniques, such as product models, are increasingly being used to facilitate structure maintenance. Laser scanning systems can provide point cloud data that are used to produce highly accurate and dense representations of civil structures. However, while numerous methods exist for creating a single surface, part decomposition is required in order to create product models consisting of more than one part. This research aims at the development of a surface reconstruction system that uses point cloud data efficiently in order to create complete product models. It proposes applying local shape matching to the input point clouds in order to define a set of representative parts. These representative parts are then polygonized and copied to the locations where the same types of parts exist. The results of our experiments show that the proposed method can efficiently create product models from input point cloud data.
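The grouping of repeated parts might be sketched as follows, under strong simplifying assumptions (patches given with matching point order, translation-invariant comparison only); the first member of each group serves as the representative part:

```python
import numpy as np

def patch_rms(a, b):
    """RMS distance between two centred patches with matching point order
    (a deliberately simplified stand-in for local shape matching)."""
    a = a - a.mean(0)
    b = b - b.mean(0)
    return float(np.sqrt(((a - b) ** 2).sum(-1).mean()))

def group_parts(patches, tol):
    """Greedily group patches whose shapes match within tol; the first
    index in each group is the representative part to polygonize and copy."""
    groups = []
    for i, p in enumerate(patches):
        for g in groups:
            if patch_rms(patches[g[0]], p) < tol:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups
```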


Author(s):  
J. Balado ◽  
L. Díaz-Vilariño ◽  
P. Arias ◽  
E. Frías

<p><strong>Abstract.</strong> Increasing building complexity can make orientation difficult, especially for people with reduced mobility. This work presents a methodology to enable the direct use of indoor point clouds as navigable models for pathfinding. The input point cloud is classified into horizontal and vertical elements according to the inclination of each point with respect to its n neighbouring points. Points belonging to the main floor are detected by histogram analysis. Floors at other heights and stairs are detected by analysing their proximity to the detected main floor. Point cloud regions classified as floor are then rasterized to delimit the navigable surface, and occlusions are corrected by applying morphological operations, assuming planarity and taking the existence of obstacles into account. Finally, the point cloud of the navigable floor is downsampled and structured into a grid, whose remaining points become the nodes of the navigable indoor graph. The methodology has been tested on two real case studies provided by the ISPRS benchmark on indoor modelling. A pathfinding algorithm is applied to generate routes and to verify the usability of the generated graphs. The generated models and routes are consistent with the selected motor skills, because the routes avoid obstacles and can cross areas of non-acquired data. The proposed methodology allows point clouds to be used directly as navigation graphs, without an intermediate phase of generating a parametric surface model.</p>
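The final steps, structuring the navigable floor into a grid and searching it, can be sketched as below; the cell size, 4-connectivity, and unweighted BFS are illustrative assumptions, not the paper's exact pathfinding algorithm:

```python
import numpy as np
from collections import deque

def floor_to_grid(points, cell):
    """Rasterize navigable-floor points (N x 2) into a set of occupied cells."""
    return {tuple(c) for c in np.floor(points / cell).astype(int)}

def bfs_path(cells, start, goal):
    """Shortest 4-connected route over the navigable cells (unweighted BFS)."""
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        c = frontier.popleft()
        if c == goal:
            path = []
            while c is not None:       # walk predecessors back to start
                path.append(c)
                c = prev[c]
            return path[::-1]
        x, y = c
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if n in cells and n not in prev:
                prev[n] = c
                frontier.append(n)
    return None
```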


Author(s):  
E. Frías ◽  
J. Balado ◽  
L. Díaz-Vilariño ◽  
H. Lorenzo

Abstract. Room segmentation is a matter of ongoing interest for indoor navigation and reconstruction in robotics and AEC. While in robotics the room segmentation problem has typically been addressed on 2D floorplans, interest in enriched 3D models providing more detailed representations of indoor environments has been growing in AEC. Point clouds offer more realistic and up-to-date representations, but room segmentation from point clouds remains a challenging topic. This work presents a method to segment a point cloud into rooms based on 3D mathematical morphological operations. First, the input point cloud is voxelized and the indoor empty voxels are extracted with the CropHull algorithm. Then, a morphological erosion is performed on the 3D image of indoor empty voxels to break the connectivity between voxels belonging to adjacent rooms. The voxels remaining after erosion are clustered by a 3D connected-components algorithm so that each room is individualized. Room morphology is then recovered by an individual 3D morphological dilation of the clustered voxels. Finally, unlabelled occupied voxels are classified according to their proximity to the labelled empty voxels after the dilation operation. The method was tested on two real cases, and segmentation performance was evaluated with encouraging results.
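A minimal numpy sketch of the core morphological idea (erosion to break inter-room connectivity through doorways, then connected-component labelling); a box structuring element, 6-connectivity, and an empty one-voxel border are assumptions of this sketch:

```python
import numpy as np

def erode(vol, r=1):
    """Binary 3-D erosion with a (2r+1)^3 box element, done separably with
    array shifts; assumes an empty one-voxel border since np.roll wraps."""
    out = vol.copy()
    for axis in range(3):
        acc = out.copy()
        for shift in range(-r, r + 1):
            acc &= np.roll(out, shift, axis=axis)
        out = acc
    return out

def components(vol):
    """Label the 6-connected components of a binary volume; returns the
    label volume and the number of components (here, individual rooms)."""
    labels = np.zeros(vol.shape, int)
    count = 0
    for seed in zip(*np.nonzero(vol)):
        if labels[seed]:
            continue
        count += 1
        stack = [seed]
        labels[seed] = count
        while stack:
            x, y, z = stack.pop()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (x + dx, y + dy, z + dz)
                if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                        and vol[n] and not labels[n]:
                    labels[n] = count
                    stack.append(n)
    return labels, count
```

Two empty-space blocks joined by a one-voxel corridor form a single component before erosion and two components after it, which is the behaviour the room segmentation relies on.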


2020 ◽  
Vol 1 (1) ◽  
pp. 14-20
Author(s):  
Jesus Balado Frias ◽  
Lucía Díaz-Vilariño ◽  
Ernesto Frías ◽  
Elena González

Cities are becoming more pedestrian-friendly, reducing traffic and promoting physical activity and walking. However, prolonged exposure to the sun can cause sunburn and skin problems, so minimizing sun exposure while travelling is especially relevant at certain latitudes and in the summer months. This paper proposes a method for modelling urban contours and generating pedestrian maps with the locations of shaded areas and accessibility barriers. The proposed method uses as input a point cloud of an urban environment acquired with Mobile Laser Scanning. First, the input point cloud is segmented into ground points, obstacle points, and points causing shadows. Then, the three segmented point clouds are rasterized and the corresponding images are combined to obtain the navigable ground and the shaded areas. Finally, a navigation map for pedestrians is generated from the navigable ground. To check the usefulness of this navigation map, a pathfinding algorithm is applied. The results show correct generation of the navigable ground, and routes that prioritize shaded areas. Depending on the weighting between sunny and shaded areas, the obtained routes differ in distance travelled and sun exposure. The proposed method is sensitive to the existence of obstacles and noise in the point clouds.
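The shade-weighted routing can be sketched as Dijkstra over a raster whose per-cell cost encodes sun exposure; the specific cost values (1 for shade, larger for sun) and grid layout below are illustrative assumptions:

```python
import heapq

def shaded_route(cost, start, goal):
    """Dijkstra over a raster of per-cell traversal costs (e.g. 1 for
    shade, a larger weight for sun); None marks non-navigable cells.
    Returns (path as list of (row, col) cells, total cost)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0}
    prev = {start: None}
    pq = [(0, start)]
    while pq:
        d, c = heapq.heappop(pq)
        if d > dist.get(c, float("inf")):
            continue                      # stale queue entry
        if c == goal:                     # first pop of the goal is optimal
            path = []
            while c is not None:
                path.append(c)
                c = prev[c]
            return path[::-1], d
        x, y = c
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= n[0] < rows and 0 <= n[1] < cols and cost[n[0]][n[1]] is not None:
                nd = d + cost[n[0]][n[1]]
                if nd < dist.get(n, float("inf")):
                    dist[n], prev[n] = nd, c
                    heapq.heappush(pq, (nd, n))
    return None, float("inf")
```

Raising the sun weight makes the planner accept longer detours through shade, which is the distance/exposure trade-off reported in the results.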

