POINT CLOUDS TO DIRECT INDOOR PEDESTRIAN PATHFINDING

Author(s):  
J. Balado ◽  
L. Díaz-Vilariño ◽  
P. Arias ◽  
E. Frías

Abstract. Increases in building complexity can cause orientation difficulties for people, especially those with reduced mobility. This work presents a methodology that enables the direct use of indoor point clouds as navigable models for pathfinding. The input point cloud is classified into horizontal and vertical elements according to the inclination of each point with respect to its n neighbouring points. Points belonging to the main floor are detected by histogram analysis. Other floors at different heights, and stairs, are detected by analysing their proximity to the detected main floor. Point cloud regions classified as floor are then rasterized to delimit the navigable surface, and occlusions are corrected by applying morphological operations, assuming planarity and taking the existence of obstacles into account. Finally, the navigable-floor point cloud is downsampled and structured in a grid; the remaining points are the nodes of the navigable indoor graph. The methodology has been tested on two real case studies provided by the ISPRS benchmark on indoor modelling. A pathfinding algorithm is applied to generate routes and to verify the usability of the generated graphs. The generated models and routes are coherent with the selected motor skills: routes avoid obstacles and can cross areas of non-acquired data. The proposed methodology allows point clouds to be used directly as navigation graphs, without an intermediate phase of generating a parametric surface model.
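The rasterize-then-graph step lends itself to a short illustration. The sketch below assumes a `floor_pts` array (N x 3) of points already classified as navigable floor and a hypothetical cell size matched to the pedestrian footprint; it builds an 8-connected grid graph and runs A*, and is not the authors' implementation.

```python
import numpy as np
import networkx as nx

def grid_graph_from_floor(floor_pts, cell=0.2):
    """Rasterize floor points into cells and connect adjacent occupied cells."""
    ij = np.floor(floor_pts[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                        # shift indices to start at 0
    occ = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    occ[ij[:, 0], ij[:, 1]] = True              # cell is navigable if it holds floor points
    g = nx.Graph()
    steps = [(1, 0), (0, 1), (1, 1), (1, -1)]   # forward half of 8-connectivity
    for i, j in zip(*np.nonzero(occ)):
        g.add_node((i, j))
        for di, dj in steps:
            ni, nj = i + di, j + dj
            if 0 <= nj < occ.shape[1] and ni < occ.shape[0] and occ[ni, nj]:
                g.add_edge((i, j), (ni, nj), weight=float(np.hypot(di, dj)))
    return g

# Example route between two occupied cells:
# path = nx.astar_path(g, start, goal,
#                      heuristic=lambda a, b: np.hypot(a[0]-b[0], a[1]-b[1]))
```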

Author(s):  
E. Frías ◽  
J. Balado ◽  
L. Díaz-Vilariño ◽  
H. Lorenzo

Abstract. Room segmentation is a matter of ongoing interest for indoor navigation and reconstruction in robotics and AEC. While in robotics the room segmentation problem has typically been addressed on 2D floorplans, interest in enriched 3D models providing a more detailed representation of indoors has been growing in AEC. Point clouds offer more realistic and up-to-date representations, but room segmentation from point clouds is still a challenging topic. This work presents a method to carry out point cloud segmentation into rooms based on 3D mathematical morphological operations. First, the input point cloud is voxelized and indoor empty voxels are extracted with the CropHull algorithm. Then, a morphological erosion is performed on the 3D image of indoor empty voxels in order to break the connectivity between voxels belonging to adjacent rooms. The voxels remaining after erosion are clustered by a 3D connected-components algorithm so that each room is individualized. Room morphology is retrieved by an individual 3D morphological dilation on the clustered voxels. Finally, unlabelled occupied voxels are classified according to their proximity to the labelled empty voxels after the dilation operation. The method was tested on two real cases and the segmentation performance was evaluated, with encouraging results.
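The erosion / connected-components / dilation sequence can be sketched with standard 3D morphology, assuming the indoor empty voxels have already been extracted into a boolean array `empty` (the CropHull step is omitted); this is an illustration, not the authors' code.

```python
import numpy as np
from scipy import ndimage

def segment_rooms(empty, erosion_iters=3):
    """Label rooms in a boolean 3D array of indoor empty voxels."""
    # Erode until narrow openings (doorways) stop connecting adjacent rooms.
    seed = ndimage.binary_erosion(empty, iterations=erosion_iters)
    # Each 3D connected component that survives erosion is one room seed.
    labels, n_rooms = ndimage.label(seed)
    # Dilate each seed back to its full extent, constrained to empty space;
    # ties at room boundaries resolve arbitrarily to the higher label.
    grown = labels.copy()
    while True:
        dilated = ndimage.grey_dilation(grown, size=(3, 3, 3))
        update = (grown == 0) & empty & (dilated > 0)
        if not update.any():
            return grown, n_rooms
        grown[update] = dilated[update]
```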


2020 ◽  
Vol 10 (4) ◽  
pp. 1235 ◽  
Author(s):  
Massimiliano Pepe ◽  
Domenica Costantino ◽  
Alfredo Restuccia Garofalo

The aim of this work is to identify an efficient pipeline for building HBIM (heritage building information modelling) and creating digital models to be used in structural analysis. To build accurate 3D models it is first necessary to perform a geomatics survey, i.e. a survey with active or passive sensors followed by adequate post-processing of the data. In this way, it is possible to obtain a 3D point cloud of the structure under investigation. The next step, known as "scan-to-BIM (building information modelling)", led to the creation of an appropriate methodology involving the use of Rhinoceros software and a few tools developed within this environment. Once the 3D model is obtained, the last step is the implementation of the structure in FEM (finite element method) and/or HBIM software. In this paper, two case studies involving structures belonging to the cultural heritage (CH) environment are analysed: a historical church and a masonry bridge. For both case studies, the different phases are described, involving the construction of the point cloud and, subsequently, of the 3D model. This model is suitable both for structural analysis and for parameterizing the rheological and geometric information of each single element of the structure.


Author(s):  
P. Hu ◽  
Y. Liu ◽  
M. Tian ◽  
M. Hou

Abstract. Plane segmentation from point clouds is an important step in deriving various types of geo-information related to human activities. In this paper, we present a new approach to accurately segment planar primitives simultaneously by transforming the task into a best-matching problem between over-segmented super-voxels and 3D plane models. The super-voxels and their adjacency graph are first derived from the input point cloud as over-segmented small patches. Initial 3D plane models are then enriched by fitting the centroids of randomly sampled super-voxels and by translating the grouped planar super-voxels according to structured scene priors (e.g. orthogonality, parallelism), while the adjacency graph is updated along with the planar clustering. To solve the final super-voxel-to-plane assignment problem, an energy minimization framework is constructed from the candidate planes, the initial super-voxels, and the improved adjacency graph, and optimized to segment multiple consistent planar surfaces in the scene simultaneously. The proposed algorithms are implemented, and three types of point clouds differing in characteristics (e.g. point density, complexity) are tested to validate the efficiency and effectiveness of our segmentation method.
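As an illustration of the assignment step, the sketch below casts the super-voxel-to-plane matching as a Potts-style energy (data term plus smoothness over the adjacency graph) and optimizes it with ICM, a much simpler optimizer than a graph-cut formulation; all names are ours, not from the paper.

```python
import numpy as np

def point_to_plane_cost(pts, plane):
    """Mean unsigned point-to-plane distance (data term)."""
    n, d = plane                                 # unit normal n, offset d
    return float(np.mean(np.abs(pts @ n + d)))

def icm_assign(sv_pts, planes, adj, lam=0.5, n_iter=10):
    """Assign each super-voxel to a candidate plane, minimizing
    data cost + lam * (number of disagreeing adjacent super-voxels)."""
    data = np.array([[point_to_plane_cost(p, pl) for pl in planes]
                     for p in sv_pts])           # data[i, k]: cost of plane k for sv i
    label = data.argmin(axis=1)                  # initialise with best data term
    for _ in range(n_iter):
        for i in range(len(sv_pts)):
            smooth = np.array([sum(k != label[j] for j in adj[i])
                               for k in range(len(planes))])
            label[i] = int((data[i] + lam * smooth).argmin())
    return label
```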


2021 ◽  
Author(s):  
Ricardo de Queiroz ◽  
DIOGO GARCIA ◽  
Tomas Borges

We present a method to super-resolve voxelized point clouds down-sampled by a fractional factor, using look-up tables (LUTs) constructed from self-similarities in the cloud's own down-sampled neighborhoods. Given a down-sampled point cloud geometry Vd and its corresponding fractional down-sampling factor s, the proposed method determines the set of positions that may have generated Vd, and estimates which of these positions were indeed occupied (super-resolution). Assuming that the geometry of a point cloud is approximately self-similar at different scales, LUTs relating down-sampled neighborhood configurations to children occupancy configurations can be estimated by further down-sampling the input point cloud to Vd2, and by taking into account the irregular children distribution derived from fractional down-sampling. For completeness, we also interpolate texture by averaging the colors of adjacent neighbors. We present extensive test results over different point clouds, showing the effectiveness of the proposed method against baseline methods.
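The self-similarity LUT idea can be illustrated at an integer down-sampling factor of 2 (the paper's contribution is the fractional-factor case, which this toy sketch deliberately does not reproduce). Voxels are tuples of integer coordinates, and all names are ours.

```python
import numpy as np
from collections import defaultdict
from itertools import product

def downsample(vox):
    """Occupied voxel set -> coarse voxel set at half resolution."""
    return {(x // 2, y // 2, z // 2) for (x, y, z) in vox}

def nbhd(coarse, v):
    """3x3x3 occupancy pattern of the neighborhood around voxel v."""
    x, y, z = v
    return tuple((x + dx, y + dy, z + dz) in coarse
                 for dx, dy, dz in product((-1, 0, 1), repeat=3))

def build_lut(vd):
    """Learn child-occupancy statistics from Vd and its own downsampling."""
    vd2 = downsample(vd)
    lut = defaultdict(lambda: np.zeros(8))
    for (x, y, z) in vd:
        parent = (x // 2, y // 2, z // 2)
        child = (x % 2) * 4 + (y % 2) * 2 + (z % 2)   # which of 8 children
        lut[nbhd(vd2, parent)][child] += 1
    return lut

def super_resolve(vd, lut, thresh=0.5):
    """Upsample Vd by 2, keeping children the LUT deems likely occupied."""
    out = set()
    for (x, y, z) in vd:
        votes = lut.get(nbhd(vd, (x, y, z)))          # self-similarity assumption
        for c in range(8):
            if votes is None or votes[c] / votes.max() >= thresh:
                out.add((2 * x + c // 4, 2 * y + (c // 2) % 2, 2 * z + c % 2))
    return out
```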


Author(s):  
Z. Hui ◽  
P. Cheng ◽  
L. Wang ◽  
Y. Xia ◽  
H. Hu ◽  
...  

Abstract. Denoising is a key pre-processing step for many airborne LiDAR point cloud applications. However, previous algorithms suffer from a number of problems that affect the quality of point cloud post-processing, such as DTM generation. In this paper, a novel automated denoising algorithm based on empirical mode decomposition is proposed to remove outliers from airborne LiDAR point clouds. Compared with traditional point cloud denoising algorithms, the proposed method detects outliers from a signal-processing perspective. Firstly, the airborne LiDAR point cloud is decomposed into a series of intrinsic mode functions with the help of morphological operations, which significantly decreases the computational complexity. By applying the OTSU algorithm to these intrinsic mode functions, noise-dominant components can be detected and filtered. Finally, outliers are detected automatically by comparing observed elevations with reconstructed elevations. Three datasets located in three different cities in China were used to verify the validity and robustness of the proposed method. The experimental results demonstrate that the proposed method removes both high and low outliers effectively across various terrain features while preserving useful ground details.
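Only the final observed-versus-reconstructed comparison is easy to sketch compactly. Below, a grey-scale morphological opening on a rasterized minimum-elevation grid stands in for the EMD-based reconstruction, and the threshold is fixed rather than derived by OTSU; illustrative only, not the authors' algorithm.

```python
import numpy as np
from scipy import ndimage

def flag_high_outliers(pts, cell=1.0, win=5, tol=3.0):
    """Flag points far above a morphologically reconstructed surface."""
    ij = np.floor(pts[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    z = np.full(ij.max(axis=0) + 1, np.inf)
    np.minimum.at(z, (ij[:, 0], ij[:, 1]), pts[:, 2])   # min elevation per cell
    z[np.isinf(z)] = np.max(z[~np.isinf(z)])            # crude fill of empty cells
    recon = ndimage.grey_opening(z, size=(win, win))    # suppresses upward spikes
    resid = pts[:, 2] - recon[ij[:, 0], ij[:, 1]]
    return resid > tol      # low outliers would need the dual closing step
```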


2018 ◽  
Vol 12 (3) ◽  
pp. 356-368 ◽  
Author(s):  
Nao Hidaka ◽  
Takashi Michikawa ◽  
Ali Motamedi ◽  
Nobuyoshi Yabuki ◽  
Tomohiro Fukuda ◽  
...  

This paper proposes a novel method for polygonizing scanned point cloud data of tunnels into feature-preserving polygons to be used for maintenance purposes. The proposed method uses 2D cross-sections of the structures and polygonizes them by a lofting operation. In order to extract valid cross-sections from the input point cloud, center lines and orthogonal planes are used; the center lines of the point cloud are extracted using local symmetry analysis. In addition, this research segments the point cloud of a tunnel into lining concrete, road, and other facilities. The results of applying the proposed method to the point clouds of three types of tunnels are demonstrated, and the advantages and limitations of the proposed method are discussed.
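Extracting one cross-section once the center line is known reduces to a slab query, as in this minimal sketch, where `center` and `tangent` (unit vector) are assumed outputs of the local symmetry analysis:

```python
import numpy as np

def cross_section(pts, center, tangent, half_width=0.05):
    """Points within a thin slab around the plane through `center`
    whose normal is the local center-line `tangent`."""
    dist = (pts - center) @ tangent     # signed distance to the plane
    return pts[np.abs(dist) <= half_width]
```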


Author(s):  
L. Barazzetti ◽  
M. Previtali ◽  
F. Roncoroni

Abstract. This paper presents a strategy to measure verticality deviations (i.e. inclination) of tall chimneys. The method uses laser scanning point clouds acquired around the chimney to estimate vertical deviations with millimeter-level precision. Horizontal slices derived from the point cloud allow us to inspect the geometry of the chimney at different heights. Two methods for estimating the center at different levels are illustrated and discussed. The first is a manual approach using traditional CAD software, in which circle fitting is carried out manually on point cloud slices. The second method is instead automatic and provides not only the center coordinates but also statistics to evaluate metric quality. Two case studies are used to explain the procedures for the digital survey and the measurement of vertical deviations: the chimney in the old slaughterhouse of Piacenza (Italy), and the chimney on the Leonardo Campus of Politecnico di Milano (Italy).
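The automatic variant can be illustrated with a standard algebraic (Kasa) least-squares circle fit on a single horizontal slice, returning the center, radius and an RMS residual as a quality statistic; the authors' exact fitting procedure may differ.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic least-squares circle fit to an (N, 2) slice of points."""
    x, y = xy[:, 0], xy[:, 1]
    # Linearized model: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    rms = np.sqrt(np.mean((np.hypot(x - cx, y - cy) - r) ** 2))
    return (cx, cy), r, rms
```

Stacking the fitted centers over height then yields the verticality deviation profile.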


Author(s):  
A. V. Vo ◽  
C. N. Lokugam Hewage ◽  
N. A. Le Khac ◽  
M. Bertolotto ◽  
D. Laefer

Abstract. Point density is an important property that dictates the usability of a point cloud data set. This paper introduces an efficient, scalable, parallel algorithm for computing the local point density index, a sophisticated point cloud density metric. Computing the local point density index is non-trivial, because the computation involves a neighbour search for each individual point in the potentially large input point cloud. Most existing algorithms and software are incapable of computing point density at scale. The algorithm introduced in this paper therefore aims to provide the computational efficiency and scalability needed to consider this factor in large, modern point clouds such as those collected in national or regional scans. The proposed algorithm is composed of two stages. In stage 1, a point-level, parallel processing step partitions an unstructured input point cloud into partially overlapping, buffered tiles. A buffer is provided around each tile so that the data partitioning does not introduce spatial discontinuity into the final results. In stage 2, the buffered tiles are distributed to different processors for computing the local point density index in parallel. That tile-level parallel processing step is performed using a conventional algorithm with an R-tree data structure. While straightforward, the proposed algorithm is efficient and particularly suitable for processing large point clouds. Experiments conducted using a 1.4 billion point data set acquired over part of Dublin, Ireland demonstrated an efficiency factor of up to 14.8/16: the computational time was reduced by 14.8 times when the number of processes (i.e. executors) increased by 16 times. Computing the local point density index for the 1.4 billion point data set took just over 5 minutes with 16 executors and 8 cores per executor, a reduction in computational time of nearly 70 times compared to the 6 hours required without parallelism.
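The stage-2 kernel, stripped of tiling and distribution, amounts to a fixed-radius neighbour count divided by the volume of the search sphere. A single-process sketch with a KD-tree (the paper uses an R-tree and distributes buffered tiles across executors):

```python
import numpy as np
from scipy.spatial import cKDTree

def local_point_density(pts, r=1.0):
    """Per-point density: neighbours within radius r over sphere volume."""
    tree = cKDTree(pts)
    counts = np.array(tree.query_ball_point(pts, r, return_length=True))
    volume = 4.0 / 3.0 * np.pi * r**3
    return counts / volume          # points per cubic unit, per point
```

Note that in the parallel setting the tile buffer must be at least r wide so that neighbourhoods near tile edges remain complete.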


Author(s):  
M. Capone ◽  
E. Lanzara

Abstract. This contribution is part of research that aims to address the problems of knowledge, interpretation and documentation of coffered dome geometry. The main question is to define the relationships between the coffer shape, the layout used to distribute the coffers on the dome surface, and the different kinds of surface. With regard to coffered domes, we have analyzed the methods illustrated by Francesco Milizia, Giuseppe Vannini and some historical surveys. We have grouped coffered domes in relation to grid geometry and coffer shape, and we have defined three different ways of distributing the coffers in relation to different grid layouts: a grid composed of 2D lines (meridians and parallels), a grid composed of 3D lines on the surface (a lattice of rhumb lines), or coffer distribution between ribs. We have analyzed each of them and defined algorithmic models for spherical domes. The main goal of our research is to study what changes in non-spherical domes, such as polycentric, ellipsoidal or ovoidal domes, generated using networks of curves. We have compared computational models based on treatise rules with particular case studies. This comparison allows us to carry out a critical analysis based on geometric rules. From a methodological point of view, we have built a parametric model able to connect the different processes using the same parameters. By comparing this model with point clouds, it is possible to evaluate analogies or identify new rules that can be used to develop a more complex parametric model based on surveys.
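Two of the three grid layouts are easy to state computationally for a unit hemisphere: coffer centers on a meridian/parallel grid, and points along a rhumb line, which crosses every meridian at a constant bearing. Parameter names and the latitude band are ours, chosen for illustration.

```python
import numpy as np

def meridian_parallel_grid(n_mer=16, n_par=6):
    """Coffer centers at crossings of meridians and parallels (unit sphere)."""
    lon = np.linspace(0, 2 * np.pi, n_mer, endpoint=False)
    lat = np.linspace(np.pi / 12, np.pi / 2.2, n_par)  # band between springing and oculus
    lon, lat = np.meshgrid(lon, lat)
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)

def rhumb_line(bearing=np.pi / 4, n=50):
    """Points along a loxodrome: longitude grows with the Mercator
    ordinate of latitude at a constant bearing from the meridian."""
    lat = np.linspace(np.pi / 12, np.pi / 2.2, n)
    lon = np.tan(bearing) * np.log(np.tan(np.pi / 4 + lat / 2))
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)
```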

