AUTOMATIC HERITAGE BUILDING POINT CLOUD SEGMENTATION AND CLASSIFICATION USING GEOMETRICAL RULES

Author(s): A. Murtiyoso, P. Grussenmeyer

Abstract. The segmentation of a point cloud is an important step in the 3D modelling process of heritage structures. This holds at many scale levels, including the segmentation, identification, and classification of architectural elements from the point cloud of a building. In this regard, historical buildings often present complex elements which lengthen the 3D modelling process when it is performed manually. The aim of this paper is to explore approaches based on certain common geometric rules in order to segment, identify, and classify point clouds into architectural elements. In particular, the detection of attics and structural supports (i.e. columns and piers) is addressed. Results show that the developed algorithm manages to detect supports in three separate data sets representing three different types of architecture. The algorithm also managed to identify the type of support and divide them into two groups: columns and piers. Overall, the developed method provides a fast and simple approach to classifying point clouds automatically into several classes, with a mean success rate of 81.61% and a median success rate of 85.61% for the three tested data sets.
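To make the kind of geometric rule used here concrete, the sketch below (illustrative only, not the authors' implementation; function name and thresholds are assumptions) labels a segmented support cluster as a column or a pier from the circularity of its mid-height cross section:

```python
import numpy as np

def classify_support(points, slice_height=0.2, circularity_tol=0.15):
    """Label one support cluster as 'column' or 'pier' (sketch).

    Rule of thumb: a column has a near-circular horizontal cross
    section, a pier an elongated one. `points` is an (N, 3) array
    for a single support; thresholds are illustrative.
    """
    z_mid = points[:, 2].mean()
    # Take a thin horizontal slice around mid-height.
    band = points[np.abs(points[:, 2] - z_mid) < slice_height / 2]
    xy = band[:, :2] - band[:, :2].mean(axis=0)
    # Radial spread: a circle has a near-constant radius.
    radii = np.linalg.norm(xy, axis=1)
    spread = radii.std() / radii.mean()
    return "column" if spread < circularity_tol else "pier"
```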

Author(s): M. Capone, E. Lanzara

Abstract. This paper presents part of a broader research project on ribbed vaults. The main goal is to generate a library of parametric objects for ribbed vaults, suitable for HBIM systems, for structural analysis, and for Cultural Heritage dissemination. Starting from the study of treatises, we analyzed the different classification systems and the terminology used for ribbed vault components in different languages, especially English, French, Spanish and Italian, with the aim of improving multilingual vocabularies. We defined an experimental workflow to generate a library of ribbed vaults based on the geometric rules from the treatises and on a controlled vocabulary; comparing these 3D models with point clouds allows us to identify the rule used, or to define a new rule, and therefore to build complex parametric models based on reality-based surveys. We are improving our parametric model using different geometric rules from Spanish, French and English manuals. The same parametric model can also generate the reality-based model; in this case, the input data are the rib geometries extracted from the point cloud. We use a generative tool to analyze the curves from the point cloud and to draw the borders. We are going to test our tool on several case studies for the indexing of historical architectural elements, for geometry reconstruction in an HBIM environment, and for point cloud segmentation in a DL process.
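The paper's library itself is not reproduced here; as a hedged illustration of a treatise-style geometric rule, the sketch below generates a two-centred (pointed) rib profile parametrically. The function name and parameters are invented for the example:

```python
import numpy as np

def pointed_arch(span, offset, n=50):
    """Two-centred pointed arch rib in the XZ plane (treatise-style rule).

    The left half is a circular arc centred at (offset, 0) that passes
    through the left springing (-span/2, 0); the right half mirrors it.
    offset = 0 degenerates to a round arch. Returns a (2n, 2) polyline.
    """
    half = span / 2.0
    c, r = offset, half + offset
    theta = np.linspace(np.pi, np.arccos(-c / r), n)   # springing -> apex
    left = np.column_stack([c + r * np.cos(theta), r * np.sin(theta)])
    right = left[::-1] * np.array([-1.0, 1.0])         # mirror about x = 0
    return np.vstack([left, right])

# Example: a 4 m span rib whose arc centres are offset by 1 m.
rib = pointed_arch(span=4.0, offset=1.0)
```

Fitting such a parametric profile against rib curves extracted from a point cloud is what lets the rule in use be identified, or a new rule be defined.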


2019, Vol 9 (16), pp. 3273
Author(s): Wen-Chung Chang, Van-Toan Pham

This paper develops a registration architecture for estimating the relative pose, including the rotation and the translation, of an object with respect to a model in 3-D space, based on 3-D point clouds captured by a 3-D camera. In particular, it addresses the time-consuming nature of 3-D point cloud registration, which is critical for closed-loop industrial automated assembly systems that demand accurate pose estimation within a fixed time budget. Firstly, two different descriptors are developed to extract coarse and detailed features from the point cloud data sets, creating training data sets covering diversified orientations. Secondly, to guarantee fast pose estimation in fixed time, a novel registration architecture employing two consecutive convolutional neural network (CNN) models is proposed. After training, the proposed CNN architecture can estimate the rotation between the model point cloud and a data point cloud, followed by a translation estimate based on computing average values. Because the second CNN model covers a smaller range of orientation uncertainty than the full range covered by the first, it can estimate the orientation of the 3-D point cloud precisely. Finally, the performance of the proposed algorithm has been validated by experiments in comparison with baseline methods. These results show that the proposed algorithm significantly reduces the estimation time while maintaining high precision.
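The descriptors and network details are not given in the abstract; the placeholder PyTorch sketch below only illustrates the two-stage coarse-to-fine idea, with invented layer sizes and a deliberately simplified second stage:

```python
import torch
import torch.nn as nn

class RotationNet(nn.Module):
    """Placeholder regressor from a point-cloud descriptor to Euler angles."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 3))

    def forward(self, x):
        return self.net(x)

# Stage 1 is trained over the full orientation range; stage 2 only over
# the residual range left after stage 1, which is why it can be precise.
coarse, fine = RotationNet(), RotationNet()

def estimate_rotation(descriptor):
    r_coarse = coarse(descriptor)
    # In the paper's scheme the data would be re-aligned by r_coarse
    # before the second network; here both stages see the same input.
    r_fine = fine(descriptor)
    return r_coarse + r_fine

angles = estimate_rotation(torch.randn(1, 256))  # (1, 3) Euler angles
```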


Sensors, 2020, Vol 20 (8), pp. 2161
Author(s): Arnadi Murtiyoso, Pierre Grussenmeyer

3D heritage documentation has seen a surge in the past decade due to developments in reality-based 3D recording techniques. Several methods such as photogrammetry and laser scanning are becoming ubiquitous amongst architects, archaeologists, surveyors, and conservators. The main result of these methods is a 3D representation of the object in the form of point clouds. However, a solely geometric point cloud is often insufficient for further analysis, monitoring, and model prediction of the heritage object. The semantic annotation of point clouds remains an interesting research topic since it traditionally requires manual labeling and, therefore, a lot of time and resources. This paper proposes an automated pipeline to segment and classify multi-scalar point clouds of heritage objects, performing multi-level segmentation from the scale of a historical neighborhood down to that of architectural elements, specifically pillars and beams. The proposed workflow is an algorithmic approach in the form of a toolbox comprising various functions covering the semantic segmentation of large point clouds into smaller, more manageable, and semantically labeled clusters. The first part of the workflow explains the segmentation and semantic labeling of heritage complexes into individual buildings, while the second part discusses the use of the same toolbox to segment the resulting buildings further into architectural elements. The toolbox was tested on several historical buildings and showed promising results. The ultimate intention of the project is to assist manual point cloud labeling, especially given the large training-data requirements of machine-learning-based algorithms.
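The toolbox itself is not reproduced here; as a rough stand-in for its first, complex-to-buildings level, the sketch below splits a heritage-complex scan into building-sized clusters with Open3D's DBSCAN clustering (the file name and parameter values are assumptions):

```python
import numpy as np
import open3d as o3d

# Hypothetical input: a scan of a heritage complex.
pcd = o3d.io.read_point_cloud("complex.ply")

# Level 1: split the complex into building-sized clusters. DBSCAN here
# is a stand-in for the toolbox's own segmentation function.
labels = np.asarray(pcd.cluster_dbscan(eps=1.0, min_points=50))

buildings = [pcd.select_by_index(np.flatnonzero(labels == k).tolist())
             for k in range(labels.max() + 1)]

# Level 2 would re-run finer segmentation (pillars, beams, ...) inside
# each building cluster with the same toolbox.
```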


2019, Vol 11 (23), pp. 2727
Author(s): Ming Huang, Pengcheng Wei, Xianglei Liu

Plane segmentation is a basic yet important process in light detection and ranging (LiDAR) point cloud processing. Traditional point cloud plane segmentation algorithms are typically affected by the number of points and by noisy data, which results in slow segmentation and poor segmentation quality. Hence, an efficient encoding voxel-based segmentation (EVBS) algorithm based on a fast adjacent-voxel search is proposed in this study. First, a binary octree algorithm is proposed to construct and encode the voxels that serve as the segmentation objects, allowing voxel features to be computed quickly and accurately. Second, a voxel-based region growing algorithm is proposed to cluster the corresponding voxels into an initial point cloud segmentation, which improves the rationality of seed selection. Finally, a point-refinement method is proposed to solve the problem of under-segmentation in unlabeled voxels by judging the relationship between the points and the segmented plane. Experimental results demonstrate that the proposed algorithm outperforms traditional algorithms in terms of computation time, extraction accuracy, and recall rate.
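As a hedged sketch of the voxel-level idea (not the EVBS implementation; the binary octree encoding and the refinement step are omitted), the following grows regions across face-adjacent voxels whose PCA normals agree:

```python
import numpy as np
from collections import defaultdict, deque

def voxel_region_growing(points, voxel=0.1, angle_deg=10.0):
    """Voxel-level region growing (sketch; thresholds illustrative)."""
    # Bin points into voxels keyed by integer grid coordinates.
    vox = defaultdict(list)
    for p in points:
        vox[tuple((p // voxel).astype(int))].append(p)
    # Per-voxel PCA normal: smallest singular vector of centred points.
    normals = {}
    for key, pts in vox.items():
        pts = np.asarray(pts)
        if len(pts) < 3:
            continue
        _, _, vt = np.linalg.svd(pts - pts.mean(axis=0))
        normals[key] = vt[-1]
    cos_t = np.cos(np.radians(angle_deg))
    nb = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    labels, region = {}, 0
    for seed in normals:
        if seed in labels:
            continue
        labels[seed] = region
        queue = deque([seed])
        while queue:  # breadth-first growth over the 6 face neighbours
            k = queue.popleft()
            for d in nb:
                n = (k[0]+d[0], k[1]+d[1], k[2]+d[2])
                if n in normals and n not in labels and \
                        abs(np.dot(normals[k], normals[n])) > cos_t:
                    labels[n] = region
                    queue.append(n)
        region += 1
    return labels  # voxel key -> region id
```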


Author(s): K. Liu, J. Boehm

Point cloud segmentation is a fundamental problem in point processing. Segmenting a point cloud fully automatically is very challenging due to the properties of point clouds as well as the differing requirements of distinct users. In this paper, an interactive segmentation method for point clouds is proposed. Only two strokes need to be drawn intuitively, indicating the target object and the background respectively. The drawn strokes are sparse and do not necessarily cover the whole object. Given the strokes, a weighted graph is built and the segmentation is formulated as a minimization problem. The problem is solved efficiently using the max-flow min-cut algorithm. In the experiments, mobile mapping data of a city area is utilized. The resulting segmentations demonstrate the efficiency of the method, which can potentially be applied to general point clouds.
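A minimal sketch of the two-stroke formulation, assuming a k-NN graph with Gaussian edge capacities and using networkx's min-cut in place of the authors' solver (all parameter values illustrative):

```python
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def two_stroke_cut(points, fg_idx, bg_idx, k=8, sigma=0.05):
    """Min-cut segmentation from two sparse strokes (sketch).

    `fg_idx`/`bg_idx` index the points touched by the object and
    background strokes; they need not cover the whole object.
    """
    tree = cKDTree(points)
    dist, nbr = tree.query(points, k=k + 1)   # column 0 is the point itself
    g = nx.DiGraph()
    for i in range(len(points)):
        for d, j in zip(dist[i, 1:], nbr[i, 1:]):
            w = float(np.exp(-(d / sigma) ** 2))  # capacity decays with gap
            g.add_edge(i, int(j), capacity=w)
            g.add_edge(int(j), i, capacity=w)
    big = float(len(points))                  # hard ties for stroke points
    for i in fg_idx:
        g.add_edge("S", int(i), capacity=big)
    for i in bg_idx:
        g.add_edge(int(i), "T", capacity=big)
    _, (src_side, _) = nx.minimum_cut(g, "S", "T")
    return np.array([i for i in src_side if i != "S"])  # object indices
```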


2020, Vol 37 (6), pp. 1019-1027
Author(s): Ali Saglam, Hasan B. Makineci, Ömer K. Baykan, Nurdan Akhan Baykan

Point cloud processing is a challenging field because the points in the clouds are three-dimensional, irregularly distributed signals. For this reason, the points in point clouds are usually resampled into regularly distributed voxels in the literature. Voxelization as a preprocessing step significantly accelerates surface segmentation. Geometric cues such as plane directions (normals) within the voxels are commonly used to segment the local surfaces. However, the sampling process may place a non-planar point group (patch), found mostly on edges and corners, into a single voxel. Such voxels can mislead the segmentation process. In this paper, we separate the non-planar patches into planar sub-patches using k-means clustering. The largest of the planar sub-patches replaces the normal and barycenter properties of the voxel with its own. We tested this process within a successful point cloud segmentation method and measured its effect on two point cloud segmentation datasets (Mosque and Train Station). The method increases the accuracy on the Mosque dataset from 83.84% to 87.86% and on the Train Station dataset from 85.36% to 87.07%.
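A compact sketch of the voxel-refinement idea, assuming an SVD planarity test and scikit-learn's k-means; the threshold and the choice of k are illustrative, not the paper's values:

```python
import numpy as np
from sklearn.cluster import KMeans

def refine_voxel(points, planarity_tol=0.01, k=2):
    """Return a voxel's representative (barycentre, normal), replacing
    them with those of its largest planar sub-patch when needed (sketch).
    """
    centred = points - points.mean(axis=0)
    # Smallest singular value ~ out-of-plane thickness of the patch.
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    if s[-1] / s.sum() < planarity_tol:          # already planar: keep as is
        return points.mean(axis=0), vt[-1]
    # Non-planar (edge/corner) patch: split with k-means, keep the
    # largest sub-patch as the voxel's representative.
    labels = KMeans(n_clusters=k, n_init=10).fit(points).labels_
    best = max(range(k), key=lambda c: np.sum(labels == c))
    sub = points[labels == best]
    _, _, vt = np.linalg.svd(sub - sub.mean(axis=0), full_matrices=False)
    return sub.mean(axis=0), vt[-1]
```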


Author(s): H.-J. Przybilla, M. Lindstaedt, T. Kersten

Abstract. The quality of image-based point clouds generated from images of UAV aerial flights is subject to various influencing factors. In addition to the performance of the sensor used (a digital camera), the image data format (e.g. TIF or JPG) is another important quality parameter. At the UAV test field at the former Zollern colliery (Dortmund, Germany), set up by Bochum University of Applied Sciences, a medium-format camera from Phase One (IXU 1000) was used to capture UAV image data in RAW format. This investigation evaluates the influence of the image data format on point clouds generated by a dense image matching process. Furthermore, the effects of the different data filters that are part of the evaluation programs were considered. Processing was carried out with two software packages, from Agisoft and Pix4D, on the basis of both the generated TIF and JPG data sets. The point clouds generated form the basis of the investigation presented in this contribution. Point cloud comparisons with reference data from terrestrial laser scanning were performed on selected test areas representing object-typical surfaces (with varying surface structures). In addition to these area-based comparisons, selected linear objects (profiles) were evaluated between the different data sets. Furthermore, height deviations from the dense point clouds were determined using check points. Differences could be detected between the results generated by the two software packages. The reasons for these differences are the filtering settings used for the generation of dense point clouds; it can also be assumed that there are differences in the point cloud generation algorithms implemented in the two packages. The slightly compressed JPG image data used for the point cloud generation did not show any significant changes in the quality of the examined point clouds compared to the uncompressed TIF data sets.
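The comparison tooling is not named in the abstract; a simple nearest-neighbour cloud-to-cloud check against the TLS reference, of the kind such area-based comparisons build on, could look like this (a sketch, not the study's actual procedure):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_reference(dense_cloud, tls_reference):
    """Nearest-neighbour distances from a dense-matching cloud to the
    TLS reference cloud; both are (N, 3) arrays. Simplified stand-in
    for the study's area-based comparisons.
    """
    tree = cKDTree(tls_reference)
    d, _ = tree.query(dense_cloud, k=1)
    return {"mean": d.mean(),
            "rmse": np.sqrt((d ** 2).mean()),
            "p95": np.percentile(d, 95)}
```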


Author(s): J. Román, P. M. Lerones, J. Llamas, E. Zalama, J. Gómez-García-Bermejo

Abstract. 3D laser scanning and photogrammetric 3D reconstruction generate point clouds from which the geometry of BIM models can be created. However, only a few methods do this automatically, and then only for specific architectural elements, never for heritage assets in their entirety. A novel procedure for the automatic recognition and parametrization of non-planar surfaces of immovable heritage assets is presented, using point clouds as raw input data. The methodology is able to detect the most relevant architectural features in a point cloud and their interdependences through the analysis of the intersections of related elements. The non-planar surfaces detected, mainly cylinders, are studied in relation to the neighbouring planar surfaces in the cloud, so that the boundaries of both the planar and the non-planar surfaces are accurately defined. The procedure is applied to the emblematic Castle of Torrelobatón, located in Valladolid (Spain), to allow the cataloguing of the required elements, as an illustrative example of European defensive architecture from the Middle Ages to the Renaissance. Results and conclusions are reported to evaluate the performance of this approach.
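One geometric fact that cylinder detectors typically exploit: every surface normal of a cylinder is perpendicular to its axis, so the axis falls out of the normals' scatter matrix. A minimal numpy sketch (not the paper's procedure):

```python
import numpy as np

def cylinder_axis(points, normals):
    """Estimate a cylinder's axis, centre and radius from per-point
    normals (sketch). `points` and `normals` are (N, 3) arrays.
    """
    # Normals of a cylinder span the plane perpendicular to the axis,
    # so the axis is the least-represented direction in their scatter.
    scatter = normals.T @ normals
    _, eigvecs = np.linalg.eigh(scatter)
    axis = eigvecs[:, 0]                      # smallest-eigenvalue direction
    # Radius: mean distance of points to the axis line through the centre.
    centre = points.mean(axis=0)
    rel = points - centre
    rel -= np.outer(rel @ axis, axis)         # project off the axis
    radius = np.linalg.norm(rel, axis=1).mean()
    return axis, centre, radius
```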


Author(s): M. Bassier, M. Bonduel, B. Van Genechten, M. Vergauwen

Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key step in the automated procedure from point cloud data to BIM. Current approaches typically segment only a single type of primitive, such as planes or cylinders; they also tend to oversegment the data and are often sensor or scene dependent.

In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method, conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field, after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments prove that the method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds of up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering are approximately 80%. Overall, nearly 22% of the oversegmentation is reduced by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.
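The CRF stage is beyond a short example; as a simplified stand-in, the sketch below greedily merges adjacent oversegmented planar clusters whose plane parameters agree, using union-find (thresholds are assumptions):

```python
import numpy as np

def merge_segments(centroids, normals, adjacency, angle_deg=5.0, gap=0.05):
    """Greedy merging of oversegmented planar clusters (a simple
    stand-in for the paper's CRF-based graph optimisation).

    Two adjacent segments merge when their normals agree and each
    centroid lies close to the other's plane. `adjacency` is an
    iterable of (i, j) pairs of touching segments.
    """
    cos_t = np.cos(np.radians(angle_deg))
    parent = list(range(len(centroids)))

    def find(i):                        # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in adjacency:
        same_dir = abs(np.dot(normals[i], normals[j])) > cos_t
        coplanar = abs(np.dot(normals[i], centroids[j] - centroids[i])) < gap
        if same_dir and coplanar:
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(centroids))]  # merged labels
```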


2020, Vol 12 (18), pp. 2884
Author(s): Qingwang Liu, Liyong Fu, Qiao Chen, Guangxing Wang, Peng Luo, ...

Forest canopy height is one of the most important spatial characteristics for forest resource inventories and forest ecosystem modeling. Light detection and ranging (LiDAR) can be used to accurately detect canopy surface and terrain information from the backscattering signals of laser pulses, while photogrammetry tends to accurately depict the canopy surface envelope. The spatial differences between the canopy surfaces estimated by LiDAR and photogrammetry have not been investigated in depth. Thus, this study aims to assess LiDAR and photogrammetry point clouds and analyze the spatial differences in canopy heights. The study site is located in the Jigongshan National Nature Reserve of Henan Province, Central China. Six data sets, including one LiDAR data set and five photogrammetry data sets captured from an unmanned aerial vehicle (UAV), were used to estimate the forest canopy heights. Three spatial distribution descriptors, namely, the effective cell ratio (ECR), point cloud homogeneity (PCH), and point cloud redundancy (PCR), were developed to assess the LiDAR and photogrammetry point clouds in the grid. The ordinary neighbor (ON) and constrained neighbor (CN) interpolation algorithms were used to fill void cells in digital surface models (DSMs) and canopy height models (CHMs). The CN algorithm could be used to distinguish small and large holes in the CHMs. The optimal spatial resolution was analyzed according to the ECR changes of DSMs or CHMs resulting from the CN algorithm. Large negative and positive variations were observed between the LiDAR and photogrammetry canopy heights. The stratified mean difference in canopy heights increased gradually from negative to positive when the canopy heights were greater than 3 m, which means that photogrammetry tends to overestimate low canopy heights and underestimate high canopy heights. The CN interpolation algorithm achieved smaller relative root mean square errors than the ON interpolation algorithm. This article provides an operational method for the spatial assessment of point clouds and suggests that the variations between LiDAR and photogrammetry CHMs should be considered when modeling forest parameters.
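As a hedged sketch of the rasterisation-and-void-filling step (a simplified, one-pass stand-in for ON-style interpolation; the CN variant's hole-size constraint is omitted), assuming a ground raster on the same grid:

```python
import numpy as np

def canopy_height_model(points, dtm, cell=1.0, origin=(0.0, 0.0)):
    """Rasterise an (N, 3) point cloud into a CHM and fill small voids.

    `dtm` is a ground-height raster on the same grid. Each cell keeps
    the maximum point height minus the terrain; empty cells inherit
    the mean of their filled 8-neighbours in a single pass.
    """
    rows, cols = dtm.shape
    chm = np.full((rows, cols), np.nan)
    r = ((points[:, 1] - origin[1]) / cell).astype(int)
    c = ((points[:, 0] - origin[0]) / cell).astype(int)
    ok = (r >= 0) & (r < rows) & (c >= 0) & (c < cols)
    for ri, ci, z in zip(r[ok], c[ok], points[ok, 2]):
        chm[ri, ci] = z if np.isnan(chm[ri, ci]) else max(chm[ri, ci], z)
    chm -= dtm                                # surface minus terrain
    # ON-style pass: fill each void with the mean of filled neighbours.
    filled = chm.copy()
    for ri, ci in zip(*np.where(np.isnan(chm))):
        nb = chm[max(ri - 1, 0):ri + 2, max(ci - 1, 0):ci + 2]
        if np.any(~np.isnan(nb)):
            filled[ri, ci] = np.nanmean(nb)
    return filled
```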

