Wall Structure Geometry Verification Using TLS Data and BIM Model

2021, Vol 11 (24), pp. 11804
Author(s): Gabriela Bariczová, Ján Erdélyi, Richard Honti, Lukáš Tomek

Building information modeling (BIM) represents significant progress in the digitalization and informatization of the construction process. The virtual model (BIM model) is, among other information, a source of graphic data that can be used to verify the geometry of a building's structures. For this purpose, spatial data and information about the building must be collected. Comparing the BIM model (design) with as-built 3D models enables the quality of the as-built structures to be evaluated. The most effective methods for spatial data collection are terrestrial laser scanning (TLS) and close-range photogrammetry, both of which deliver a point cloud. The paper describes an approach for verifying the geometry of wall structures. The graphic data of the designed structures are represented by the existing BIM model; the presented approach derives the designed geometry from the Industry Foundation Classes (IFC) file format. The as-built models of the structures are created from point clouds. Point cloud segmentation uses a combination of regression, filtering based on local normal vectors, and curve segmentation. Finally, the designed and the as-built models (segmented from the point cloud) are compared.
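
As a rough illustration of the final comparison step (not the authors' implementation), the sketch below fits a plane to a segmented as-built wall point cloud and reports its angular and positional deviation from the designed wall plane. The IFC parsing and the segmentation are assumed to have been done already; all names and values are hypothetical.

```python
# Minimal sketch: compare an as-built wall plane (fitted to segmented TLS points)
# with the designed wall plane taken from the BIM/IFC model. The design plane is
# given here simply as a point and a normal vector.
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The smallest right-singular vector of the centred points is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

def compare_to_design(points, design_point, design_normal):
    """Angular (deg) and perpendicular (m) deviation of the as-built wall from the design plane."""
    centroid, normal = fit_plane(points)
    n_d = design_normal / np.linalg.norm(design_normal)
    angle = np.degrees(np.arccos(np.clip(abs(normal @ n_d), 0.0, 1.0)))
    offset = abs((centroid - design_point) @ n_d)
    return angle, offset

# Hypothetical wall: 4 m x 3 m, designed in the XZ plane, scanned with ~2 mm noise
rng = np.random.default_rng(0)
wall = np.column_stack([rng.uniform(0, 4, 2000),
                        rng.normal(0.005, 0.002, 2000),   # built 5 mm off the design plane
                        rng.uniform(0, 3, 2000)])
angle_deg, offset_m = compare_to_design(wall, np.zeros(3), np.array([0.0, 1.0, 0.0]))
print(f"tilt: {angle_deg:.3f} deg, offset: {offset_m * 1000:.1f} mm")
```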

Author(s): L. Barazzetti, M. Previtali

Abstract. Nowadays, the digital reconstruction of vaults is carried out using photogrammetric and laser scanning techniques able to capture the visible surface with dense point clouds. Different modeling strategies then allow the generation of 3D models in various formats, such as meshes that interpolate the acquired point cloud, NURBS-based reconstructions created by manual, semi-automated, or automated procedures, and parametric objects for Building Information Modeling. This paper proposes a novel method that reconstructs the visible surface of a vault using neural networks. It is based on the assumption that vaults are not irregular free-form objects but can be reconstructed by mathematical functions calculated from the acquired point clouds. The proposed approach uses the point cloud to train a neural network that approximates the vault surface. The achieved solution captures not only the basic geometry of the vault but also its irregularities, which cannot be neglected in accurate and detailed modeling projects of historical vaults. Considerations on the approximation capabilities of neural networks are illustrated and discussed, along with the advantages of creating a mathematical representation encapsulated in a function.
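
A minimal sketch of the underlying idea, assuming the vault can be written as a height field z = f(x, y): a small multilayer perceptron (scikit-learn's MLPRegressor, used here only as a stand-in for the network described in the paper) is trained directly on the scanned points and then serves as a continuous mathematical representation of the surface. Architecture and training settings are placeholders.

```python
# Sketch: approximate a vault surface z = f(x, y) from its point cloud with a
# small neural network. Not the authors' architecture; MLPRegressor is a stand-in.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_vault_surface(points, hidden=(32, 32)):
    """points: (N, 3) array of x, y, z scan coordinates."""
    xy, z = points[:, :2], points[:, 2]
    net = MLPRegressor(hidden_layer_sizes=hidden, activation="tanh",
                       max_iter=2000, random_state=0)
    net.fit(xy, z)
    return net  # net.predict(grid_xy) evaluates the continuous surface anywhere

# Synthetic barrel-vault example with small irregularities (hypothetical data):
rng = np.random.default_rng(1)
x = rng.uniform(0, 6, 2000)
y = rng.uniform(0, 4, 2000)
z = 2.0 + np.sqrt(np.clip(4.0 - (y - 2.0) ** 2, 0.0, None)) + rng.normal(0, 0.01, 2000)
model = fit_vault_surface(np.column_stack([x, y, z]))
pred = model.predict(np.column_stack([x, y]))
print("RMS residual [m]:", np.sqrt(np.mean((pred - z) ** 2)))
```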


Author(s): W. Ostrowski, M. Pilarska, J. Charyton, K. Bakuła

Creating 3D building models at a large scale is becoming increasingly popular and finds many applications. Nowadays, the broad term "3D building models" can be applied to several types of products: the well-known CityGML solid models (available at a few Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining such 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper, a methodology for inspecting datasets containing 3D models is presented. The proposed approach checks every building in the dataset against ALS point clouds, testing both accuracy and level of detail. Analysing statistical parameters of the normal heights between the reference point cloud and the tested planes, together with a segmentation of the point cloud, provides a tool that can indicate which buildings and which roof planes do not fulfill the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.
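
A minimal sketch of the per-plane test described above, under the assumption that a model roof plane and the ALS points assigned to it are already available. The tolerance value and plane parameters are placeholders, not values from the paper.

```python
# Sketch: normal distances of ALS points to one model roof plane, plus simple
# statistics that can flag planes exceeding an accuracy threshold.
import numpy as np

def roof_plane_statistics(points, plane_point, plane_normal, tolerance=0.15):
    """Signed point-to-plane distances and a pass/fail flag for one roof plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n
    stats = {"mean": d.mean(), "std": d.std(), "rmse": np.sqrt(np.mean(d ** 2))}
    stats["accepted"] = stats["rmse"] <= tolerance
    return stats

# Hypothetical flat roof plane at 10 m elevation, sampled with 5 cm ALS noise:
rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(0, 10, 1000), rng.uniform(0, 8, 1000),
                       10.0 + rng.normal(0, 0.05, 1000)])
print(roof_plane_statistics(pts, np.array([0.0, 0.0, 10.0]), np.array([0.0, 0.0, 1.0])))
```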


2022
Author(s): Lukas Winiwarter, Katharina Anders, Daniel Schröder, Bernhard Höfle

Abstract. 4D topographic point cloud data contain information on surface change processes and their spatial and temporal characteristics, such as the duration, location, and extent of mass movements, e.g., rockfalls or debris flows. To automatically extract and analyse change and activity patterns from these data, methods considering the spatial and temporal properties are required. The commonly used M3C2 point cloud distance reduces uncertainty through spatial averaging for bitemporal analysis. To extend this concept into the full 4D domain, we use a Kalman filter for point cloud change analysis. The filter incorporates M3C2 distances together with uncertainties obtained through error propagation as Bayesian priors in a dynamic model. The Kalman filter yields a smoothed estimate of the change time series for each spatial location, again associated with an uncertainty. Through the temporal smoothing, the Kalman filter uncertainty is, in general, lower than the individual bitemporal uncertainties, which therefore allows more change to be detected as significant. In our example time series of bi-hourly terrestrial laser scanning point clouds of around 6 days (71 epochs) showcasing a rockfall-affected high-mountain slope in Tyrol, Austria, we are able to almost double the number of points where change is deemed significant (from 14.9 % to 28.6 % of the area of interest). Since the Kalman filter allows interpolation and, under certain constraints, also extrapolation of the time series, the estimated change values can be temporally resampled. This can be critical for subsequent analyses that are unable to deal with missing data, as may be caused by, e.g., foggy or rainy weather conditions. We demonstrate two different clustering approaches, transforming the 4D data into 2D map visualisations that can be easily interpreted by analysts. By comparison to two state-of-the-art 4D point cloud change methods, we highlight that the main advantage of our method is the extraction of a smoothed best estimate time series for change at each location. A main disadvantage remains: spatially overlapping change objects cannot be detected in a single pass. In conclusion, the consideration of combined temporal and spatial data enables a notable reduction in the associated uncertainty of the quantified change value for each point in space and time, in turn allowing the extraction of more information from the 4D point cloud dataset.
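
The temporal core of the approach can be sketched as follows, under simplifying assumptions: a constant-velocity Kalman filter processes the M3C2 distance series of a single core point, using the propagated variances as measurement noise. Only the forward filtering pass is shown (the paper additionally smooths backwards in time), and the process-noise value q is a tuning assumption, not a value from the paper.

```python
# Sketch: 1D Kalman filter over an M3C2 time series at one location.
# State x = [change, change rate]; measurement = M3C2 distance with its variance.
import numpy as np

def kalman_filter_change(times, m3c2, m3c2_var, q=1e-4):
    x = np.array([m3c2[0], 0.0])              # initial state: [change, rate]
    P = np.diag([m3c2_var[0], 1.0])           # initial state covariance
    H = np.array([[1.0, 0.0]])                # we only observe the change itself
    out = [(x[0], P[0, 0])]
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        F = np.array([[1.0, dt], [0.0, 1.0]])                          # constant-velocity model
        Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]])
        x, P = F @ x, F @ P @ F.T + Q                                  # predict
        S = H @ P @ H.T + m3c2_var[k]                                  # innovation covariance
        K = P @ H.T / S                                                # Kalman gain
        x = x + (K * (m3c2[k] - H @ x)).ravel()                        # update state
        P = (np.eye(2) - K @ H) @ P                                    # update covariance
        out.append((x[0], P[0, 0]))
    return np.array(out)                      # filtered change and its variance per epoch

# Hypothetical bi-hourly series: slow creep plus noisy observations
t = np.arange(0, 71) * 2.0
truth = 0.002 * t
obs = truth + np.random.default_rng(3).normal(0, 0.01, t.size)
est = kalman_filter_change(t, obs, np.full(t.size, 0.01 ** 2))
print("last estimate [m]:", est[-1, 0], "+/-", np.sqrt(est[-1, 1]))
```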


Author(s): Hoang Long Nguyen, David Belton, Petra Helmholz

The demand for accurate spatial data has been increasing rapidly in recent years. Mobile laser scanning (MLS) systems have become a mainstream technology for measuring 3D spatial data. In an MLS point cloud, the point density of captured features can vary: it can be sparse and heterogeneous, or it can be dense. This is caused by several factors, such as the speed of the carrier vehicle and the specifications of the laser scanner(s). MLS point cloud data need to be processed to obtain meaningful information; for example, segmentation can be used to find meaningful features (planes, corners, etc.) that serve as inputs for many processing steps (e.g. registration, modelling) that are more difficult when using the raw point cloud alone. Planar features dominate man-made environments and are widely used in point cloud registration and calibration processes. Several approaches for the segmentation and extraction of planar objects are available; however, these methods do not focus on properly segmenting MLS point clouds automatically while taking the different point densities into account. This research presents an extension of a segmentation method based on the planarity of the features. The proposed method was verified using both simulated and real MLS point cloud datasets. The results show that planar objects in MLS point clouds can be properly segmented and extracted by the proposed segmentation method.
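
The paper's own algorithm is not reproduced here; as a generic illustration of what "segmenting planar objects from an MLS cloud" involves, the sketch below peels off dominant planes with a simple RANSAC loop. In practice the inlier threshold would need to adapt to the locally varying point density that the paper addresses; all parameter values are placeholders.

```python
# Illustration only: iterative RANSAC plane extraction from an unorganised point cloud.
import numpy as np

def ransac_plane(points, threshold=0.03, iterations=500, rng=None):
    """Return (point_on_plane, unit_normal, inlier_mask) of the dominant plane."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_count, best_mask, plane = -1, None, None
    for _ in range(iterations):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:          # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        mask = np.abs((points - p0) @ n) < threshold
        if mask.sum() > best_count:
            best_count, best_mask, plane = mask.sum(), mask, (p0, n)
    return plane[0], plane[1], best_mask

def extract_planes(points, min_points=200, **kw):
    """Peel off dominant planes until too few supporting points remain."""
    segments, remaining = [], points
    while len(remaining) >= min_points:
        _, _, mask = ransac_plane(remaining, **kw)
        if mask.sum() < min_points:
            break
        segments.append(remaining[mask])
        remaining = remaining[~mask]
    return segments
```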


Author(s): Beril Sirmacek, Roderik Lindenbergh, Jinhu Wang

3D urban models are valuable for urban map generation, environment monitoring, safety planning and educational purposes. For 3D measurement of urban structures, airborne laser scanning sensors or multi-view satellite images are generally used as a data source. However, close-range sensors (such as terrestrial laser scanners) and low-cost cameras (which can generate point clouds based on photogrammetry) can provide a denser sampling of the 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive, and trained personnel are needed to use them for point cloud acquisition. A potentially effective alternative is 3D modelling based on a low-cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone-based 3D model of an example structure with a terrestrial laser scanning point cloud of the same structure. This comparison gives us the opportunity to discuss the differences in terms of geometrical correctness, as well as the advantages, disadvantages and limitations in data acquisition and processing. We also discuss how smartphone-based point clouds can help to solve further problems in 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which do not have color information can be colored using smartphones. The experiments, discussions and scientific findings might be insightful for future studies in the field of fast, easy and low-cost 3D urban model generation.
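
As a sketch of the coloring idea mentioned above (hypothetical, not the authors' pipeline): assuming the smartphone image has been registered to the TLS cloud with known pinhole intrinsics K and pose R, t, each scan point can be projected into the image and assigned the RGB value at the projected pixel. Occlusions and lens distortion are ignored here.

```python
# Sketch: transfer color from a registered smartphone image to TLS points
# via a simple pinhole projection.
import numpy as np

def color_points(points, image, K, R, t):
    """points: (N,3) world coords; image: (H,W,3); K, R, t: pinhole camera parameters."""
    cam = points @ R.T + t                    # world -> camera coordinates
    in_front = cam[:, 2] > 1e-6               # keep only points in front of the camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]               # perspective division to pixel coordinates
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[valid] = image[v[valid], u[valid]]
    return colors, valid                      # RGB per point and a visibility mask
```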


Author(s): A. Kumar, K. Anders, L. Winiwarter, B. Höfle

Abstract. 3D point clouds acquired by laser scanning and other techniques are difficult to interpret because of their irregular structure. To make sense of these data and to allow the derivation of useful information, a segmentation of the points into groups, units, or classes fit for the specific use case is required. In this paper, we present a non-end-to-end deep learning classifier for 3D point clouds that uses multiple sets of input features, and compare it with an implementation of the state-of-the-art deep learning framework PointNet++. We start by extracting features derived from the local normal vector (normal vectors, eigenvalues, and eigenvectors) from the point cloud and study the classification results for different local search radii. We then extract additional features related to the spatial point distribution and use them together with the normal vector-based features. We find that the classification accuracy improves by up to 33% when normal vector features computed with multiple search radii and features related to the spatial point distribution are included. Our method achieves a mean Intersection over Union (mIoU) of 94%, outperforming PointNet++'s Multi Scale Grouping by up to 12%. The study demonstrates the importance of using multiple search radii for different point cloud features when classifying an urban 3D point cloud scene acquired by terrestrial laser scanning.
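
A minimal sketch of the kind of hand-crafted geometric features involved: eigenvalue-based descriptors of each point's neighbourhood covariance, computed for several search radii. The deep classifier itself and the paper's exact feature set are not reproduced, and the radii below are placeholders.

```python
# Sketch: per-point covariance-eigenvalue features (linearity, planarity, sphericity)
# at multiple search radii, to be fed into a point cloud classifier.
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points, radii=(0.5, 1.0, 2.0)):
    """Return an (N, 3 * len(radii)) feature matrix for an (N, 3) point cloud."""
    tree = cKDTree(points)
    blocks = []
    for r in radii:
        rows = []
        for idx in tree.query_ball_point(points, r):
            nbrs = points[idx]
            if len(nbrs) < 3:
                rows.append([0.0, 0.0, 0.0])
                continue
            # Eigenvalues of the local covariance matrix, sorted in descending order
            l1, l2, l3 = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))[::-1]
            if l1 <= 1e-12:
                rows.append([0.0, 0.0, 0.0])
                continue
            rows.append([(l1 - l2) / l1,   # linearity
                         (l2 - l3) / l1,   # planarity
                         l3 / l1])         # sphericity
        blocks.append(np.asarray(rows))
    return np.hstack(blocks)
```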


2021, Vol 2021, pp. 1-17
Author(s): Guocheng Qin, Yin Zhou, Kaixin Hu, Daguang Han, Chunli Ying

Building information modeling (BIM) in industrialized bridge construction is usually performed based on initial design information. Differences exist between the model of the structure and its actual geometric dimensions and features due to the manufacturing, transportation, hoisting, assembly, and load bearing of the structure. These variations affect the construction project handover and facility management. The solutions available at present entail the use of point clouds to reconstruct BIM. However, these solutions still encounter problems, such as the inability to obtain the actual geometric features of a bridge quickly and accurately. Moreover, the created BIM is nonparametric and cannot be dynamically adjusted. This paper proposes a fully automatic method of reconstructing parameterized BIM by using point clouds to address the abovementioned problems. An algorithm for bridge point cloud segmentation is developed; the algorithm can separate the bridge point cloud from the entire scanning scene and segment it into the point clouds of individual unit structures. Another algorithm for extracting the geometric features of the bridge point cloud is also proposed; this algorithm remains effective for partially missing point clouds. The feasibility of the proposed method is evaluated and verified using theoretical and actual bridge point clouds, respectively. The reconstruction quality of BIM is also evaluated visually and quantitatively, and the results show that the reconstructed BIM is accurate and reliable.
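
As an illustration only (not the paper's algorithm) of the geometric feature extraction step: the principal dimensions of a segmented unit-structure point cloud can be estimated from a PCA-aligned bounding box, the kind of quantity that would drive the parameters of a reconstructed BIM object. The example girder and its dimensions are hypothetical.

```python
# Sketch: length, width, height of a segmented structural element from a
# PCA-aligned bounding box of its point cloud.
import numpy as np

def principal_dimensions(points):
    """Descending extents of the PCA-aligned bounding box of a point cloud segment."""
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    aligned = centred @ vt.T                  # rotate into the principal axes
    extents = aligned.max(axis=0) - aligned.min(axis=0)
    return np.sort(extents)[::-1]             # e.g. length, width, height

# Hypothetical 20 m x 1.2 m x 0.9 m girder sampled with 5 mm noise:
rng = np.random.default_rng(4)
girder = np.column_stack([rng.uniform(0, 20, 10000), rng.uniform(0, 1.2, 10000),
                          rng.uniform(0, 0.9, 10000)]) + rng.normal(0, 0.005, (10000, 3))
print("L, W, H [m]:", np.round(principal_dimensions(girder), 3))
```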


Author(s): Y. Xu, R. Boerner, W. Yao, L. Hoegner, U. Stilla

To obtain full coverage of 3D scans in a large-scale urban area, the registration of point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, the automatic registration of different scans is still a challenging problem. In this work, we propose an automatic, marker-free method for fast, coarse registration between point clouds that uses the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: the voxelization of the point cloud, the approximation of planar patches, the matching of corresponding patches, and the estimation of transformation parameters. In the voxelization step, the point cloud of each scan is organized in a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel by an approximated plane function and select those patches resembling planar surfaces. Afterwards, a RANSAC-based strategy is applied to match the corresponding patches. Among all the planar patches of a scan, we randomly select a set of three planar patches in order to build a coordinate frame from their normal vectors and their intersection point. The transformation parameters between scans are calculated from the two resulting coordinate frames. The set whose transformation parameters yield the largest number of coplanar patches is identified as the optimal candidate for estimating the correct transformation parameters. Experimental results using TLS datasets of different scenes reveal that our proposed method is both effective and efficient for the coarse registration task. In particular, for the fast orientation between scans, our proposed method achieves a registration error of less than approximately 2 degrees on the test datasets and is much more efficient than the classical baseline methods.
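
The frame-building step can be sketched as follows, under the assumption that the three selected planes are given as unit normals n_i and offsets d_i with n_i · x = d_i; the voxelization, patch approximation, and the RANSAC candidate search are omitted.

```python
# Sketch: build a local coordinate frame from three plane normals and their
# intersection point, then derive the rigid transformation between two scans
# from their two frames.
import numpy as np

def frame_from_planes(normals, offsets):
    """normals: (3, 3) rows of unit plane normals; offsets: d_i with n_i . x = d_i."""
    N = np.asarray(normals, dtype=float)
    origin = np.linalg.solve(N, np.asarray(offsets, dtype=float))  # common intersection point
    q, _ = np.linalg.qr(N.T)                  # orthonormalise the normals (Gram-Schmidt)
    if np.linalg.det(q) < 0:
        q[:, 2] *= -1.0                       # enforce a right-handed frame
    return q, origin

def transform_between(frame_a, frame_b):
    """Rotation R and translation t mapping scan A coordinates into scan B coordinates."""
    (Ra, oa), (Rb, ob) = frame_a, frame_b
    R = Rb @ Ra.T
    t = ob - R @ oa
    return R, t
```

For each candidate triple of patches, the resulting (R, t) would then be scored by how many planar patches of the two scans become coplanar, as described in the abstract.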


2021, Vol 2, pp. 1-14
Author(s): Florian Politz, Monika Sester, Claus Brenner

Abstract. Detecting changes is an important task to update databases and find irregularities in spatial data. Every couple of years, national mapping agencies (NMAs) acquire nation-wide point cloud data from Airborne Laser Scanning (ALS) as well as from Dense Image Matching (DIM) using aerial images. Besides deriving several other products such as Digital Elevation Models (DEMs) from them, those point clouds also offer the chance to detect changes between two points in time on a large scale. Buildings are an important object class in the context of change detection to update cadastre data. As detecting changes manually is very time consuming, the aim of this study is to provide reliable change detections for different building sizes in order to support NMAs in their task to update their databases. As datasets of different times may have varying point densities due to technological advancements or different sensors, we propose a raster-based approach, which is independent of the point density altogether. Within a raster cell, our approach considers the height distribution of all points for two points in time by exploiting the Jensen-Shannon distance to measure their similarity. Our proposed method outperforms simple threshold methods at detecting building changes, both within the same point cloud type and between different types. In combination with our proposed class change detection approach, we achieve a change detection performance measured by the mean F1-Score of about 71% between two ALS point clouds and about 60% between ALS and DIM point clouds acquired at different times.
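
A minimal sketch of the per-cell comparison described above: the point heights of the two epochs falling into one raster cell are binned into a common histogram, and their dissimilarity is measured with the Jensen-Shannon distance. The bin width and the synthetic data are placeholders, not values from the paper.

```python
# Sketch: Jensen-Shannon distance between the height distributions of one raster cell.
import numpy as np
from scipy.spatial.distance import jensenshannon

def cell_change_score(heights_t1, heights_t2, bin_width=0.5):
    """Jensen-Shannon distance between two epochs' height histograms in a cell."""
    lo = min(heights_t1.min(), heights_t2.min())
    hi = max(heights_t1.max(), heights_t2.max())
    bins = np.arange(lo, hi + bin_width, bin_width)
    p, _ = np.histogram(heights_t1, bins=bins)
    q, _ = np.histogram(heights_t2, bins=bins)
    return jensenshannon(p + 1e-9, q + 1e-9)   # small epsilon avoids all-zero histograms

# Hypothetical cell where a ~3 m high building appeared between the two epochs:
rng = np.random.default_rng(5)
ground = rng.normal(50.0, 0.1, 300)
building = rng.normal(53.0, 0.1, 300)
print("unchanged:", round(cell_change_score(ground, rng.normal(50.0, 0.1, 300)), 3))
print("changed:  ", round(cell_change_score(ground, building), 3))
```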

