PROBABILITY DENSITY BASED CLASSIFICATION AND RECONSTRUCTION OF ROOF STRUCTURES FROM 3D POINT CLOUDS

Author(s):  
Y. Dehbi ◽  
S. Koppers ◽  
L. Plümer

Abstract. 3D building models including roofs are a key prerequisite in many fields of application, such as estimating the solar suitability of rooftops. The accurate reconstruction of roofs with dormers is sometimes challenging: without careful separation of the dormer points from the points on the roof surface, the estimation of the roof areas is distorted in a characteristic way, and the dormer points appear as white noise. The characteristic deviation of the density distribution of these defects from the expected normal distribution is the starting point of our method. We propose a hierarchical method which improves roof reconstruction from LiDAR point clouds in a model-based manner, separating dormer points from roof points using classification methods. The key idea is to exploit probability density functions (PDFs) to reveal roof properties and to design expressive features for a supervised learning method using support vector machines (SVMs). Among other features, properties of the PDFs of measures such as the residuals of model-based roof estimates are used. A clustering step leads to a semantic segmentation of the point cloud, enabling the subsequent reconstruction. The approach is tested on real data as well as on simulated point clouds. The latter allow experiments for various roof and dormer types with different parameters, using an implemented simulation toolbox which generates virtual buildings and synthetic point clouds.
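As a rough illustration of the idea (not the authors' implementation), the sketch below fits a plane to a synthetic roof patch and feeds residual-density features into an SVM; the plane model, neighbourhood size, and the choice of skewness and kurtosis as PDF features are illustrative assumptions.

```python
# Illustrative sketch only: separating dormer points from roof points with an
# SVM on residual-based density features. Data and features are assumptions.
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic roof plane z = 0.5x + 0.2y with Gaussian noise, plus raised dormer points
roof = rng.uniform(0, 10, (500, 2))
z_roof = 0.5 * roof[:, 0] + 0.2 * roof[:, 1] + rng.normal(0, 0.02, 500)
dormer = rng.uniform(3, 5, (80, 2))
z_dorm = 0.5 * dormer[:, 0] + 0.2 * dormer[:, 1] + rng.uniform(0.3, 0.8, 80)

xy = np.vstack([roof, dormer])
z = np.concatenate([z_roof, z_dorm])
labels = np.concatenate([np.zeros(500), np.ones(80)])  # 1 = dormer point

# Least-squares plane fit over all points; dormers distort the residual PDF
A = np.column_stack([xy, np.ones(len(xy))])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
residuals = z - A @ coef

# Per-point features: residual plus local residual statistics (15 nearest neighbours)
_, idx = cKDTree(xy).query(xy, k=15)
local = residuals[idx]
features = np.column_stack([
    residuals,
    local.mean(axis=1),
    local.std(axis=1),
    skew(local, axis=1),       # dormers skew the density away from normality
    kurtosis(local, axis=1),
])

clf = SVC(kernel="rbf").fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```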

2021 ◽  
Vol 5 (1) ◽  
pp. 59
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

Terrestrial laser scanners (TLS) capture a large number of 3D points rapidly, with high precision and spatial resolution. These scanners are used for applications as diverse as modeling architectural or engineering structures and high-resolution mapping of terrain. The noise of the observations cannot be assumed to correspond strictly to white noise: besides being heteroscedastic, it is likely to be correlated between observations due to the high scanning rate. Unfortunately, while the variance can sometimes be modeled based on physical or empirical considerations, the correlations are more often neglected. Trustworthy knowledge of them is, however, mandatory to avoid overestimating the precision of the point cloud and, potentially, failing to detect deformation between scans recorded at different epochs using statistical testing strategies. TLS point clouds can be approximated with parametric surfaces, such as planes, using the Gauss–Helmert model, or with the newly introduced T-spline surfaces. In both cases, the goal is to minimize the squared distance between the observations and the approximating surface in order to estimate parameters such as the normal vector or the control points. In this contribution, we will show how the residuals of the surface approximation can be used to derive the correlation structure of the observation noise. We will estimate the correlation parameters using the Whittle maximum likelihood and use both simulations and real data to validate our methodology. Using the least-squares adjustment as a “filter of the geometry” paves the way for the determination of a correlation model for many sensors recording 3D point clouds.
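To make the residual-based estimation concrete, here is a minimal sketch of Whittle maximum likelihood for a single correlation parameter; the AR(1) noise model and the simulated residual series are stand-ins chosen for illustration, not the correlation model of the paper.

```python
# Minimal sketch: Whittle maximum-likelihood estimation of an (assumed) AR(1)
# correlation parameter from surface-fit residuals.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Simulate correlated residuals: AR(1) with phi = 0.6
n, phi_true = 4096, 0.6
resid = np.empty(n)
resid[0] = rng.normal()
for t in range(1, n):
    resid[t] = phi_true * resid[t - 1] + rng.normal()

# Periodogram at the positive Fourier frequencies
freqs = 2 * np.pi * np.arange(1, n // 2) / n
I = np.abs(np.fft.rfft(resid)[1 : n // 2]) ** 2 / (2 * np.pi * n)

def whittle_neg_loglik(phi):
    """Negative Whittle log-likelihood for AR(1), profiled over the variance."""
    shape = 1.0 / (1.0 - 2.0 * phi * np.cos(freqs) + phi**2)  # spectral shape
    sigma2 = 2 * np.pi * np.mean(I / shape)  # closed-form innovation variance
    f = sigma2 * shape / (2 * np.pi)         # full spectral density
    return np.sum(np.log(f) + I / f)

res = minimize_scalar(whittle_neg_loglik, bounds=(-0.99, 0.99), method="bounded")
print(f"estimated phi = {res.x:.3f} (true value {phi_true})")
```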


2021 ◽  
Vol 10 (5) ◽  
pp. 345
Author(s):  
Konstantinos Chaidas ◽  
George Tataris ◽  
Nikolaos Soulakellis

In a post-earthquake scenario, the semantic enrichment of 3D building models with seismic damage is crucial from the perspective of disaster management. This paper presents the methodology and the results for Level of Detail 3 (LOD3) building modelling (after an earthquake) with semantic enrichment describing the seismic damage based on the European Macroseismic Scale (EMS-98). The study area is the Vrisa traditional settlement on the island of Lesvos, Greece, which was affected by a devastating earthquake of Mw = 6.3 on 12 June 2017. The applied methodology consists of the following steps: (a) unmanned aircraft systems (UAS) nadir and oblique images are acquired and photogrammetrically processed for 3D point cloud generation, (b) 3D building models are created based on the 3D point clouds and (c) the 3D building models are transformed into the LOD3 City Geography Markup Language (CityGML) standard, with enriched semantics describing the seismic damage of every part of the building (walls, roof, etc.). The results show that, following this methodology, CityGML LOD3 models can be generated and enriched with the buildings’ seismic damage. These models can assist in the decision-making process during the recovery phase of a settlement as well as serve as the basis for its monitoring over time. Finally, these models can contribute to the estimation of the reconstruction cost of the buildings.
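The paper itself does not prescribe a code-level encoding, but as one plausible sketch, an EMS-98 damage grade can be attached to a CityGML 2.0 building as a generic attribute; the attribute name and grade value below are assumptions for illustration.

```python
# Hypothetical sketch: attaching an EMS-98 damage grade to a CityGML 2.0
# building via a generic attribute (the attribute name is an assumption).
import xml.etree.ElementTree as ET

NS = {
    "core": "http://www.opengis.net/citygml/2.0",
    "bldg": "http://www.opengis.net/citygml/building/2.0",
    "gen": "http://www.opengis.net/citygml/generics/2.0",
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

model = ET.Element(f"{{{NS['core']}}}CityModel")
member = ET.SubElement(model, f"{{{NS['core']}}}cityObjectMember")
building = ET.SubElement(member, f"{{{NS['bldg']}}}Building")

# EMS-98 damage grades range from 1 (negligible damage) to 5 (destruction)
attr = ET.SubElement(building, f"{{{NS['gen']}}}stringAttribute",
                     name="EMS98_damage_grade")
ET.SubElement(attr, f"{{{NS['gen']}}}value").text = "Grade 3"

print(ET.tostring(model, encoding="unicode"))
```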


2021 ◽  
Vol 13 (15) ◽  
pp. 3021
Author(s):  
Bufan Zhao ◽  
Xianghong Hua ◽  
Kegen Yu ◽  
Xiaoxing He ◽  
Weixing Xue ◽  
...  

Urban object segmentation and classification are critical data-processing steps in scene understanding, intelligent vehicles and 3D high-precision maps. Semantic segmentation of 3D point clouds is the foundational step in object recognition. To identify intersecting objects and improve the accuracy of classification, this paper proposes a segment-based classification method for 3D point clouds. The method first divides the points into multi-scale supervoxels and groups them via the proposed inverse node graph (IN-Graph) construction, which requires no prior information about the nodes; instead, it partitions the supervoxels by judging the connection state of the edges between them. Graph cutting reaches the global energy minimum, yielding structural segments that are as complete as possible while retaining object boundaries. A random forest classifier is then used for supervised classification. To deal with the mislabeling of scattered fragments, a higher-order CRF with small-label-cluster optimization is proposed to refine the classification results. Experiments were carried out on a mobile laser scanning (MLS) point dataset and a terrestrial laser scanning (TLS) point dataset; overall accuracies of 97.57% and 96.39% were obtained on the two datasets, respectively. Object boundaries were retained well, and the method achieved good results in the classification of cars and motorcycles. Further experimental analyses verified the advantages of the proposed method and demonstrated its practicability and versatility.
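The sketch below loosely mirrors the last two stages of such a pipeline, classifying pre-computed segment features with a random forest and relabeling isolated fragments by a local majority vote; the features, sizes, and the majority-vote stand-in for the higher-order CRF are all assumptions.

```python
# Illustrative sketch: random forest over segment features, followed by a
# crude local-majority stand-in for the small-label-cluster refinement.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Pretend each segment is summarized by a feature vector
# (e.g., mean height, planarity, point density, intensity statistics)
n_segments = 300
X = rng.normal(size=(n_segments, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic ground truth

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
pred = rf.predict(X)

# Segments whose label is rare among their nearest neighbours take the
# local majority label, mimicking the removal of scattered fragments
_, nn = cKDTree(X).query(X, k=9)
neighbour_labels = pred[nn[:, 1:]]               # exclude the segment itself
majority = (neighbour_labels.mean(axis=1) > 0.5).astype(int)
agreement = (neighbour_labels == pred[:, None]).mean(axis=1)
refined = np.where(agreement < 0.3, majority, pred)
print("relabeled segments:", int((refined != pred).sum()))
```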


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4523 ◽  
Author(s):  
Carlos Cabo ◽  
Celestino Ordóñez ◽  
Fernando Sánchez-Lasheras ◽  
Javier Roca-Pardiñas ◽  
Javier de Cos-Juez

We analyze the utility of multiscale supervised classification algorithms for object detection and extraction from laser scanning or photogrammetric point clouds. Only the geometric information (the point coordinates) was considered, thus making the method independent of the systems used to collect the data. A maximum of five features (input variables) was used, four of them related to the eigenvalues obtained from a principal component analysis (PCA). PCA was carried out at six scales, defined by the diameter of a sphere around each observation. Four multiclass supervised classification models were tested (linear discriminant analysis, logistic regression, support vector machines, and random forest) in two different scenarios, urban and forest, formed by artificial and natural objects, respectively. The results obtained were accurate (overall accuracy over 80% for the urban dataset and over 93% for the forest dataset), in the range of the best results found in the literature, regardless of the classification method. For both datasets, the random forest algorithm provided the best results when discrimination capacity, computing time, and the ability to estimate the relative importance of each variable were considered together.
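A minimal sketch of the multiscale eigenvalue features follows, computing the normalized PCA eigenvalues of the covariance inside spheres of several diameters around each point; the point cloud and the three scales are assumptions (the paper uses six scales).

```python
# Sketch of multiscale PCA eigenvalue features from point coordinates only.
# The random cloud and the sphere diameters are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
points = rng.uniform(0, 10, (2000, 3))          # stand-in point cloud
tree = cKDTree(points)
scales = [0.5, 1.0, 2.0]                        # sphere diameters (assumed)

features = []
for d in scales:
    eigvals = np.zeros((len(points), 3))        # rows stay zero if too few neighbours
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=d / 2)
        if len(idx) < 4:                        # not enough neighbours for PCA
            continue
        nbrs = points[idx] - points[idx].mean(axis=0)
        w = np.linalg.eigvalsh(nbrs.T @ nbrs / len(idx))
        eigvals[i] = w[::-1] / w.sum()          # normalized, descending order
    features.append(eigvals)

X = np.hstack(features)                         # one multiscale feature row per point
print(X.shape)                                  # (2000, 9)
```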


2010 ◽  
Vol 39 ◽  
pp. 247-252
Author(s):  
Sheng Xu ◽  
Zhi Juan Wang ◽  
Hui Fang Zhao

A two-stage neural network architecture constructed by combining potential support vector machines (P-SVM) with a genetic algorithm (GA) and gray correlation coefficient analysis (GCCA) is proposed for the evaluation of patent innovation factors. Enterprise patent innovation is complex to analyze due to the nonlinearity of its influencing factors, so a trade-off among these factors must first be made when some of them conflict. A novel nonlinear regression model based on potential support vector machines (P-SVM) is presented in this paper. In the model development, the genetic algorithm is employed to optimize the P-SVM parameter selection. After the key factors are selected by the P-SVM with GA model, the main factors that affect patent innovation are quantitatively studied using gray correlation coefficient analysis. Using a set of real data from China, the results show that the methods developed in this paper can provide valuable information for patent innovation management and related municipal planning projects.
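For the factor-ranking step, a minimal sketch of gray correlation (relational) analysis is shown below; the factor series are invented, and rho = 0.5 is the customary resolution coefficient, not a value taken from the paper.

```python
# Hedged sketch of gray correlation coefficient analysis for ranking how
# closely each factor series tracks a reference series. Data are invented.
import numpy as np

def gray_relational_grades(reference, factors, rho=0.5):
    """Mean gray relational coefficient of each factor vs. the reference."""
    def norm(s):                                 # rescale each series to [0, 1]
        return (s - s.min()) / (s.max() - s.min())
    r = norm(np.asarray(reference, float))
    F = np.array([norm(np.asarray(f, float)) for f in factors])
    delta = np.abs(F - r)                        # absolute difference sequences
    dmin, dmax = delta.min(), delta.max()
    xi = (dmin + rho * dmax) / (delta + rho * dmax)
    return xi.mean(axis=1)                       # one grade per factor

rng = np.random.default_rng(4)
patents = np.cumsum(rng.uniform(1, 5, 12))       # reference: patent output
factor_rd = patents * 0.8 + rng.normal(0, 1, 12) # strongly related factor
factor_noise = rng.uniform(0, 50, 12)            # unrelated factor
print(gray_relational_grades(patents, [factor_rd, factor_noise]))
```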


Author(s):  
M. Kölle ◽  
V. Walter ◽  
S. Schmohl ◽  
U. Soergel

Abstract. Automated semantic interpretation of 3D point clouds is crucial for many tasks in the domain of geospatial data analysis. For this purpose, labeled training data is required, which often has to be provided manually by experts. One approach to minimizing the cost of human interaction is Active Learning (AL). The aim is to process only the subset of an unlabeled dataset that is particularly helpful with respect to class separation: a machine identifies informative instances which are then labeled by humans, thereby increasing the performance of the machine. In order to completely avoid the involvement of an expert, this time-consuming annotation can be resolved via crowdsourcing. We therefore propose an approach combining AL with paid crowdsourcing. Although it incorporates human interaction, our method can run fully automatically, so that only an unlabeled dataset and a fixed financial budget for the payment of the crowdworkers need to be provided. We conduct multiple iteration steps of the AL process on the ISPRS Vaihingen 3D Semantic Labeling benchmark dataset (V3D) and especially evaluate the performance of the crowd when labeling 3D points. We prove our concept by using labels derived from our crowd-based AL method for classifying the test dataset. The analysis shows that, with only 0.4% of the training dataset labeled by the crowd at a cost of less than $145, both our trained Random Forest and our sparse 3D CNN classifier differ in Overall Accuracy by less than 3 percentage points from the same classifiers trained on the complete V3D training set.
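A schematic version of such a pool-based AL loop is sketched below, with the paid crowd replaced by a perfect oracle; the dataset, budget, batch size, and margin-based query strategy are illustrative assumptions rather than the authors' exact setup.

```python
# Schematic sketch of pool-based active learning with uncertainty (margin)
# sampling; a perfect oracle stands in for the paid crowdworkers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X_pool, y_oracle = make_classification(n_samples=2000, n_features=10, random_state=0)
rng = np.random.default_rng(5)

# Seed the labeled set with a few instances of every class
labeled = []
for c in np.unique(y_oracle):
    labeled += rng.choice(np.where(y_oracle == c)[0], 10, replace=False).tolist()

budget, batch = 200, 20                          # assumed labeling budget per run

while len(labeled) < budget:
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X_pool[labeled], y_oracle[labeled])
    proba = np.sort(rf.predict_proba(X_pool), axis=1)
    margin = proba[:, -1] - proba[:, -2]         # small margin = informative
    margin[labeled] = np.inf                     # never re-query labeled points
    query = np.argsort(margin)[:batch]
    labeled += query.tolist()                    # "crowd" (oracle) labels them

print("pool accuracy after AL:", rf.score(X_pool, y_oracle))
```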


Author(s):  
Hsien-Chung Lin ◽  
Eugen Solowjow ◽  
Masayoshi Tomizuka ◽  
Edwin Kreuzer

This contribution presents a method to estimate environmental boundaries with mobile agents. The agents sample a concentration field of interest at their respective positions and infer a level curve of the unknown field. The presented method is based on support vector machines (SVMs), whereby the concentration level of interest serves as the decision boundary. The field itself does not have to be estimated in order to obtain the level curve, which makes the method computationally very appealing. A myopic strategy is developed to pick locations that yield the most informative concentration measurements. Cooperative operation of multiple agents is demonstrated by dividing the domain into Voronoi tessellations. Numerical studies demonstrate the feasibility of the method on a real data set of the California coastal area. The exploration strategy is benchmarked against a random walk, which it clearly outperforms.
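A minimal sketch of the core trick follows on a synthetic field (not the paper's ocean data): sample points are labeled above or below the level of interest, and the SVM decision boundary then approximates the level curve without estimating the field itself.

```python
# Sketch: the SVM decision boundary as an estimate of a concentration level
# curve. The Gaussian field, level, and kernel width are assumptions.
import numpy as np
from sklearn.svm import SVC

def field(xy):
    """Stand-in for the unknown concentration field sampled by the agents."""
    return np.exp(-((xy[:, 0] - 0.5) ** 2 + (xy[:, 1] - 0.5) ** 2) / 0.1)

rng = np.random.default_rng(6)
samples = rng.uniform(0, 1, (300, 2))            # agent measurement locations
level = 0.5                                      # concentration level of interest
labels = (field(samples) >= level).astype(int)   # above / below the level set

svm = SVC(kernel="rbf", gamma=10.0).fit(samples, labels)

# Grid points where the decision function is near zero trace the level curve
g = np.linspace(0, 1, 200)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
on_curve = grid[np.abs(svm.decision_function(grid)) < 0.05]
print(f"{len(on_curve)} grid points lie on the estimated level curve")
```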

