Unsupervised Building Instance Segmentation of Airborne LiDAR Point Clouds for Parallel Reconstruction Analysis

2021 ◽  
Vol 13 (6) ◽  
pp. 1136
Author(s):  
Yongjun Zhang ◽  
Wangshan Yang ◽  
Xinyi Liu ◽  
Yi Wan ◽  
Xianzhang Zhu ◽  
...  

Efficient building instance segmentation is necessary for many applications, such as parallel reconstruction, management and analysis. However, most existing instance segmentation methods still suffer from low completeness, low correctness and low quality in building instance segmentation, problems that are especially pronounced in complex building scenes. This paper proposes a novel unsupervised building instance segmentation (UBIS) method for airborne Light Detection and Ranging (LiDAR) point clouds for parallel reconstruction analysis, which combines a clustering algorithm with a novel model consistency evaluation method. The proposed method first divides building point clouds into building instances using the improved kd-tree 2D shared nearest neighbor clustering algorithm (Ikd-2DSNN). Then, the geometric features of each building instance are obtained using the model consistency evaluation method and used to determine whether the instance is a single-building instance or a multi-building instance. Finally, multi-building instances are divided again by the improved kd-tree 3D shared nearest neighbor clustering algorithm (Ikd-3DSNN) to improve the accuracy of building instance segmentation. Our experimental results demonstrate that the proposed UBIS method achieves good performance for various buildings in different scenes, such as high-rise buildings, podium buildings and a residential area with detached houses. A comparative analysis confirms that the proposed UBIS method performs better than state-of-the-art methods.
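
To make the clustering step concrete, the sketch below implements a plain 2D shared nearest neighbor grouping over a kd-tree. It is not the paper's Ikd-2DSNN algorithm, only an illustration of the underlying idea that points are merged when their k-nearest-neighbor sets overlap sufficiently; the `snn_cluster_2d` helper and the parameter values are illustrative assumptions.

```python
# Minimal shared-nearest-neighbor (SNN) clustering sketch in 2D.
# NOT the paper's Ikd-2DSNN algorithm; it only illustrates grouping points
# whose k-nearest-neighbor sets overlap. k and min_shared are illustrative.
import numpy as np
from scipy.spatial import cKDTree


def snn_cluster_2d(points, k=10, min_shared=5):
    """Cluster 2D points by shared-nearest-neighbor connectivity."""
    tree = cKDTree(points)
    # k + 1 because the query returns each point itself as its first neighbour
    _, idx = tree.query(points, k=k + 1)
    neighbours = [set(row[1:]) for row in idx]

    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        stack = [seed]
        while stack:
            p = stack.pop()
            for q in neighbours[p]:
                if labels[q] == -1 and len(neighbours[p] & neighbours[q]) >= min_shared:
                    labels[q] = current
                    stack.append(q)
        current += 1
    return labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # two synthetic "building footprints"
    pts = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(8, 1, (200, 2))])
    print(np.bincount(snn_cluster_2d(pts)))
```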

Author(s):  
P. Polewski ◽  
W. Yao ◽  
M. Heurich ◽  
P. Krzystek ◽  
U. Stilla

Fallen trees participate in several important forest processes, which motivates the need for information about their spatial distribution in forest ecosystems. Several studies have shown that airborne LiDAR is a valuable tool for obtaining such information. In this paper, we propose an integrated method of detecting fallen trees from ALS point clouds based on merging small segments into entire fallen stems via the Normalized Cut algorithm. A new approach to specifying the segment similarity function for the clustering algorithm is introduced, where the attribute weights are learned from labeled data instead of being determined manually. We notice the relationship between Normalized Cut’s similarity function and a class of regression models, which leads us to the idea of approximating the task of learning the similarity function with the simpler task of learning a classifier. Moreover, we set up a virtual fallen tree generation scheme to simulate complex forest scenarios with multiple overlapping fallen stems. The classifier trained on this simulated data yields a similarity function for Normalized Cut. Tests on two sample plots from the Bavarian Forest National Park with manually labeled reference data show that the trained function leads to high-quality segmentations. Our results indicate that the proposed data-driven approach can be a successful alternative to time-consuming trial-and-error or grid-search methods of finding good feature weights for graph cut algorithms. Also, the methodology can be generalized to other applications of graph cut clustering in remote sensing.
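
The core idea of replacing a hand-tuned similarity function with classifier output can be sketched as follows: a logistic regression is trained on pairwise segment features, and its "same object" probability is used as a precomputed affinity for scikit-learn's SpectralClustering, which serves here only as a stand-in for the paper's Normalized Cut implementation. The feature construction, placeholder training labels and parameter values are all assumptions.

```python
# Hedged sketch: turning a binary "same object?" classifier into a
# similarity function for graph-cut clustering. SpectralClustering is used
# as a normalized-cut style stand-in; features and labels are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import SpectralClustering


def pairwise_features(seg_attrs):
    """Build pairwise feature vectors (absolute attribute differences) for all segment pairs."""
    n = len(seg_attrs)
    feats = np.abs(seg_attrs[:, None, :] - seg_attrs[None, :, :])
    return feats.reshape(n * n, -1), n


rng = np.random.default_rng(1)
seg_attrs = rng.normal(size=(30, 4))                   # per-segment attributes (placeholders)
X, n = pairwise_features(seg_attrs)
pair_labels = rng.integers(0, 2, size=n * n)           # placeholder "same stem?" training labels

clf = LogisticRegression(max_iter=1000).fit(X, pair_labels)

# The classifier's probability of "same object" acts as the edge weight.
W = clf.predict_proba(X)[:, 1].reshape(n, n)
W = 0.5 * (W + W.T)                                    # symmetrise
np.fill_diagonal(W, 1.0)

labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels)
```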


Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 2014
Author(s):  
Yi Lv ◽  
Mandan Liu ◽  
Yue Xiang

The clustering analysis algorithm is used to reveal the internal relationships among the data without prior knowledge and to further gather data with common attributes into groups. To solve the problem that existing algorithms always need prior knowledge, we propose a fast-searching density peak clustering algorithm based on the shared nearest neighbor and an adaptive clustering center (DPC-SNNACC). It can automatically ascertain the number of knee points in the decision graph according to the characteristics of different datasets, and further determine the number of clustering centers without human intervention. First, an improved calculation method of local density based on the symmetric distance matrix is proposed. Then, the position of the knee point is obtained by calculating the change in the difference between decision values. Finally, experimental and comparative evaluation on several datasets from diverse domains establishes the viability of the DPC-SNNACC algorithm.
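
A minimal sketch of the decision-graph machinery the abstract refers to is given below: a Gaussian-kernel local density computed from the symmetric distance matrix, the distance to the nearest higher-density point, and a knee located from the largest drop in the sorted decision values. This is a generic density peak clustering sketch, not DPC-SNNACC itself; the cutoff distance `dc` and the knee rule are assumptions.

```python
# Hedged sketch of the density-peak decision graph: local density rho,
# distance-to-higher-density delta, and a knee found from the largest drop
# in the sorted decision values gamma = rho * delta. Generic DPC only.
import numpy as np
from scipy.spatial.distance import pdist, squareform


def dpc_decision(points, dc=1.0):
    D = squareform(pdist(points))                  # symmetric distance matrix
    rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1   # Gaussian-kernel local density (self term removed)
    order = np.argsort(-rho)                       # points ranked by decreasing density
    delta = np.full(len(points), D.max())
    for i, p in enumerate(order[1:], start=1):
        higher = order[:i]                         # points with higher density
        delta[p] = D[p, higher].min()
    return rho, delta


def knee_point(gamma):
    """Number of cluster centres: position of the largest drop in sorted decision values."""
    g = np.sort(gamma)[::-1]
    drops = g[:-1] - g[1:]
    return int(np.argmax(drops)) + 1


rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(c, 0.3, (100, 2)) for c in [(0, 0), (3, 0), (0, 3)]])
rho, delta = dpc_decision(pts, dc=0.5)
print("estimated number of centres:", knee_point(rho * delta))
```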


2011 ◽  
Vol 145 ◽  
pp. 189-193 ◽  
Author(s):  
Horng Lin Shieh

In this paper, a hybrid method combining rough set and shared nearest neighbor algorithms is proposed for clustering data with non-globular shapes. The rough k-means algorithm is based on the distances between data and cluster centers. It partitions a data set with globular shapes well, but when the data have non-globular shapes, the results obtained by rough k-means are not very satisfactory. To resolve this problem, a combined rough set and shared nearest neighbor algorithm is proposed. The proposed algorithm first adopts a shared nearest neighbor algorithm to evaluate the similarity among data; the lower and upper approximations of a rough set algorithm are then used to partition the data set into clusters.
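
The rough set part of such a hybrid can be sketched as below: a point whose two nearest centers are almost equally close is placed in the upper approximations (boundary regions) of both clusters, otherwise in the lower approximation of the nearest one. The threshold rule and values are assumptions, and the shared nearest neighbor similarity step of the combined algorithm is not shown.

```python
# Hedged sketch of the rough assignment rule used by rough k-means style
# methods: ambiguous points go into two upper approximations, unambiguous
# points into the lower approximation of the nearest centre.
import numpy as np


def rough_assign(points, centres, threshold=1.2):
    lower = [[] for _ in centres]   # certain members of each cluster
    upper = [[] for _ in centres]   # possible members (boundary included)
    for i, p in enumerate(points):
        d = np.linalg.norm(centres - p, axis=1)
        nearest, second = np.argsort(d)[:2]
        if d[second] / max(d[nearest], 1e-12) <= threshold:
            # ambiguous point: boundary region of both clusters
            upper[nearest].append(i)
            upper[second].append(i)
        else:
            lower[nearest].append(i)
            upper[nearest].append(i)
    return lower, upper


rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
low, up = rough_assign(pts, centres=np.array([[0.0, 0.0], [4.0, 0.0]]))
print([len(l) for l in low], [len(u) for u in up])
```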


Author(s):  
J. Niemeyer ◽  
F. Rottensteiner ◽  
U. Soergel ◽  
C. Heipke

In this investigation, we address the task of airborne LiDAR point cloud labelling for urban areas by presenting a contextual classification methodology based on a Conditional Random Field (CRF). A two-stage CRF is set up: in the first step, a point-based CRF is applied. The resulting labellings are then used to generate a segmentation of the classified points using a Conditional Euclidean Clustering algorithm, which combines neighbouring points with the same object label into one segment. The second step comprises the classification of these segments, again with a CRF. As the number of segments is much smaller than the number of points, it is computationally feasible to integrate long-range interactions into this framework. Additionally, two different types of interactions are introduced: one for the local neighbourhood and another operating on a coarser scale.

This paper presents the entire processing chain. We show preliminary results achieved using the Vaihingen LiDAR dataset from the ISPRS Benchmark on Urban Classification and 3D Reconstruction, which consists of three test areas characterised by different and challenging conditions. The utilised classification features are described, and the advantages and remaining problems of our approach are discussed. We also compare our results to those generated by a point-based classification and show that a slight improvement is obtained with this first implementation.
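
The segment-generation step described above can be sketched as a conditional Euclidean clustering that merges points within a search radius only when they carry the same class label from the point-based CRF. The sketch below is a simplified stand-in (the CRF stages themselves are not shown), and the radius and synthetic labels are placeholders.

```python
# Hedged sketch of conditional Euclidean clustering: merge points within a
# radius only if they share the class label produced by the first CRF stage.
import numpy as np
from scipy.spatial import cKDTree


def conditional_euclidean_clusters(points, labels, radius=1.0):
    tree = cKDTree(points)
    segment = np.full(len(points), -1, dtype=int)
    seg_id = 0
    for seed in range(len(points)):
        if segment[seed] != -1:
            continue
        segment[seed] = seg_id
        stack = [seed]
        while stack:
            p = stack.pop()
            for q in tree.query_ball_point(points[p], r=radius):
                # the "condition": same object label from the point-based stage
                if segment[q] == -1 and labels[q] == labels[p]:
                    segment[q] = seg_id
                    stack.append(q)
        seg_id += 1
    return segment


rng = np.random.default_rng(4)
pts = rng.uniform(0, 10, (500, 3))
point_labels = (pts[:, 0] > 5).astype(int)   # stand-in for the point-based CRF output
print("number of segments:", conditional_euclidean_clusters(pts, point_labels).max() + 1)
```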


2015 ◽  
Vol 11 (3) ◽  
pp. 26-48 ◽  
Author(s):  
Guilherme Moreira ◽  
Maribel Yasmina Santos ◽  
João Moura Pires ◽  
João Galvão

Huge amounts of data are available for analysis in today's organizations, which face several challenges when trying to analyze the data they generate with the aim of extracting useful information. This analytical capability needs to be enhanced with tools capable of dealing with big data sets without making the analytical process an arduous task. Clustering is often used in the data analysis process, as this technique does not require any prior knowledge about the data. However, clustering algorithms usually require one or more input parameters that influence the clustering process and the results that can be obtained. This work analyses the relation between the three input parameters of the SNN (Shared Nearest Neighbor) clustering algorithm, k, Eps and MinPts, providing a comprehensive understanding of the relationships identified between them. Moreover, this work also proposes specific guidelines for the definition of appropriate input parameters, optimizing the processing time, as the number of trials needed to achieve appropriate results can be substantially reduced.
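
For reference, the classic SNN density-based clustering scheme sketched below shows where the three parameters enter: k sets the neighbourhood size, Eps thresholds the shared-neighbour similarity, and MinPts separates core points from noise. The parameter values and data are illustrative only and do not reflect the guidelines derived in the paper.

```python
# Hedged sketch of classic SNN density-based clustering, highlighting the
# roles of the parameters k, Eps and MinPts. Values are illustrative only.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def snn_clustering(X, k=20, eps=7, min_pts=10):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    knn = nn.kneighbors(X, return_distance=False)[:, 1:]   # drop the point itself
    neigh = [set(row) for row in knn]

    n = len(X)
    # SNN similarity: shared neighbours between points that are mutual k-nearest neighbours
    sim = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in neigh[i]:
            if i in neigh[j]:
                sim[i, j] = sim[j, i] = len(neigh[i] & neigh[j])

    density = (sim >= eps).sum(axis=1)       # SNN density of each point
    core = density >= min_pts                # core points

    labels = np.full(n, -1, dtype=int)       # -1 marks noise
    cid = 0
    for i in np.where(core)[0]:
        if labels[i] != -1:
            continue
        labels[i] = cid
        stack = [i]
        while stack:
            p = stack.pop()
            for q in range(n):
                if sim[p, q] >= eps and labels[q] == -1 and core[q]:
                    labels[q] = cid
                    stack.append(q)
        cid += 1
    # attach non-core points to a sufficiently similar core cluster
    for i in np.where(~core)[0]:
        j = int(np.argmax(sim[i]))
        if sim[i, j] >= eps and labels[j] != -1:
            labels[i] = labels[j]
    return labels


rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.5, (150, 2)), rng.normal(5, 0.5, (150, 2))])
print(np.unique(snn_clustering(X), return_counts=True))
```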

