VOXEL- AND GRAPH-BASED POINT CLOUD SEGMENTATION OF 3D SCENES USING PERCEPTUAL GROUPING LAWS

Author(s):  
Y. Xu ◽  
L. Hoegner ◽  
S. Tuttas ◽  
U. Stilla

Segmentation is the fundamental step for recognizing and extracting objects from point clouds of 3D scenes. In this paper, we present a strategy for point cloud segmentation using a voxel structure and graph-based clustering with perceptual grouping laws, which allows a learning-free, completely automatic, yet parametric solution for segmenting 3D point clouds. Specifically, two segmentation methods utilizing voxel and supervoxel structures are reported and tested. The voxel-based data structure increases the efficiency and robustness of the segmentation process, suppressing the negative effect of noise, outliers, and uneven point densities. The clustering of voxels and supervoxels is carried out using graph theory on the basis of local contextual information, whereas conventional clustering algorithms commonly rely on merely pairwise information. By the use of perceptual grouping laws, our method conducts the segmentation in a purely geometric way, avoiding the use of RGB color and intensity information, so that it can be applied in more general settings. Experiments on different datasets have demonstrated that our proposed methods achieve good results, especially for complex scenes and nonplanar object surfaces. Quantitative comparisons between our methods and other representative segmentation methods also confirm the effectiveness and efficiency of our proposals.
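Since the abstract gives no implementation details, the following is only a minimal sketch of the general idea it describes: voxelize the cloud, build an adjacency graph over voxel centroids, and group voxels whose normals agree, with the normal-angle threshold standing in for the perceptual grouping laws. All function names, parameters, and thresholds are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: voxel grid + graph clustering by normal similarity.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def voxelize(points, voxel_size=0.1):
    """Group points into voxels; return voxel centroids and a point-to-voxel index."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    n = inv.max() + 1
    counts = np.bincount(inv, minlength=n).astype(float)
    centroids = np.stack(
        [np.bincount(inv, weights=points[:, d], minlength=n) / counts for d in range(3)],
        axis=1)
    return centroids, inv

def pca_normals(centroids, k=10):
    """Per-voxel normal: smallest principal axis of the k nearest voxel centroids."""
    k = min(k, len(centroids))
    idx = cKDTree(centroids).query(centroids, k=k)[1]
    normals = np.zeros_like(centroids)
    for i, nb in enumerate(idx):
        q = centroids[nb] - centroids[nb].mean(axis=0)
        normals[i] = np.linalg.svd(q, full_matrices=False)[2][-1]
    return normals

def segment(points, voxel_size=0.1, k=8, angle_deg=12.0):
    centroids, inv = voxelize(points, voxel_size)
    normals = pca_normals(centroids)
    k = min(k, len(centroids) - 1)
    nbrs = cKDTree(centroids).query(centroids, k=k + 1)[1][:, 1:]   # drop self
    cos_t = np.cos(np.radians(angle_deg))
    rows, cols = [], []
    for i, nb in enumerate(nbrs):
        for j in nb:
            if abs(normals[i] @ normals[j]) > cos_t:   # keep "similar" voxels connected
                rows.append(i)
                cols.append(j)
    adj = coo_matrix((np.ones(len(rows)), (rows, cols)),
                     shape=(len(centroids), len(centroids)))
    _, voxel_labels = connected_components(adj, directed=False)
    return voxel_labels[inv]   # per-point segment labels

# Example: labels = segment(np.random.rand(5000, 3), voxel_size=0.05)
```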

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2731
Author(s):  
Yunbo Rao ◽  
Menghan Zhang ◽  
Zhanglin Cheng ◽  
Junmin Xue ◽  
Jiansu Pu ◽  
...  

Accurate segmentation of entity categories is a critical step in 3D scene understanding. This paper presents a fast deep neural network model with a Dense Conditional Random Field (DCRF) as a post-processing method, which can perform accurate semantic segmentation of 3D point cloud scenes. On this basis, a compact but flexible framework is introduced that segments the semantics of point clouds concurrently, contributing to more precise segmentation. Moreover, based on the semantic labels, a novel DCRF model is elaborated to refine the segmentation result. In addition, without any sacrifice in accuracy, we optimize the original point cloud data, allowing the network to handle less data. In the experiments, our proposed method is evaluated comprehensively with four indicators, demonstrating its superiority.
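As a loose, hedged illustration of dense-CRF post-processing over per-point class probabilities (not the authors' DCRF model), one could use the pydensecrf package with a purely positional pairwise term; the bandwidth and compatibility values below are assumptions.

```python
# Hedged sketch: dense-CRF refinement of per-point softmax predictions.
# Not the paper's DCRF model; parameters and feature scaling are assumptions.
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def refine_labels(points, softmax_probs, xyz_sigma=0.5, n_iters=5):
    """points: (N, 3) float array; softmax_probs: (n_classes, N) network output."""
    n_classes, n_points = softmax_probs.shape
    d = dcrf.DenseCRF(n_points, n_classes)
    d.setUnaryEnergy(unary_from_softmax(softmax_probs))      # -log(p) unaries
    # Pairwise term on 3D position only, scaled by an assumed bandwidth.
    feats = np.ascontiguousarray((points / xyz_sigma).T, dtype=np.float32)
    d.addPairwiseEnergy(feats, compat=3)
    q = np.array(d.inference(n_iters))                       # (n_classes, N)
    return q.argmax(axis=0)                                  # refined per-point labels
```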


2020 ◽  
Vol 12 (3) ◽  
pp. 543 ◽  
Author(s):  
Małgorzata Jarząbek-Rychard ◽  
Dong Lin ◽  
Hans-Gerd Maas

Targeted energy management and control is becoming an increasing concern in the building sector. Automatic analyses of thermal data, which minimize the subjectivity of the assessment and allow for large-scale inspections, are therefore of high interest. In this study, we propose an approach for the supervised extraction of façade openings (windows and doors) from photogrammetric 3D point clouds attributed with RGB and thermal infrared (TIR) information. The novelty of the proposed approach is the combination of thermal information with other available characteristics of the data for a classification performed directly in 3D space. Images acquired in the visible and thermal infrared spectra serve as input data for the camera pose estimation and the reconstruction of 3D scene geometry. To investigate the relevance of different information types to the classification performance, a Random Forest algorithm is applied to various sets of computed features. The best feature combination is then used as input for a Conditional Random Field that enables us to incorporate contextual information and consider the interaction between the points. The evaluation executed on a per-point level shows that the fusion of all available information types together with context consideration allows us to extract objects with 90% completeness and 95% correctness. The corresponding assessment executed on a per-object level shows 97% completeness and 88% accuracy.
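A minimal sketch of the feature-set comparison step using scikit-learn's Random Forest; the feature groups, split, and metric are placeholders rather than the study's actual setup.

```python
# Illustrative only: compare per-point feature subsets with a Random Forest.
# Feature names and the train/test split are placeholders, not the study's setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def evaluate_feature_sets(features, labels, feature_sets):
    """features: dict name -> (N,) or (N, d) arrays; feature_sets: dict name -> list of keys."""
    scores = {}
    for set_name, keys in feature_sets.items():
        X = np.column_stack([np.atleast_2d(features[k].T).T for k in keys])
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
        clf.fit(X_tr, y_tr)
        scores[set_name] = f1_score(y_te, clf.predict(X_te), average="binary")
    return scores

# Hypothetical usage with assumed feature groups:
# scores = evaluate_feature_sets(
#     {"geometry": geom, "rgb": rgb, "tir": tir}, labels,
#     {"geom only": ["geometry"],
#      "geom+rgb": ["geometry", "rgb"],
#      "geom+rgb+tir": ["geometry", "rgb", "tir"]})
```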


2019 ◽  
Vol 11 (23) ◽  
pp. 2727 ◽  
Author(s):  
Ming Huang ◽  
Pengcheng Wei ◽  
Xianglei Liu

Plane segmentation is a basic yet important process in light detection and ranging (LiDAR) point cloud processing. Traditional point cloud plane segmentation algorithms are typically affected by the number of points and by noisy data, which results in slow segmentation and poor segmentation quality. Hence, an efficient encoding voxel-based segmentation (EVBS) algorithm based on a fast adjacent voxel search is proposed in this study. First, a binary octree algorithm is proposed to construct and encode the voxels that serve as the segmentation objects, so that voxel features can be computed quickly and accurately. Second, a voxel-based region growing algorithm is proposed to cluster the corresponding voxels for the initial point cloud segmentation, which improves the rationality of seed selection. Finally, a point refinement method is proposed to solve the problem of under-segmentation in unlabeled voxels by judging the relationship between the points and the segmented plane. Experimental results demonstrate that the proposed algorithm outperforms traditional algorithms in terms of computation time, extraction accuracy, and recall rate.
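A rough sketch of voxel-based region growing (without the paper's binary octree encoding and seed-selection strategy): voxels are keyed by integer grid coordinates, adjacency is the 26-neighborhood, and the growing criterion is an assumed normal-angle threshold.

```python
# Rough sketch of voxel-based region growing (not the paper's EVBS implementation).
import numpy as np
from collections import deque
from itertools import product

def grow_regions(voxel_keys, voxel_normals, angle_deg=10.0):
    """voxel_keys: (V, 3) integer grid coords; voxel_normals: (V, 3) unit normals."""
    cos_t = np.cos(np.radians(angle_deg))
    key_to_idx = {tuple(k): i for i, k in enumerate(voxel_keys)}
    offsets = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]
    labels = np.full(len(voxel_keys), -1, dtype=int)
    current = 0
    for seed in range(len(voxel_keys)):      # simplistic seed order; the paper
        if labels[seed] != -1:               # selects seeds more carefully
            continue
        labels[seed] = current
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for off in offsets:              # fast adjacent-voxel lookup by key
                j = key_to_idx.get(tuple(voxel_keys[i] + np.array(off)))
                if j is None or labels[j] != -1:
                    continue
                # grow into adjacent voxels whose normals agree with the seed voxel
                if abs(voxel_normals[j] @ voxel_normals[seed]) > cos_t:
                    labels[j] = current
                    queue.append(j)
        current += 1
    return labels
```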


2020 ◽  
pp. 002029402091986
Author(s):  
Xiaocui Yuan ◽  
Huawei Chen ◽  
Baoling Liu

Cluster analysis is one of the most important techniques in point cloud processing tasks such as registration, segmentation, and outlier detection. However, most existing clustering algorithms exhibit low computational efficiency and a high demand for computational resources, especially for large data. Moreover, clusters and outliers are sometimes inseparable, especially in point clouds contaminated with outliers, and most cluster-based algorithms can identify cluster outliers well but not sparse outliers. We develop a novel clustering method, called spatial neighborhood connected region labeling. The method defines a spatial connectivity criterion, finds point connections within the k-nearest neighborhood based on this criterion, and assigns connected points to the same cluster. Our method can accurately and quickly cluster datasets using only one parameter, k. Compared with K-means, hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN), our method provides better accuracy with less computational time for data clustering. For outlier detection in point clouds, our method can identify not only cluster outliers but also sparse outliers. More accurate detection results are achieved compared to state-of-the-art outlier detection methods such as local outlier factor and DBSCAN.
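A compact sketch of the general idea, i.e., connectivity within the k-nearest neighborhood followed by connected-component labeling, with small components flagged as sparse outliers; the distance-based connectivity criterion used here is an assumption, not the paper's exact definition.

```python
# Sketch of k-nearest-neighbourhood connectivity clustering (illustrative criterion).
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def knn_connectivity_clusters(points, k=8, min_cluster_size=10):
    dists, idx = cKDTree(points).query(points, k=k + 1)
    dists, idx = dists[:, 1:], idx[:, 1:]                    # drop self-match
    # Assumed connectivity criterion: connect a point to neighbours closer than
    # a global cutoff derived from the mean k-NN distance.
    cutoff = 2.0 * dists.mean()
    rows = np.repeat(np.arange(len(points)), k)
    cols = idx.ravel()
    keep = dists.ravel() < cutoff
    adj = coo_matrix((np.ones(keep.sum()), (rows[keep], cols[keep])),
                     shape=(len(points), len(points)))
    _, labels = connected_components(adj, directed=False)
    # Points in very small components can be flagged as sparse outliers.
    sizes = np.bincount(labels)
    is_outlier = sizes[labels] < min_cluster_size
    return labels, is_outlier
```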


Author(s):  
K. Liu ◽  
J. Boehm

Point cloud segmentation is a fundamental problem in point cloud processing. Segmenting a point cloud fully automatically is very challenging due to the properties of point clouds as well as the differing requirements of different users. In this paper, an interactive segmentation method for point clouds is proposed. Only two strokes need to be drawn intuitively, indicating the target object and the background respectively. The drawn strokes are sparse and do not need to cover the whole object. Given the strokes, a weighted graph is built and the segmentation is formulated as a minimization problem. The problem is solved efficiently using the Max Flow Min Cut algorithm. In the experiments, mobile mapping data of a city area is utilized. The resulting segmentations demonstrate the efficiency of the method, which can potentially be applied to general point clouds.
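A small sketch of a stroke-seeded graph-cut formulation over a k-NN point graph, solved with networkx's minimum cut; the capacity functions and parameters are illustrative assumptions rather than the authors' exact formulation.

```python
# Illustrative stroke-seeded graph cut over a k-NN point graph (not the authors' code).
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def graph_cut_segment(points, fg_seeds, bg_seeds, k=8, sigma=0.05):
    """fg_seeds / bg_seeds: index arrays of points covered by the two strokes."""
    g = nx.Graph()
    dists, idx = cKDTree(points).query(points, k=k + 1)
    for i in range(len(points)):
        for d, j in zip(dists[i, 1:], idx[i, 1:]):
            # n-link: nearby neighbours are expensive to cut (assumed capacity model)
            g.add_edge(i, int(j), capacity=float(np.exp(-(d / sigma) ** 2)))
    big = 1e9                                   # t-links: hard seed constraints
    for i in fg_seeds:
        g.add_edge("source", int(i), capacity=big)
    for i in bg_seeds:
        g.add_edge(int(i), "sink", capacity=big)
    _, (reachable, _) = nx.minimum_cut(g, "source", "sink")
    labels = np.zeros(len(points), dtype=int)
    labels[[i for i in reachable if i != "source"]] = 1   # 1 = target object
    return labels
```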


2020 ◽  
Vol 37 (6) ◽  
pp. 1019-1027
Author(s):  
Ali Saglam ◽  
Hasan B. Makineci ◽  
Ömer K. Baykan ◽  
Nurdan Akhan Baykan

Point cloud processing is a challenging field because the points in the clouds are three-dimensional, irregularly distributed signals. For this reason, the points in point clouds are mostly sampled into regularly distributed voxels in the literature. Voxelization as a preprocessing step significantly accelerates the process of segmenting surfaces. Geometric cues such as plane directions (normals) in the voxels are mostly used to segment the local surfaces. However, the sampling process may place a non-planar point group (patch), mostly at edges and corners, in a single voxel. Such voxels can mislead the segmentation process. In this paper, we separate the non-planar patches into planar sub-patches using k-means clustering. The largest planar sub-patch then supplies the voxel's normal and barycenter properties. We have tested this process in a successful point cloud segmentation method and measured its effect on two point cloud segmentation datasets (Mosque and Train Station). The method increases the accuracy on the Mosque dataset from 83.84% to 87.86% and that on the Train Station dataset from 85.36% to 87.07%.
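A hedged sketch of the idea: test a voxel patch for planarity, split non-planar patches with k-means, and take the normal and barycenter of the dominant sub-patch; the planarity measure, threshold, and choice of clustering space are assumptions.

```python
# Sketch: replace a non-planar voxel patch's normal/barycentre with those of its
# dominant planar sub-patch found by k-means. Thresholds are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def plane_fit(pts):
    """Least-squares plane of a point set: returns (unit normal, barycentre, residual)."""
    c = pts.mean(axis=0)
    _, s, vt = np.linalg.svd(pts - c, full_matrices=False)
    return vt[-1], c, s[-1] / (s.sum() + 1e-12)

def refine_patch(pts, residual_thresh=0.05, n_sub=2):
    normal, center, residual = plane_fit(pts)
    if residual <= residual_thresh or len(pts) < 2 * n_sub:
        return normal, center                       # already planar enough
    sub = KMeans(n_clusters=n_sub, n_init=10, random_state=0).fit_predict(pts)
    largest = np.bincount(sub).argmax()             # dominant sub-patch
    normal, center, _ = plane_fit(pts[sub == largest])
    return normal, center
```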


Author(s):  
M. Bassier ◽  
M. Bonduel ◽  
B. Van Genechten ◽  
M. Vergauwen

Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect of the automated procedure from point cloud data to BIM. Current approaches typically segment only a single type of primitive such as planes or cylinders. Also, current algorithms tend to oversegment the data and are often sensor or scene dependent.

In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field, after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments show that the method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds of up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering are approximately 80%. Overall, oversegmentation is reduced by nearly 22% by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.
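The paper's CRF over candidate clusters is not specified in the abstract; purely to illustrate how local (unary) and contextual (pairwise) scores over a cluster adjacency graph can be combined, here is a toy iterated-conditional-modes (ICM) pass, which is a different and much simpler inference scheme than a full CRF solver.

```python
# Toy illustration of combining local and contextual scores over candidate clusters
# via ICM (iterated conditional modes); not the paper's CRF formulation.
import numpy as np

def icm_cluster_labels(unary, adjacency, pairwise, n_iters=10):
    """
    unary:     (C, L) local score of assigning each of C clusters one of L labels
    adjacency: list of (i, j) index pairs of adjacent clusters
    pairwise:  (L, L) compatibility between labels of adjacent clusters
    """
    labels = unary.argmax(axis=1)                      # start from local-only guess
    neighbours = {i: [] for i in range(len(unary))}
    for i, j in adjacency:
        neighbours[i].append(j)
        neighbours[j].append(i)
    for _ in range(n_iters):
        changed = False
        for i in range(len(unary)):
            # score each label by its local evidence plus agreement with neighbours
            score = unary[i] + sum(pairwise[:, labels[j]] for j in neighbours[i])
            new = int(np.argmax(score))
            if new != labels[i]:
                labels[i], changed = new, True
        if not changed:
            break
    return labels
```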


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Liang Gong ◽  
Xiaofeng Du ◽  
Kai Zhu ◽  
Ke Lin ◽  
Qiaojun Lou ◽  
...  

The automated measurement of crop phenotypic parameters is of great significance to the quantitative study of crop growth. The segmentation and classification of crop point clouds helps to automate the measurement of crop phenotypic parameters. At present, crop spike-shaped point cloud segmentation faces problems such as few samples, uneven point cloud distribution, occlusion between stem and spike, disordered point arrangement, and a lack of targeted network models. Traditional clustering methods can segment plant organ point clouds that are relatively independent in spatial location, but their accuracy is not acceptable. This paper first builds a desktop-level point cloud scanning apparatus based on a structured-light projection module to facilitate point cloud acquisition. Then, rice ear point clouds were collected and a rice ear point cloud dataset was created. In addition, data augmentation is used to improve sample utilization efficiency and training accuracy. Finally, a 3D point cloud convolutional neural network model called Panicle-3D was designed to achieve better segmentation accuracy. Specifically, the design of Panicle-3D targets the multiscale characteristics of plant organs, combining the PointConv structure with long and short skip connections, which accelerates the convergence of the network and reduces the loss of features during point cloud downsampling. In comparison experiments, the segmentation accuracy of Panicle-3D reaches 93.4%, which is higher than that of PointNet. Panicle-3D is suitable for other similar crop point cloud segmentation tasks.
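Panicle-3D itself is not specified in the abstract beyond PointConv blocks and long/short skip connections, so the following is a deliberately simplified PyTorch sketch of a per-point segmentation network with one long skip connection; it is not the authors' architecture.

```python
# Deliberately simplified per-point segmentation network with a long skip connection.
# This is NOT Panicle-3D; it only illustrates the skip-connection idea in PyTorch.
import torch
import torch.nn as nn

class TinyPointSeg(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv1d(64, 256, 1), nn.BatchNorm1d(256), nn.ReLU())
        # decoder input = global feature (256) broadcast per point + long skip (64)
        self.dec = nn.Sequential(nn.Conv1d(256 + 64, 128, 1), nn.BatchNorm1d(128),
                                 nn.ReLU(), nn.Conv1d(128, n_classes, 1))

    def forward(self, xyz):                       # xyz: (B, 3, N)
        f1 = self.enc1(xyz)                       # (B, 64, N)  per-point features
        f2 = self.enc2(f1)                        # (B, 256, N)
        g = f2.max(dim=2, keepdim=True).values    # (B, 256, 1) global feature
        g = g.expand(-1, -1, xyz.shape[2])        # broadcast back to every point
        return self.dec(torch.cat([g, f1], 1))    # long skip: concatenate f1

# logits = TinyPointSeg()(torch.randn(4, 3, 1024))   # -> (4, n_classes, 1024)
```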


2021 ◽  
Vol 8 (2) ◽  
pp. 303-315
Author(s):  
Jingyu Gong ◽  
Zhou Ye ◽  
Lizhuang Ma

A significant performance boost has been achieved in point cloud semantic segmentation through the use of encoder-decoder architectures and novel convolution operations for point clouds. However, co-occurrence relationships within a local region, which can directly influence segmentation results, are usually ignored by current works. In this paper, we propose a neighborhood co-occurrence matrix (NCM) to model local co-occurrence relationships in a point cloud. We generate a target NCM and a prediction NCM from the semantic labels and the prediction map, respectively. Then, Kullback-Leibler (KL) divergence is used to maximize the similarity between the target and prediction NCMs in order to learn the co-occurrence relationships. Moreover, for large scenes where the NCMs of a sampled point cloud and of the whole scene differ greatly, we introduce a reverse form of the KL divergence which can better handle this difference when supervising the prediction NCMs. We integrate our method into an existing backbone and conduct comprehensive experiments on three datasets: Semantic3D for outdoor space segmentation, and S3DIS and ScanNet v2 for indoor scene segmentation. Results indicate that our method can significantly improve upon the backbone and outperform many leading competitors.
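A hedged sketch of how neighborhood co-occurrence matrices and the (forward or reverse) KL term between them might be computed in PyTorch; the normalization and neighborhood handling are assumptions, not the paper's exact formulation.

```python
# Hedged sketch: neighbourhood co-occurrence matrices (NCM) from labels/predictions
# and a KL-divergence term between them. Normalisation details are assumptions.
import torch

def ncm(class_probs, neighbor_idx):
    """
    class_probs:  (N, C) one-hot labels or per-point class probabilities
    neighbor_idx: (N, K) indices of each point's K nearest neighbours
    Returns a (C, C) matrix of class co-occurrence within local neighbourhoods.
    """
    neigh = class_probs[neighbor_idx]                  # (N, K, C)
    m = torch.einsum("nc,nkd->cd", class_probs, neigh)
    return m / m.sum().clamp_min(1e-12)                # normalise to a distribution

def ncm_kl_loss(pred_probs, target_onehot, neighbor_idx, reverse=False, eps=1e-12):
    p = ncm(target_onehot, neighbor_idx) + eps         # target NCM
    q = ncm(pred_probs, neighbor_idx) + eps            # prediction NCM
    if reverse:                                        # reverse KL, e.g. for large scenes
        p, q = q, p
    return (p * (p / q).log()).sum()
```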


Author(s):  
M. Weinmann ◽  
A. Schmidt ◽  
C. Mallet ◽  
S. Hinz ◽  
F. Rottensteiner ◽  
...  

The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have been extensively, but separately, investigated in recent years, their connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and show that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
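A sketch of the individually optimized neighborhood idea: choose, per point, the neighborhood size that minimizes the eigenentropy of the local structure tensor and derive standard eigenvalue-based features from it; the candidate sizes and feature set are illustrative and may differ from the paper's.

```python
# Sketch: per-point optimal neighbourhood size via minimum eigenentropy, then
# eigenvalue-based features (linearity, planarity, scattering). Illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def optimal_neighborhood_features(points, k_candidates=(10, 25, 50, 75, 100)):
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        best_entropy, best_eigs = np.inf, None
        for k in k_candidates:
            nbrs = points[tree.query(p, k=k)[1]]
            cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
            eigs = np.sort(np.linalg.eigvalsh(cov))[::-1]     # l1 >= l2 >= l3
            e = eigs / (eigs.sum() + 1e-12)
            entropy = -np.sum(e * np.log(e + 1e-12))          # eigenentropy
            if entropy < best_entropy:                         # pick the most "ordered" k
                best_entropy, best_eigs = entropy, eigs
        l1, l2, l3 = best_eigs + 1e-12
        feats[i] = [(l1 - l2) / l1,          # linearity
                    (l2 - l3) / l1,          # planarity
                    l3 / l1]                 # scattering
    return feats
```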

