A New Framework For Interactive Segmentation of Point Clouds

Author(s):  
K. Liu ◽  
J. Boehm

Point cloud segmentation is a fundamental problem in point cloud processing. Segmenting a point cloud fully automatically is very challenging due to the properties of point clouds as well as the differing requirements of distinct users. In this paper, an interactive segmentation method for point clouds is proposed. Only two strokes need to be drawn intuitively to indicate the target object and the background, respectively. The drawn strokes are sparse and need not cover the whole object. Given the strokes, a weighted graph is built and the segmentation is formulated as a minimization problem, which is solved efficiently using the Max Flow Min Cut algorithm. In the experiments, mobile mapping data of a city area is utilized. The resulting segmentations demonstrate the efficiency of the method, which can potentially be applied to general point clouds.
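
A minimal sketch of a stroke-seeded graph-cut segmentation of this kind, assuming a k-nearest-neighbour graph over the points and NetworkX for the minimum s-t cut; the function name, the Gaussian edge weights, and the parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def graph_cut_segment(points, fg_idx, bg_idx, k=8, sigma=0.1):
    """Label each point as object (1) or background (0) via a minimum s-t cut."""
    tree = cKDTree(points)
    n = len(points)
    G = nx.Graph()
    # Smoothness edges between k nearest neighbours; weight decays with distance.
    dists, nbrs = tree.query(points, k=k + 1)
    for i in range(n):
        for d, j in zip(dists[i, 1:], nbrs[i, 1:]):
            G.add_edge(i, int(j), capacity=float(np.exp(-(d ** 2) / (2 * sigma ** 2))))
    # Terminal edges: stroke points are hard-constrained to the source/sink.
    for i in fg_idx:
        G.add_edge('s', int(i), capacity=float('inf'))
    for i in bg_idx:
        G.add_edge(int(i), 't', capacity=float('inf'))
    _, (source_side, _) = nx.minimum_cut(G, 's', 't')
    labels = np.zeros(n, dtype=int)
    labels[[i for i in source_side if i != 's']] = 1
    return labels
```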

Author(s):  
C. Wen ◽  
S. Lin ◽  
C. Wang ◽  
J. Li

Point clouds acquired by RGB-D camera-based indoor mobile mapping systems suffer from noise, uneven point distribution, and incompleteness, all of which make planar surface segmentation difficult. This paper presents a novel color-enhanced hybrid planar surface segmentation model for RGB-D camera-based indoor mobile mapping point clouds based on region growing, specifically addressing planar surface extraction from such noisy and incomplete data. The proposed model combines color moment features with the curvature feature to select better seed points. Additionally, a more robust growing criterion based on the hybrid features is developed to avoid generating excessive over-segmentation debris. A segmentation evaluation process with a small set of labeled segmented data is used to determine the optimal hybrid weight. Several comparative experiments were conducted to evaluate the segmentation model, and the results demonstrate the effectiveness and efficiency of the proposed hybrid segmentation method for indoor mobile mapping three-dimensional (3D) point cloud data.
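
An illustrative sketch (not the authors' code) of hybrid seed scoring for region growing: points with low local curvature and locally homogeneous colour make better seeds, and the weight `alpha` plays the role of the hybrid weight tuned on a small labelled set. The curvature proxy and colour statistic below are assumed feature choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def seed_scores(xyz, rgb, k=20, alpha=0.5):
    """Lower score = better region-growing seed (flat and colour-homogeneous)."""
    tree = cKDTree(xyz)
    _, nbrs = tree.query(xyz, k=k)
    scores = np.empty(len(xyz))
    for i, idx in enumerate(nbrs):
        # Curvature proxy: smallest eigenvalue of the local covariance, normalised.
        eigvals = np.linalg.eigvalsh(np.cov(xyz[idx].T))
        curvature = eigvals[0] / max(eigvals.sum(), 1e-12)
        # Colour spread (first-moment deviation) over the neighbourhood.
        color_var = np.linalg.norm(rgb[idx] - rgb[idx].mean(axis=0), axis=1).mean()
        scores[i] = alpha * curvature + (1 - alpha) * color_var
    return scores  # grow regions starting from the lowest-scoring points
```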


2020 ◽  
Vol 12 (18) ◽  
pp. 2923
Author(s):  
Tengfei Zhou ◽  
Xiaojun Cheng ◽  
Peng Lin ◽  
Zhenlun Wu ◽  
Ensheng Liu

Due to environmental and human factors, as well as the instrument itself, there are many uncertainties in point clouds, which directly affect data quality and the accuracy of subsequent processing such as point cloud segmentation and 3D modeling. In this paper, to address this problem, stochastic information of the point cloud coordinates is taken into account, and on the basis of the scanner observation principle within the Gauss–Helmert model, a novel general point-based self-calibration method is developed for terrestrial laser scanners, incorporating both five additional parameters and six exterior orientation parameters. For cases where the instrument accuracy differs from the nominal values, a variance component estimation algorithm is implemented to reweight the outliers once the residual errors of the observations are obtained. Since the proposed method is essentially a nonlinear model, the Gauss–Newton iteration method is applied to derive the solutions for the additional parameters and exterior orientation parameters. We conducted experiments using simulated and real data and compared the results with those of two existing methods. The experimental results showed that the proposed method could improve the point accuracy from 10⁻⁴ to 10⁻⁸ (a priori known) and 10⁻⁷ (a priori unknown), and reduced the correlation among the parameters (approximately 60% of volume). However, it is undeniable that some correlations increased instead, which is a limitation of the general method.
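
A generic weighted Gauss–Newton iteration of the kind used to solve for the additional and exterior orientation parameters; `residual` and `jacobian` stand in for the scanner observation equations of the Gauss–Helmert model and, like the weighting scheme, are assumptions made for illustration.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, weights, max_iter=20, tol=1e-10):
    """Weighted nonlinear least squares by Gauss-Newton iteration."""
    x = np.asarray(x0, dtype=float)
    W = np.diag(weights)  # weights may be updated by variance component estimation
    for _ in range(max_iter):
        r = residual(x)                       # misclosures at the current estimate
        J = jacobian(x)                       # partial derivatives w.r.t. parameters
        dx = np.linalg.solve(J.T @ W @ J, -J.T @ W @ r)
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```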


2019 ◽  
Vol 11 (23) ◽  
pp. 2727 ◽  
Author(s):  
Ming Huang ◽  
Pengcheng Wei ◽  
Xianglei Liu

Plane segmentation is a basic yet important process in light detection and ranging (LiDAR) point cloud processing. Traditional point cloud plane segmentation algorithms are typically affected by the number of points and by noisy data, which results in low segmentation efficiency and poor segmentation quality. Hence, an efficient encoding voxel-based segmentation (EVBS) algorithm based on a fast adjacent voxel search is proposed in this study. First, a binary octree algorithm is proposed to construct and encode the voxels used as segmentation objects, which allows voxel features to be computed quickly and accurately. Second, a voxel-based region growing algorithm is proposed to cluster the corresponding voxels and perform the initial point cloud segmentation, which improves the rationality of seed selection. Finally, a refining point method is proposed to solve the problem of under-segmentation in unlabeled voxels by judging the relationship between the points and the segmented plane. Experimental results demonstrate that the proposed algorithm outperforms traditional algorithms in terms of computation time, extraction accuracy, and recall rate.
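
A simplified stand-in for the voxel step: hash points into a regular grid and look up the 26-connected neighbours of a voxel in constant time, which is the operation the encoded binary octree accelerates. Plain grid hashing is an assumption made here for brevity; the paper itself uses a binary octree code.

```python
import numpy as np
from collections import defaultdict
from itertools import product

def voxelize(points, size):
    """Map each voxel key (ix, iy, iz) to the indices of the points it contains."""
    grid = defaultdict(list)
    keys = np.floor(points / size).astype(int)
    for i, key in enumerate(map(tuple, keys)):
        grid[key].append(i)
    return grid

def adjacent_voxels(grid, key):
    """Return the keys of occupied voxels among the 26 neighbours of `key`."""
    neighbours = []
    for offset in product((-1, 0, 1), repeat=3):
        if offset == (0, 0, 0):
            continue
        nb = (key[0] + offset[0], key[1] + offset[1], key[2] + offset[2])
        if nb in grid:
            neighbours.append(nb)
    return neighbours
```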


2014 ◽  
Vol 513-517 ◽  
pp. 4193-4196
Author(s):  
Wen Bao Qiao ◽  
Ming Guo ◽  
Jun Jie Liu

In this paper, we propose an efficient way to produce an initial transformation matrix for two point clouds, which effectively avoids the local optima encountered when matching two point clouds with the standard Iterative Closest Point (ICP) algorithm. In our approach, the correspondences used to calculate the transformation matrix are confirmed before the point cloud is formed. We use depth images that have been carefully target-segmented to find the boundaries of shapes that reflect different views of the same target object. Each contour is then processed with the curvature scale space (CSS) method to find a sequence of characteristic points, and our method is applied to these characteristic points to find the best-matching pairs. Finally, we convert the matched characteristic points to 3D points, at which stage the correspondences are confirmed. These correspondences are used to compute an initial transformation matrix that indicates which part of the first point cloud should be matched to the second. In this way, we place the two point clouds in a correct initial location, so that the local optima of ICP and its variants can be avoided.
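
One standard way to realise the "initial transformation" step once matched 3D keypoint pairs are available is the Kabsch/SVD estimate of a rigid transform; this is a generic sketch under that assumption, not the authors' implementation.

```python
import numpy as np

def initial_transform(src, dst):
    """Return R (3x3) and t (3,) that best map src points onto dst in least squares."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)             # cross-covariance of centred pairs
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

The resulting R and t can then be handed to ICP as the starting pose, so that its iterations refine an already roughly aligned pair of clouds.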


2020 ◽  
Vol 37 (6) ◽  
pp. 1019-1027
Author(s):  
Ali Saglam ◽  
Hasan B. Makineci ◽  
Ömer K. Baykan ◽  
Nurdan Akhan Baykan

Point cloud processing is a challenging field because the points in the clouds are three-dimensional, irregularly distributed signals. For this reason, the points in point clouds are usually resampled into regularly distributed voxels in the literature. Voxelization as a preprocessing step significantly accelerates the process of segmenting surfaces. Geometric cues such as plane directions (normals) in the voxels are mostly used to segment the local surfaces. However, the sampling process may place a non-planar point group (patch), mostly found on edges and corners, inside a single voxel. Such voxels can mislead the segmentation process. In this paper, we separate the non-planar patches into planar sub-patches using k-means clustering. The largest of the planar sub-patches replaces the normal and barycenter properties of the voxel with its own. We tested this process within a successful point cloud segmentation method and measured its effect on two point cloud segmentation datasets (Mosque and Train Station). The method increases the segmentation accuracy on the Mosque dataset from 83.84% to 87.86% and on the Train Station dataset from 85.36% to 87.07%.
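
A sketch of the sub-patch step, assuming scikit-learn: split a non-planar voxel patch with k-means (here clustered on point normals, an assumed feature choice), keep the largest cluster, and recompute the voxel's normal and barycenter from it. Consistently oriented normals are also assumed.

```python
import numpy as np
from sklearn.cluster import KMeans

def refine_voxel(points, normals, k=2):
    """Replace a voxel's normal and barycenter with those of its largest planar sub-patch."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(normals)
    largest = np.argmax(np.bincount(labels))
    sub_points = points[labels == largest]
    sub_normals = normals[labels == largest]
    barycenter = sub_points.mean(axis=0)
    normal = sub_normals.mean(axis=0)
    normal /= np.linalg.norm(normal)
    return barycenter, normal
```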


Author(s):  
S. Hofmann ◽  
C. Brenner

Mobile mapping data is widely used in various applications, which makes it especially important for data users to obtain a statistically verified quality statement on the geometric accuracy of the acquired point clouds or their processed products. The accuracy of point clouds can be divided into an absolute and a relative quality, where the absolute quality describes the position of the point cloud in a world coordinate system such as WGS84 or UTM, whereas the relative accuracy describes the accuracy within the point cloud itself. Furthermore, the quality of processed products such as segmented features depends on the global accuracy of the point cloud but mainly on the quality of the processing steps. Several data sources with different characteristics and quality can be considered as potential reference data, such as cadastral maps, orthophotos, artificial control objects, or terrestrial surveys using a total station. In this work a test field in a selected residential area was acquired as reference data in a terrestrial survey using a total station. In order to reach high accuracy, the stationing of the total station was based on a newly established geodetic network with a local accuracy of less than 3 mm. The global position of the network was determined using a long-term GNSS survey, reaching an accuracy of 8 mm. Based on this geodetic network, a 3D test field with facades and street profiles was measured with a total station, each point with a two-dimensional position and altitude. In addition, the surfaces of poles of street lights, traffic signs, and trees were acquired using the scanning mode of the total station.

By comparing this reference data to the mobile mapping point clouds acquired in several measurement campaigns, a detailed quality statement on the accuracy of the point cloud data is made. Additionally, the advantages and disadvantages of the described reference data source concerning availability, cost, accuracy, and applicability are discussed.


Author(s):  
M. Bassier ◽  
M. Bonduel ◽  
B. Van Genechten ◽  
M. Vergauwen

Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically segment only a single type of primitive, such as planes or cylinders. Also, current algorithms tend to oversegment the data and are often sensor or scene dependent.

In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field, after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments show that the method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds of up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering are approximately 80%. Overall, clustering the data reduces oversegmentation by nearly 22%. These clusters will be classified and used as a basis for the reconstruction of BIM models.
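
A hedged sketch of the kind of planarity/smoothness test a greedy region growing step conditions on: an octree leaf joins the current cluster if its normal is close to the cluster normal and its centroid lies near the cluster plane. The thresholds and feature choices below are illustrative, not the paper's values.

```python
import numpy as np

def accepts(cluster_normal, cluster_point, leaf_normal, leaf_centroid,
            max_angle_deg=10.0, max_dist=0.05):
    """True if the candidate leaf is geometrically consistent with the growing cluster."""
    cos_angle = abs(np.dot(cluster_normal, leaf_normal))          # normal agreement
    plane_dist = abs(np.dot(leaf_centroid - cluster_point, cluster_normal))  # offset from plane
    return cos_angle >= np.cos(np.radians(max_angle_deg)) and plane_dist <= max_dist
```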


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Liang Gong ◽  
Xiaofeng Du ◽  
Kai Zhu ◽  
Ke Lin ◽  
Qiaojun Lou ◽  
...  

The automated measurement of crop phenotypic parameters is of great significance to the quantitative study of crop growth. The segmentation and classification of crop point clouds help realize the automation of crop phenotypic parameter measurement. At present, crop spike-shaped point cloud segmentation suffers from problems such as few samples, uneven distribution of point clouds, occlusion between stem and spike, disordered arrangement of point clouds, and a lack of targeted network models. Traditional clustering methods can segment plant organ point clouds with relatively independent spatial locations, but the accuracy is not acceptable. This paper first builds a desktop-level point cloud scanning apparatus based on a structured-light projection module to facilitate the point cloud acquisition process. Then, rice ear point clouds were collected and a rice ear point cloud dataset was created. In addition, data augmentation is used to improve sample utilization efficiency and training accuracy. Finally, a 3D point cloud convolutional neural network model called Panicle-3D was designed to achieve better segmentation accuracy. Specifically, the design of Panicle-3D targets the multiscale characteristics of plant organs, combining the PointConv structure with long and short skip connections, which accelerates the convergence of the network and reduces the loss of features during point cloud downsampling. In comparison experiments, the segmentation accuracy of Panicle-3D reaches 93.4%, which is higher than that of PointNet. Panicle-3D is also suitable for other similar crop point cloud segmentation tasks.
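
A typical point cloud augmentation routine of the kind mentioned above (rotation about the vertical axis plus bounded Gaussian jitter); the specific transforms and parameters are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def augment(points, jitter_sigma=0.005, jitter_clip=0.02):
    """Randomly rotate a point cloud about the z-axis and add clipped Gaussian jitter."""
    theta = np.random.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot_z = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
    jitter = np.clip(np.random.normal(0, jitter_sigma, points.shape),
                     -jitter_clip, jitter_clip)
    return points @ rot_z.T + jitter
```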


Author(s):  
T. Yamakawa ◽  
K. Fukano ◽  
R. Onodera ◽  
H. Masuda

Mobile mapping systems (MMS) can capture dense point clouds of urban scenes. To visualize realistic scenes from point clouds, RGB colors have to be added to them. To generate colored point clouds in a post-process, each point is projected onto the camera images and an RGB color is copied to the point at the projected position. However, incorrect colors are often added to point clouds because of the misalignment of laser scanners, calibration errors between cameras and laser scanners, or failures of GPS acquisition. In this paper, we propose a new method to correct the RGB colors of point clouds captured by an MMS. In our method, the RGB colors of a point cloud are corrected by comparing intensity images and RGB images. However, since an MMS outputs sparse and anisotropic point clouds, regular images cannot be obtained from the intensities of the points. Therefore, we convert the point cloud into a mesh model and project the triangle faces onto image space, on which regular lattices are defined. Then we extract edge features from the intensity images and RGB images and detect their correspondences. In our experiments, our method worked very well for correcting the RGB colors of point clouds captured by an MMS.
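
A minimal pinhole projection of the kind used when copying RGB values onto points; here K, R, and t stand for the camera intrinsics and pose, which in practice come from the MMS calibration and trajectory. The names and the simple model (no lens distortion) are assumptions for illustration.

```python
import numpy as np

def project(points, K, R, t):
    """Project Nx3 world points into pixel coordinates (u, v) with a pinhole camera."""
    cam = R @ points.T + t.reshape(3, 1)   # world frame -> camera frame
    uvw = K @ cam                          # camera frame -> homogeneous image coordinates
    return (uvw[:2] / uvw[2]).T            # divide by depth to get pixel positions
```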


2021 ◽  
Vol 8 (2) ◽  
pp. 303-315
Author(s):  
Jingyu Gong ◽  
Zhou Ye ◽  
Lizhuang Ma

A significant performance boost has been achieved in point cloud semantic segmentation by exploiting the encoder-decoder architecture and novel convolution operations for point clouds. However, co-occurrence relationships within a local region, which can directly influence segmentation results, are usually ignored by current works. In this paper, we propose a neighborhood co-occurrence matrix (NCM) to model local co-occurrence relationships in a point cloud. We generate a target NCM and a prediction NCM from the semantic labels and the prediction map, respectively. Then, Kullback-Leibler (KL) divergence is used to maximize the similarity between the target and prediction NCMs in order to learn the co-occurrence relationships. Moreover, for large scenes where the NCMs of a sampled point cloud and of the whole scene differ greatly, we introduce a reverse form of the KL divergence which better handles this difference when supervising the prediction NCMs. We integrate our method into an existing backbone and conduct comprehensive experiments on three datasets: Semantic3D for outdoor space segmentation, and S3DIS and ScanNet v2 for indoor scene segmentation. Results indicate that our method significantly improves upon the backbone and outperforms many leading competitors.
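
A sketch of an NCM supervision term of this kind, assuming PyTorch: rows of the predicted and target co-occurrence matrices are normalised to distributions and compared with KL divergence, and a `reverse` flag swaps the arguments as described for large scenes. The exact normalisation and reduction are assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def ncm_loss(pred_ncm, target_ncm, reverse=False, eps=1e-8):
    """KL divergence between row-normalised co-occurrence matrices."""
    p = pred_ncm / (pred_ncm.sum(dim=-1, keepdim=True) + eps)
    q = target_ncm / (target_ncm.sum(dim=-1, keepdim=True) + eps)
    if reverse:
        p, q = q, p  # reverse form: compare in the opposite direction
    # F.kl_div(input, target) expects log-probabilities as input; this computes KL(q || p).
    return F.kl_div((p + eps).log(), q, reduction='batchmean')
```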

