Normal Estimation for 3D Point Clouds via Local Plane Constraint and Multi-scale Selection

2020 · Vol 129 · pp. 102916
Author(s): Jun Zhou, Hua Huang, Bin Liu, Xiuping Liu

2021 · Vol 13 (15) · pp. 3021
Author(s): Bufan Zhao, Xianghong Hua, Kegen Yu, Xiaoxing He, Weixing Xue, ...

Urban object segmentation and classification are critical data processing steps in scene understanding, intelligent vehicles, and high-precision 3D maps. Semantic segmentation of 3D point clouds is the foundational step in object recognition. To identify intersecting objects and improve classification accuracy, this paper proposes a segment-based classification method for 3D point clouds. The method first divides the points into multi-scale supervoxels and groups them with the proposed inverse node graph (IN-Graph) construction, which requires no prior information about the nodes: supervoxels are grouped by judging the connection state of the edges between them. Graph cutting then minimizes the global energy, yielding structural segments that are as complete as possible while preserving boundaries. A random forest classifier is then used for supervised classification. To handle the mislabeling of scattered fragments, a higher-order CRF with small-label-cluster optimization is proposed to refine the classification results. Experiments on a mobile laser scanning (MLS) point cloud dataset and a terrestrial laser scanning (TLS) point cloud dataset yielded overall accuracies of 97.57% and 96.39%, respectively. Object boundaries were well preserved, and the method achieved good results in classifying cars and motorcycles. Further experimental analyses verified the advantages of the proposed method and demonstrated its practicality and versatility.
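The supervised classification stage lends itself to a short illustration. The sketch below trains a random forest on per-segment feature vectors; the feature dimensionality, number of classes, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of segment-wise supervised classification with a random
# forest. All data below is synthetic; per-segment features would normally be
# geometric descriptors (eigenvalue ratios, height statistics, intensity, ...).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

X_train = rng.normal(size=(1000, 12))      # 1000 segments, 12-D features (assumed)
y_train = rng.integers(0, 5, size=1000)    # 5 object classes (assumed)

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1)
clf.fit(X_train, y_train)

X_test = rng.normal(size=(200, 12))
segment_labels = clf.predict(X_test)       # one label per segment
```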


Sensors · 2014 · Vol 14 (12) · pp. 24156-24173
Author(s): Min Lu, Yulan Guo, Jun Zhang, Yanxin Ma, Yinjie Lei

2017 · Vol 29 (5) · pp. 1209-1224
Author(s): Baowei Lin, Fasheng Wang, Fangda Zhao, Yi Sun

Author(s): T. Wakita, J. Susaki

In this study, we propose a method to accurately extract vegetation from terrestrial three-dimensional (3D) point clouds for estimating a landscape index in urban areas. Extracting vegetation in urban areas is challenging because the light returned by vegetation does not show patterns as clear as those of man-made objects, and because urban areas contain many kinds of objects from which vegetation must be discriminated. The proposed method takes a multi-scale voxel approach to effectively extract different types of vegetation in complex urban areas. With two different voxel sizes, the method repeats a process that calculates eigenvalues from the set of points in each voxel, classifies the voxel of interest using the approximate curvature derived from those eigenvalues, and examines the connectivity of the valid voxels. We applied the proposed method to two datasets measured in a residential area in Kyoto, Japan. The validation results were acceptable, with F-measures of approximately 95% and 92%. It was also demonstrated that several types of vegetation were successfully extracted by the proposed method, whereas occluded vegetation was omitted. We conclude that the proposed method is suitable for extracting vegetation in urban areas from terrestrial light detection and ranging (LiDAR) data. In future work, the proposed method will be applied to mobile LiDAR data, and its performance on lower-density point clouds will be examined.
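The voxel classification step hinges on an approximate curvature derived from eigenvalues. Below is a minimal sketch assuming the common surface-variation definition, the smallest eigenvalue divided by the eigenvalue sum, computed from the covariance of the points in a voxel; the exact definition and any threshold are assumptions, not the paper's parameters.

```python
# Approximate curvature of a voxel from the eigenvalues of its point covariance.
# Planar (man-made) surfaces give values near zero; scattered vegetation points
# give larger values. Threshold-based classification is an assumed usage.
import numpy as np

def voxel_curvature(points: np.ndarray) -> float:
    """points: (N, 3) array of the points falling inside one voxel."""
    if points.shape[0] < 3:
        return 0.0
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / points.shape[0]
    eigvals = np.linalg.eigvalsh(cov)            # ascending order
    total = eigvals.sum()
    return float(eigvals[0] / total) if total > 0 else 0.0
```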


2022 · Vol 41 (1) · pp. 1-21
Author(s): Chems-Eddine Himeur, Thibault Lejemble, Thomas Pellegrini, Mathias Paulin, Loic Barthe, ...

In recent years, Convolutional Neural Networks (CNNs) have proven to be efficient analysis tools for processing point clouds, e.g., for reconstruction, segmentation, and classification. In this article, we focus on the classification of edges in point clouds, where both edges and their surroundings are described. We propose a new parameterization that adds to each point a set of differential quantities describing its surrounding shape reconstructed at different scales. These parameters, stored in a Scale-Space Matrix (SSM), provide well-suited information from which an adequate neural network can learn a description of edges and use it to efficiently detect them in acquired point clouds. After successfully applying a multi-scale CNN to SSMs for the efficient classification of edges and their neighborhoods, we propose a new lightweight neural network architecture that outperforms the CNN in learning time, processing time, and classification capability. Our architecture is compact, requires small learning sets, is very fast to train, and classifies millions of points in seconds.
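The general idea of the SSM, stacking per-scale differential descriptors into a small per-point matrix, can be sketched as follows. The descriptors used here (eigenvalue-based linearity, planarity, and sphericity) and the radii are illustrative stand-ins, not the paper's exact parameterization.

```python
# Per-point scale-space descriptor matrix: one row of local differential
# descriptors per neighborhood radius. Linearity/planarity/sphericity are
# stand-in features; the radii are arbitrary for this sketch.
import numpy as np
from scipy.spatial import cKDTree

def scale_space_matrix(points: np.ndarray, index: int,
                       radii=(0.05, 0.1, 0.2, 0.4)) -> np.ndarray:
    tree = cKDTree(points)
    rows = []
    for r in radii:
        idx = tree.query_ball_point(points[index], r)
        nbrs = points[idx] - points[idx].mean(axis=0)
        cov = nbrs.T @ nbrs / len(idx)
        l = np.sort(np.linalg.eigvalsh(cov))[::-1] + 1e-12   # descending
        rows.append([(l[0] - l[1]) / l[0],                   # linearity
                     (l[1] - l[2]) / l[0],                   # planarity
                     l[2] / l[0]])                           # sphericity
    return np.asarray(rows)          # shape: (num_scales, num_descriptors)
```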


Author(s): Xinhai Liu, Zhizhong Han, Yu-Shen Liu, Matthias Zwicker

Exploring contextual information in local regions is important for shape understanding and analysis. Existing studies often employ hand-crafted or explicit ways to encode the contextual information of local regions. However, it is hard to capture fine-grained contextual information, such as the correlation between different areas in a local region, in a hand-crafted or explicit manner, which limits the discriminative ability of the learned features. To resolve this issue, we propose a novel deep learning model for 3D point clouds, named Point2Sequence, that learns 3D shape features by capturing fine-grained contextual information in a novel implicit way. Point2Sequence employs a novel sequence learning model for point clouds that captures these correlations by aggregating the multi-scale areas of each local region with attention. Specifically, Point2Sequence first learns a feature for each area scale in a local region. It then captures the correlation between area scales while aggregating all area scales using a recurrent neural network (RNN) based encoder-decoder structure, where an attention mechanism is proposed to highlight the importance of different area scales. Experimental results show that Point2Sequence achieves state-of-the-art performance in shape classification and segmentation tasks.
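A minimal sketch of attention-weighted aggregation over multi-scale area features is shown below, using a GRU encoder; tensor shapes, layer sizes, and the exact attention form are assumptions rather than the released Point2Sequence code.

```python
# Attention over multi-scale area features of one local region: an RNN encodes
# the sequence of area scales, a learned score weights each scale, and the
# weighted sum gives the aggregated region feature. Dimensions are illustrative.
import torch
import torch.nn as nn

class MultiScaleAttention(nn.Module):
    def __init__(self, feat_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)   # scale encoder
        self.score = nn.Linear(hidden, 1)                        # attention scores

    def forward(self, area_feats: torch.Tensor) -> torch.Tensor:
        # area_feats: (batch, num_scales, feat_dim)
        enc, _ = self.rnn(area_feats)                    # (batch, num_scales, hidden)
        weights = torch.softmax(self.score(enc), dim=1)  # importance of each scale
        return (weights * enc).sum(dim=1)                # (batch, hidden)

region_feature = MultiScaleAttention()(torch.randn(8, 4, 128))   # -> (8, 64)
```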


2019 · Vol 11 (2) · pp. 198
Author(s): Chunhua Hu, Zhou Pan, Pingping Li

Leaves are used extensively as an indicator in research on tree growth. Leaf area, one of the most important indices of leaf morphology, is also a comprehensive growth index for evaluating the effects of environmental factors. When tree surfaces are scanned with a 3D laser scanner, the resulting point cloud data usually contain many outliers and noise. The outliers can be clusters or sparse points, whereas the noise is usually non-isolated but exhibits different attributes from valid points. In this study, a 3D point cloud filtering method for leaves based on manifold distance and normal estimation is proposed. First, leaves were extracted from the tree point cloud and initial clustering was performed as a preprocessing step. Second, outlier cluster filtering and outlier point filtering were performed successively using a manifold-distance and truncation method. Third, noise points in each cluster were filtered based on local surface normal estimation. The 3D reconstruction results of leaves after applying the proposed filtering method show that it outperforms other classic filtering methods. Leaf areas were also compared with ground-truth values, and the mean absolute error (MAE) and mean absolute error percentage (MAE%) were assessed for leaves of different size levels. The root mean square error (RMSE) for leaf area was 2.49 cm2. The MAE values for small, medium, and large leaves were 0.92 cm2, 1.05 cm2, and 3.39 cm2, respectively, with corresponding MAE% values of 10.63, 4.83, and 3.8. These results demonstrate that the proposed method can filter outliers and noise from 3D point clouds of leaves and improve both the realism of 3D leaf visualization and the accuracy of leaf area measurement.
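The third step, normal-based noise filtering, can be illustrated with a short sketch: fit a local plane to each point's k nearest neighbors and discard points that lie too far from that plane. The neighborhood size and distance threshold are illustrative assumptions, and the manifold-distance stages are not reproduced here.

```python
# Normal-estimation-based noise filtering: the smallest singular vector of the
# centered neighborhood gives the local surface normal; points far from the
# fitted local plane are treated as noise. k and max_dist are assumed values.
import numpy as np
from scipy.spatial import cKDTree

def filter_by_local_plane(points: np.ndarray, k: int = 15,
                          max_dist: float = 0.002) -> np.ndarray:
    tree = cKDTree(points)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx]
        centroid = nbrs.mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs - centroid)
        normal = vt[-1]                                  # local plane normal
        keep[i] = abs(np.dot(p - centroid, normal)) < max_dist
    return points[keep]
```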


Author(s): Timo Hackel, Jan D. Wegner, Konrad Schindler

We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction, and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both for coping with strong variations in point density and for bringing down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
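One way to realize such a multi-scale neighborhood, consistent with the description above but not necessarily the authors' exact pipeline, is to downsample the cloud into a pyramid and compute covariance (eigenvalue) features from a fixed-size k-NN neighborhood at each level; the voxel sizes and k below are assumptions.

```python
# Multi-scale per-point features: voxel-downsample the cloud at several grid
# sizes and, at each level, describe the query point's k-NN neighborhood by its
# normalized covariance eigenvalues. Grid sizes and k are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def voxel_downsample(points: np.ndarray, size: float) -> np.ndarray:
    keys = np.floor(points / size).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[first]                         # one representative per voxel

def multiscale_features(points: np.ndarray, query: np.ndarray,
                        sizes=(0.05, 0.1, 0.2), k: int = 10) -> np.ndarray:
    feats = []
    for s in sizes:
        level = voxel_downsample(points, s)
        _, idx = cKDTree(level).query(query, k=min(k, len(level)))
        nbrs = np.atleast_2d(level[idx])
        nbrs = nbrs - nbrs.mean(axis=0)
        eig = np.sort(np.linalg.eigvalsh(nbrs.T @ nbrs / nbrs.shape[0]))[::-1]
        feats.extend(eig / max(eig.sum(), 1e-12))   # normalized eigenvalues
    return np.asarray(feats)                        # concatenated over scales
```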

