Pairwise registration of TLS point clouds by deep multi-scale local features

2020 ◽  
Vol 386 ◽  
pp. 232-243
Author(s):  
Wei Li ◽  
Cheng Wang ◽  
Chenglu Wen ◽  
Zheng Zhang ◽  
Congren Lin ◽  
...  
Sensors ◽  
2014 ◽  
Vol 14 (12) ◽  
pp. 24156-24173 ◽  
Author(s):  
Min Lu ◽  
Yulan Guo ◽  
Jun Zhang ◽  
Yanxin Ma ◽  
Yinjie Lei

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5574
Author(s):  
Qiang Zheng ◽  
Jian Sun

Fully exploiting the correlation between local features and their spatial distribution in point clouds is essential for feature modeling. Inspired by convolutional neural networks (CNNs), this paper explores the relationship between local patterns and point coordinates from a novel perspective and proposes a lightweight structure based on multi-scale features and a two-step fusion strategy. Specifically, multi-scale local features and their spatial distribution can be regarded as independent features corresponding to different levels of geometric significance; they are extracted by multiple parallel branches and then merged at multiple levels. In this way, the proposed model generates a shape-level representation that captures rich local characteristics and the spatial relationships between them. Moreover, with shared multi-layer perceptrons (MLPs) as basic operators, the proposed structure is concise and converges rapidly, so we further introduce snapshot ensembling to improve performance. The model is evaluated on classification and part segmentation tasks, and the experiments show that it achieves performance on par with or better than previous state-of-the-art (SOTA) methods.
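The branch-and-fuse idea described above can be sketched in NumPy. This is a toy illustration under our own assumptions, not the paper's implementation: the weights are random stand-ins for trained parameters, and `shared_mlp`, `branch_features`, and the two radii are hypothetical names and values.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(x, w, b):
    # A "shared MLP": the same weights are applied to every point.
    # x: (N, C_in) -> (N, C_out), with ReLU activation.
    return np.maximum(x @ w + b, 0.0)

def branch_features(points, radius, w, b):
    # One parallel branch: per-point local features at a given scale.
    # For each point, gather neighbors within `radius`, encode their
    # relative coordinates with the shared MLP, and max-pool over the
    # neighborhood (a symmetric, order-invariant aggregation).
    n = len(points)
    out = np.zeros((n, w.shape[1]))
    for i in range(n):
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[d < radius] - points[i]        # relative coordinates
        out[i] = shared_mlp(nbrs, w, b).max(axis=0)  # pool over neighbors
    return out

# Toy cloud of 64 points; random weights stand in for trained ones.
pts = rng.random((64, 3))
w1, b1 = rng.standard_normal((3, 16)), np.zeros(16)

# Two scales extracted by parallel branches, then fused by concatenation
# (first fusion step); a further shared MLP plus a global max-pool would
# yield the shape-level representation (second fusion step).
f_small = branch_features(pts, 0.15, w1, b1)
f_large = branch_features(pts, 0.40, w1, b1)
fused = np.concatenate([f_small, f_large], axis=1)
print(fused.shape)  # (64, 32)
```

Max-pooling inside each branch is what makes the per-scale features invariant to point ordering; concatenation keeps the scales as independent feature groups until the later fusion stage.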


2021 ◽  
Vol 13 (15) ◽  
pp. 3021
Author(s):  
Bufan Zhao ◽  
Xianghong Hua ◽  
Kegen Yu ◽  
Xiaoxing He ◽  
Weixing Xue ◽  
...  

Urban object segmentation and classification are critical data-processing steps in scene understanding, intelligent vehicles, and 3D high-precision maps, and semantic segmentation of 3D point clouds is the foundational step in object recognition. To separate intersecting objects and improve classification accuracy, this paper proposes a segment-based classification method for 3D point clouds. The method first divides points into multi-scale supervoxels and groups them through the proposed inverse node graph (IN-Graph) construction, which requires no prior information about the nodes: supervoxels are partitioned by judging the connection state of the edges between them. Graph cutting then reaches a global energy minimum, yielding structural segments that are as complete as possible while preserving object boundaries. Next, a random forest classifier performs supervised classification. To correct the mislabeling of scattered fragments, a higher-order CRF with small-label-cluster optimization is proposed to refine the classification results. Experiments on a mobile laser scanning (MLS) point cloud dataset and a terrestrial laser scanning (TLS) point cloud dataset show overall accuracies of 97.57% and 96.39%, respectively. Object boundaries were retained well, and the method performed well on the classification of cars and motorcycles. Further experimental analyses verify the advantages, practicability, and versatility of the proposed method.
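The small-label-cluster refinement step can be illustrated with a crude majority-vote stand-in for the CRF-based optimization (our own simplification, not the paper's formulation): connected clusters of equally labeled segments that fall below a size threshold are relabeled to the dominant label of their surroundings.

```python
from collections import Counter, deque

def refine_small_clusters(labels, adjacency, min_size):
    # Find connected clusters of equally labeled nodes in the segment
    # adjacency graph; relabel clusters smaller than `min_size` to the
    # majority label of the nodes just outside them.
    labels = list(labels)
    seen = set()
    for start in range(len(labels)):
        if start in seen:
            continue
        # BFS over neighbors that share the start node's label.
        cluster, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in cluster and labels[v] == labels[start]:
                    cluster.add(v)
                    queue.append(v)
        seen |= cluster
        if len(cluster) < min_size:
            # Majority vote among labels bordering the cluster.
            outside = Counter(labels[v] for u in cluster
                              for v in adjacency[u] if v not in cluster)
            if outside:
                new_label = outside.most_common(1)[0][0]
                for u in cluster:
                    labels[u] = new_label
    return labels

# Toy chain of 6 segments: one isolated "car" fragment inside a "road" run
# gets absorbed by its neighbors.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(refine_small_clusters(["road"] * 2 + ["car"] + ["road"] * 3, adj, 2))
# all six segments end up labeled "road"
```

A real CRF would balance such label smoothness against per-segment classifier confidence rather than overwriting labels outright; this sketch only shows the scattered-fragment cleanup intuition.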


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3347 ◽  
Author(s):  
Zhishuang Yang ◽  
Bo Tan ◽  
Huikun Pei ◽  
Wanshou Jiang

The classification of point clouds is a basic task in airborne laser scanning (ALS) point cloud processing and is quite challenging for complex scenes with irregular point distributions. To reduce the computational burden of point-based classification and improve accuracy, we present a classification method based on segmentation and a multi-scale convolutional neural network. First, a three-step region-growing segmentation method is proposed to reduce both under-segmentation and over-segmentation. Then, a feature-image generation method transforms the 3D neighborhood features of a point into a 2D image. Finally, the feature images are fed into a multi-scale convolutional neural network for training and testing. To compare with existing approaches, we evaluated our framework on the International Society for Photogrammetry and Remote Sensing Working Group II/4 (ISPRS WG II/4) 3D labeling benchmark. Our method achieved 84.9% overall accuracy and a 69.2% average F1 score, a satisfactory performance compared with all participating approaches analyzed.
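The feature-image generation step, turning a point's 3D neighborhood into a 2D image a CNN can consume, can be sketched as follows. This is a minimal assumption-laden illustration, not the paper's exact encoding: `feature_image` is a hypothetical name, and we bin neighbors into an XY grid storing only the maximum relative height per cell.

```python
import numpy as np

def feature_image(points, center, size=8, extent=1.0):
    # Bin the neighbors of `center` (points within +/- extent in X and Y)
    # into a size x size grid; each cell keeps the maximum relative
    # height (Z) of the points that fall into it.
    rel = points - center
    mask = np.all(np.abs(rel[:, :2]) < extent, axis=1)
    img = np.zeros((size, size))
    for x, y, z in rel[mask]:
        i = min(int((x + extent) / (2 * extent) * size), size - 1)
        j = min(int((y + extent) / (2 * extent) * size), size - 1)
        img[i, j] = max(img[i, j], z)
    return img

rng = np.random.default_rng(1)
pts = rng.random((200, 3))          # toy stand-in for an ALS neighborhood
img = feature_image(pts, pts[0])
print(img.shape)  # (8, 8)
```

A multi-scale variant would simply render the same neighborhood at several `extent` values and stack the resulting images as CNN input channels.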


2019 ◽  
Vol 98 ◽  
pp. 175-182 ◽  
Author(s):  
Jisoo Park ◽  
Pileun Kim ◽  
Yong K. Cho ◽  
Junsuk Kang