Supervoxel-based extraction and classification of pole-like objects from MLS point cloud data

2022 ◽  
Vol 146 ◽  
pp. 107562
Author(s):  
Jintao Li ◽  
Xiaojun Cheng

2015 ◽  
Vol 7 (10) ◽  
pp. 12680-12703 ◽  
Author(s):  
Borja Rodríguez-Cuenca ◽  
Silverio García-Cortés ◽  
Celestino Ordóñez ◽  
Maria Alonso

2011 ◽  
Vol 32 (24) ◽  
pp. 9151-9169 ◽  
Author(s):  
Cici Alexander ◽  
Kevin Tansey ◽  
Jörg Kaduk ◽  
David Holland ◽  
Nicholas J. Tate

2020 ◽  
Vol 10 (3) ◽  
pp. 973 ◽  
Author(s):  
Hsien-I Lin ◽  
Mihn Cong Nguyen

Data imbalance during the training of deep networks can cause the network to neglect minority classes. This paper presents a novel framework by which to train segmentation networks using imbalanced point cloud data. PointNet, an early deep network used for the segmentation of point cloud data, proved effective in the point-wise classification of balanced data; however, performance degraded when imbalanced data was used. The proposed approach involves removing between-class data point imbalances and guiding the network to pay more attention to minority classes. Data imbalance is alleviated using a hybrid-sampling method that combines undersampling, to decrease the amount of data in majority classes, with oversampling, to increase the amount of data in minority classes. A balanced focus loss function is also used to emphasize the minority classes through the automated assignment of costs to the various classes based on their density in the point cloud. Experiments demonstrate the effectiveness of the proposed training framework when applied to a point cloud dataset pertaining to six objects. The mean intersection over union (mIoU) test accuracy results obtained using PointNet training were as follows: XYZRGB data (91%) and XYZ data (86%). The mIoU test accuracy results obtained using the proposed scheme were as follows: XYZRGB data (98%) and XYZ data (93%).
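The two ingredients described above, class-balanced hybrid sampling and a density-weighted focal-style loss, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the inverse-frequency weighting scheme, and the focusing parameter `gamma` are assumptions made for the example.

```python
import numpy as np

def class_weights_from_density(labels, num_classes):
    """Assign each class a cost inversely proportional to its point density
    (hypothetical weighting scheme; the paper automates this assignment)."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    freq = counts / counts.sum()
    weights = 1.0 / np.maximum(freq, 1e-6)
    return weights / weights.sum()  # normalize so the weights sum to 1

def balanced_focal_style_loss(probs, labels, weights, gamma=2.0):
    """Per-point loss -w_c * (1 - p_t)^gamma * log(p_t): the (1 - p_t)^gamma
    factor down-weights easy points, and w_c emphasizes minority classes."""
    p_t = probs[np.arange(len(labels)), labels]
    w = weights[labels]
    return np.mean(-w * (1.0 - p_t) ** gamma * np.log(np.maximum(p_t, 1e-9)))

def hybrid_sample(points, labels, target, rng):
    """Resample every class to `target` points: majority classes are
    undersampled (replace=False), minority classes oversampled (replace=True)."""
    out_pts, out_lbl = [], []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        chosen = rng.choice(idx, size=target, replace=len(idx) < target)
        out_pts.append(points[chosen])
        out_lbl.append(labels[chosen])
    return np.concatenate(out_pts), np.concatenate(out_lbl)
```

After `hybrid_sample`, every class contributes the same number of points to each training batch, so the network no longer sees minority classes only rarely.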


Author(s):  
M. Weinmann ◽  
A. Schmidt ◽  
C. Mallet ◽  
S. Hinz ◽  
F. Rottensteiner ◽  
...  

The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have been investigated extensively, but separately, in recent years, the respective connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
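The idea of individually optimized neighborhoods feeding geometric feature extraction can be sketched as follows. This is a simplified illustration under assumptions, not the paper's exact method: the candidate neighborhood sizes, the eigenentropy selection criterion, and the brute-force neighbor search are choices made for the example.

```python
import numpy as np

def eigen_features(points, k_values=(10, 25, 50)):
    """For each point, pick the neighborhood size k that minimizes the
    eigenentropy of the local covariance, then derive dimensionality
    features (linearity, planarity, scattering) from its eigenvalues."""
    n = len(points)
    # brute-force pairwise distances; fine for small clouds, use a k-d tree at scale
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)
    feats = np.zeros((n, 3))
    for i in range(n):
        best_entropy, best_lam = np.inf, None
        for k in k_values:
            nbrs = points[order[i, :k]]
            cov = np.cov(nbrs.T)
            lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
            e = np.maximum(lam / lam.sum(), 1e-12)
            entropy = -(e * np.log(e)).sum()  # Shannon eigenentropy
            if entropy < best_entropy:       # lowest entropy = most distinct shape
                best_entropy, best_lam = entropy, lam
        l1, l2, l3 = best_lam
        feats[i] = [(l1 - l2) / l1,  # linearity  (high on pole-like structures)
                    (l2 - l3) / l1,  # planarity  (high on facades, ground)
                    l3 / l1]         # scattering (high in vegetation)
    return feats
```

The three features sum to one per point, so they act as a soft assignment of each local neighborhood to a linear, planar, or volumetric structure before any classifier is applied.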


2019 ◽  
Vol 39 (2) ◽  
pp. 0228001 ◽  
Author(s):  
杨书娟 Yang Shujuan ◽  
张珂殊 Zhang Keshu ◽  
邵永社 Shao Yongshe
