Comparison of belief propagation and graph-cut approaches for contextual classification of 3D lidar point cloud data

Author(s):  
L. Landrieu ◽  
C. Mallet ◽  
M. Weinmann

2020 ◽  
Vol 10 (13) ◽  
pp. 4486 ◽  
Author(s):  
Yongbeom Lee ◽  
Seongkeun Park

In this paper, we propose a deep learning-based perception method for autonomous driving systems using Light Detection and Ranging (LiDAR) point cloud data, called the simultaneous segmentation and detection network (SSADNet). SSADNet can recognize both drivable areas and obstacles, which is necessary for autonomous driving. Unlike previous methods, in which separate networks were needed for segmentation and detection, SSADNet performs segmentation and detection simultaneously with a single neural network. The proposed method uses point cloud data obtained from a 3D LiDAR as network input to generate a top-view image consisting of three channels: distance, height, and reflection intensity. The structure of the proposed network includes a branch for segmentation and a branch for detection, as well as a bridge connecting the two parts. The KITTI dataset, which is often used for experiments on autonomous driving, was used for training. The experimental results show that segmentation and detection can be performed simultaneously for drivable areas and vehicles at a fast inference speed, which is appropriate for autonomous driving systems.
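The three-channel top-view encoding described above can be sketched as follows; the grid extents, resolution, and per-cell maximum aggregation are illustrative assumptions, not the exact parameters used by SSADNet.

```python
import numpy as np

def lidar_to_top_view(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0),
                      resolution=0.1):
    """Project LiDAR points (N, 4: x, y, z, intensity) into a 3-channel
    top-view image holding distance, height, and reflection intensity."""
    h = int((x_range[1] - x_range[0]) / resolution)
    w = int((y_range[1] - y_range[0]) / resolution)
    image = np.zeros((h, w, 3), dtype=np.float32)

    x, y, z, intensity = points.T
    # Keep only points inside the chosen ground-plane window.
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    x, y, z, intensity = x[mask], y[mask], z[mask], intensity[mask]

    rows = ((x - x_range[0]) / resolution).astype(int)
    cols = ((y - y_range[0]) / resolution).astype(int)
    distance = np.sqrt(x ** 2 + y ** 2)

    # Per-cell maximum for each channel (unbuffered, so repeated cells
    # accumulate correctly instead of "last write wins").
    np.maximum.at(image[..., 0], (rows, cols), distance)
    np.maximum.at(image[..., 1], (rows, cols), z)
    np.maximum.at(image[..., 2], (rows, cols), intensity)
    return image
```

The resulting H×W×3 array can be fed to an image-based segmentation/detection backbone like any RGB input.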


2015 ◽  
Vol 7 (10) ◽  
pp. 12680-12703 ◽  
Author(s):  
Borja Rodríguez-Cuenca ◽  
Silverio García-Cortés ◽  
Celestino Ordóñez ◽  
Maria Alonso

2011 ◽  
Vol 32 (24) ◽  
pp. 9151-9169 ◽  
Author(s):  
Cici Alexander ◽  
Kevin Tansey ◽  
Jörg Kaduk ◽  
David Holland ◽  
Nicholas J. Tate

2018 ◽  
Vol 30 (4) ◽  
pp. 523-531 ◽  
Author(s):  
Yoshihiro Takita

This paper proposes a method for creating 3D occupancy grid maps using a multi-layer 3D LIDAR and a swing mechanism, termed Swing-LIDAR. The method using Swing-LIDAR can acquire 10 times more data at a stopping position than a method without it. High-definition, accurate terrain information is obtained by a coordinate transformation of the acquired data, compensated by the measured orientation of the system. In this study, we develop a method to create 3D grid maps for autonomous robots using Swing-LIDAR. To validate the method, the robot AR Skipper is run on maps created from point cloud data acquired with and without the swing mechanism, combining 11 sets of local maps in each case. The experimental results exhibit the differences among the maps.
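The orientation-compensated transformation and grid construction can be sketched as below; the Z-Y-X rotation convention, cell size, and grid extents are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Z-Y-X (yaw-pitch-roll) rotation, one common convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def build_occupancy_grid(points, roll, pitch, yaw, cell=0.25,
                         origin=(-10.0, -10.0, -2.0), shape=(80, 80, 16)):
    """Rotate sensor-frame points by the measured orientation of the
    system, then mark the cells they fall into in a 3D occupancy grid."""
    world = points @ rotation_matrix(roll, pitch, yaw).T
    idx = np.floor((world - np.asarray(origin)) / cell).astype(int)
    grid = np.zeros(shape, dtype=bool)
    # Discard points outside the grid volume.
    valid = np.all((idx >= 0) & (idx < np.asarray(shape)), axis=1)
    i, j, k = idx[valid].T
    grid[i, j, k] = True
    return grid
```

Local grids built at successive stopping positions can then be merged (e.g. with a logical OR after transforming each into a common frame) to form the combined map.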


2020 ◽  
Vol 10 (3) ◽  
pp. 973 ◽  
Author(s):  
Hsien-I Lin ◽  
Mihn Cong Nguyen

Data imbalance during the training of deep networks can cause the network to neglect minority classes. This paper presents a novel framework by which to train segmentation networks using imbalanced point cloud data. PointNet, an early deep network used for the segmentation of point cloud data, proved effective in the point-wise classification of balanced data; however, its performance degraded when imbalanced data was used. The proposed approach removes between-class imbalances among data points and guides the network to pay more attention to minority classes. Data imbalance is alleviated using a hybrid-sampling method that combines undersampling, to decrease the amount of data in majority classes, with oversampling, to increase the amount of data in minority classes. A balanced focus loss function is also used to emphasize the minority classes through the automated assignment of costs to the various classes based on their density in the point cloud. Experiments demonstrate the effectiveness of the proposed training framework on a point cloud dataset comprising six objects. The mean intersection over union (mIoU) test accuracy obtained using PointNet training was 91% on XYZRGB data and 86% on XYZ data, whereas the proposed scheme achieved 98% on XYZRGB data and 93% on XYZ data.
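In the spirit of the balanced focus loss described above, a minimal density-weighted focal loss can be sketched as follows; the inverse-frequency weighting scheme and the names `balanced_focal_loss` and `gamma` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def balanced_focal_loss(probs, labels, class_counts, gamma=2.0):
    """Focal loss with per-class weights inversely proportional to class
    density, so sparse (minority) classes contribute more to the loss.

    probs:        (N, C) softmax outputs
    labels:       (N,)   integer class labels
    class_counts: (C,)   number of points per class in the training cloud
    """
    freq = class_counts / class_counts.sum()
    alpha = 1.0 / (freq + 1e-8)          # rarer class -> larger weight
    alpha = alpha / alpha.sum()          # normalize the weights
    p_t = probs[np.arange(len(labels)), labels]
    # (1 - p_t)^gamma down-weights easy, already well-classified points.
    loss = -alpha[labels] * (1.0 - p_t) ** gamma * np.log(p_t + 1e-8)
    return loss.mean()
```

With this weighting, a minority-class point receives a strictly larger loss than a majority-class point classified with the same confidence.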


Author(s):  
M. Weinmann ◽  
A. Schmidt ◽  
C. Mallet ◽  
S. Hinz ◽  
F. Rottensteiner ◽  
...  

The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have been investigated extensively, but separately, in recent years, their connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process, and that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
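Eigenvalue-based geometric features and an entropy-driven choice of an individually optimal neighborhood size can be sketched as follows; the brute-force neighbor search and the candidate sizes are illustrative simplifications, not the paper's exact procedure.

```python
import numpy as np

def eigen_features(neighbors):
    """Linearity, planarity, sphericity from the local 3D structure tensor
    of a (k, 3) neighborhood."""
    ev = np.sort(np.linalg.eigvalsh(np.cov(neighbors.T)))[::-1]  # l1>=l2>=l3
    ev = np.clip(ev, 1e-12, None)
    l1, l2, l3 = ev / ev.sum()
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

def optimal_neighborhood(cloud, i, candidate_ks=(10, 25, 50, 100)):
    """Pick the k whose normalized eigenvalues minimize the eigenentropy,
    one common criterion for an individually optimal neighborhood size."""
    order = np.argsort(np.linalg.norm(cloud - cloud[i], axis=1))
    best_k, best_h = None, np.inf
    for k in candidate_ks:
        nb = cloud[order[:k]]
        ev = np.clip(np.linalg.eigvalsh(np.cov(nb.T)), 1e-12, None)
        p = ev / ev.sum()
        h = -(p * np.log(p)).sum()       # Shannon entropy of eigenvalues
        if h < best_h:
            best_h, best_k = h, k
    return best_k
```

Features computed over the per-point optimal neighborhood then feed the pointwise or contextual classifier.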

