PEMCNet: An Efficient Multi-Scale Point Feature Fusion Network for 3D LiDAR Point Cloud Classification

2021 ◽  
Vol 13 (21) ◽  
pp. 4312
Author(s):  
Genping Zhao ◽  
Weiguang Zhang ◽  
Yeping Peng ◽  
Heng Wu ◽  
Zhuowei Wang ◽  
...  

Point cloud classification plays a significant role in Light Detection and Ranging (LiDAR) applications. However, most available multi-scale feature learning networks for large-scale 3D LiDAR point cloud classification tasks are time-consuming. In this paper, an efficient deep neural architecture denoted as Point Expanded Multi-scale Convolutional Network (PEMCNet) is developed to accurately classify 3D LiDAR point clouds. Different from traditional networks for point cloud processing, PEMCNet includes successive Point Expanded Grouping (PEG) units and Absolute and Relative Spatial Embedding (ARSE) units for representative point feature learning. The PEG unit progressively increases the receptive field of each observed point and aggregates point cloud features at different scales without increasing computation. The ARSE unit following the PEG unit furthermore realizes representative encoding of the relationships between points, which effectively preserves the geometric details between points. We evaluate our method on both public datasets (the Urban Semantic 3D (US3D) dataset and the Semantic3D benchmark dataset) and our newly collected Unmanned Aerial Vehicle (UAV)-based LiDAR point cloud data of the campus of Guangdong University of Technology. In comparison with four available state-of-the-art methods, our method ranked first in both efficiency and accuracy. On the public datasets, it achieved a 2% increase in classification accuracy together with an over 26% improvement in efficiency compared with the second most efficient method. Its potential value was also confirmed on the newly collected point cloud data, reaching over 91% classification accuracy with 154 ms of processing time.
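
As a rough illustration of the expanded grouping and the absolute/relative spatial encoding described above, the following sketch widens each point's neighborhood through dilated k-nearest-neighbor sampling and concatenates absolute neighbor coordinates with their offsets relative to the center point. It assumes NumPy and scikit-learn; the function names expanded_group and arse_encode are hypothetical and do not reproduce the authors' implementation.

```python
# A minimal sketch of expanded multi-scale grouping with absolute/relative
# spatial encoding; names and parameters are illustrative assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def expanded_group(points, k=16, dilation=2):
    """Dilated (expanded) grouping: query k * dilation neighbors but keep
    every `dilation`-th one, so the receptive field widens without
    increasing the number of grouped points."""
    nn = NearestNeighbors(n_neighbors=k * dilation).fit(points)
    _, idx = nn.kneighbors(points)           # (N, k * dilation)
    return idx[:, ::dilation]                # (N, k)

def arse_encode(points, neighbor_idx):
    """Concatenate absolute neighbor coordinates with offsets relative to
    the center point, a simple stand-in for absolute/relative embedding."""
    neighbors = points[neighbor_idx]                       # (N, k, 3)
    relative = neighbors - points[:, None, :]              # (N, k, 3)
    return np.concatenate([neighbors, relative], axis=-1)  # (N, k, 6)

pts = np.random.rand(1024, 3).astype(np.float32)
feats = arse_encode(pts, expanded_group(pts, k=16, dilation=2))
print(feats.shape)  # (1024, 16, 6)
```

Larger dilation values grow the receptive field further while the per-point grouping cost stays constant, which mirrors the efficiency argument made for the PEG unit.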

2021 ◽  
Vol 13 (16) ◽  
pp. 3156
Author(s):  
Yong Li ◽  
Yinzheng Luo ◽  
Xia Gu ◽  
Dong Chen ◽  
Fang Gao ◽  
...  

Point cloud classification is a key technology for point cloud applications, and point cloud feature extraction is a key step towards achieving it. Although there are many point cloud feature extraction and classification methods, and the acquisition of colored point cloud data has become easier in recent years, most point cloud processing algorithms do not consider the color information associated with the point cloud or do not make full use of it. Therefore, we propose a voxel-based local feature descriptor based on the voxel-based local binary pattern (VLBP) and fuse point cloud RGB information and geometric structure features with a random forest classifier to build a color point cloud classification algorithm. The proposed algorithm voxelizes the point cloud; divides the neighborhood of the center point into cubes (i.e., multiple adjacent sub-voxels); compares the gray information of the voxel center and adjacent sub-voxels; performs voxel global thresholding to convert it into a binary code; and uses a local difference sign–magnitude transform (LDSMT) to decompose the local difference of an entire voxel into two complementary components of sign and magnitude. Then, the VLBP feature of each point is extracted. To obtain more structural information about the point cloud, the proposed method extracts the normal vector of each point and the corresponding fast point feature histogram (FPFH) based on the normal vector. Finally, the geometric structure features (normal vector and FPFH) and color features (RGB and VLBP features) of the point cloud are fused, and a random forest classifier is used to classify the color laser point cloud. The experimental results show that the proposed algorithm can achieve effective point cloud classification for point cloud data from different indoor and outdoor scenes, and that the proposed VLBP features can improve the accuracy of point cloud classification.
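
The core VLBP step can be pictured with a toy sketch: within a 3 × 3 × 3 sub-voxel neighborhood, the mean gray value of each neighbor is compared with the center, and the local differences are split into sign and magnitude components in the spirit of LDSMT. The 26-neighbor layout, the global magnitude threshold, and all names below are assumptions for illustration, not the paper's implementation.

```python
# Toy sketch of a voxel-based LBP descriptor with a sign/magnitude split.
import numpy as np

def vlbp_feature(voxel_gray):
    """voxel_gray: (3, 3, 3) array of mean gray values per sub-voxel,
    with the center sub-voxel at index [1, 1, 1]."""
    center = voxel_gray[1, 1, 1]
    diffs = np.delete(voxel_gray.ravel(), 13) - center     # 26 local differences
    signs = (diffs >= 0).astype(np.uint8)                  # sign component (binary code)
    magnitudes = np.abs(diffs)                             # magnitude component
    mag_code = (magnitudes >= magnitudes.mean()).astype(np.uint8)  # global threshold
    return np.concatenate([signs, mag_code])               # 52-dimensional descriptor

cube = np.random.rand(3, 3, 3)
print(vlbp_feature(cube).shape)  # (52,)
```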


2021 ◽  
pp. 1-13
Author(s):  
Tiebo Sun ◽  
Jinhao Liu ◽  
Jiangming Kan ◽  
Tingting Sui

Aiming at the problem of automatic point cloud classification in the investigation of vegetation resources in the straw checkerboard barrier region, an improved random forest point cloud classification algorithm is proposed. To address the decision tree redundancy and absolute majority voting of the existing random forest algorithm, the similarity of decision trees is first calculated based on the tree edit distance, the trees are then reduced by clustering with the maximum and minimum distance algorithm, and the classification accuracy of each decision tree is introduced to construct a weight matrix for weighted voting at the voting stage. Before random forest classification, a total of 20 single-point features and multi-point statistical features were selected according to the characteristics of the point cloud data. Based on the spatial distribution of the data, three neighborhood scales were set according to the point cloud density, classification feature sets were constructed at each scale, and the important features were retained for the classification calculation after variable importance scoring. The experimental results showed that the point cloud classification based on the optimized random forest algorithm achieved an overall classification accuracy of 94.15% on dataset 1 acquired by LiDAR and 92.03% on dataset 2 obtained by dense matching, both higher than those of the unoptimized random forest algorithm and the MRF and SVM point cloud classification methods, and that dimensionality reduction through feature optimization can greatly improve the efficiency of the algorithm.
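
The weighted-voting stage can be sketched as follows, assuming each decision tree retained after clustering carries a validation accuracy that serves as its weight; the scikit-learn trees and the simple score accumulation are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch of accuracy-weighted voting over a reduced set of decision trees.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def weighted_vote(trees, weights, X, n_classes):
    """Accumulate per-tree votes scaled by per-tree accuracy weights."""
    scores = np.zeros((X.shape[0], n_classes))
    for tree, w in zip(trees, weights):
        pred = tree.predict(X).astype(int)
        scores[np.arange(X.shape[0]), pred] += w
    return scores.argmax(axis=1)

# Toy usage: two trees trained on random features, weighted by accuracy.
rng = np.random.default_rng(0)
X, y = rng.random((200, 20)), rng.integers(0, 4, 200)
trees = [DecisionTreeClassifier(max_depth=5, random_state=i).fit(X, y) for i in range(2)]
weights = [tree.score(X, y) for tree in trees]   # stand-in per-tree accuracies
labels = weighted_vote(trees, weights, X, n_classes=4)
```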


Author(s):  
Y. Zhao ◽  
Q. Hu ◽  
W. Hu

This paper proposes a novel method for point cloud classification using the vertical structural characteristics of ground objects. Since urbanization is developing rapidly, urban ground objects also change frequently. Conventional photogrammetric methods cannot satisfy the requirements of updating ground object information efficiently, so LiDAR (Light Detection and Ranging) technology is employed to accomplish this task. LiDAR data, namely point cloud data, provide detailed three-dimensional coordinates of ground objects, but this kind of data is discrete and unorganized. To accomplish ground object classification with point clouds, we first construct horizontal grids and vertical layers to organize the point cloud data, then calculate vertical characteristics, including density and measures of dispersion, and form a characteristic curve for each grid. With the help of PCA processing and the K-means algorithm, we analyze the similarities and differences of the characteristic curves. Curves with similar features are classified into the same class, and the points corresponding to these curves are classified accordingly. The whole process is simple but effective, and this approach does not need the assistance of other data sources. In this study, point cloud data are classified into three classes: vegetation, buildings, and roads. When the horizontal grid spacing and vertical layer spacing are 3 m and 1 m, respectively, the vertical characteristic is set to density, and the number of dimensions after PCA processing is 11, the overall precision of the classification result is about 86.31%. The result can help us quickly understand the distribution of various ground objects.
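
A compact sketch of this pipeline, binning points into 3 m horizontal cells and 1 m vertical layers, building a per-cell density curve, reducing the curves to 11 dimensions with PCA, and clustering them into three classes with K-means, is given below; the helper name, layer count, and normalization are assumptions for illustration.

```python
# Sketch of characteristic-curve construction followed by PCA and K-means.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def characteristic_curves(points, cell=3.0, layer=1.0, n_layers=30):
    ij = np.floor(points[:, :2] / cell).astype(int)        # horizontal grid cell
    k = np.clip((points[:, 2] - points[:, 2].min()) / layer, 0, n_layers - 1).astype(int)
    cells, inverse = np.unique(ij, axis=0, return_inverse=True)
    curves = np.zeros((len(cells), n_layers))
    np.add.at(curves, (inverse, k), 1.0)                   # per-layer point density
    return cells, curves / np.maximum(curves.sum(1, keepdims=True), 1)

pts = np.random.rand(5000, 3) * [90, 90, 20]               # stand-in point cloud
cells, curves = characteristic_curves(pts)
reduced = PCA(n_components=11).fit_transform(curves)       # 11 dimensions, as in the paper
labels = KMeans(n_clusters=3, n_init=10).fit_predict(reduced)  # vegetation/buildings/roads
```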


2020 ◽  
Vol 17 (4) ◽  
pp. 721-725 ◽  
Author(s):  
Rong Huang ◽  
Danfeng Hong ◽  
Yusheng Xu ◽  
Wei Yao ◽  
Uwe Stilla

2020 ◽  
Vol 10 (13) ◽  
pp. 4486 ◽  
Author(s):  
Yongbeom Lee ◽  
Seongkeun Park

In this paper, we propose a deep learning-based perception method for autonomous driving systems using Light Detection and Ranging (LiDAR) point cloud data, called the simultaneous segmentation and detection network (SSADNet). SSADNet can be used to recognize both drivable areas and obstacles, which is necessary for autonomous driving. Unlike previous methods, where separate networks were needed for segmentation and detection, SSADNet can perform segmentation and detection simultaneously with a single neural network. The proposed method uses point cloud data obtained from a 3D LiDAR as network input to generate a top-view image consisting of three channels: distance, height, and reflection intensity. The structure of the proposed network includes a branch for segmentation and a branch for detection, as well as a bridge connecting the two parts. The KITTI dataset, which is often used for experiments on autonomous driving, was used for training. The experimental results show that segmentation and detection can be performed simultaneously for drivable areas and vehicles at a fast inference speed, which is appropriate for autonomous driving systems.
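
The three-channel top-view input can be pictured as a simple bird's-eye-view projection of the point cloud into distance, height, and reflection intensity channels; the grid resolution and range values below are assumptions, not the paper's configuration.

```python
# Illustrative projection of a LiDAR point cloud into a 3-channel top view.
import numpy as np

def point_cloud_to_bev(points, x_range=(0, 60), y_range=(-30, 30), res=0.1):
    """points: (N, 4) array of x, y, z, reflection intensity."""
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    bev = np.zeros((h, w, 3), dtype=np.float32)
    mask = (points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) & \
           (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1])
    x, y, z, i = points[mask].T
    r = ((x - x_range[0]) / res).astype(int)
    c = ((y - y_range[0]) / res).astype(int)
    bev[r, c, 0] = np.sqrt(x ** 2 + y ** 2)   # distance channel
    bev[r, c, 1] = z                          # height channel
    bev[r, c, 2] = i                          # reflection intensity channel
    return bev

cloud = np.random.rand(10000, 4) * [60, 60, 3, 1] - [0, 30, 1, 0]
print(point_cloud_to_bev(cloud).shape)  # (600, 600, 3)
```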


Author(s):  
Rui Guo ◽  
Yong Zhou ◽  
Jiaqi Zhao ◽  
Rui Yao ◽  
Bing Liu ◽  
...  

Domain adaptation is a special form of transfer learning in which the source domain and target domain generally have different data distributions but share the same task. There has been much significant research on domain adaptation for 2D images, but in 3D data processing, domain adaptation is still in its infancy. Therefore, we design a novel domain adaptive network to complete the unsupervised point cloud classification task. Specifically, we propose a multi-scale transform module to improve the feature extractor. Besides, a spatial-awareness attention module combined with channel attention is designed to assign weights to each node and represent hierarchically scaled features. We validated the proposed method on the PointDA-10 dataset for domain adaptation classification tasks; empirically, it shows strong performance on par with or even better than the state of the art.
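
A toy sketch of channel attention over per-point features, in the spirit of the attention weighting mentioned above, is shown below; the plain NumPy formulation, the reduction ratio, and the weight shapes are assumptions for illustration.

```python
# Toy channel attention: gate each feature channel by a learned weight.
import numpy as np

def channel_attention(features, w1, w2):
    """features: (N, C) point features; w1: (C, C // r); w2: (C // r, C)."""
    squeeze = features.mean(axis=0)                   # global channel statistics
    hidden = np.maximum(squeeze @ w1, 0)              # ReLU
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))       # sigmoid weight per channel
    return features * gate                            # reweighted features

rng = np.random.default_rng(0)
feats = rng.standard_normal((1024, 64))
out = channel_attention(feats,
                        rng.standard_normal((64, 16)) * 0.1,
                        rng.standard_normal((16, 64)) * 0.1)
```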


2018 ◽  
Vol 30 (4) ◽  
pp. 523-531 ◽  
Author(s):  
Yoshihiro Takita

This paper proposes a method for creating 3D occupancy grid maps using a multi-layer 3D LIDAR and a swing mechanism, termed Swing-LIDAR. The method using Swing-LIDAR can acquire 10 times more data at a stopping position than a method that does not use Swing-LIDAR. High-definition and accurate terrain information is obtained by a coordinate transformation of the acquired data, compensated for by the measured orientation of the system. In this study, we develop a method to create 3D grid maps for autonomous robots using Swing-LIDAR. To validate the method, AR Skipper is run on the created maps as well as on maps built from point cloud data obtained without the swing mechanism, and 11 sets of each local map are combined. The experimental results exhibit the differences among the maps.
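
The orientation compensation step can be pictured as a rigid transform of each scan into the map frame using the measured roll, pitch, and yaw of the swing mechanism; the helper below is a generic sketch, not the authors' implementation.

```python
# Generic sketch of orientation compensation for swing-acquired scans.
import numpy as np

def compensate(points, roll, pitch, yaw, translation):
    """Apply R_z(yaw) @ R_y(pitch) @ R_x(roll) to each point, then translate."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return points @ (Rz @ Ry @ Rx).T + translation

scan = np.random.rand(100, 3)                                   # stand-in scan
world = compensate(scan, roll=0.02, pitch=0.25, yaw=0.0,
                   translation=np.array([1.0, 0.0, 0.5]))
```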


2018 ◽  
Vol 10 (4) ◽  
pp. 612 ◽  
Author(s):  
Lei Wang ◽  
Yuchun Huang ◽  
Jie Shan ◽  
Liu He
