Classification of 3D Point Clouds By A New Augmentation Convolutional Neural Network

Author(s):  
Sheng Xu ◽  
Xuan Zhou ◽  
Weidu Ye ◽  
Qiaolin Ye
Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3347 ◽  
Author(s):  
Zhishuang Yang ◽  
Bo Tan ◽  
Huikun Pei ◽  
Wanshou Jiang

The classification of point clouds is a basic task in airborne laser scanning (ALS) point cloud processing. It remains challenging for complex scenes and irregular point distributions. In order to reduce the computational burden of point-based classification methods and improve classification accuracy, we present a segmentation and multi-scale convolutional neural network-based classification method. Firstly, a three-step region-growing segmentation method is proposed to reduce both under-segmentation and over-segmentation. Then, a feature image generation method transforms the 3D neighborhood features of a point into a 2D image. Finally, the feature images are used as the input of a multi-scale convolutional neural network for training and testing. To compare performance with existing approaches, we evaluated our framework on the International Society for Photogrammetry and Remote Sensing Working Groups II/4 (ISPRS WG II/4) 3D labeling benchmark. The experiments achieved 84.9% overall accuracy and a 69.2% average F1 score, a satisfactory performance compared with all participating approaches analyzed.
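As an illustration of the feature-image idea above, the following minimal sketch rasterizes the 3D neighborhood of an ALS point into a fixed-size 2D grid that a CNN can consume. The grid size, the search radius, and the choice of per-cell attribute (maximum relative height) are illustrative assumptions, not the authors' exact parameterization.

```python
import numpy as np

def neighborhood_to_feature_image(points, center, radius=2.0, grid=16):
    """Rasterize the XY footprint of a point's 3D neighborhood.

    points : (N, 3) array of ALS points (x, y, z)
    center : (3,) query point
    Returns a (grid, grid) image holding the max relative height per cell.
    """
    # Select neighbors inside a cylindrical window around the query point.
    d = np.linalg.norm(points[:, :2] - center[:2], axis=1)
    nbrs = points[d < radius]
    img = np.zeros((grid, grid), dtype=np.float32)
    if len(nbrs) == 0:
        return img
    # Map XY offsets to cell indices and keep the highest point per cell.
    ij = ((nbrs[:, :2] - center[:2] + radius) / (2 * radius) * grid).astype(int)
    ij = np.clip(ij, 0, grid - 1)
    rel_z = nbrs[:, 2] - center[2]
    for (i, j), z in zip(ij, rel_z):
        img[i, j] = max(img[i, j], z)
    return img
```

Images generated at several radii can then be stacked as channels to serve as the multi-scale network input.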


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 87857-87869
Author(s):  
Jue Hou ◽  
Wenbin Ouyang ◽  
Bugao Xu ◽  
Rongwu Wang

2022 ◽  
Vol 41 (1) ◽  
pp. 1-21
Author(s):  
Chems-Eddine Himeur ◽  
Thibault Lejemble ◽  
Thomas Pellegrini ◽  
Mathias Paulin ◽  
Loic Barthe ◽  
...  

In recent years, Convolutional Neural Networks (CNN) have proven to be efficient analysis tools for processing point clouds, e.g., for reconstruction, segmentation, and classification. In this article, we focus on the classification of edges in point clouds, where both edges and their surroundings are described. We propose a new parameterization that adds to each point a set of differential information on its surrounding shape reconstructed at different scales. These parameters, stored in a Scale-Space Matrix (SSM), provide well-suited information from which an adequate neural network can learn the description of edges and use it to efficiently detect them in acquired point clouds. After successfully applying a multi-scale CNN on SSMs for the efficient classification of edges and their neighborhoods, we propose a new lightweight neural network architecture outperforming the CNN in learning time, processing time, and classification capabilities. Our architecture is compact, requires small learning sets, is very fast to train, and classifies millions of points in seconds.
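A minimal sketch of a per-point scale-space descriptor in the spirit of the SSM is given below: eigenvalue-based features of the local covariance are computed at several radii and stacked row-wise into a matrix. The concrete features (linearity, planarity, sphericity) and radii are illustrative assumptions; the paper's exact differential parameterization may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def scale_space_matrix(points, tree, idx, radii=(0.05, 0.1, 0.2, 0.4)):
    """Return a (len(radii), 3) matrix of covariance features for point idx."""
    rows = []
    for r in radii:
        nbrs = points[tree.query_ball_point(points[idx], r)]
        if len(nbrs) < 3:
            rows.append([0.0, 0.0, 0.0])
            continue
        lam = np.sort(np.linalg.eigvalsh(np.cov((nbrs - nbrs.mean(0)).T)))[::-1]
        lam = lam / max(lam.sum(), 1e-12)
        # Linearity, planarity, sphericity: points on sharp edges show a
        # distinctive signature across scales that the network can learn.
        rows.append([(lam[0] - lam[1]) / max(lam[0], 1e-12),
                     (lam[1] - lam[2]) / max(lam[0], 1e-12),
                     lam[2] / max(lam[0], 1e-12)])
    return np.asarray(rows)

# Usage: tree = cKDTree(points) is built once; one matrix is then
# computed per point and fed to the classifier.
```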


Author(s):  
Sergei Voronin ◽  
Artyom Makovetskii ◽  
Aleksei Voronin ◽  
Dmitrii Zhernov

Author(s):  
Z. Xu ◽  
Z. Yang

The classification of point clouds is the first step in the extraction of various types of geo-information from point clouds. Recently, the ISPRS WG II/4 provided a benchmark on 3D semantic labelling, in which a convolutional neural network-based method achieved the best overall accuracy among all participants using only the geometrical and waveform-based features extracted from the ALS data. Features of each point are calculated at several scales to achieve the best performance, which is not efficient for future use. In this paper, we use an eigenentropy-based scale selection strategy to improve this method. The scale selection strategy improves the average F1 score and makes the classification method simpler and more efficient.
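A minimal sketch of such an eigenentropy-based selection is shown below: for each candidate neighborhood size k, the Shannon entropy of the normalized covariance eigenvalues is computed, and the k minimizing it is kept (in the spirit of Weinmann et al.'s optimal neighborhood selection). The candidate sizes are an illustrative assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def optimal_scale(points, idx, tree, ks=(10, 25, 50, 75, 100)):
    """Pick the neighborhood size with minimum eigenentropy for point idx."""
    best_k, best_ent = ks[0], np.inf
    for k in ks:
        _, nn = tree.query(points[idx], k=k)
        nbrs = points[nn]
        lam = np.linalg.eigvalsh(np.cov((nbrs - nbrs.mean(0)).T))
        e = np.clip(lam / max(lam.sum(), 1e-12), 0.0, None)
        # Eigenentropy: low values indicate a "crisp" local structure
        # (line, plane, or compact volume), i.e. a well-chosen scale.
        ent = -np.sum(e * np.log(e + 1e-12))
        if ent < best_ent:
            best_k, best_ent = k, ent
    return best_k

# Usage: tree = cKDTree(points); k = optimal_scale(points, 0, tree)
```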


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3681 ◽  
Author(s):  
Le Zhang ◽  
Jian Sun ◽  
Qiang Zheng

The recognition of three-dimensional (3D) lidar (light detection and ranging) point clouds remains a significant issue in point cloud processing. Traditional point cloud recognition employs the 3D point clouds of the whole object. However, lidar data is a collection of two-and-a-half-dimensional (2.5D) point clouds (each 2.5D point cloud comes from a single view) obtained by scanning the object within a certain field angle. To deal with this problem, we first propose a novel representation that expresses 3D point clouds as 2.5D point clouds from multiple views, and we generate multi-view 2.5D point cloud data based on the Point Cloud Library (PCL). We then design an effective recognition model based on a multi-view convolutional neural network. The model acts directly on the raw 2.5D point clouds from all views and learns a global feature descriptor by fusing the per-view features in a view fusion network. Experiments show that our approach achieves excellent recognition performance without any requirement for 3D reconstruction or point cloud preprocessing. This effectively solves the recognition problem for lidar point clouds and is of practical value.
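The following PyTorch sketch illustrates the multi-view recognition idea: a shared encoder processes the raw 2.5D points of each view, and the per-view features are fused into a single global descriptor. The element-wise max-pooling fusion and the PointNet-like per-view encoder are illustrative assumptions; the paper's view fusion network may differ.

```python
import torch
import torch.nn as nn

class MultiViewRecognizer(nn.Module):
    def __init__(self, n_classes=40, feat_dim=256):
        super().__init__()
        # Shared per-point MLP applied to every view's raw 2.5D points.
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, feat_dim, 1), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, views):                            # (B, V, N, 3)
        B, V, N, _ = views.shape
        x = views.reshape(B * V, N, 3).transpose(1, 2)   # (B*V, 3, N)
        f = self.encoder(x).max(dim=2).values            # per-view feature
        f = f.reshape(B, V, -1).max(dim=1).values        # fuse across views
        return self.classifier(f)
```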


Author(s):  
SU YAN ◽  
Lei Yu

Simultaneous Localization and Mapping (SLAM) is one of the key technologies used in sweeping robots, autonomous vehicles, virtual reality, and other fields. This paper presents a dense RGB-D SLAM reconstruction algorithm based on a convolutional neural network with multi-layer image-invariant feature transformation. The main contribution of the system lies in the construction of a convolutional neural network based on multi-layer image-invariant features, which optimizes the extraction of ORB (Oriented FAST and Rotated BRIEF) feature points and the reconstruction quality. After feature point matching, pose estimation, loop detection, and other steps, the 3D point clouds are finally stitched together to construct a complete and smooth spatial model. The system improves accuracy and robustness in feature point processing and pose estimation. Comparative experiments show that the optimized algorithm saves 0.093 s over the ordinary extraction algorithm while maintaining a high accuracy rate. Reconstruction experiments show that the resulting spatial models have clearer details and smoother connections with no fault layers compared with the original ones. The reconstruction results are generally better than those of other common algorithms such as Kintinuous, ElasticFusion, and ORB-SLAM2 dense reconstruction.
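For context, the sketch below shows the classical OpenCV ORB extraction-and-matching step that the paper's CNN-based front end is designed to improve; it is the baseline pipeline stage, not the authors' optimized extractor.

```python
import cv2

def match_orb(img1, img2, n_features=1000):
    """Extract ORB features in two frames and return mutual best matches."""
    orb = cv2.ORB_create(nfeatures=n_features)
    k1, d1 = orb.detectAndCompute(img1, None)   # keypoints + descriptors
    k2, d2 = orb.detectAndCompute(img2, None)
    # Hamming distance suits ORB's binary descriptors; crossCheck keeps
    # only mutually best matches, a cheap outlier filter before pose
    # estimation (e.g., PnP using the RGB-D depth).
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(d1, d2), key=lambda m: m.distance)
    return k1, k2, matches
```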


2020 ◽  
Vol 2020 (4) ◽  
pp. 4-14
Author(s):  
Vladimir Budak ◽  
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale serving as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of the axial light intensity on the beam angle was obtained. Transfer training of a new deep convolutional neural network (CNN) based on the pre-trained GoogLeNet was performed using this collection. Grad-CAM analysis showed that the trained network correctly identifies the features of the objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight and the corresponding type of lens with its technical parameters using this new CNN-based model.
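A minimal PyTorch sketch of this transfer-training setup is shown below: a pre-trained GoogLeNet has its final ImageNet head replaced with a spotlight-class head. The number of classes and the frozen-backbone choice are illustrative assumptions.

```python
import torch.nn as nn
from torchvision import models

def build_spotlight_classifier(n_classes, freeze_backbone=True):
    """GoogLeNet transfer-learning model for spotlight classification."""
    net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in net.parameters():
            p.requires_grad = False
    # Swap the 1000-way ImageNet head for the spotlight-class head; only
    # this layer is trained when the backbone is frozen.
    net.fc = nn.Linear(net.fc.in_features, n_classes)
    return net
```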

