Deep Learning Model for Point Cloud Classification Based on Graph Convolutional Network

2019 ◽  
Vol 56 (21) ◽  
pp. 211004
Author(s):  
王旭娇 Wang Xujiao ◽  
马杰 Ma Jie ◽  
王楠楠 Wang Nannan ◽  
马鹏飞 Ma Pengfei ◽  
杨立闯 Yang Lichuang
2020 ◽  
Vol 12 (14) ◽  
pp. 2181
Author(s):  
Hangbin Wu ◽  
Huimin Yang ◽  
Shengyu Huang ◽  
Doudou Zeng ◽  
Chun Liu ◽  
...  

The existing deep learning methods for point cloud classification are trained on abundant labeled samples and then tested on only a few samples. However, classification tasks are diverse, and not all tasks have enough labeled samples for training. In this paper, a novel point cloud classification method for indoor components using few labeled samples is proposed to address the need for abundant labeled training samples in deep learning classification methods. The method comprises four parts: sample mixing, feature extraction, dimensionality reduction, and semantic classification. First, the few labeled point clouds are mixed with unlabeled point clouds. Next, high-dimensional features of the mixed set are extracted using a deep learning framework. Subsequently, a nonlinear manifold learning method embeds the mixed features into a low-dimensional space. Finally, the few labeled point clouds in each cluster are identified, and semantic labels are assigned to the unlabeled point clouds in the same cluster through a neighborhood search strategy. The validity and versatility of the proposed method were demonstrated in several experiments and compared against three state-of-the-art deep learning methods. Using fewer than 30 labeled point clouds, our method achieves an accuracy 1.89–19.67% higher than that of the existing methods. More importantly, the experimental results suggest that the method is suitable not only for single-attribute indoor scenarios but also for comprehensive, complex indoor scenarios.
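The final step above amounts to propagating the few known labels through the low-dimensional embedding. A minimal sketch of such a neighborhood-search strategy follows; the function name, the k-nearest-neighbor rule, and the majority vote are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def propagate_labels(embedded, labeled_idx, labels, k=5):
    """Assign each unlabeled point the majority label of its k nearest
    labeled neighbors in the low-dimensional embedding space.
    Hypothetical sketch of a neighborhood-search labeling strategy."""
    out = np.full(len(embedded), -1, dtype=int)
    out[labeled_idx] = labels
    labeled_pts = embedded[labeled_idx]
    for i in range(len(embedded)):
        if out[i] != -1:
            continue  # already labeled
        d = np.linalg.norm(labeled_pts - embedded[i], axis=1)
        nearest = np.argsort(d)[:k]  # k closest labeled points
        vals, counts = np.unique(labels[nearest], return_counts=True)
        out[i] = vals[np.argmax(counts)]  # majority vote
    return out
```

With a well-separated embedding, each unlabeled point simply inherits the label of the cluster it falls into.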


2020 ◽  
Vol 384 ◽  
pp. 192-199 ◽  
Author(s):  
Yikuan Yu ◽  
Zitian Huang ◽  
Fei Li ◽  
Haodong Zhang ◽  
Xinyi Le

Geosciences ◽  
2019 ◽  
Vol 9 (7) ◽  
pp. 323 ◽  
Author(s):  
Gordana Jakovljevic ◽  
Miro Govedarica ◽  
Flor Alvarez-Taboada ◽  
Vladimir Pajic

Digital elevation models (DEMs) are frequently used for the reduction and management of flood risk. Various classification methods have been developed to extract DEMs from point clouds, but their accuracy and computational efficiency still need to be improved. The objectives of this study were as follows: (1) to determine the suitability of a new method for producing DEMs from unmanned aerial vehicle (UAV) and light detection and ranging (LiDAR) data, using raw point cloud classification and ground point filtering based on deep learning with neural networks (NN); (2) to test the usefulness of rebalancing datasets for point cloud classification; (3) to evaluate the effect of land cover class on algorithm performance and elevation accuracy; and (4) to assess the usability of the LiDAR and UAV structure from motion (SfM) DEMs in flood risk mapping. In this paper, a new method of raw point cloud classification and ground point filtering based on deep learning using NN is proposed and tested on LiDAR and UAV data. The NN was trained on approximately 6 million points, from which local and global geometric features and intensity data were extracted. Pixel-by-pixel accuracy assessment and visual inspection confirmed that filtering point clouds with this NN is an appropriate technique for ground classification and DEM production: for both the test and validation areas, the ground and non-ground classes achieved high recall (>0.70) and high precision (>0.85), showing that the two classes were well handled by the model. The method used for balancing the original dataset did not significantly influence the algorithm's accuracy, and rebalancing is not recommended unless the generated and real datasets follow the same distribution. Furthermore, the LiDAR and UAV structure from motion (UAV SfM) point clouds, as well as the derived DEMs, were compared against reference data.
The root mean square error (RMSE) and the mean average error (MAE) of the DEM were 0.25 m and 0.05 m, respectively, for LiDAR data, and 0.59 m and –0.28 m, respectively, for UAV data. For all land cover classes, the UAV DEM overestimated the elevation, whereas the LiDAR DEM underestimated it. The accuracy of the LiDAR DEM did not differ significantly among the vegetation classes, while for the UAV DEM, the RMSE increased with the height of the vegetation class. A comparison of the inundation areas derived from true, LiDAR, and UAV data at different water levels showed that in all cases the largest differences were obtained for the lowest water level tested, while the results were best for very high water levels. Overall, the approach presented in this work produced DEMs from LiDAR and UAV data with the accuracy required for flood mapping under European Flood Directive standards. Although LiDAR is the recommended technology for point cloud acquisition, UAV SfM is a suitable alternative in hilly areas.
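The reported DEM error metrics can be reproduced from paired elevation samples as follows. Since one of the abstract's "MAE" values is negative, a signed mean error (bias) is computed alongside the RMSE; that interpretation, like the function name, is an assumption for illustration.

```python
import numpy as np

def dem_errors(true_z, dem_z):
    """RMSE and signed mean error between reference elevations and a DEM.
    A positive bias means the DEM overestimates the elevation, a negative
    bias that it underestimates it (sketch; sign convention assumed)."""
    diff = dem_z - true_z
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    bias = float(np.mean(diff))
    return rmse, bias
```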


2018 ◽  
Vol 10 (4) ◽  
pp. 612 ◽  
Author(s):  
Lei Wang ◽  
Yuchun Huang ◽  
Jie Shan ◽  
Liu He

2022 ◽  
Vol 14 (2) ◽  
pp. 303
Author(s):  
Haiqiang Yang ◽  
Xinming Zhang ◽  
Zihan Li ◽  
Jianxun Cui

Region-level traffic information characterizes the dynamic changes of urban traffic at the macro level. Real-time region-level traffic prediction helps city traffic managers with traffic demand analysis, traffic congestion control, and other activities, and it has become a research hotspot. As more vehicles are equipped with GPS devices, remote sensing data can be collected and used for data-driven region-level traffic prediction. However, due to the dynamism and randomness of urban traffic and the complexity of urban road networks, the study of such issues faces many challenges. This paper proposes a new deep learning model named TmS-GCN to predict region-level traffic information, composed of a Graph Convolutional Network (GCN) and a Gated Recurrent Unit (GRU). The GCN part captures the spatial dependence among regions, while the GRU part captures the dynamic change of traffic within each region. Model verification and comparison were carried out using real taxi GPS data from Shenzhen. The experimental results show that the proposed model outperforms both classic time series prediction models and deep learning models at different scales.
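The two components of such a GCN-plus-GRU model can be sketched in a few lines: a graph convolution mixes each region's features with those of its neighbors via a normalized adjacency matrix, and a standard GRU step tracks each region's temporal dynamics. This is a generic numpy illustration of the textbook GCN and GRU equations, not the TmS-GCN implementation; all weight names are placeholders.

```python
import numpy as np

def gcn_layer(a_hat, x, w):
    """One graph-convolution step: the normalized adjacency a_hat mixes
    each region's features with its neighbors' before a linear map + ReLU."""
    return np.maximum(a_hat @ x @ w, 0.0)

def gru_step(h, x, wz, uz, wr, ur, wh, uh):
    """Standard GRU update (update gate z, reset gate r, candidate state)."""
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    z = sig(x @ wz + h @ uz)                 # update gate
    r = sig(x @ wr + h @ ur)                 # reset gate
    h_tilde = np.tanh(x @ wh + (r * h) @ uh) # candidate hidden state
    return (1 - z) * h + z * h_tilde
```

In a TmS-GCN-style pipeline the GCN output at each time step would be fed as `x` into the GRU step, so spatial mixing happens before the temporal update.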


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6969
Author(s):  
Xiangda Lei ◽  
Hongtao Wang ◽  
Cheng Wang ◽  
Zongze Zhao ◽  
Jianqi Miao ◽  
...  

Airborne laser scanning (ALS) point clouds have been widely used in various fields, as they provide three-dimensional data with high accuracy on a large scale. However, because ALS data are discrete, irregularly distributed, and noisy, accurately identifying typical surface objects from 3D point clouds remains a challenge. In recent years, many researchers have obtained better point cloud classification results using different deep learning methods. However, most of these methods require a large number of training samples and cannot be widely applied in complex scenarios. In this paper, we propose an ALS point cloud classification method that integrates an improved fully convolutional network into transfer learning with multi-scale and multi-view deep features. First, shallow features of the ALS point cloud, such as height, intensity, and change of curvature, are extracted to generate feature maps by multi-scale voxelization and multi-view projection. Second, these feature maps are fed into the pre-trained DenseNet201 model to derive deep features, which serve as input for a fully convolutional neural network with convolutional and pooling layers. This network integrates local and global features to classify the ALS point cloud. Finally, a graph-cuts algorithm that considers context information refines the classification results. We tested our method on the semantic 3D labeling dataset of the International Society for Photogrammetry and Remote Sensing (ISPRS). Experimental results show that the overall accuracy and average F1 score obtained by the proposed method are 89.84% and 83.62%, respectively, when only 16,000 points of the original data are used for training.
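The feature-map generation described above can be illustrated by rasterizing a point attribute (height or intensity) onto a 2D grid by per-cell averaging; calling such a routine with several cell sizes yields multi-scale maps. This is a hypothetical sketch, not the paper's voxelization code.

```python
import numpy as np

def voxel_feature_map(points, values, cell_size, grid_shape):
    """Average a per-point attribute (e.g. height or intensity) over the
    cells of a 2D grid. points is (N, 3); the x/y columns select the cell.
    Sketch only: names and the top-down projection are assumptions."""
    ij = np.floor(points[:, :2] / cell_size).astype(int)
    ij = np.clip(ij, 0, np.array(grid_shape) - 1)   # keep indices in-grid
    acc = np.zeros(grid_shape)
    cnt = np.zeros(grid_shape)
    np.add.at(acc, (ij[:, 0], ij[:, 1]), values)    # sum values per cell
    np.add.at(cnt, (ij[:, 0], ij[:, 1]), 1)         # count points per cell
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
```

Stacking maps produced at different `cell_size` values (and from different projection directions) would give the multi-scale, multi-view input channels fed to the pre-trained backbone.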

