Deep Learning based Classification of Color Point Cloud for 3D Reconstruction of Interior Elements of Buildings

Author(s):  
Shima Sahebdivani ◽  
Hossein Arefi ◽  
Mehdi Maboudi

Author(s):  
Mathieu Turgeon-Pelchat ◽  
Samuel Foucher ◽  
Yacine Bouroubi

Author(s):  
D. Tosic ◽  
S. Tuttas ◽  
L. Hoegner ◽  
U. Stilla

Abstract. This work proposes an approach for the semantic classification of an outdoor-scene point cloud acquired with a high-precision Mobile Mapping System (MMS), with the major goal of contributing to the automatic creation of High Definition (HD) maps. Automatic point labeling is achieved by combining a feature-based approach for semantic classification of point clouds with a deep learning approach for semantic segmentation of images. Both the point cloud data and the data from a multi-camera system are used to gain spatial information about an urban scene. Two types of classification are applied for this task: 1) A feature-based approach, in which the point cloud is organized into a supervoxel structure to capture the geometric characteristics of points. Several geometric features are then extracted for an appropriate representation of the local geometry, followed by removing the effect of local tendency for each supervoxel to enhance the distinction between similar structures. Lastly, the Random Forests (RF) algorithm is applied in the classification phase to assign labels to supervoxels, and therefore to the points within them. 2) A deep learning approach employed for semantic segmentation of MMS images of the same scene, using an implementation of the Pyramid Scene Parsing Network. The resulting segmented images, in which each pixel carries a class label, are then projected onto the point cloud, enabling label assignment for each point. Finally, experimental results from a complex urban scene are presented, and the performance of the method is evaluated on a manually labeled dataset for the deep learning and feature-based classifications individually, as well as for the fused labels. The fused output achieves an overall accuracy of 0.87 on the final test set, significantly outperforming the results of the individual methods on the same point cloud. The labeled data is published on the TUM-PF Semantic-Labeling-Benchmark.
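
A minimal sketch of the label-projection step described above, assuming a pinhole camera model; the intrinsics `K`, the world-to-camera pose `(R, t)`, and the fusion rule are illustrative placeholders rather than the authors' actual implementation:

```python
import numpy as np

def project_labels(points, seg_image, K, R, t):
    """Assign each 3D point the class label of the pixel it projects onto.
    points: (N, 3) world coordinates; seg_image: (H, W) per-pixel labels
    from the segmentation network; K: (3, 3) intrinsics; R, t: world-to-camera
    pose. Points that fall outside the image keep the label -1."""
    labels = np.full(len(points), -1, dtype=int)
    cam = points @ R.T + t                      # world -> camera frame
    front = np.flatnonzero(cam[:, 2] > 1e-6)    # keep points in front of the camera
    pix = cam[front] @ K.T                      # pinhole projection
    uv = np.rint(pix[:, :2] / pix[:, 2:3]).astype(int)
    h, w = seg_image.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    labels[front[ok]] = seg_image[uv[ok, 1], uv[ok, 0]]
    return labels

def fuse_labels(rf_labels, img_labels):
    """Toy fusion rule (an assumption, not the paper's): prefer the
    image-derived label where one exists, fall back to the RF label."""
    return np.where(img_labels >= 0, img_labels, rf_labels)
```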


Author(s):  
F. Matrone ◽  
A. Lingua ◽  
R. Pierdicca ◽  
E. S. Malinverni ◽  
M. Paolanti ◽  
...  

Abstract. The lack of benchmarking data for the semantic segmentation of digital heritage scenarios is hampering the development of automatic classification solutions in this field. Heritage 3D data feature complex structures and uncommon classes that prevent the simple deployment of available methods developed in other fields and for other types of data. The semantic classification of heritage 3D data would support the community in better understanding and analysing digital twins, facilitate restoration and conservation work, etc. In this paper, we present the first benchmark with millions of manually labelled 3D points belonging to heritage scenarios, realised to facilitate the development, training, testing and evaluation of machine and deep learning methods and algorithms in the heritage field. The proposed benchmark, available at http://archdataset.polito.it/, comprises datasets and classification results for better comparisons and insights into the strengths and weaknesses of different machine and deep learning approaches for heritage point cloud semantic segmentation, in addition to promoting a form of crowdsourcing to enrich the already annotated database.
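
Benchmarks of this kind are usually scored with overall accuracy and per-class intersection-over-union; the sketch below shows one common way to compute both from predicted and ground-truth point labels (the metric choice here is an assumption for illustration, not a requirement of the benchmark):

```python
import numpy as np

def evaluate(pred, gt, num_classes):
    """Overall accuracy and per-class IoU from two (N,) integer label arrays."""
    # Confusion matrix: rows = ground truth, columns = prediction.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt, pred), 1)
    overall_acc = np.trace(cm) / cm.sum()
    tp = np.diag(cm)
    # IoU = TP / (TP + FP + FN); classes absent from both arrays give NaN.
    with np.errstate(divide="ignore", invalid="ignore"):
        iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)
    return overall_acc, iou
```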


Author(s):  
J. Pan ◽  
L. Li ◽  
H. Yamaguchi ◽  
K. Hasegawa ◽  
F. I. Thufail ◽  
...  

Abstract. This paper proposes a fused 3D transparent visualization method aimed at achieving see-through imaging of large-scale cultural heritage by combining photogrammetric point cloud data with 3D reconstructed models. The 3D models are reconstructed efficiently from single monocular photos using deep learning. It is demonstrated that the proposed method can be widely applied, particularly to incomplete cultural heritage sites. In this study, the method is applied to a typical example, the Borobudur temple in Indonesia. The Borobudur temple possesses the most complete collection of Buddhist reliefs. However, some parts of these reliefs were hidden behind stone walls and became invisible after the reinforcement work carried out during the Dutch colonial period. Today, only gray-scale monocular photos of those hidden parts are displayed in the Borobudur Museum. In this paper, the visible parts of the temple are first digitized into point cloud data by photogrammetric scanning. For the hidden parts, a deep-learning-based 3D reconstruction method is proposed to reconstruct the invisible reliefs into point cloud data directly from the single monocular photos held by the museum. The proposed 3D reconstruction method achieves 95% accuracy of the reconstructed point cloud on average. With the point cloud data of both the visible and hidden parts, the proposed transparent visualization method, stochastic point-based rendering, is applied to achieve a fused 3D transparent visualization of the valuable temple.
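
The abstract does not give the exact protocol behind the reported 95% accuracy; a common way to score point cloud reconstructions, sketched below under the assumption of a distance-tolerance criterion, is the fraction of reconstructed points lying within a tolerance of the reference cloud:

```python
import numpy as np
from scipy.spatial import cKDTree

def reconstruction_accuracy(reconstructed, reference, tolerance):
    """Fraction of reconstructed points within `tolerance` of the reference
    cloud. reconstructed: (N, 3); reference: (M, 3); tolerance in the same
    metric units as the point coordinates."""
    tree = cKDTree(reference)                  # spatial index over the reference
    dist, _ = tree.query(reconstructed, k=1)   # nearest-neighbour distances
    return float(np.mean(dist <= tolerance))
```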


2021 ◽  
Vol 13 (23) ◽  
pp. 4750
Author(s):  
Jianchang Chen ◽  
Yiming Chen ◽  
Zhengjun Liu

We propose the Point Cloud Tree Species Classification Network (PCTSCN) to overcome challenges in classifying tree species from laser scanning data with deep learning methods. The network is composed of two main parts: a sampling component in the early stage and a feature extraction component in the later stage. We use geometric sampling to extract regions with local features from the tree contours, since these tend to be species-specific, and then an improved Farthest Point Sampling method to extract features from a global perspective. The intensity of the tree point cloud is fed into the neural network as a feature dimension alongside the spatial information and mapped to higher dimensions for feature extraction. We used data acquired by Terrestrial Laser Scanning (TLS) and Unmanned Aerial Vehicle Laser Scanning (UAVLS) to conduct tree species classification experiments on white birch and larch. The experimental results showed that in both the TLS and UAVLS datasets, the density of the input tree point cloud and the highest feature dimension of the mapping affected the classification accuracy. When a single tree sample obtained by TLS consisted of 1024 points and the highest dimension of the network mapping was 512, the classification accuracy of the trained model reached 96%; for individual tree samples obtained by UAVLS with 2048 points and a highest mapping dimension of 1024, it reached 92%. On TLS data, the classification accuracy of PCTSCN improved by 2–9% compared with other models using the same point density, amount of data, and highest feature dimension; on UAVLS data, it was up to 8% higher. PCTSCN thus provides a new strategy for the intelligent classification of forest tree species.
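
The Farthest Point Sampling step that PCTSCN improves upon is a standard greedy procedure; a plain NumPy sketch of the vanilla algorithm follows (the abstract does not detail the improved variant, so only the baseline is shown):

```python
import numpy as np

def farthest_point_sampling(points, n_samples, seed=0):
    """Greedy FPS: repeatedly pick the point farthest from those chosen so far.
    points: (N, 3) array; returns the indices of the n_samples selected points."""
    rng = np.random.default_rng(seed)
    chosen = np.empty(n_samples, dtype=int)
    chosen[0] = rng.integers(len(points))            # random starting point
    # dist[i] = distance from point i to its nearest already-chosen point
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for k in range(1, n_samples):
        chosen[k] = int(np.argmax(dist))             # farthest remaining point
        new_d = np.linalg.norm(points - points[chosen[k]], axis=1)
        dist = np.minimum(dist, new_d)               # refresh nearest distances
    return chosen
```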

