MAPPING BUILDING INTERIORS WITH LIDAR: CLASSIFYING THE POINT CLOUD WITH ARCGIS

Author(s):  
J. R. Parent ◽  
C. Witharana ◽  
M. Bradley

Abstract. Accurate maps of building interiors are needed to support location-based services, plan for emergencies, and manage facilities. However, suitable maps to meet these needs are not available for many buildings. Handheld LiDAR scanners provide an effective tool to collect data for indoor mapping, but there are no well-established methods for classifying features in indoor point clouds. The goal of this research was to develop an efficient manual procedure for classifying indoor point clouds to represent features-of-interest. We used Paracosm’s PX-80 handheld LiDAR scanner to collect point cloud and image data for 11 buildings, which encompassed a variety of architectures. ESRI’s ArcGIS Desktop was used to digitize features that were easily identified in the point cloud, and Paracosm’s Retrace was used to digitize features for which imagery was needed for efficient identification. We developed several tools in Python to facilitate the process. We focused on classifying 29 features-of-interest to public safety personnel, including walls, doors, windows, fire alarms, smoke detectors, and sprinklers. The method we developed was efficient, accurate, and allowed successful mapping of features as small as a sprinkler head. Point cloud classification for a 14,000 m² building took 20–40 hours, depending on building characteristics. Although the method is based on manual digitization, it provides a practical solution for indoor mapping using LiDAR. The methods can be applied in mapping a wide variety of features in indoor or outdoor environments.
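
The abstract mentions custom Python tools supporting the digitization workflow. As a minimal illustrative sketch (not the authors' tools), the snippet below assigns a class code to LAS points that fall inside a digitized feature footprint; the file names, polygon vertices, and WALL_CODE value are assumptions.

```python
# Hypothetical sketch: tag LAS points that fall inside a digitized feature
# polygon (e.g., a wall footprint exported from ArcGIS) with a class code.
import laspy
import numpy as np
from matplotlib.path import Path

WALL_CODE = 6  # illustrative class code for "wall"

las = laspy.read("interior_scan.las")           # point cloud from the scanner
xy = np.column_stack([las.x, las.y])            # planimetric coordinates

# Vertices of one digitized polygon (would come from an ArcGIS feature class).
footprint = Path([(0.0, 0.0), (12.5, 0.0), (12.5, 0.3), (0.0, 0.3)])
inside = footprint.contains_points(xy)          # boolean mask, one entry per point

cls = np.asarray(las.classification)
cls[inside] = WALL_CODE                         # write the class code back
las.classification = cls
las.write("interior_scan_classified.las")
```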

Author(s):  
Y. Gao ◽  
M. C. Li

Abstract. Airborne Light Detection and Ranging (LiDAR) has become an important means for efficient, high-precision acquisition of 3D spatial data over large scenes, with significant application value in digital cities and location-based services. The classification and identification of point clouds is the basis of these applications, and it remains a hot and difficult problem in the field of geographic information science. LiDAR point cloud classification in large-scale urban scenes is difficult for two reasons. On the one hand, urban scenes contain rich and complex features of many types, with varied shapes, complex structures, and mutual occlusion, resulting in substantial data loss. On the other hand, the LiDAR scanner is far from the urban features, and objects such as cars and pedestrians are in motion during scanning, which introduces a degree of noise and uneven density into the point cloud. To address these characteristics, this paper implements a LiDAR point cloud classification method based on a saliency dictionary and the Latent Dirichlet Allocation (LDA) model. The method uses the label information of the training data and the label source of each dictionary item to construct a saliency dictionary learning model in sparse coding, which expresses the features of a point set more accurately. It then feeds the multi-level point-set features into a multi-path AdaBoost classifier, realizing supervised classification of the point cloud. The experimental results show that the feature set extracted by this method, combined with the multi-path classifier, can significantly improve classification accuracy in complex urban scenes.
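
The "multi-path" idea can be illustrated with a short scikit-learn sketch, under stated assumptions: one AdaBoost classifier is trained per feature level and their class probabilities are fused. The saliency dictionary / LDA feature extraction is abstracted into stand-in arrays; shapes and class names are illustrative, not the paper's configuration.

```python
# Illustrative multi-path ensemble: one AdaBoost per feature level, fused by
# averaging class probabilities. Feature extraction is abstracted away.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
n_sets = 200
# Stand-in descriptors for two levels of point-set features (assumed shapes).
X_levels = [rng.normal(size=(n_sets, 32)), rng.normal(size=(n_sets, 64))]
y = rng.integers(0, 4, size=n_sets)     # e.g., building / tree / car / ground

paths = [AdaBoostClassifier(n_estimators=100).fit(X, y) for X in X_levels]

def classify(feature_vectors):
    """Average the per-path class probabilities and take the arg-max."""
    probs = np.mean([p.predict_proba(f)
                     for p, f in zip(paths, feature_vectors)], axis=0)
    return probs.argmax(axis=1)
```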


Author(s):  
Y. Cao ◽  
M. Previtali ◽  
M. Scaioni

Abstract. In the wake of the success of Deep Learning Networks (DLN) for image recognition, object detection, shape classification and semantic segmentation, this approach has proven to be both a major breakthrough and an excellent tool in point cloud classification. However, an understanding of how different types of DLN achieve their results is still lacking. In several studies the output of the segmentation/classification process is compared against benchmarks, but the network is treated as a “black box” and intermediate steps are not deeply analysed. Specifically, the following questions are discussed here: (1) what exactly does a DLN learn from a point cloud? (2) On the basis of what information does a DLN make its decisions? To conduct a quantitative investigation of DLN applied to point clouds, this paper examines the visual interpretability of the decision-making process. Firstly, we introduce a reconstruction network able to reconstruct and visualise the learned features, in order to address question (1). Then, we propose 3DCAM to indicate the discriminative point cloud regions that these networks use to identify a category, thus addressing question (2). By answering these two questions, the paper offers some initial solutions for better understanding the application of DLN to point clouds.
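
Class-activation mapping of this kind has a simple core: the per-point saliency for a class is the classifier's weight vector for that class applied to the per-point features that feed the global pooling layer. The NumPy sketch below illustrates that idea; it is not the authors' 3DCAM implementation, and the array shapes are assumptions.

```python
# Minimal class-activation sketch in the spirit of 3DCAM (not the authors' code).
import numpy as np

def point_cam(point_features, class_weights, class_id):
    """point_features: (N, K) per-point features before global max-pooling.
    class_weights: (C, K) weights of the final linear classifier.
    Returns an (N,) saliency score per point for class_id."""
    scores = point_features @ class_weights[class_id]   # weighted feature sum
    scores -= scores.min()
    return scores / (scores.max() + 1e-12)              # normalise to [0, 1]
```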


2021 ◽  
Vol 65 (1) ◽  
pp. 10501-1-10501-9
Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

Abstract. The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used for geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, sensors integrated in the ULS must be small and lightweight, which results in a decrease in the density of the collected scanning points. This affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, wherein they convert the problem of registering point cloud data and image data into a problem of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves higher registration accuracy and faster fusion, demonstrating its accuracy and effectiveness.
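
The key reduction here is from 3D-to-2D registration to 2D-to-2D feature matching. The OpenCV sketch below illustrates that matching step under stated assumptions: the intensity raster has already been rendered from the point cloud, the file names are hypothetical, and solving the collinearity equations for exterior orientation is beyond the snippet.

```python
# Hedged sketch of the image-to-image matching stage: ORB features are matched
# between an intensity raster (rendered from the point cloud) and a UAV frame.
import cv2

intensity_img = cv2.imread("intensity.png", cv2.IMREAD_GRAYSCALE)   # from point cloud
optical_img = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)    # UAV camera frame

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(intensity_img, None)
kp2, des2 = orb.detectAndCompute(optical_img, None)

# Hamming-distance brute-force matching with cross-checking for robustness.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
print(f"{len(matches)} candidate correspondences for orientation solving")
```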


2020 ◽  
Vol 12 (14) ◽  
pp. 2181
Author(s):  
Hangbin Wu ◽  
Huimin Yang ◽  
Shengyu Huang ◽  
Doudou Zeng ◽  
Chun Liu ◽  
...  

Existing deep learning methods for point cloud classification require abundant labeled samples for training. However, classification tasks are diverse, and not all tasks have enough labeled samples available. In this paper, a novel point cloud classification method for indoor components using few labeled samples is proposed, addressing the requirement of deep learning classification methods for abundant labeled training samples. This method is composed of four parts: mixing samples, feature extraction, dimensionality reduction, and semantic classification. First, the few labeled point clouds are mixed with unlabeled point clouds. Next, the mixed high-dimensional features are extracted using a deep learning framework. Subsequently, a nonlinear manifold learning method is used to embed the mixed features into a low-dimensional space. Finally, the few labeled point clouds in each cluster are identified, and semantic labels are provided for unlabeled point clouds in the same cluster by a neighborhood search strategy. The validity and versatility of the proposed method were validated by different experiments and compared with three state-of-the-art deep learning methods. Our method uses fewer than 30 labeled point clouds to achieve an accuracy that is 1.89–19.67% greater than that of existing methods. More importantly, the experimental results suggest that this method is suitable not only for single-attribute indoor scenarios but also for comprehensive, complex indoor scenarios.
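
As a rough sketch of the dimensionality-reduction and label-spreading stages, the snippet below embeds precomputed features with Isomap (a stand-in for the paper's unspecified manifold method), clusters the embedding, and lets unlabeled members of each cluster inherit the majority label of its labeled members. All parameters and the clustering choice are assumptions.

```python
# Sketch: nonlinear embedding + cluster-wise label propagation from the few
# labeled samples (labels == -1 marks unlabeled samples).
import numpy as np
from sklearn.manifold import Isomap
from sklearn.cluster import KMeans

def propagate_labels(X, labels, n_clusters=10):
    """X: (N, D) mixed features; labels: (N,) with -1 for unlabeled."""
    emb = Isomap(n_components=3).fit_transform(X)        # low-dimensional embedding
    cluster_ids = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
    out = labels.copy()
    for c in range(n_clusters):
        members = cluster_ids == c
        known = labels[members & (labels >= 0)]
        if known.size:                                   # majority label of the cluster
            out[members & (labels < 0)] = np.bincount(known).argmax()
    return out
```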


2013 ◽  
Vol 760-762 ◽  
pp. 1556-1561
Author(s):  
Ting Wei Du ◽  
Bo Liu

Indoor scene understanding based on depth image data is a cutting-edge issue in the field of three-dimensional computer vision. Considering the layout characteristics of indoor scenes and the prevalence of planar features in them, this paper presents a depth image segmentation method based on Gaussian Mixture Model clustering. First, the Kinect depth image is transformed into a point cloud of discrete three-dimensional points, which is then denoised and down-sampled; second, normals are calculated for all points in the cloud and clustered using a Gaussian Mixture Model; finally, the point cloud is segmented using the RANSAC algorithm. Experimental results show that the segmented regions have clear boundaries and above-average segmentation quality, laying a good foundation for object recognition.
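
A minimal sketch of this pipeline follows, assuming Open3D for I/O, normal estimation, and RANSAC plane fitting, and scikit-learn for the Gaussian Mixture clustering. The file name, voxel size, thresholds, and cluster count are illustrative, not the paper's settings.

```python
# Sketch: down-sample, estimate normals, cluster normals with a GMM, then
# refine each normal cluster into planar segments with RANSAC.
import numpy as np
import open3d as o3d
from sklearn.mixture import GaussianMixture

pcd = o3d.io.read_point_cloud("kinect_frame.ply")        # depth image as points
pcd = pcd.voxel_down_sample(voxel_size=0.02)             # down-sample / denoise
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

normals = np.asarray(pcd.normals)
gmm_labels = GaussianMixture(n_components=5).fit_predict(normals)

for c in range(5):
    cluster = pcd.select_by_index(np.where(gmm_labels == c)[0])
    if len(cluster.points) < 100:
        continue
    plane, inliers = cluster.segment_plane(distance_threshold=0.01,
                                           ransac_n=3, num_iterations=1000)
    print(f"cluster {c}: plane {np.round(plane, 3)}, {len(inliers)} inliers")
```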


2019 ◽  
Vol 9 (5) ◽  
pp. 951 ◽  
Author(s):  
Yong Li ◽  
Guofeng Tong ◽  
Xiance Du ◽  
Xiang Yang ◽  
Jianjun Zhang ◽  
...  

3D point cloud classification has wide applications in the field of scene understanding. Point-based classification can more accurately segment the boundary regions between adjacent objects. In this paper, a point cloud classification algorithm based on single-point multilevel feature fusion and pyramid neighborhood optimization is proposed for Airborne Laser Scanning (ALS) point clouds. First, the algorithm determines the neighborhood region of each point, after which the features of each single point are extracted. To suit the characteristics of ALS point clouds, two new feature descriptors are proposed: a normal angle distribution histogram and a latitude sampling histogram. Following this, multilevel features of a single point are constructed from multiple resolutions of the point cloud and multiple neighborhood spaces. Next, the features are used to train a Support Vector Machine with a Gaussian kernel function, and the points are classified by the trained model. Finally, the classification results are optimized using a multi-scale pyramid neighborhood constructed from the multi-resolution point cloud. In the experiments, the algorithm is tested on a public dataset. The experimental results show that the proposed algorithm can effectively classify large-scale ALS point clouds and achieves better classification performance than existing algorithms.
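
To make the descriptor idea concrete, the sketch below computes a plausible normal-angle histogram (angles between neighborhood normals and the vertical axis) and pairs it with an RBF-kernel SVM, as the abstract describes. The bin count and the exact histogram definition are assumptions, not the paper's specification.

```python
# Hedged sketch of a normal-angle distribution descriptor + Gaussian-kernel SVM.
import numpy as np
from sklearn.svm import SVC

def normal_angle_histogram(normals, bins=8):
    """normals: (M, 3) unit normals from one point's neighborhood.
    Returns a normalised histogram of angles to the vertical axis."""
    angles = np.arccos(np.clip(np.abs(normals[:, 2]), 0.0, 1.0))
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi / 2))
    return hist / max(hist.sum(), 1)

# Per-point descriptors (X_train) with known labels (y_train) would be stacked
# from histograms at several resolutions, then classified:
clf = SVC(kernel="rbf", gamma="scale")
# clf.fit(X_train, y_train); predictions = clf.predict(X_test)
```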


2018 ◽  
Vol 10 (8) ◽  
pp. 1192 ◽  
Author(s):  
Chen-Chieh Feng ◽  
Zhou Guo

Automating the classification of point clouds capturing urban scenes is critical for supporting applications that demand three-dimensional (3D) models. Achieving this goal, however, is met with challenges because of the varying densities of the point clouds and the complexity of the 3D data. In order to increase the level of automation in point cloud classification, this study proposes a segment-based parameter learning method that incorporates a two-dimensional (2D) land cover map. A strategy of fusing the 2D land cover map and the 3D points is first adopted to create labelled samples, and a formalized procedure is then implemented to automatically learn the following parameters of point cloud classification: the optimal neighborhood scale for segmentation, the optimal feature set, and the training classifier. The method comprises four main steps, namely: (1) point cloud segmentation; (2) sample selection; (3) optimal feature set selection; and (4) point cloud classification. Three point cloud datasets were used in this study to validate the efficiency of the proposed method. The first two datasets cover two areas of the National University of Singapore (NUS) campus, while the third is a widely used benchmark point cloud dataset of Oakland, Pennsylvania. The classification parameters were learned from the first dataset, consisting of terrestrial laser-scanning data and a 2D land cover map, and were subsequently used to classify both of the NUS datasets. The evaluation of the classification results showed overall accuracies of 94.07% and 91.13%, respectively, indicating that the transfer of the knowledge learned from one dataset to another was satisfactory. The classification of the Oakland dataset achieved an overall accuracy of 97.08%, which further verified the transferability of the proposed approach. An experiment with point-based classification was also conducted on the first dataset, and the result was compared to that of the segment-based classification. The evaluation revealed that the overall accuracy of the segment-based classification is indeed higher than that of the point-based classification, demonstrating the advantage of segment-based approaches.
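
The label-transfer idea behind the sample-creation step can be sketched as sampling a rasterised land cover map at each point's planimetric position. The grid origin, cell size, and function names below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: derive training labels for 3D points from a 2D land cover raster.
import numpy as np

def labels_from_landcover(points_xy, landcover, origin, cell_size):
    """points_xy: (N, 2) planimetric coordinates; landcover: (rows, cols)
    class raster; origin: (x, y) of the raster's top-left corner."""
    cols = ((points_xy[:, 0] - origin[0]) / cell_size).astype(int)
    rows = ((origin[1] - points_xy[:, 1]) / cell_size).astype(int)  # y grows downward
    valid = (rows >= 0) & (rows < landcover.shape[0]) & \
            (cols >= 0) & (cols < landcover.shape[1])
    out = np.full(len(points_xy), -1)               # -1 = outside the map extent
    out[valid] = landcover[rows[valid], cols[valid]]
    return out
```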


Author(s):  
E. Barnefske ◽  
H. Sternberg

Abstract. Point clouds give a very detailed and sometimes very accurate representation of the geometry of captured objects. In surveying, point clouds captured with laser scanners or camera systems are an intermediate result that must be processed further. Often the point cloud has to be divided into regions of similar types (object classes) for the next process steps. These classifications are very time-consuming and cost-intensive compared to acquisition. In order to automate this process step, convolutional neural networks (ConvNets), which take over the classification task, are investigated in detail. In addition to the network architecture, the classification performance of a ConvNet depends on the training data with which the task is learned. This paper presents and evaluates the point cloud classification tool (PCCT) developed at HCU Hamburg. With the PCCT, large point cloud collections can be semi-automatically classified. Furthermore, the influence of erroneous points in three-dimensional point clouds is investigated. The network architecture PointNet is used for this investigation.
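
One way the influence of erroneous points can be studied, as a hedged illustration rather than the authors' procedure, is to inject a controlled fraction of outliers into a cloud before training or evaluating a PointNet-style model. The 5% rate and function name below are assumptions.

```python
# Illustrative sketch: replace a random subset of points with uniform outliers
# drawn from the cloud's bounding box, simulating erroneous measurements.
import numpy as np

def inject_outliers(points, fraction=0.05, seed=0):
    """points: (N, 3) array. Returns a copy with a fraction of outliers."""
    rng = np.random.default_rng(seed)
    n_bad = int(len(points) * fraction)
    idx = rng.choice(len(points), size=n_bad, replace=False)
    lo, hi = points.min(axis=0), points.max(axis=0)
    noisy = points.copy()
    noisy[idx] = rng.uniform(lo, hi, size=(n_bad, 3))
    return noisy
```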


Author(s):  
Wenju Wang ◽  
Tao Wang ◽  
Yu Cai

Abstract. Classifying 3D point clouds is an important and challenging task in computer vision. Currently, classification methods using multiple views lose characteristic or detail information during the representation or processing of the views. For this reason, we propose a multi-view attention-convolution pooling network framework for 3D point cloud classification tasks. This framework uses Res2Net to extract features from multiple 2D views. Our attention-convolution pooling method finds more useful information in the input data related to the current output, effectively solving the problem of feature information loss caused by feature representation and the loss of detail information during dimensionality reduction. Finally, we obtain the class probability distribution for the shape to be classified using a fully connected layer and the softmax function. The experimental results show that our framework achieves higher classification accuracy and better performance than other contemporary methods on the ModelNet40 dataset.
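
The core of attention-based view pooling can be sketched in a few lines of PyTorch: each view's feature vector receives a learned score, and the fused descriptor is the softmax-weighted sum over views. The Res2Net backbone and convolutional details are abstracted away; the dimensions below are assumptions, not the authors' configuration.

```python
# Minimal attention pooling over per-view features (a sketch, not the paper's
# attention-convolution pooling module).
import torch
import torch.nn as nn

class AttentionViewPooling(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)        # one attention score per view

    def forward(self, view_feats):                 # (batch, n_views, feat_dim)
        weights = torch.softmax(self.score(view_feats), dim=1)
        return (weights * view_feats).sum(dim=1)   # weighted fusion of the views

pooled = AttentionViewPooling()(torch.randn(4, 12, 512))   # e.g., 12 views
print(pooled.shape)                                          # torch.Size([4, 512])
```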

