Disturbance Analysis in the Classification of Objects Obtained from Urban LiDAR Point Clouds with Convolutional Neural Networks

2021
Vol 13 (11)
pp. 2135
Author(s):  
Jesús Balado ◽  
Pedro Arias ◽  
Henrique Lorenzo ◽  
Adrián Meijide-Rodríguez

Mobile Laser Scanning (MLS) systems have proven their usefulness in the rapid and accurate acquisition of the urban environment. From the generated point clouds, street furniture can be extracted and classified without manual intervention. However, this acquisition and classification process is not error-free; the errors are caused mainly by disturbances. This paper analyses the effect of three disturbances (point density variation, ambient noise, and occlusions) on the classification of urban objects in point clouds. From point clouds acquired in real case studies, synthetic disturbances are generated and added. The point density reduction is generated by voxel-wise downsampling, the ambient noise is generated as random points within the bounding box of the object, and the occlusion is generated by eliminating the points contained in a sphere. Samples with disturbances are classified by a pre-trained Convolutional Neural Network (CNN). The results showed a different behaviour for each disturbance: the effect of density reduction depended on the object shape and dimensions, that of ambient noise on the volume of the object, and that of occlusions on their size and location. Finally, the CNN was re-trained with a percentage of synthetic samples with disturbances. An improvement in performance of 10–40% was reported, except for occlusions with a radius larger than 1 m.
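A minimal sketch of how the three synthetic disturbances described above could be generated with NumPy; the voxel size, noise count, and occlusion radius are illustrative parameters, not the values used in the paper:

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    """Reduce point density by keeping one point per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(keep)]

def add_ambient_noise(points, n_noise=500, rng=None):
    """Simulate ambient noise as uniformly random points inside the object's bounding box."""
    rng = rng or np.random.default_rng()
    lo, hi = points.min(axis=0), points.max(axis=0)
    noise = rng.uniform(lo, hi, size=(n_noise, points.shape[1]))
    return np.vstack([points, noise])

def occlude_sphere(points, center, radius=0.5):
    """Simulate an occlusion by removing all points inside a sphere."""
    keep = np.linalg.norm(points - np.asarray(center), axis=1) > radius
    return points[keep]
```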

Author(s):  
X. Roynard ◽  
J.-E. Deschaud ◽  
F. Goulette

Change detection is an important issue in city monitoring, used to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task that involves sending operators into the streets to identify changes in car locations. In this paper, we propose a method that performs a fast and robust segmentation and classification of urban point clouds and that can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds using elevation images. The advantage of working on images is that processing is much faster, proven, and robust. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region growing using an octree for the segmentation, and on specific descriptors with a Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art and that it gives more robust results in the complex 3D cases.
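A minimal sketch of region-growing segmentation over a spatial index; a k-d tree stands in for the paper's octree, and the growing radius is an assumed parameter:

```python
import numpy as np
from scipy.spatial import cKDTree

def region_growing(points, radius=0.3):
    """Greedy region growing: points closer than `radius` end up in the same segment.
    A k-d tree is used here as the spatial index in place of the paper's octree."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            idx = stack.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if labels[nb] == -1:
                    labels[nb] = current
                    stack.append(nb)
        current += 1
    return labels
```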


Author(s):  
B. Sirmacek ◽  
R. Lindenbergh

Development of laser scanning technologies has taken tree monitoring studies to a new level, as laser scanning point clouds enable accurate 3D measurements in a fast and environmentally friendly manner. In this paper, we introduce an algorithm based on probability matrix computation for automatically classifying laser scanning point clouds into 'tree' and 'non-tree' classes. Our method uses the 3D coordinates of the laser scanning points as input and generates a new point cloud which holds a label for each point indicating whether it belongs to the 'tree' or 'non-tree' class. To do so, a grid surface is assigned to the lowest height level of the point cloud. The grid cells are filled with probability values which are calculated by checking the point density above each cell. Since the tree trunk locations appear with very high values in the probability matrix, selecting the local maxima of the grid surface helps detect the tree trunks. Further points are assigned to tree trunks if they appear in close proximity to the trunks. Since heavy mathematical computations (such as point cloud organization, detailed 3D shape detection methods, or graph network generation) are not required, the proposed algorithm works very fast compared to existing methods. The tree classification results are found to be reliable even on point clouds of cities containing many different objects. The most significant weakness is that false detection of light poles, traffic signs and other objects close to trees cannot be prevented. Nevertheless, the experimental results on mobile and airborne laser scanning point clouds indicate the possible usage of the algorithm as an important step for tree growth observation, tree counting and similar applications. While the laser scanning point cloud makes it possible to classify even very small trees, the accuracy of the results is reduced in low point density areas farther away from the scanning location. These advantages and disadvantages of the two laser scanning point cloud sources are discussed in detail.
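A minimal sketch of the probability-grid idea: bin the points onto a 2D ground grid, count the points above each cell, and take local maxima as trunk candidates. The cell size and count threshold are assumptions made for the sketch:

```python
import numpy as np
from scipy import ndimage

def trunk_probability_grid(points, cell=0.5, min_count=30):
    """Bin points onto a 2D ground grid; cells with many points above them are
    likely tree trunks, and local maxima of the grid give trunk candidates."""
    xy = points[:, :2]
    lo = xy.min(axis=0)
    ij = np.floor((xy - lo) / cell).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1)
    np.add.at(grid, (ij[:, 0], ij[:, 1]), 1)                 # point count per cell
    prob = grid / grid.max()                                  # normalised probability matrix
    is_peak = (grid == ndimage.maximum_filter(grid, size=3)) & (grid >= min_count)
    trunk_cells = np.argwhere(is_peak)
    trunk_xy = lo + (trunk_cells + 0.5) * cell                # cell centres in map coordinates
    return prob, trunk_xy
```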


Sensors
2018
Vol 18 (10)
pp. 3347
Author(s):  
Zhishuang Yang ◽  
Bo Tan ◽  
Huikun Pei ◽  
Wanshou Jiang

The classification of point clouds is a basic task in airborne laser scanning (ALS) point cloud processing. It is quite challenging when facing complex observed scenes and irregular point distributions. In order to reduce the computational burden of point-based classification methods and improve the classification accuracy, we present a classification method based on segmentation and a multi-scale convolutional neural network. Firstly, a three-step region-growing segmentation method is proposed to reduce both under-segmentation and over-segmentation. Then, a feature image generation method is used to transform the 3D neighborhood features of a point into a 2D image. Finally, the feature images are treated as the input of a multi-scale convolutional neural network for training and testing. In order to compare the performance with existing approaches, we evaluated our framework using the International Society for Photogrammetry and Remote Sensing Working Groups II/4 (ISPRS WG II/4) 3D labeling benchmark tests. The experimental results, with an overall accuracy of 84.9% and an average F1 score of 69.2%, show a satisfactory performance compared with the participating approaches analyzed.
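A minimal sketch of turning a point's 3D neighbourhood into a 2D feature image that a CNN could consume; the window size, image resolution, and the choice of relative height as the pixel value are illustrative assumptions, not the paper's feature design:

```python
import numpy as np

def point_feature_image(points, center, img_size=16, window=4.0):
    """Rasterise the 3D neighbourhood of one point into a 2D feature image.
    Each pixel stores the maximum relative height of the points falling in it."""
    center = np.asarray(center)
    nbhd = points[np.all(np.abs(points[:, :2] - center[:2]) < window / 2, axis=1)]
    img = np.zeros((img_size, img_size), dtype=np.float32)
    if len(nbhd) == 0:
        return img
    ij = ((nbhd[:, :2] - (center[:2] - window / 2)) / window * img_size).astype(int)
    ij = np.clip(ij, 0, img_size - 1)
    np.maximum.at(img, (ij[:, 0], ij[:, 1]), nbhd[:, 2] - center[2])
    return img
```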


Author(s):  
M. Lemmens

A knowledge-based system exploits the knowledge which a human expert uses for completing a complex task, through a database containing decision rules and an inference engine. Knowledge-based systems were already proposed for automated image classification in the early nineties. Lack of success caused the initial interest and enthusiasm to fade, the same fate that struck neural networks at that time. Today the latter enjoy a steady revival. This paper aims at demonstrating that a knowledge-based approach to the automated classification of mobile laser scanning point clouds has promising prospects. An initial experiment exploiting only two features, height and reflectance value, resulted in an overall accuracy of 79% for the Paris-rue-Madame point cloud benchmark data set.
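A toy illustration of the decision-rule idea using just two features; the classes and thresholds below are invented for the sketch and are not the rules used in the paper:

```python
def classify_point(height, reflectance):
    """Toy decision rules over height above ground and reflectance.
    Thresholds and class names are placeholders for illustration only."""
    if height < 0.3:
        return "road" if reflectance < 0.2 else "road marking"
    if height < 3.0:
        return "street furniture"
    return "building or vegetation"
```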


2020
Vol 35 (169)
pp. 81-107
Author(s):  
Rufei Liu ◽  
Peng Wang ◽  
Zhaojin Yan ◽  
Xiushan Lu ◽  
Minye Wang ◽  
...  

2018
Vol 10 (8)
pp. 1192
Author(s):  
Chen-Chieh Feng ◽  
Zhou Guo

Automating the classification of point clouds capturing urban scenes is critical for supporting applications that demand three-dimensional (3D) models. Achieving this goal, however, is met with challenges because of the varying densities of the point clouds and the complexity of the 3D data. In order to increase the level of automation in point cloud classification, this study proposes a segment-based parameter learning method that incorporates a two-dimensional (2D) land cover map. A strategy of fusing the 2D land cover map and the 3D points is first adopted to create labelled samples, and a formalized procedure is then implemented to automatically learn the following parameters of point cloud classification: the optimal neighborhood scale for segmentation, the optimal feature set, and the training classifier. The method comprises four main steps, namely: (1) point cloud segmentation; (2) sample selection; (3) optimal feature set selection; and (4) point cloud classification. Three point cloud datasets were used in this study to validate the efficiency of the proposed method. The first two datasets cover two areas of the National University of Singapore (NUS) campus while the third dataset is a widely used benchmark point cloud dataset of Oakland, Pennsylvania. The classification parameters were learned from the first dataset, consisting of terrestrial laser-scanning data and a 2D land cover map, and were subsequently used to classify both of the NUS datasets. The evaluation of the classification results showed overall accuracies of 94.07% and 91.13%, respectively, indicating that the transfer of the knowledge learned from one dataset to another was satisfactory. The classification of the Oakland dataset achieved an overall accuracy of 97.08%, which further verified the transferability of the proposed approach. An experiment with point-based classification was also conducted on the first dataset and the result was compared to that of the segment-based classification. The evaluation revealed that the overall accuracy of the segment-based classification is indeed higher than that of the point-based classification, demonstrating the advantage of segment-based approaches.
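A minimal sketch of the label-transfer step and of training a classifier on the automatically labelled segments; the raster lookup and the random forest are assumptions for the sketch, since the paper learns the classifier itself as one of the parameters:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def label_points_from_map(points, land_cover, origin, cell_size):
    """Transfer labels from a 2D land cover raster to 3D points by looking up
    the raster cell underneath each point's XY coordinate."""
    ij = np.floor((points[:, :2] - origin) / cell_size).astype(int)
    ij = np.clip(ij, 0, np.array(land_cover.shape) - 1)
    return land_cover[ij[:, 0], ij[:, 1]]

def train_segment_classifier(segment_features, segment_labels):
    """Train a classifier on the labelled segments; a random forest is used
    here only as a placeholder."""
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(segment_features, segment_labels)
    return clf
```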


Author(s):  
Z. Lari ◽  
K. Al-Durgham ◽  
A. Habib

Terrestrial laser scanning (TLS) systems have been established as a leading tool for the acquisition of high-density three-dimensional point clouds from physical objects. The point clouds collected by these systems can be utilized for a wide spectrum of object extraction, modelling, and monitoring applications. Pole-like features are among the most important objects that can be extracted from TLS data, especially data acquired in urban areas and industrial sites. However, these features cannot be completely extracted and modelled using a single TLS scan due to significant local point density variations and occlusions caused by other objects. Therefore, multiple TLS scans from different perspectives should be integrated through a registration procedure to provide complete coverage of the pole-like features in a scene. To date, different segmentation approaches have been proposed for the extraction of pole-like features from either single or multiple registered TLS scans. These approaches do not consider the internal characteristics of a TLS point cloud (local point density variations and noise level in the data) and usually suffer from computational inefficiency. To overcome these problems, two recently developed PCA-based parameter-domain and spatial-domain approaches for the segmentation of pole-like features are introduced in this paper. Moreover, the performance of the proposed segmentation approaches for the extraction of pole-like features from single or multiple registered TLS scans is investigated. The alignment of the utilized TLS scans is implemented using an Iterative Closest Projected Point (ICPP) registration procedure. Qualitative and quantitative evaluation of the pole-like features extracted from single and multiple registered TLS scans, using both of the proposed segmentation approaches, is conducted to verify that more complete pole-like features are extracted from multiple registered TLS scans.
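A minimal sketch of the PCA ingredient behind such approaches: eigen-decompose the covariance of a local neighbourhood and use the eigenvalues and principal direction to measure how linear and how vertical the local structure is, which is high for pole-like features. The linearity and verticality definitions are common choices, not necessarily those of the paper:

```python
import numpy as np

def pca_pole_features(neighborhood):
    """Return (linearity, verticality) of a local 3D neighbourhood.
    Pole-like features are highly linear with a near-vertical principal axis."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigval, eigvec = np.linalg.eigh(cov)      # eigenvalues in ascending order
    l3, l2, l1 = eigval                       # l1 >= l2 >= l3
    linearity = (l1 - l2) / l1
    verticality = abs(eigvec[:, -1][2])       # |z| component of the dominant axis
    return linearity, verticality
```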


Author(s):  
G. Tran ◽  
D. Nguyen ◽  
M. Milenkovic ◽  
N. Pfeifer

Full-waveform (FWF) LiDAR (Light Detection and Ranging) systems have the advantage, compared to conventional airborne discrete-return laser scanner systems, of recording the entire backscattered signal of each emitted laser pulse. FWF systems can provide point clouds which contain extra attributes such as amplitude and echo width. In this study, an FWF dataset collected in 2010 over Eisenstadt, a city in the eastern part of Austria, was used to classify four main classes: buildings, trees, water bodies and ground, by employing a decision tree. Point density, echo ratio, echo width, normalised digital surface model and point cloud roughness are the main inputs for classification. The accuracy of the final results was assessed with correctness and completeness measures by comparing the classified output to a knowledge-based labelling of the points. Completeness and correctness between 90% and 97% were reached, depending on the class. While such results and methods have been presented before, we additionally investigate the transferability of the classification method (features, thresholds …) to another urban FWF LiDAR point cloud. Our conclusion is that, of the features used, only echo width requires new thresholds. A data-driven adaptation of thresholds is suggested.
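A minimal sketch of a decision cascade over such full-waveform attributes; the thresholds and branching order are placeholders for illustration, not the paper's values (which, as noted above, must be re-derived for echo width on a new dataset):

```python
def classify_fwf_point(ndsm, echo_ratio, echo_width, roughness,
                       height_thr=2.0, er_thr=0.8, ew_thr=2.0, rough_thr=0.1):
    """Illustrative decision rules over normalised height (nDSM), echo ratio,
    echo width and roughness; all thresholds are placeholder assumptions."""
    if ndsm < 0.3 and roughness < rough_thr:
        return "ground"
    if ndsm >= height_thr and echo_ratio < er_thr:
        return "tree"        # penetrable canopy lowers the echo ratio
    if ndsm >= height_thr and echo_width < ew_thr:
        return "building"    # hard planar surfaces give narrow echoes
    return "water/other"
```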

