Performance of parameter-domain and spatial-domain pole-like feature segmentation using single and multiple terrestrial laser scans

Author(s):  
Z. Lari ◽  
K. Al-Durgham ◽  
A. Habib

Terrestrial laser scanning (TLS) systems have been established as a leading tool for the acquisition of high-density three-dimensional point clouds from physical objects. The point clouds collected by these systems can be utilized for a wide spectrum of object extraction, modelling, and monitoring applications. Pole-like features are among the most important objects that can be extracted from TLS data, especially data acquired in urban areas and industrial sites. However, these features cannot be completely extracted and modelled using a single TLS scan due to significant local point density variations and occlusions caused by other objects. Therefore, multiple TLS scans from different perspectives should be integrated through a registration procedure to provide complete coverage of the pole-like features in a scene. To date, different segmentation approaches have been proposed for the extraction of pole-like features from either single or multiple-registered TLS scans. These approaches do not consider the internal characteristics of a TLS point cloud (local point density variations and noise level in the data) and usually suffer from computational inefficiency. To overcome these problems, this paper introduces two recently developed PCA-based segmentation approaches for pole-like features, one operating in the parameter domain and one in the spatial domain. Moreover, the performance of the proposed segmentation approaches for the extraction of pole-like features from single and multiple-registered TLS scans is investigated. The alignment of the utilized TLS scans is implemented using an Iterative Closest Projected Point (ICPP) registration procedure. Qualitative and quantitative evaluation of the pole-like features extracted from single and multiple-registered TLS scans, using both of the proposed segmentation approaches, is conducted to verify that more complete pole-like features are extracted from multiple-registered TLS scans.
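The abstract gives no implementation details; as a rough illustration of the general idea behind PCA-based detection of pole-like structures, the following is a minimal sketch assuming a NumPy/SciPy environment. The function name, neighbourhood radius, and thresholds (pole_candidate_mask, radius, linearity_min, vertical_min) are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def pole_candidate_mask(points, radius=0.25, linearity_min=0.8, vertical_min=0.9):
    """Flag points whose local neighbourhood is linear and near-vertical (pole-like).

    points: (N, 3) array of TLS coordinates. Returns a boolean candidate mask.
    """
    tree = cKDTree(points)
    mask = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 10:                      # too sparse for a stable PCA
            continue
        nb = points[idx] - points[idx].mean(axis=0)
        # eigen-decomposition of the local covariance matrix (local PCA)
        eigval, eigvec = np.linalg.eigh(nb.T @ nb / len(idx))
        lam_min, lam_mid, lam_max = eigval     # ascending order
        linearity = (lam_max - lam_mid) / lam_max if lam_max > 0 else 0.0
        direction = eigvec[:, 2]               # principal axis of the neighbourhood
        mask[i] = linearity > linearity_min and abs(direction[2]) > vertical_min
    return mask
```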

2021 ◽  
Vol 13 (22) ◽  
pp. 4497
Author(s):  
Jianjun Zou ◽  
Zhenxin Zhang ◽  
Dong Chen ◽  
Qinghua Li ◽  
Lan Sun ◽  
...  

Point cloud registration is the foundation and key step for many vital applications, such as digital cities, autonomous driving, passive positioning, and navigation. The differences between spatial objects and the structural complexity of object surfaces are the main challenges for the registration problem. In this paper, we propose a graph attention capsule model (named GACM) for the efficient registration of terrestrial laser scanning (TLS) point clouds in urban scenes, which fuses graph attention convolution and a three-dimensional (3D) capsule network to extract local point cloud features and obtain 3D feature descriptors. These descriptors can take into account the differences in spatial structure and point density within objects and make the spatial features of ground objects more prominent. During training, we used both matched points and non-matched points to train the model. During the registration test, the points in the neighborhood of each keypoint are fed to the trained network to obtain feature descriptors, and the rotation and translation matrix is then computed by matching the descriptors with a K-dimensional (KD) tree and applying the random sample consensus (RANSAC) algorithm. Experiments show that the proposed method achieves more efficient registration results and higher robustness than other state-of-the-art registration methods in the pairwise registration of point clouds.
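The GACM network itself is not reproduced here; the sketch below only illustrates the correspondence-and-RANSAC stage described in the abstract, assuming descriptors have already been computed for the keypoints of both scans. Function names, the inlier threshold, and the iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_from_correspondences(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_register(src_kp, dst_kp, src_desc, dst_desc,
                    iters=2000, inlier_thresh=0.1, rng=np.random.default_rng(0)):
    """Match descriptors with a KD tree, then estimate R, t with RANSAC."""
    # nearest-neighbour descriptor matching (source keypoint -> target keypoint)
    _, nn = cKDTree(dst_desc).query(src_desc, k=1)
    src_m, dst_m = src_kp, dst_kp[nn]
    best_R, best_t, best_inliers = np.eye(3), np.zeros(3), 0
    for _ in range(iters):
        sample = rng.choice(len(src_m), size=3, replace=False)
        R, t = rigid_from_correspondences(src_m[sample], dst_m[sample])
        residuals = np.linalg.norm((src_m @ R.T + t) - dst_m, axis=1)
        inliers = residuals < inlier_thresh
        if inliers.sum() > best_inliers:
            best_inliers = inliers.sum()
            # refit on all inliers of the best hypothesis
            best_R, best_t = rigid_from_correspondences(src_m[inliers], dst_m[inliers])
    return best_R, best_t
```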


Author(s):  
Philipp-Roman Hirt ◽  
Yusheng Xu ◽  
Ludwig Hoegner ◽  
Uwe Stilla

Abstract. Trees play an important role in the complex system of urban environments. Their benefits to the environment and to health are manifold. Yet, especially near streets, traffic can be impaired by limited clearance, and breaking tree parts can even cause injuries. Hence, it is important to capture the trees in the frame of a tree cadastre and to ensure regular monitoring. Mobile laser scanning (MLS) can be used for data acquisition, followed by an automated analysis of the point clouds acquired over time. The presented approach uses occupancy grids with a grid size of 10 cm, which enable the comparison of several epochs in three-dimensional space. Prior to that, a segmentation of single tree objects is conducted: after cylinder-based trunk localisation, closely neighboured tree crowns are separated using weights derived from local point densities. Thus, changes can be derived for every single tree with regard to its parameters and its point cloud. The test area is set along an urban street in Munich, Germany, using the publicly available benchmark data sets TUM-MLS-2016/2018. In the frame of the evaluation, tree objects are geo-referenced and mapped in 2D, and the tree parameters height and diameter at breast height are derived. The geometric evaluation of the change analysis facilitates not only the acquisition of stock changes, but also the detection of shape changes of the tree objects.
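As a rough illustration of the occupancy-grid comparison described above, the following is a minimal sketch with a 10 cm voxel size, assuming NumPy. It only flags appeared and disappeared voxels between two epochs and omits the tree segmentation and the weighting scheme of the actual approach.

```python
import numpy as np

def occupied_voxels(points, voxel=0.1):
    """Return the set of occupied voxel indices for a point cloud (voxel size in metres)."""
    idx = np.floor(points / voxel).astype(np.int64)
    return set(map(tuple, idx))

def voxel_changes(epoch_a, epoch_b, voxel=0.1):
    """Voxels occupied only in epoch A (disappeared) or only in epoch B (appeared)."""
    occ_a, occ_b = occupied_voxels(epoch_a, voxel), occupied_voxels(epoch_b, voxel)
    return occ_a - occ_b, occ_b - occ_a
```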


Author(s):  
G. Gabara ◽  
P. Sawicki

Abstract. The term “3D building models” is used in relation to CityGML models and building information modelling. The reconstruction and modelling of 3D building objects in urban areas is becoming a common trend and finds a wide spectrum of practical applications. The paper presents the quality assessment of two multifaceted 3D building models, which were obtained from two open-access databases: the Polish national Geoportal (accuracy in the LOD 2 standard) and the Trimble SketchUp Warehouse (accuracy in the LOD 2 standard with information about architectural details of façades). The Geoportal 3D models were primarily created from airborne laser scanning data (density 12 pts/sq. m, elevation accuracy up to 0.10 m) collected during the Informatic System for Country Protection against Extraordinary Hazards project. The testing was performed using different low-altitude photogrammetric validation datasets: a RIEGL LMS-Q680i airborne laser scanning point cloud (min. density 25 pts/sq. m, height accuracy 0.03 m) and an image-based Phase One iXU-RS 1000 point cloud (average horizontal and vertical accuracies of 0.015 m and 0.030 m, respectively). Visual comparison, heat maps of signed distances, and histograms over predefined ranges were used to evaluate the quality and accuracy of the 3D building models. The sources of error that occurred during the modelling process are also discussed.
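The signed-distance heat maps require the reconstructed building surfaces and their normals; as a simplified stand-in, the sketch below computes unsigned nearest-neighbour distances from validation points to model vertices and bins them into predefined ranges, assuming NumPy/SciPy. The bin edges and function name are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def distance_histogram(check_points, model_points,
                       bins=(0.0, 0.05, 0.10, 0.20, 0.50, 1.0)):
    """Nearest-neighbour distances (metres) from validation points to model vertices,
    binned into predefined ranges for a simple accuracy histogram."""
    d, _ = cKDTree(model_points).query(check_points, k=1)
    hist, edges = np.histogram(d, bins=bins)
    return d, hist, edges
```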


Author(s):  
R. Huang ◽  
Z. Ye ◽  
D. Hong ◽  
Y. Xu ◽  
U. Stilla

Abstract. In this paper, we propose a framework for obtaining semantic labels of LiDAR point clouds and refining the classification results by combining a deep neural network with a graph-structured smoothing technique. In general, the goal of semantic scene analysis is to assign a semantic label to each point in the point cloud. Although various related studies have been reported, the semantic labeling of point clouds in urban areas is still a challenging task due to the complexity of these areas. In this paper, we address the issues of how to effectively extract features from each point and its local surroundings and how to refine the initial soft labels by considering contextual information in the spatial domain. Specifically, we improve the effectiveness of point cloud classification in two aspects. Firstly, instead of utilizing handcrafted features as input for classification and refinement, the local context of a point is embedded into a high-dimensional feature space and classified via a deep neural network (PointNet++), and soft labels are simultaneously obtained as initial results for the subsequent refinement. Secondly, the initial label probabilities are improved by taking the spatial context into consideration through a constructed graph structure, and the final labels are optimized by a graph cuts algorithm. To evaluate the performance of our proposed framework, experiments are conducted on a mobile laser scanning (MLS) point cloud dataset. We demonstrate that our approach achieves higher accuracy than several commonly used state-of-the-art baselines. The overall accuracy of our proposed method on the TUM dataset reaches 85.38% for labeling eight semantic classes.
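A full graph-cuts refinement is not shown here; the following simplified stand-in smooths the network's soft labels over a k-nearest-neighbour graph before taking the argmax, which conveys the spirit of spatial-context refinement but is not the authors' optimization. All names and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def smooth_soft_labels(points, probs, k=10, weight=0.5, iterations=3):
    """Iteratively blend each point's class probabilities with its neighbours' mean.

    points: (N, 3) coordinates; probs: (N, C) per-class probabilities from the network.
    Returns refined hard labels of shape (N,).
    """
    _, nn = cKDTree(points).query(points, k=k + 1)   # first neighbour is the point itself
    nn = nn[:, 1:]
    p = probs.copy()
    for _ in range(iterations):
        p = (1 - weight) * p + weight * p[nn].mean(axis=1)
    return p.argmax(axis=1)
```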


2019 ◽  
Vol 11 (13) ◽  
pp. 1540 ◽  
Author(s):  
Wang ◽  
Chen ◽  
Zhu ◽  
Liu ◽  
Li ◽  
...  

Urban planning and management need accurate three-dimensional (3D) data such as light detection and ranging (LiDAR) point clouds. Mobile laser scanning (MLS) data, with up to millimeter-level accuracy and point densities of a few thousand points/m², have gained increasing attention in urban applications, and substantial research has been conducted in the past decade. This paper conducts a comprehensive survey of urban applications and key techniques based on MLS point clouds. We first introduce the key characteristics of MLS systems and the corresponding point clouds, and present the challenges and opportunities of using the data. Next, we summarize the current applications of MLS over urban areas, including transportation infrastructure mapping, building information modeling, utility surveying and mapping, vegetation inventory, and autonomous vehicle driving. Then, we review common key issues in processing and analyzing MLS point clouds, including classification methods, object recognition, data registration, data fusion, and 3D city modeling. Finally, we discuss the future prospects for MLS technology and urban applications.


2021 ◽  
Vol 13 (11) ◽  
pp. 2135
Author(s):  
Jesús Balado ◽  
Pedro Arias ◽  
Henrique Lorenzo ◽  
Adrián Meijide-Rodríguez

Mobile Laser Scanning (MLS) systems have proven their usefulness for the rapid and accurate acquisition of the urban environment. From the generated point clouds, street furniture can be extracted and classified without manual intervention. However, this process of acquisition and classification is not error-free, with errors caused mainly by disturbances. This paper analyses the effect of three disturbances (point density variation, ambient noise, and occlusions) on the classification of urban objects in point clouds. Synthetic disturbances are generated and added to point clouds acquired in real case studies. The point density reduction is generated by voxel-wise downsampling, the ambient noise is generated as random points within the bounding box of the object, and the occlusion is generated by eliminating points contained in a sphere. Samples with disturbances are classified by a pre-trained Convolutional Neural Network (CNN). The results showed a different behaviour for each disturbance: the effect of density reduction depended on the object shape and dimensions, that of ambient noise on the volume of the object, and that of occlusions on their size and location. Finally, the CNN was re-trained with a percentage of synthetic samples with disturbances, and an improvement in performance of 10–40% was obtained, except for occlusions with a radius larger than 1 m.
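A minimal sketch of how the three synthetic disturbances described above might be generated with NumPy is given below; the voxel size, noise count, and occlusion radius are illustrative values, not those used in the study.

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Keep one point per occupied voxel (point density reduction)."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

def add_ambient_noise(points, n_noise=200, rng=np.random.default_rng(0)):
    """Add random points uniformly distributed inside the object's bounding box."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    noise = rng.uniform(lo, hi, size=(n_noise, 3))
    return np.vstack([points, noise])

def add_occlusion(points, centre, radius=0.5):
    """Remove all points inside a sphere (simulated occlusion)."""
    keep = np.linalg.norm(points - centre, axis=1) > radius
    return points[keep]
```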


Author(s):  
J. Gehrung ◽  
M. Hebel ◽  
M. Arens ◽  
U. Stilla

Abstract. Change detection is an important tool for processing multiple epochs of mobile LiDAR data in an efficient manner, since it allows an otherwise time-consuming operation to be handled by focusing on regions of interest. State-of-the-art approaches usually either do not handle the case of incomplete observations or are computationally expensive. We present a novel method based on a combination of point clouds and voxels that is able to handle this case while being computationally less expensive than comparable approaches. Furthermore, our method is able to identify special classes of change, such as partially moved, fully moved, and deformed objects, in addition to the appeared and disappeared objects recognized by conventional approaches. The performance of our method is evaluated using the publicly available TUM City Campus datasets, showing an overall accuracy of 88%.
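The following sketch illustrates one way to account for incomplete observations in voxel-based change detection: disappearances are only reported where the later epoch actually observed the space. The observed-space samples are assumed to be given (e.g., obtained by ray casting); this is a simplification rather than a reproduction of the authors' combined point-and-voxel method, and all names and sizes are illustrative.

```python
import numpy as np

def voxel_keys(points, voxel=0.1):
    """Set of occupied voxel indices for a point set."""
    return set(map(tuple, np.floor(points / voxel).astype(np.int64)))

def changes_with_visibility(epoch_a, epoch_b, observed_b, voxel=0.1):
    """Appeared/disappeared voxels, reporting disappearances only in space epoch B observed.

    epoch_a, epoch_b: (N, 3) point clouds of the two epochs.
    observed_b: (M, 3) sample points of the space covered by epoch B's rays (assumed given).
    """
    occ_a, occ_b = voxel_keys(epoch_a, voxel), voxel_keys(epoch_b, voxel)
    seen_b = voxel_keys(observed_b, voxel)
    appeared = occ_b - occ_a
    disappeared = (occ_a - occ_b) & seen_b      # unobserved voxels remain undecided
    return appeared, disappeared
```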


Author(s):  
Bisheng Yang ◽  
Yuan Liu ◽  
Fuxun Liang ◽  
Zhen Dong

High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents caused by human error and provide a more comfortable driving experience. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution for rapidly capturing three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, and vehicles) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for extracting road features for HADMs.
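The full extraction pipeline covers many feature classes; as a rough illustration of only the first step, the sketch below selects road-surface candidate points with a naive lowest-point grid filter, assuming NumPy. The cell size and height tolerance are illustrative assumptions, and a real road-extraction method would add slope, intensity, and boundary analysis on top of this.

```python
import numpy as np

def road_surface_candidates(points, cell=0.5, height_tol=0.15):
    """Keep points within height_tol of the lowest point in each horizontal grid cell."""
    keys = np.floor(points[:, :2] / cell).astype(np.int64)
    order = np.lexsort((keys[:, 1], keys[:, 0]))
    keys_sorted, pts_sorted = keys[order], points[order]
    # boundaries between grid cells in the sorted arrays
    new_cell = np.r_[True, np.any(np.diff(keys_sorted, axis=0) != 0, axis=1)]
    cell_id = np.cumsum(new_cell) - 1
    z_min = np.full(cell_id.max() + 1, np.inf)
    np.minimum.at(z_min, cell_id, pts_sorted[:, 2])   # lowest elevation per cell
    keep = pts_sorted[:, 2] - z_min[cell_id] < height_tol
    return pts_sorted[keep]
```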


Author(s):  
Y. Hori ◽  
T. Ogawa

The implementation of laser scanning in the field of archaeology provides us with an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together in what are referred to as "point clouds". In addition, the visualization of the point cloud data, which can be used in the final report by archaeologists and architects, is usually produced as a JPG or TIFF file. Beyond visualization, the re-examination of older data and new surveys of the construction of Roman buildings, applying remote-sensing technology for precise and detailed measurements, yield new information that may lead to revising drawings of ancient buildings which had previously been adduced as evidence without any consideration of their degree of accuracy, and can ultimately open new lines of research on ancient buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy, and flexibility of data manipulation. Therefore, we "skipped" many post-processing steps and focused on the images created from the metadata, simply aligned using a tool that extends an automatic feature-matching algorithm and a popular renderer that can provide graphic results.

