Using 2-Lines Congruent Sets for Coarse Registration of Terrestrial Point Clouds in Urban Scenes

Author(s): Ershuai Xu, Zhihua Xu, Keming Yang
2019, Vol. 151, pp. 106–123
Author(s): Yusheng Xu, Richard Boerner, Wei Yao, Ludwig Hoegner, Uwe Stilla

Author(s): A. Moussa, N. Elsheimy

Registration of point clouds is a necessary step to obtain a complete overview of scanned objects of interest. The majority of current registration approaches target the general case, where the full registration-parameter search space is assumed and searched. In urban object scanning, it is very common to have leveled point clouds with small roll and pitch angles and also small height differences. For such scenarios, the registration search problem can be handled faster to obtain a coarse registration of two point clouds. In this paper, a fully automatic approach is proposed for the registration of approximately leveled point clouds. The proposed approach estimates a coarse registration based on three registration parameters and then conducts a fine registration step using the iterative closest point (ICP) approach. The approach has been tested on three data sets of different areas, and the achieved registration results validate the significance of the proposed approach.
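The three-parameter coarse search under the leveled assumption (planar translation plus yaw) can be pictured with a small sketch. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: the function names, the brute-force grid search and the mean nearest-neighbour score are all assumptions made for clarity. A fine ICP step would follow the coarse estimate.

```python
import numpy as np
from scipy.spatial import cKDTree

def apply_planar_transform(points, tx, ty, yaw):
    """Rotate a leveled cloud about the z-axis by `yaw` and shift it in x/y."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ R.T + np.array([tx, ty, 0.0])

def coarse_register_leveled(source, target, tx_range, ty_range, yaw_range):
    """Brute-force search over (tx, ty, yaw); roll, pitch and z offset are assumed ~0.

    Hypothetical sketch only: a practical implementation would prune the search
    (e.g. keypoints or a coarse-to-fine grid) before refining the result with ICP.
    """
    tree = cKDTree(target)
    best, best_score = None, np.inf
    for yaw in yaw_range:
        for tx in tx_range:
            for ty in ty_range:
                moved = apply_planar_transform(source, tx, ty, yaw)
                score = tree.query(moved, k=1)[0].mean()  # mean NN distance as fit score
                if score < best_score:
                    best, best_score = (tx, ty, yaw), score
    return best, best_score
```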


Author(s): X.-F. Xing, M. A. Mostafavi, G. Edwards, N. Sabo

Abstract. Automatic semantic segmentation of point clouds observed in complex 3D urban scenes is a challenging issue. Semantic segmentation of urban scenes based on machine learning algorithms requires appropriate features to distinguish objects in mobile terrestrial and airborne LiDAR point clouds at the point level. In this paper, we propose a pointwise semantic segmentation method based on features derived from the Difference of Normals and on "directional height above" features, which compare the height difference between a given point and its neighbors in eight directions, in addition to features based on normal estimation. A random forest classifier is chosen to classify points in mobile terrestrial and airborne LiDAR point clouds. The results obtained from our experiments show that the proposed features are effective for semantic segmentation of mobile terrestrial and airborne LiDAR point clouds, especially for the vegetation, building and ground classes in airborne LiDAR point clouds of urban areas.
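One plausible reading of the "directional height above" feature is sketched below: neighbours within a search radius are binned into eight azimuthal sectors, and the maximum height difference per sector is recorded per point before feeding the feature matrix to a random forest. The radius, the aggregation by maximum and the classifier settings are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def directional_height_features(points, radius=1.0, n_sectors=8):
    """Per point: max height difference to neighbours in eight azimuth sectors."""
    tree = cKDTree(points[:, :2])                     # neighbours found in the XY plane
    feats = np.zeros((len(points), n_sectors))
    for i, p in enumerate(points):
        idx = [j for j in tree.query_ball_point(p[:2], radius) if j != i]
        if not idx:
            continue
        d = points[idx] - p
        sector = ((np.arctan2(d[:, 1], d[:, 0]) + np.pi)  # azimuth mapped to sector id
                  / (2 * np.pi) * n_sectors).astype(int) % n_sectors
        for s in range(n_sectors):
            dz = d[sector == s, 2]
            if dz.size:
                feats[i, s] = dz.max()                 # highest neighbour relative to the point
    return feats

# Usage sketch: stack with normal-based features and train a random forest.
# X = np.hstack([directional_height_features(pts), normal_features])
# clf = RandomForestClassifier(n_estimators=100).fit(X[train_idx], labels[train_idx])
```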


Author(s): G. G. Pessoa, R. C. Santos, A. C. Carrilho, M. Galo, A. Amorim

Abstract. Images and LiDAR point clouds are the two major data sources used by the photogrammetry and remote sensing community. Although different, the synergy between these two data sources has motivated exploration of their combination in various applications, especially for the classification and extraction of information in urban environments. Despite the efforts of the scientific community, integrating LiDAR data and images remains a challenging task. For this reason, the development of Unmanned Aerial Vehicles (UAVs), along with the integration and synchronization of positioning receivers, inertial systems and off-the-shelf imaging sensors, has enabled the exploitation of high-density photogrammetric point clouds (PPCs) as an alternative, obviating the need to integrate LiDAR and optical images. This study therefore aims to compare the results of PPC classification in urban scenes considering radiometric-only, geometric-only and combined radiometric and geometric data applied to the Random Forest algorithm. The following classes were considered: buildings, asphalt, trees, grass, bare soil, sidewalks and power lines, which encompass the most common objects in urban scenes. The classification procedure was performed considering radiometric features (Green band, Red band, NIR band, NDVI and Saturation) and geometric features (Height – nDSM, Linearity, Planarity, Scatter, Anisotropy, Omnivariance and Eigenentropy). The quantitative analyses were performed by means of the classification error matrix using the following metrics: overall accuracy, recall and precision. They show overall accuracies of 0.80, 0.74 and 0.98 for the radiometric-only, geometric-only and combined classifications, respectively.
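The geometric eigen-features listed above are commonly derived from the eigenvalues of each point's local 3D covariance matrix. The sketch below uses the standard definitions of linearity, planarity, scatter, anisotropy, omnivariance and eigenentropy; it is not the authors' code, and the neighbourhood size k is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points, k=20):
    """Standard covariance eigen-features per point from a k-nearest-neighbour patch."""
    tree = cKDTree(points)
    _, nn = tree.query(points, k=k)
    feats = np.zeros((len(points), 6))
    for i, idx in enumerate(nn):
        cov = np.cov(points[idx], rowvar=False)
        lam = np.clip(np.sort(np.linalg.eigvalsh(cov))[::-1], 1e-12, None)  # l1 >= l2 >= l3
        l1, l2, l3 = lam
        e = lam / lam.sum()
        feats[i] = [
            (l1 - l2) / l1,             # linearity
            (l2 - l3) / l1,             # planarity
            l3 / l1,                    # scatter (sphericity)
            (l1 - l3) / l1,             # anisotropy
            np.prod(lam) ** (1.0 / 3),  # omnivariance
            -np.sum(e * np.log(e)),     # eigenentropy
        ]
    return feats

# These geometric features can be stacked with radiometric ones (bands, NDVI,
# saturation, nDSM) before training a RandomForestClassifier.
```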


Author(s): Han Hu, Chongtai Chen, Bo Wu, Xiaoxia Yang, Qing Zhu, ...

Textureless areas and geometric discontinuities are major problems for state-of-the-art dense image matching methods, as they can cause visually significant noise and the loss of sharp features. The binary census transform is one of the best matching cost measures, but in textureless areas, where the intensity values are similar, it suffers from small random noise. Global optimization for disparity computation is inherently sensitive to parameter tuning in complex urban scenes and must compromise between smoothness and discontinuities. The aim of this study is to provide a method that overcomes these issues in dense image matching by extending the industry-proven Semi-Global Matching through 1) a ternary census transform, which produces three outcomes in a single order comparison and encodes the result in two bits rather than one, and 2) texture information used to self-tune the parameters, which together preserve sharp edges and enforce smoothness where necessary. Experimental results using various datasets from different platforms show that the visual quality of the triangulated point clouds in urban areas can be largely improved by the proposed methods.
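The difference between a binary and a ternary census transform can be illustrated as follows: each neighbour is compared with the centre pixel, and instead of a single greater/smaller bit, a tolerance band around the centre intensity yields three outcomes (darker, similar, brighter) that occupy two bits each. The window size and tolerance below are illustrative assumptions, not values from the paper; a matching cost would then be the Hamming distance between the two-bit code strings of corresponding pixels.

```python
import numpy as np

def ternary_census(img, window=5, eps=2.0):
    """Ternary census transform sketch: each neighbour is encoded with two bits
    (0b00 = darker by more than eps, 0b01 = similar within +/-eps, 0b10 = brighter).
    A binary census would instead use one bit (darker / not darker) per neighbour."""
    h, w = img.shape
    r = window // 2
    codes = np.zeros((h, w), dtype=np.uint64)
    padded = np.pad(img.astype(np.float64), r, mode="edge")
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            nbr = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            diff = nbr - img
            sym = np.where(diff > eps, 2, np.where(diff < -eps, 0, 1))
            codes = (codes << np.uint64(2)) | sym.astype(np.uint64)
    return codes
```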


2019, Vol. 11 (2), pp. 186
Author(s): Mingxue Zheng, Huayi Wu, Yong Li

Efficiently classifying objects in point clouds of urban scenes is fundamental for 3D city maps. However, obtaining massive training samples for point clouds and sustaining the resulting training burden remains a major challenge. To overcome this, a knowledge-based approach is proposed. The knowledge-based approach explores discriminating features of objects based on people's understanding of the surrounding environment, which replaces the role of training samples. To implement the approach, a two-step segmentation procedure is carried out in this paper. In particular, Fourier fitting is applied in a second, adaptive segmentation to separate points of multiple objects lying within a single group from the first segmentation. Height difference and three geometric eigen-features are then extracted. In contrast to common classification methods, which need massive training samples, only basic knowledge of objects in urban scenes is needed to build an end-to-end match between objects and the extracted features in the proposed approach. In addition, the proposed approach has high computational efficiency because no heavy training process is required. Qualitative and quantitative experimental results show that the proposed approach has promising performance for object classification in various urban scenes.
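The knowledge-based match between extracted features and object classes can be pictured as a small set of hand-written rules applied per segment. The rule set, class names and all thresholds below are purely hypothetical placeholders to show the structure; they are not values or rules from the paper.

```python
def classify_segment(height_diff, linearity, planarity, scatter):
    """Toy rule-based classifier: map a segment's height difference and three
    eigen-features to an urban class using expert knowledge instead of training.
    All thresholds are hypothetical illustrations, not the paper's rules."""
    if planarity > 0.7 and height_diff < 0.2:
        return "ground"
    if linearity > 0.7 and height_diff > 2.0:
        return "pole-like"
    if planarity > 0.6 and height_diff > 2.0:
        return "building facade"
    if scatter > 0.3 and height_diff > 1.0:
        return "vegetation"
    return "unclassified"
```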


Author(s): F. Bracci, M. Drauschke, S. Kühne, Z.-C. Márton

Different platforms and sensors are used to derive 3D models of urban scenes. 3D reconstructions from satellite and aerial images are used to derive sparse models that mainly show the ground and roof surfaces of entire cities. In contrast to such sparse models, 3D reconstructions from UAV or ground images are much denser and show building facades and street furniture such as traffic signs and garbage bins. Furthermore, point clouds may also be acquired with LiDAR sensors. The resulting point clouds differ not only in their viewpoints but also in their scales and point densities. Consequently, the fusion of such heterogeneous point clouds is highly challenging. In urban scenes, another challenge is the occurrence of only a few parallel planes, which makes it difficult to find the correct rotation parameters. We discuss the limitations of the general fusion methodology based on an initial alignment step followed by a local coregistration using ICP, and present strategies to overcome them.
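The local coregistration step of that general methodology, refining an initial alignment with ICP, can be sketched with a minimal point-to-point ICP loop. This is a generic illustration of ICP, not the authors' pipeline; the fixed iteration count and the absence of outlier rejection or convergence checks are simplifying assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, init=np.eye(4), iterations=30):
    """Minimal point-to-point ICP: alternate nearest-neighbour matching with a
    closed-form (SVD / Kabsch) rigid-transform estimate. Assumes a reasonable
    initial alignment, as in the coarse-to-fine fusion methodology discussed."""
    tree = cKDTree(target)
    T = init.copy()
    src = source @ T[:3, :3].T + T[:3, 3]
    for _ in range(iterations):
        _, idx = tree.query(src, k=1)
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                    # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                                # accumulate the incremental update
    return T
```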

