Registration of Multi-Sensor Bathymetric Point Clouds in Rural Areas Using Point-to-Grid Distances

2019 · Vol 8 (4) · pp. 178
Author(s):  
Richard Boerner ◽  
Yusheng Xu ◽  
Ramona Baran ◽  
Frank Steinbacher ◽  
Ludwig Hoegner ◽  
...  

This article proposes a method for the registration of two point clouds with different point densities and noise levels recorded by airborne sensors in rural areas. In particular, multi-sensor point clouds with different point densities are considered. The proposed method is marker-less and uses segmented ground areas for registration; it therefore offers the possibility to fuse point clouds of different sensors in rural areas with the accuracy of a fine registration, whereas such registration is commonly solved with extensive use of control points. The source point cloud is used to calculate a DEM of the ground, which in turn is used to calculate point-to-raster distances for all points of the target point cloud. Furthermore, each cell of the raster DEM is assigned a height variance, further addressed as reconstruction accuracy, calculated during gridding. Outlier removal based on a dynamic distance threshold provides robustness against noise and small geometry variations. The transformation parameters are calculated with an iterative least-squares optimization of the distances, weighted with respect to the reconstruction accuracies of the grid. The evaluation considers two flight campaigns over the Mangfall area in Bavaria, Germany, taken with different airborne LiDAR sensors with different point densities. The accuracy of the proposed approach is evaluated on the whole flight strip of approximately eight square kilometers as well as on selected scenes in closer detail. For all scenes, the method obtained an accuracy of the rotation parameters below one tenth of a degree and an accuracy of the translation parameters below the point spacing and the chosen cell size of the raster. Furthermore, the possibility of registering airborne LiDAR and photogrammetric point clouds from UAV imagery is shown with a similar result. The evaluation also shows the robustness of the approach in scenes where a classical iterative closest point (ICP) algorithm fails.
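To illustrate the core idea, the following minimal sketch (with assumed function names and a simple cell lookup, not the authors' implementation) rasterizes the segmented ground points of the source cloud into a DEM with a per-cell height variance, and then computes the point-to-grid distances and accuracy weights for a target cloud:

```python
import numpy as np

def rasterize_ground(ground_xyz, cell=2.0):
    """Grid ground points into a DEM of mean heights plus a per-cell
    height variance (the 'reconstruction accuracy' used as a weight)."""
    xy0 = ground_xyz[:, :2].min(axis=0)
    idx = np.floor((ground_xyz[:, :2] - xy0) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    dem_sum, dem_sq, cnt = np.zeros(shape), np.zeros(shape), np.zeros(shape)
    np.add.at(dem_sum, (idx[:, 0], idx[:, 1]), ground_xyz[:, 2])
    np.add.at(dem_sq, (idx[:, 0], idx[:, 1]), ground_xyz[:, 2] ** 2)
    np.add.at(cnt, (idx[:, 0], idx[:, 1]), 1.0)
    with np.errstate(invalid="ignore", divide="ignore"):
        dem = dem_sum / cnt
        var = dem_sq / cnt - dem ** 2          # per-cell height variance
    return dem, var, xy0, cell

def point_to_grid_distances(target_xyz, dem, var, xy0, cell):
    """Signed vertical distance of each target point to the DEM cell it
    falls in, plus a weight from the cell's reconstruction accuracy."""
    idx = np.floor((target_xyz[:, :2] - xy0) / cell).astype(int)
    inside = np.all((idx >= 0) & (idx < dem.shape), axis=1)
    i, j = idx[:, 0], idx[:, 1]
    ok = inside.copy()
    ok[inside] = np.isfinite(dem[i[inside], j[inside]])   # drop empty DEM cells
    d = target_xyz[ok, 2] - dem[i[ok], j[ok]]             # point-to-grid distances
    w = 1.0 / (var[i[ok], j[ok]] + 1e-6)                  # accuracy weights
    return d, w, ok
```

In the scheme described above, these weighted distances would then feed the iterative least-squares estimation of the rotation and translation parameters.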

Author(s):  
R. Boerner ◽  
Y. Xu ◽  
L. Hoegner ◽  
U. Stilla

Abstract. This paper presents a method to register point clouds derived from images of UAV-mounted airborne cameras with airborne laser scanner data. The focus is a general technique that relies neither on linear or planar structures nor on the point cloud density. The proposed approach is therefore also suitable for rural areas and water bodies captured with different sensor configurations. It is based on a regular 2.5D grid generated from the segmented ground points of the 3D point cloud. It is assumed that initial values for the registration are already available, e.g. from exterior orientation parameters measured with the UAV-mounted GNSS and IMU. These initial parameters are fine-tuned by iteratively minimizing the distances between the 3D points of a target point cloud and the grid generated from the source point cloud. To eliminate outliers (e.g., vegetation points), a distance threshold is set dynamically at each iteration step, which filters out non-ground points during the registration. The achieved registration accuracy is up to 0.4 m in translation and up to 0.3 degrees in rotation, using a DEM raster size of 2 m. Considering the ground sampling distance of the airborne data, which is up to 0.4 m between scan lines, this result is comparable to that of an ICP algorithm, but the proposed approach does not rely on point densities and can therefore solve registrations where ICP has difficulties.
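A hedged sketch of the dynamically set distance threshold described above (the exact rule is not given in the abstract, so a robust median/MAD cut-off is assumed here purely for illustration):

```python
import numpy as np

def dynamic_outlier_filter(distances, k=3.0):
    """Keep only points whose point-to-grid distance lies within a robust
    band around the median; recomputing the band at each iteration lets
    vegetation points with large residuals drop out as the registration
    converges."""
    d = np.asarray(distances)
    med = np.median(d)
    mad = np.median(np.abs(d - med)) + 1e-9     # robust spread estimate
    keep = np.abs(d - med) <= k * 1.4826 * mad  # roughly a k-sigma band
    return keep

# hypothetical usage inside the iteration loop:
# mask = dynamic_outlier_filter(point_to_grid_dists)
# then estimate the transformation update from the masked, weighted distances
```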


Author(s):  
R. A. Persad ◽  
C. Armenakis

The automatic alignment of 3D point clouds acquired or generated from different sensors is a challenging problem. The objective of the alignment is to estimate the seven parameters of a 3D similarity transformation: a global scale factor, three rotations and three translations. To do so, corresponding anchor features are required in both data sets. There are two main types of alignment: i) coarse alignment and ii) refined alignment. Coarse alignment issues include the lack of any prior knowledge of the respective coordinate systems of a source and target point cloud pair and the difficulty of extracting and matching corresponding control features (e.g., points, lines or planes) co-located on both point clouds to be aligned. With the increasing use of UAVs, there is a need to automatically co-register their point cloud-based digital surface models with those from other data acquisition systems, such as terrestrial or airborne lidar point clouds. This work presents a comparative study of two independent feature matching techniques for addressing 3D conformal point cloud alignment of UAV and lidar data in different 3D coordinate systems without any prior knowledge of the seven transformation parameters.
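For reference, the seven-parameter (3D conformal) similarity transformation being estimated can be written and applied as in the following small sketch (function and parameter names are illustrative):

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation from three Euler angles (radians), R = Rz(kappa) Ry(phi) Rx(omega)."""
    cx, sx = np.cos(omega), np.sin(omega)
    cy, sy = np.cos(phi), np.sin(phi)
    cz, sz = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def similarity_transform(points, scale, omega, phi, kappa, t):
    """Apply X' = s * R * X + t to an (N, 3) array of points."""
    R = rotation_matrix(omega, phi, kappa)
    return scale * points @ R.T + np.asarray(t)
```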


Author(s):  
S. Goebbels ◽  
R. Pohle-Fröhlich ◽  
P. Pricken

Abstract. The Iterative Closest Point algorithm (ICP) is a standard tool for registering a source to a target point cloud. In this paper, ICP in point-to-plane mode is adapted to city models defined in CityGML. With this new point-to-model version of the algorithm, a coarsely registered photogrammetric point cloud can be matched with building polygons to provide, e.g., a basis for automated 3D facade modeling. In each iteration step, source points are projected onto these polygons to find correspondences. Then an optimization problem is solved to find an affine transformation that maps the source points to their correspondences as closely as possible. Whereas standard ICP variants do not perform scaling, our algorithm is capable of isotropic scaling. This is necessary because photogrammetric point clouds obtained with the structure-from-motion algorithm typically have an arbitrary scale. Two test scenarios indicate that the presented algorithm is faster than ICP in point-to-plane mode on sampled city models.
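The projection step at the heart of the point-to-model variant can be illustrated as follows (a minimal sketch assuming each CityGML polygon is summarized by a point on its plane and a unit normal; clipping to the polygon boundary is omitted):

```python
import numpy as np

def project_to_plane(p, plane_point, plane_normal):
    """Orthogonal projection of point p onto the plane and the signed
    point-to-plane distance used as the ICP residual."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(p - plane_point, n)      # signed distance to the plane
    return p - d * n, d                 # (correspondence, residual)

# hypothetical usage: for each source point, pick the nearest building
# polygon, project onto its plane, then solve for scale, rotation and
# translation minimizing the sum of squared residuals d.
```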


2019 · Vol 12 (1) · pp. 4
Author(s):  
Tiangang Yin ◽  
Jianbo Qi ◽  
Bruce D. Cook ◽  
Douglas C. Morton ◽  
Shanshan Wei ◽  
...  

Airborne lidar point clouds of vegetation capture the 3-D distribution of its scattering elements, including leaves, branches, and ground features. Assessing the contribution of vegetation to the lidar point clouds requires an understanding of the physical interactions between the emitted laser pulses and their targets. Most current methods to estimate the gap probability (Pgap) or leaf area index (LAI) from small-footprint airborne laser scanning (ALS) point clouds rely on either point-number-based (PNB) or intensity-based (IB) approaches, with additional empirical correlations against field measurements. However, site-specific parameterizations can limit the application of such methods to other landscapes. Evaluating the universality of these methods requires a physically based radiative transfer model that accounts for various lidar instrument specifications and environmental conditions. We conducted an extensive study comparing these approaches for various 3-D forest scenes using a point-cloud simulator developed for the latest version of the discrete anisotropic radiative transfer (DART) model. We investigated a range of candidate definitions of lidar point intensity, including radiometric quantities derived from Gaussian decomposition (GD), such as the peak amplitude, standard deviation, integral of the Gaussian profile, and reflectance. The results showed that the PNB methods fail to capture the exact Pgap as footprint size increases. By contrast, we verified that physical methods using lidar point intensity defined by either the distance-weighted integral of Gaussian profiles or reflectance can estimate Pgap and LAI with higher accuracy and reliability, and that certain additional empirical correlation coefficients can be removed. Routine use of small-footprint point-cloud radiometric measures to estimate Pgap and LAI potentially marks a departure from previous empirical studies, but this depends on additional parameters from lidar instrument vendors.
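In their simplest textbook forms, the two families of estimators contrasted above reduce to ratios of ground returns to total returns; the sketch below shows a common point-number-based and intensity-based formulation (illustrative only, without the distance weighting or calibration terms discussed in the paper):

```python
import numpy as np

def pgap_point_number_based(is_ground):
    """PNB estimate: fraction of returns classified as ground."""
    is_ground = np.asarray(is_ground, dtype=bool)
    return is_ground.sum() / is_ground.size

def pgap_intensity_based(intensity, is_ground, rho_ratio=1.0):
    """IB estimate: ground energy over total energy, with rho_ratio the
    assumed ground-to-vegetation reflectance ratio correction."""
    intensity = np.asarray(intensity, dtype=float)
    is_ground = np.asarray(is_ground, dtype=bool)
    i_ground = intensity[is_ground].sum()
    i_veg = intensity[~is_ground].sum()
    return i_ground / (i_ground + i_veg * rho_ratio)
```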


2014 · Vol 513-517 · pp. 3680-3683
Author(s):  
Xiao Xu Leng ◽  
Jun Xiao ◽  
Deng Yu Li

As the first step in 3D point cloud processing, registration plays a critical role in determining the quality of subsequent results. In this paper, an initial registration algorithm for point clouds based on random sampling is proposed. In the proposed algorithm, a set of base points is first extracted randomly from the target point cloud; next, an optimal set of corresponding points is obtained from the source point cloud; then a transformation matrix is estimated from the two sets with the least-squares method; finally, the matrix is applied to the source point cloud. Experimental results show that the algorithm achieves good precision as well as good robustness.
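The least-squares estimation of a rigid transformation from two corresponding point sets is commonly solved in closed form via the SVD; a minimal sketch of that standard step (not necessarily the authors' exact formulation) follows:

```python
import numpy as np

def rigid_transform_lsq(source, target):
    """Closed-form least-squares rotation R and translation t such that
    R @ source_i + t best matches target_i (Kabsch/Umeyama without scale)."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t
```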


Drones · 2021 · Vol 5 (4) · pp. 104
Author(s):  
Zaide Duran ◽  
Kubra Ozcan ◽  
Muhammed Enes Atik

With the development of photogrammetry technologies, point clouds have found a wide range of uses in academic and commercial areas. This situation has made it essential to extract information from point clouds. In particular, artificial intelligence applications have been used to extract information from point clouds of complex structures. Point cloud classification is one of the leading areas where these applications are used. In this study, point clouds obtained by aerial photogrammetry and Light Detection and Ranging (LiDAR) technology over the same region are classified using machine learning. For this purpose, nine popular machine learning methods were used. Geometric features derived from the point clouds were used to build the feature spaces for classification; for the photogrammetric point cloud, color information was added as well. For the LiDAR point cloud, the highest overall accuracy, 0.96, was obtained with the Multilayer Perceptron (MLP) method, and the lowest, 0.50, with the AdaBoost method. For the photogrammetric point cloud, the highest overall accuracy was again achieved with the MLP method (0.90), and the lowest with the GNB method (0.25).
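A hedged sketch of the kind of pipeline described (eigenvalue-based geometric features from local neighborhoods feeding one of the listed classifiers; the feature set and parameters here are assumptions, not the study's exact configuration):

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.neural_network import MLPClassifier

def eigen_features(points, k=20):
    """Per-point covariance eigenvalue features (linearity, planarity,
    sphericity) computed from the k nearest neighbours."""
    tree = cKDTree(points)
    _, nn = tree.query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, idx in enumerate(nn):
        nbrs = points[idx] - points[idx].mean(axis=0)
        w = np.sort(np.linalg.eigvalsh(nbrs.T @ nbrs / k))[::-1]  # l1 >= l2 >= l3
        l1, l2, l3 = np.maximum(w, 1e-12)
        feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
    return feats

# hypothetical usage with labelled training points:
# clf = MLPClassifier(max_iter=500)
# clf.fit(eigen_features(train_xyz), train_labels)
# pred = clf.predict(eigen_features(test_xyz))
```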


Author(s):  
T. Shinohara ◽  
H. Xiu ◽  
M. Matsuoka

Abstract. This study introduces a novel image-to-3D-point-cloud translation method based on a conditional generative adversarial network that creates a large-scale 3D point cloud. From aerial images, it can generate point clouds of the kind observed by airborne LiDAR, with the LiDAR data serving as supervision. The network is composed of an encoder that produces latent features of the input images, a generator that translates the latent features into fake point clouds, and a discriminator that classifies point clouds as real or fake. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds of an outdoor scene, we use a FoldingNet driven by the ResNet features. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn and generate point clouds using the data from the 2018 IEEE GRSS Data Fusion Contest.
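A shapes-only sketch of the three components described above (an image encoder, a FoldingNet-style generator that deforms a 2D grid conditioned on the latent code, and a point-cloud discriminator); layer sizes and the grid resolution are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Image -> latent code (stand-in for the pre-trained ResNet)."""
    def __init__(self, latent=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, latent))
    def forward(self, img):                      # (B, 3, H, W) -> (B, latent)
        return self.net(img)

class FoldingGenerator(nn.Module):
    """Latent code -> point cloud by folding a regular 2D grid."""
    def __init__(self, latent=512, n_points=2025):
        super().__init__()
        g = int(n_points ** 0.5)
        u = torch.linspace(-1, 1, g)
        grid = torch.stack(torch.meshgrid(u, u, indexing="ij"), -1).reshape(-1, 2)
        self.register_buffer("grid", grid)
        self.fold = nn.Sequential(nn.Linear(latent + 2, 256), nn.ReLU(),
                                  nn.Linear(256, 3))
    def forward(self, z):                        # (B, latent) -> (B, N, 3)
        grid = self.grid.unsqueeze(0).expand(z.size(0), -1, -1)
        z = z.unsqueeze(1).expand(-1, grid.size(1), -1)
        return self.fold(torch.cat([z, grid], dim=-1))

class Discriminator(nn.Module):
    """Point cloud -> real/fake logit via a shared per-point MLP and max pooling."""
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))
        self.head = nn.Linear(128, 1)
    def forward(self, pts):                      # (B, N, 3) -> (B, 1)
        return self.head(self.point_mlp(pts).max(dim=1).values)
```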


Author(s):  
Shenman Zhang ◽  
Jie Shan ◽  
Zhichao Zhang ◽  
Jixing Yan ◽  
Yaolin Hou

A complete building model reconstruction needs data collected from both air and ground. The former often has sparse coverage of building façades, while the latter is usually unable to observe the building rooftops. To address the missing-data issues of building reconstruction from a single data source, we describe an approach for complete building reconstruction that integrates airborne LiDAR data and ground smartphone imagery. First, by taking advantage of the GPS and digital compass information embedded in the image metadata of smartphones, we are able to find the airborne LiDAR point clouds of the buildings that appear in the images. In the next step, Structure-from-Motion and dense multi-view stereo algorithms are applied to generate a building point cloud from multiple ground images. The third step extracts building outlines from the LiDAR point cloud and the ground-image point cloud, respectively. An automated correspondence between these two sets of building outlines allows us to achieve a precise registration and combination of the two point clouds, which ultimately results in a complete, full-resolution building model. The developed approach overcomes the problem of sparse points on building façades in airborne LiDAR and the lack of rooftops in ground images, so that the merits of both datasets are utilized.
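The first step, using the smartphone's GPS position and compass bearing to pull out the LiDAR points of the photographed building, can be illustrated with a simple viewing-sector query; coordinates are assumed to be already projected to a local metric system, and the field of view and range below are placeholder values:

```python
import numpy as np

def lidar_points_in_view(lidar_xy, cam_xy, bearing_deg, fov_deg=60.0, max_range=150.0):
    """Select LiDAR points lying inside the horizontal viewing sector of a
    smartphone photo, given the camera position (from GPS) and the compass
    bearing of the optical axis (clockwise from north, in degrees)."""
    v = lidar_xy - np.asarray(cam_xy)                    # vectors camera -> points
    dist = np.linalg.norm(v, axis=1)
    azimuth = np.degrees(np.arctan2(v[:, 0], v[:, 1]))   # east over north = bearing
    diff = (azimuth - bearing_deg + 180.0) % 360.0 - 180.0
    return (dist <= max_range) & (np.abs(diff) <= fov_deg / 2.0)
```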


Author(s):  
Z. Hui ◽  
P. Cheng ◽  
L. Wang ◽  
Y. Xia ◽  
H. Hu ◽  
...  

Abstract. Denoising is a key pre-processing step for many airborne LiDAR point cloud applications. However, previous algorithms have a number of problems that affect the quality of point cloud post-processing, such as DTM generation. In this paper, a novel automated denoising algorithm based on empirical mode decomposition is proposed to remove outliers from airborne LiDAR point clouds. Compared with traditional point cloud denoising algorithms, the proposed method detects outliers from a signal-processing perspective. Firstly, airborne LiDAR point clouds are decomposed into a series of intrinsic mode functions with the help of morphological operations, which significantly decreases the computational complexity. By applying the Otsu algorithm to these intrinsic mode functions, noise-dominant components can be detected and filtered. Finally, outliers are detected automatically by comparing observed elevations with reconstructed elevations. Three datasets located in three different cities in China were used to verify the validity and robustness of the proposed method. The experimental results demonstrate that the proposed method removes both high and low outliers effectively across various terrain features while preserving useful ground details.
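As a rough illustration of the final detection step (comparing observed elevations with elevations reconstructed from the retained components), the sketch below thresholds the absolute residual with Otsu's method; the decomposition itself is omitted and the names are placeholders:

```python
import numpy as np
from skimage.filters import threshold_otsu

def detect_outliers(observed_z, reconstructed_z):
    """Flag points whose elevation departs from the reconstruction by more
    than an Otsu-selected threshold on the absolute residuals."""
    residual = np.abs(np.asarray(observed_z) - np.asarray(reconstructed_z))
    t = threshold_otsu(residual)        # data-driven split: inliers vs. outliers
    return residual > t
```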


2021 · Vol 13 (20) · pp. 4031
Author(s):  
Ine Rosier ◽  
Jan Diels ◽  
Ben Somers ◽  
Jos Van Orshoven

Rural European landscapes are characterized by a variety of vegetated landscape elements. Although it is often not their main function, these elements have the potential to affect river discharge and the frequency, extent, depth and duration of floods downstream by creating both hydrological discontinuities and connections across the landscape. Information about the extent to which individual landscape elements and their spatial location affect peak river discharge and flood frequency and severity in agricultural catchments under specific meteorological conditions is limited. This knowledge gap can partly be explained by the lack of exhaustive inventories of the presence, geometry, and hydrological traits of vegetated landscape elements (vLEs), which in turn is due to the lack of appropriate techniques and source data to produce such inventories and keep them up to date. In this paper, a multi-step methodology is proposed to delineate and classify vLEs based on LiDAR point cloud data in three study areas in Flanders, Belgium. We classified the LiDAR point cloud data into the classes 'vegetated landscape element point' and 'other' using a Random Forest model with a classification accuracy ranging between 0.92 and 0.97. The landscape element objects were further classified into the classes 'tree object' and 'shrub object' using a Logistic Regression model with an area-based accuracy ranging between 0.34 and 0.95.
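A hedged sketch of the two-stage classification described (a point-level Random Forest followed by an object-level Logistic Regression; feature names and parameters are placeholders, not the study's configuration):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Stage 1: label individual LiDAR points as vLE vs. other.
point_clf = RandomForestClassifier(n_estimators=200, random_state=0)
# point_clf.fit(point_features, point_labels)       # (N, F) descriptors, 0/1 labels
# is_vle = point_clf.predict(new_point_features)

# Stage 2: classify grouped vLE objects as tree vs. shrub from
# object-level descriptors (e.g., height statistics, crown area).
object_clf = LogisticRegression(max_iter=1000)
# object_clf.fit(object_features, object_labels)
# kind = object_clf.predict(new_object_features)
```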

