Positional Accuracy Assessment of Lidar Point Cloud from NAIP/3DEP Pilot Project

2020 ◽  
Vol 12 (12) ◽  
pp. 1974 ◽  
Author(s):  
Minsu Kim ◽  
Seonkyung Park ◽  
Jeffrey Irwin ◽  
Collin McCormick ◽  
Jeffrey Danielson ◽  
...  

The Leica Geosystems CountryMapper hybrid system has the potential to collect data that satisfy the U.S. Geological Survey (USGS) National Geospatial Program (NGP) and 3D Elevation Program (3DEP) and the U.S. Department of Agriculture (USDA) National Agriculture Imagery Program (NAIP) requirements in a single collection. This research will help 3DEP determine whether this sensor has the potential to meet current and future 3DEP topographic lidar collection requirements. We performed an accuracy analysis and assessment of the lidar point cloud produced by CountryMapper. The boresighting calibration and the co-registration by georeferencing correction based on ground control points are assumed to have been performed by the data provider. The scope of the accuracy assessment is to apply a variety of measures to the delivered point cloud to obtain error statistics. Intraswath uncertainty over a flat surface was computed to evaluate point cloud precision. The intraswath difference between opposite scan directions and the interswath overlap difference were evaluated to detect boresighting or other systematic errors. Absolute vertical accuracy over vegetated and non-vegetated areas was also assessed. Both horizontal and vertical absolute errors were assessed using a 3D absolute error analysis methodology that compares conjugate points derived from geometric features. A three-plane feature defines a single unique intersection point. Intersection points were computed from ground-based and airborne lidar point clouds for comparison, and the difference between two conjugate intersection points forms one error vector. The geometric feature-based error analysis was applied to the intraswath, interswath, and absolute error analyses. Based on the error analysis results, the CountryMapper pilot data appear to satisfy the accuracy requirements of the USGS lidar specification. The focus of this research was to demonstrate both conventional accuracy measures and novel 3D accuracy techniques, using two different error computation methods, on the CountryMapper airborne lidar point cloud.
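For readers who want the mechanics of the feature-based comparison, the following is a minimal sketch (assuming NumPy; plane parameters are illustrative inputs) of how a three-plane feature yields one unique intersection point and how two conjugate intersection points yield a 3D error vector.

```python
# Minimal sketch of the three-plane intersection idea: three fitted planes
# n_i . x = d_i define one unique point, and the difference between conjugate
# intersection points from two point clouds gives a 3-D error vector.
import numpy as np

def three_plane_intersection(normals, offsets):
    """Solve N x = d for the point where three planes n_i . x = d_i meet."""
    N = np.asarray(normals, dtype=float)   # shape (3, 3), one plane normal per row
    d = np.asarray(offsets, dtype=float)   # shape (3,)
    return np.linalg.solve(N, d)

# Conjugate intersection points from airborne and ground-based (reference) lidar
# (plane parameters below are purely illustrative)
p_airborne = three_plane_intersection([[0, 0, 1], [1, 0, 0.1], [0, 1, 0.1]],
                                       [100.0, 25.0, 40.0])
p_reference = three_plane_intersection([[0, 0, 1], [1, 0, 0.1], [0, 1, 0.1]],
                                        [100.05, 25.02, 39.98])
error_vector = p_airborne - p_reference   # 3-D positional error for this feature
print(error_vector, np.linalg.norm(error_vector))
```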

2019 ◽  
Vol 11 (23) ◽  
pp. 2737 ◽  
Author(s):  
Minsu Kim ◽  
Seonkyung Park ◽  
Jeffrey Danielson ◽  
Jeffrey Irwin ◽  
Gregory Stensaas ◽  
...  

The traditional practice for assessing accuracy in lidar data involves calculating RMSEz (root mean square error of the vertical component). Accuracy assessment of lidar point clouds in full 3D (three dimensions) is not routinely performed. The main challenge in assessing accuracy in full 3D is how to identify a conjugate point of a ground-surveyed checkpoint in the lidar point cloud with the smallest possible uncertainty. The relatively coarse point spacing of airborne lidar data makes it challenging to determine a conjugate point accurately, so a substantial unwanted error is added to the inherent positional uncertainty of the lidar data. Unless this additional error is kept small enough, the 3D accuracy assessment result will not properly represent the inherent uncertainty. We call this added error "external uncertainty," and it is associated with conjugate point identification. This research developed a general external uncertainty model using three-plane intersections that accounts for several factors (sensor precision, feature dimension, and point density). The method can be applied to lidar point cloud data from a wide range of sensor qualities, point densities, and sizes of the features of interest. The external uncertainty model was derived as a semi-analytical function that takes the number of points on a plane as input. It is a normalized general function that can be scaled by the smooth surface precision (SSP) of a lidar system. This general uncertainty model provides a quantitative guideline on the conditions required for a conjugate point based on geometric features. Applications of the external uncertainty model were demonstrated using various lidar point cloud data from the U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) library to determine the valid conditions for a conjugate point derived from three-plane modeling.
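As a rough illustration of the quantities involved, the sketch below (NumPy) fits a least-squares plane to the points lying on one face and pairs it with a placeholder uncertainty term that is normalized by the number of points and scaled by the smooth surface precision (SSP); the 1/sqrt(n) form is an illustrative stand-in, not the paper's derived semi-analytical function.

```python
# Hedged sketch of two building blocks of the three-plane conjugate point idea:
# (1) a least-squares plane fit to the n points on a face, and (2) a placeholder
# external uncertainty normalized by n and scaled by the sensor's SSP.
import numpy as np

def fit_plane(points):
    """Return (normal, offset d) of the least-squares plane n . x = d."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                      # direction of smallest variance
    return normal, normal @ centroid

def external_uncertainty(n_points, ssp):
    """Illustrative normalized uncertainty, scaled by smooth surface precision."""
    return ssp / np.sqrt(max(n_points, 1))
```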


2019 ◽  
Vol 8 (4) ◽  
pp. 178 ◽  
Author(s):  
Richard Boerner ◽  
Yusheng Xu ◽  
Ramona Baran ◽  
Frank Steinbacher ◽  
Ludwig Hoegner ◽  
...  

This article proposes a method for registering two point clouds with different point densities and noise levels recorded by airborne sensors in rural areas. In particular, multi-sensor point clouds with different point densities are considered. The proposed method is marker-less and uses segmented ground areas for registration. It therefore offers the possibility of fusing point clouds from different sensors in rural areas at fine-registration accuracy, whereas such registration is usually solved with extensive use of control points. The source point cloud is used to calculate a DEM of the ground, which is then used to calculate point-to-raster distances for all points of the target point cloud. Furthermore, each cell of the raster DEM is assigned a height variance, further addressed as reconstruction accuracy, calculated during gridding. Outlier removal based on a dynamic distance threshold provides robustness against noise and small geometry variations. The transformation parameters are calculated with an iterative least-squares optimization of the distances, weighted by the reconstruction accuracies of the grid. The evaluation considers two flight campaigns over the Mangfall area in Bavaria, Germany, acquired with different airborne LiDAR sensors at different point densities. The accuracy of the proposed approach is evaluated on a whole flight strip of approximately eight square kilometers as well as on selected scenes in a closer look. For all scenes, it achieved an accuracy of the rotation parameters below one-tenth of a degree and an accuracy of the translation parameters below the point spacing and the chosen cell size of the raster. Furthermore, the possibility of registering airborne LiDAR and photogrammetric point clouds from UAV-acquired images is shown with similar results. The evaluation also shows the robustness of the approach in scenes where a classical iterative closest point (ICP) algorithm fails.
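The core residual computation can be sketched as follows (NumPy); the grid origin convention, nearest-cell lookup, and the MAD-based dynamic threshold are illustrative choices, not necessarily those of the authors.

```python
# Hedged sketch of point-to-raster residuals: sample the source DEM at each
# target point's XY location, weight the vertical distance by the cell's
# reconstruction accuracy (height variance), and drop outliers dynamically.
import numpy as np

def point_to_raster_residuals(dem, variance, origin, cell, points):
    """Vertical distances from target points to the source DEM, with weights.
    Assumes row index increases with Y from the given lower-left origin."""
    cols = ((points[:, 0] - origin[0]) / cell).astype(int)
    rows = ((points[:, 1] - origin[1]) / cell).astype(int)
    inside = (rows >= 0) & (rows < dem.shape[0]) & (cols >= 0) & (cols < dem.shape[1])
    dz = points[inside, 2] - dem[rows[inside], cols[inside]]
    weights = 1.0 / (variance[rows[inside], cols[inside]] + 1e-9)
    return dz, weights

def remove_outliers(dz, weights, k=3.0):
    """Dynamic threshold: keep residuals within k robust standard deviations."""
    med = np.median(dz)
    mad = np.median(np.abs(dz - med))
    keep = np.abs(dz - med) < k * 1.4826 * mad
    return dz[keep], weights[keep]
```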


2019 ◽  
Vol 12 (1) ◽  
pp. 4
Author(s):  
Tiangang Yin ◽  
Jianbo Qi ◽  
Bruce D. Cook ◽  
Douglas C. Morton ◽  
Shanshan Wei ◽  
...  

Airborne lidar point clouds of vegetation capture the 3-D distribution of its scattering elements, including leaves, branches, and ground features. Assessing the contribution of vegetation to the lidar point cloud requires an understanding of the physical interactions between the emitted laser pulses and their targets. Most current methods for estimating the gap probability (Pgap) or leaf area index (LAI) from small-footprint airborne laser scanning (ALS) point clouds rely on either point-number-based (PNB) or intensity-based (IB) approaches, with additional empirical correlations to field measurements. However, site-specific parameterizations can limit the application of such methods to other landscapes. Evaluating the universality of these methods requires a physically based radiative transfer model that accounts for various lidar instrument specifications and environmental conditions. We conducted an extensive study to compare these approaches for various 3-D forest scenes using a point-cloud simulator developed for the latest version of the discrete anisotropic radiative transfer (DART) model. We investigated a range of variables for possible lidar point intensity, including radiometric quantities derived from Gaussian decomposition (GD), such as the peak amplitude, standard deviation, integral of the Gaussian profile, and reflectance. The results show that the PNB methods fail to capture the exact Pgap as footprint size increases. By contrast, we verified that physical methods using a lidar point intensity defined by either the distance-weighted integral of the Gaussian profile or the reflectance can estimate Pgap and LAI with higher accuracy and reliability, and that certain additional empirical correlation coefficients can be removed. Routine use of small-footprint point-cloud radiometric measures to estimate Pgap and LAI potentially marks a departure from previous empirical studies, but it depends on additional parameters from lidar instrument vendors.
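The two estimator families can be summarized with a hedged sketch like the one below (NumPy), where the labels, the intensity field, and the Beer-Lambert extinction coefficient used to invert Pgap into LAI are illustrative, not the study's parameterization.

```python
# Hedged sketch of the two estimator families for a set of ALS returns labelled
# ground/vegetation: the point-number-based (PNB) ratio counts returns, while
# the intensity-based (IB) ratio sums a radiometric quantity such as the
# integral of the Gaussian profile or the reflectance.
import numpy as np

def pgap_pnb(is_ground):
    """Point-number-based gap probability: fraction of returns reaching ground."""
    is_ground = np.asarray(is_ground, dtype=bool)
    return is_ground.sum() / is_ground.size

def pgap_ib(intensity, is_ground):
    """Intensity-based gap probability: ground share of total returned energy."""
    intensity = np.asarray(intensity, dtype=float)
    is_ground = np.asarray(is_ground, dtype=bool)
    return intensity[is_ground].sum() / intensity.sum()

def lai_from_pgap(pgap, extinction_k=0.5):
    """Beer-Lambert inversion of gap probability into effective LAI
    (extinction coefficient is an illustrative value)."""
    return -np.log(pgap) / extinction_k
```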


Author(s):  
B. Sirmacek ◽  
R. Lindenbergh

Low-cost sensor-generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method that uses multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud to a TLS point cloud of the same object to discuss the accuracy, advantages, and limitations of iPhone-generated point clouds. For the chosen showcase example, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean point-to-point distance to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ₁ = 0.44 m, σ₁ = 0.071 m) and (μ₂ = 0.025 m, σ₂ = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate the possible use of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancement, and quick and real-time change detection. However, further insight is needed first into the circumstances required to guarantee successful point cloud generation from smartphone images.
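A comparison of this kind can be reproduced with a short sketch such as the following (NumPy/SciPy); the 0.5 m outlier threshold is an illustrative choice, not the one used in the article.

```python
# Hedged sketch of cloud-to-cloud comparison: nearest-neighbour distances from
# the iPhone point cloud to the TLS reference, an outlier percentage from a
# distance threshold, and the mean distance of the remaining points.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_stats(query_pts, reference_pts, outlier_threshold=0.5):
    tree = cKDTree(reference_pts)
    dist, _ = tree.query(query_pts)                 # point-to-point distances
    outlier_ratio = np.mean(dist > outlier_threshold)
    mean_dist = dist[dist <= outlier_threshold].mean()
    return mean_dist, 100.0 * outlier_ratio         # metres, percent
```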


Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 104
Author(s):  
Zaide Duran ◽  
Kubra Ozcan ◽  
Muhammed Enes Atik

With the development of photogrammetric technologies, point clouds have found a wide range of use in academic and commercial areas, which has made it essential to extract information from them. In particular, artificial intelligence applications have been used to extract information from point clouds of complex structures, and point cloud classification is one of the leading areas in which these applications are used. In this study, point clouds obtained by aerial photogrammetry and by Light Detection and Ranging (LiDAR) technology over the same region are classified using machine learning. For this purpose, nine popular machine learning methods were used. Geometric features obtained from the point clouds were used to build the feature spaces for classification; for the photogrammetric point cloud, color information was added as well. For the LiDAR point cloud, the highest overall accuracy, 0.96, was obtained with the Multilayer Perceptron (MLP) method, and the lowest, 0.50, with the AdaBoost method. For the photogrammetric point cloud, the highest overall accuracy was again achieved with the MLP method (0.90), and the lowest with the GNB method (0.25).
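A hedged sketch of such a comparison is given below (scikit-learn), using eigenvalue-based geometric features and three of the nine classifiers mentioned above; the neighbourhood size, feature set, and classifier settings are illustrative, not the study's configuration.

```python
# Hedged sketch: per-point geometric features from local covariance eigenvalues,
# then several classifiers compared by overall accuracy.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def geometric_features(xyz, k=20):
    """Linearity, planarity, sphericity from local covariance eigenvalues."""
    tree = cKDTree(xyz)
    _, idx = tree.query(xyz, k=k)
    feats = []
    for nb in idx:
        cov = np.cov(xyz[nb].T)
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1] + 1e-12
        feats.append([(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1])
    return np.asarray(feats)

def compare_classifiers(xyz, labels):
    X = geometric_features(xyz)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    models = {"MLP": MLPClassifier(max_iter=500),
              "AdaBoost": AdaBoostClassifier(),
              "GNB": GaussianNB()}
    return {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
            for name, m in models.items()}
```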


Author(s):  
F. Alidoost ◽  
H. Arefi

Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications, such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate a high-density point cloud as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are then processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.
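The geometric assessment step can be illustrated with a short sketch (NumPy and rasterio, assumed available); the file paths are placeholders, and both DSMs are assumed to share the same grid and coordinate reference system.

```python
# Hedged sketch: difference two co-registered DSM rasters and report bias and RMSE.
import numpy as np
import rasterio

def dsm_difference_stats(dsm_path_a, dsm_path_b, nodata=-9999.0):
    with rasterio.open(dsm_path_a) as a, rasterio.open(dsm_path_b) as b:
        za, zb = a.read(1), b.read(1)
    valid = (za != nodata) & (zb != nodata)
    dz = za[valid] - zb[valid]
    return dz.mean(), np.sqrt(np.mean(dz ** 2))   # vertical bias, RMSE
```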


Author(s):  
T. Shinohara ◽  
H. Xiu ◽  
M. Matsuoka

Abstract. This study introduces a novel image-to-3D-point-cloud translation method with a conditional generative adversarial network that creates a large-scale 3D point cloud: it can generate supervised point clouds, as observed by airborne LiDAR, from aerial images. The network is composed of an encoder to produce latent features of the input images, a generator to translate the latent features into fake point clouds, and a discriminator to classify point clouds as fake or real. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds of an outdoor scene, we use a FoldingNet with features from the ResNet. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn and generate certain point clouds using the data from the 2018 IEEE GRSS Data Fusion Contest.
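A skeletal, hypothetical rendering of this encoder/generator/discriminator split is sketched below in PyTorch; the layer sizes, grid resolution, and pooling choices are illustrative, not the authors' configuration, and the adversarial training loop is omitted.

```python
# Hedged sketch of the three components described above, assuming PyTorch and
# torchvision; everything dimensional here is a placeholder.
import torch
import torch.nn as nn
import torchvision

class ImageEncoder(nn.Module):
    """Pre-trained ResNet mapping an aerial image to a latent feature vector."""
    def __init__(self, latent_dim=512):
        super().__init__()
        backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        self.fc = nn.Linear(512, latent_dim)

    def forward(self, img):
        return self.fc(self.features(img).flatten(1))

class FoldingGenerator(nn.Module):
    """FoldingNet-style decoder: folds a fixed 2-D grid into a 3-D point cloud
    conditioned on the image latent code."""
    def __init__(self, latent_dim=512, grid_size=45):
        super().__init__()
        u = torch.linspace(-1, 1, grid_size)
        self.grid = torch.stack(torch.meshgrid(u, u, indexing="ij"), -1).reshape(-1, 2)
        self.fold = nn.Sequential(
            nn.Linear(latent_dim + 2, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 3),  # xyz per grid point
        )

    def forward(self, z):
        n = self.grid.shape[0]
        grid = self.grid.to(z.device).unsqueeze(0).expand(z.shape[0], n, 2)
        codes = z.unsqueeze(1).expand(-1, n, -1)
        return self.fold(torch.cat([codes, grid], dim=-1))  # (B, n, 3)

class PointDiscriminator(nn.Module):
    """Classifies a point cloud as real (airborne LiDAR) or generated."""
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Linear(128, 1)

    def forward(self, pts):
        feat = self.point_mlp(pts).max(dim=1).values  # permutation-invariant pooling
        return self.head(feat)
```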


Author(s):  
Shenman Zhang ◽  
Jie Shan ◽  
Zhichao Zhang ◽  
Jixing Yan ◽  
Yaolin Hou

A complete building model reconstruction needs data collected from both the air and the ground. The former often has sparse coverage on building façades, while the latter usually cannot observe the building rooftops. To address the missing-data issues of building reconstruction from a single data source, we describe an approach for complete building reconstruction that integrates airborne LiDAR data and ground smartphone imagery. First, by taking advantage of the GPS and digital compass information embedded in the image metadata of smartphones, we are able to find the airborne LiDAR point clouds of the buildings that correspond to the images. Next, Structure-from-Motion and dense multi-view stereo algorithms are applied to generate a building point cloud from multiple ground images. The third step extracts building outlines from the LiDAR point cloud and the ground-image point cloud, respectively. An automated correspondence between these two sets of building outlines allows us to achieve a precise registration and combination of the two point clouds, which ultimately results in a complete, full-resolution building model. The developed approach overcomes the problem of sparse points on building façades in airborne LiDAR and the lack of rooftops in ground images, such that the merits of both datasets are exploited.
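Once outline correspondences are established, the registration itself can be estimated in closed form; the sketch below (NumPy) shows a standard Kabsch/SVD rigid alignment as one plausible realization of that step, with the outline matching and any scale handling omitted.

```python
# Hedged sketch: closed-form rigid transform aligning the image-derived point
# cloud to the airborne LiDAR cloud from matched outline points.
import numpy as np

def rigid_transform(src, dst):
    """Return rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```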


Author(s):  
Z. Hui ◽  
P. Cheng ◽  
L. Wang ◽  
Y. Xia ◽  
H. Hu ◽  
...  

Abstract. Denoising is a key pre-processing step for many airborne LiDAR point cloud applications. However, previous algorithms have a number of problems that affect the quality of point cloud post-processing, such as DTM generation. In this paper, a novel automated denoising algorithm based on empirical mode decomposition is proposed to remove outliers from airborne LiDAR point clouds. Compared with traditional point cloud denoising algorithms, the proposed method detects outliers from a signal-processing perspective. First, the airborne LiDAR point cloud is decomposed into a series of intrinsic mode functions with the help of morphological operations, which significantly decreases the computational complexity. By applying the Otsu algorithm to these intrinsic mode functions, noise-dominant components can be detected and filtered. Finally, outliers are detected automatically by comparing observed elevations with reconstructed elevations. Three datasets located in three different cities in China were used to verify the validity and robustness of the proposed method. The experimental results demonstrate that the proposed method removes both high and low outliers effectively over various terrain features while preserving useful ground details.
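The final detection step, comparing observed with reconstructed elevations under an automatically chosen threshold, can be illustrated as follows (NumPy, SciPy, scikit-image); the median-filter reconstruction is a simple stand-in for the paper's morphology-assisted empirical mode decomposition, not the authors' pipeline.

```python
# Hedged sketch: flag points whose elevation residuals against a reconstructed
# surface exceed an Otsu threshold.
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu

def detect_elevation_outliers(z_grid):
    reconstructed = median_filter(z_grid, size=5)    # illustrative reconstruction
    residual = np.abs(z_grid - reconstructed)
    thr = threshold_otsu(residual)                   # automatic threshold
    return residual > thr                            # boolean outlier mask
```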


2021 ◽  
Vol 13 (20) ◽  
pp. 4031
Author(s):  
Ine Rosier ◽  
Jan Diels ◽  
Ben Somers ◽  
Jos Van Orshoven

Rural European landscapes are characterized by a variety of vegetated landscape elements. Although it is often not their main function, they have the potential to affect river discharge and the frequency, extent, depth and duration of floods downstream by creating both hydrological discontinuities and connections across the landscape. Information about the extent to which individual landscape elements and their spatial location affect peak river discharge and flood frequency and severity in agricultural catchments under specific meteorological conditions is limited. This knowledge gap can partly be explained by the lack of exhaustive inventories of the presence, geometry, and hydrological traits of vegetated landscape elements (vLEs), which in turn is due to the lack of appropriate techniques and source data to produce such inventories and keep them up to date. In this paper, a multi-step methodology is proposed to delineate and classify vLEs based on LiDAR point cloud data in three study areas in Flanders, Belgium. We classified the LiDAR points into the classes 'vegetated landscape element point' and 'other' using a Random Forest model, with a classification accuracy score ranging between 0.92 and 0.97. The landscape element objects were further classified into the classes 'tree object' and 'shrub object' using a Logistic Regression model, with an area-based accuracy ranging between 0.34 and 0.95.
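The two-stage classification workflow can be sketched as below (scikit-learn); the feature matrices, the point-to-object aggregation, and the model settings are placeholders rather than the study's configuration.

```python
# Hedged sketch of the two-stage workflow: a Random Forest separates vegetated-
# landscape-element points from other points, and a Logistic Regression then
# labels the resulting objects as tree or shrub.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def classify_vle_points(point_features, point_labels):
    """Stage 1: per-point classification into 'vLE point' vs 'other'."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    return rf.fit(point_features, point_labels)

def classify_objects(object_features, object_labels):
    """Stage 2: per-object classification into 'tree object' vs 'shrub object'."""
    lr = LogisticRegression(max_iter=1000)
    return lr.fit(object_features, object_labels)
```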

