3D Ocean Water Wave Surface Analysis on Airborne LiDAR Bathymetric Point Clouds

2021 ◽  
Vol 13 (19) ◽  
pp. 3918
Author(s):  
Sajjad Roshandel ◽  
Weiquan Liu ◽  
Cheng Wang ◽  
Jonathan Li

Water wave monitoring is a vital issue for coastal research and plays a key role in the study of geomorphological change, erosion and sediment transport, coastal hazards, risk assessment, and decision making. However, missing data and the difficulty of capturing nearshore fieldwork measurements make the analysis of water wave surface parameters challenging. In this paper, we propose a novel approach for the accurate detection and analysis of the water wave surface from large-scale Airborne LiDAR Bathymetry (ALB) point clouds. Our method combines a modified Density-Based Spatial Clustering of Applications with Noise (DBSCAN) clustering method, extended with a connectivity constraint, with a multi-level analysis of the ocean water surface. It adapts to most wave shape anatomies in the shallow, nearshore, and onshore waters of the coastal zone. We use a wavelet analysis filter to detect the water wave surface and then estimate wave height, wavelength, and wave orientation through a Fourier transform approach. A comparison between the LiDAR-based estimates and available buoy data is then presented. We quantify the performance of the algorithm by measuring precision and recall for wave identification, without evaluating the degree of over-segmentation. The proposed method achieves 87% wave identification accuracy in the shallow water of coastal zones.
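The clustering-plus-Fourier pipeline can be sketched on a synthetic wave profile. This is an illustrative outline only, not the authors' modified, connectivity-constrained DBSCAN; the wave shape, sampling spacing, and clustering parameters below are all assumed values.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic ALB-like transect: sinusoidal wave, wavelength 10 m,
# amplitude 0.5 m (crest-to-trough height ~1.0 m), 0.1 m spacing.
x = np.arange(0.0, 100.0, 0.1)
z = 0.5 * np.sin(2 * np.pi * x / 10.0)
points = np.column_stack([x, z])

# Cluster surface points; eps/min_samples are illustrative, not the
# paper's modified, connectivity-constrained DBSCAN settings.
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(points)

# Dominant wavelength from the FFT of the elevation profile.
spectrum = np.abs(np.fft.rfft(z))
freqs = np.fft.rfftfreq(len(z), d=0.1)
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
wavelength = 1.0 / dominant

# Wave height as the crest-to-trough distance of the profile.
wave_height = z.max() - z.min()
print(round(wavelength, 1), round(wave_height, 2))  # → 10.0 1.0
```

On this noise-free profile, the FFT peak recovers the 10 m wavelength exactly; real ALB returns would first need the wavelet-based surface detection described above.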

Author(s):  
Z. Li ◽  
W. Zhang ◽  
J. Shan

Abstract. Building models are conventionally reconstructed by segmenting building roof points into planes and then using a topology graph to group the planes together; roof edges and vertices are then represented mathematically by intersecting the segmented planes. Technically, such a solution is based on sequential local fitting, i.e., the entire data of one building do not participate simultaneously in determining the building model. As a consequence, the solution lacks topological integrity and geometric rigor. Fundamentally different from this traditional approach, we propose a holistic parametric reconstruction method that takes the entire point cloud of one building into consideration simultaneously. In our work, building models are reconstructed from predefined parametric (roof) primitives. We first use a well-designed deep neural network to segment and identify primitives in the given building point clouds. A holistic optimization strategy is then introduced to simultaneously determine the parameters of each segmented primitive. In the last step, the optimal parameters are used to generate a watertight building model in CityGML format. The airborne LiDAR dataset RoofN3D, with predefined roof types, is used for our test. It is shown that PointNet++ applied to the entire dataset can achieve an accuracy of 83% for primitive classification. For a subset of 910 buildings in RoofN3D, the holistic approach is then used to determine the parameters of the primitives and reconstruct the buildings. The achieved overall quality of reconstruction is 0.08 m in terms of point-to-surface distance, or 0.7 times the RMSE of the input LiDAR points. This study demonstrates the efficiency and capability of the proposed approach and its potential to handle large-scale urban point clouds.
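The holistic idea, all parameters of a primitive estimated from the whole point set at once rather than plane by plane, can be sketched with a single hypothetical gable-roof primitive and `scipy.optimize.least_squares`. The primitive form, noise level, and parameter values are assumptions for illustration, not the paper's primitive library.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic gable-roof cross-section: z = h - s * |y - y0| plus noise.
rng = np.random.default_rng(0)
y = rng.uniform(-5.0, 5.0, 500)
true_h, true_s, true_y0 = 8.0, 0.6, 1.0
z = true_h - true_s * np.abs(y - true_y0) + rng.normal(0.0, 0.02, y.size)

# Holistic fit: every point of the building constrains all three
# parameters (ridge height h, slope s, ridge position y0) at once,
# instead of fitting each roof plane separately and intersecting.
def residuals(p):
    h, s, y0 = p
    return z - (h - s * np.abs(y - y0))

fit = least_squares(residuals, x0=[5.0, 1.0, 0.0])
h, s, y0 = fit.x
rmse = np.sqrt(np.mean(residuals(fit.x) ** 2))
```

Because both roof faces share h, s, and y0, the ridge is consistent by construction, which is the topological-integrity benefit the abstract argues for.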


Author(s):  
T. Shinohara ◽  
H. Xiu ◽  
M. Matsuoka

Abstract. This study introduces a novel image-to-3D-point-cloud translation method based on a conditional generative adversarial network that creates large-scale 3D point clouds. It can generate, from aerial images, point clouds supervised by airborne LiDAR observations. The network is composed of an encoder that produces latent features of the input images, a generator that translates the latent features into fake point clouds, and a discriminator that classifies point clouds as real or fake. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds of outdoor scenes, we use a FoldingNet driven by the ResNet features. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn and generate point clouds using the data from the 2018 IEEE GRSS Data Fusion Contest.
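A shape-only sketch of the data flow may help: an image is encoded to a latent vector, and a FoldingNet-style generator concatenates that latent code to every point of a fixed 2D grid and "folds" the grid into 3D through an MLP. The weights here are random and untrained, the linear encoder is a stand-in for the pre-trained ResNet, and no adversarial training is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the ResNet encoder: flatten the image, one linear layer.
def encoder(image, w):
    return np.tanh(image.reshape(-1) @ w)           # (latent_dim,)

# FoldingNet-style generator: append the latent code to each 2D grid
# point, then map through a two-layer MLP to a 3D point cloud.
def generator(latent, grid, w1, w2):
    inp = np.hstack([grid, np.tile(latent, (grid.shape[0], 1))])
    return np.maximum(inp @ w1, 0.0) @ w2           # (n_points, 3)

latent_dim, hidden = 16, 64
image = rng.normal(size=(32, 32))
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 45),
                            np.linspace(-1, 1, 45)), -1).reshape(-1, 2)

w_enc = rng.normal(scale=0.1, size=(32 * 32, latent_dim))
w1 = rng.normal(scale=0.1, size=(2 + latent_dim, hidden))
w2 = rng.normal(scale=0.1, size=(hidden, 3))

points = generator(encoder(image, w_enc), grid, w1, w2)
print(points.shape)  # → (2025, 3)
```

The discriminator (real vs. fake point clouds) and the GAN loss are omitted; only the image-conditioned folding path is illustrated.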


Author(s):  
Y. Gao ◽  
M. C. Li

Abstract. Airborne Light Detection And Ranging (LiDAR) has become an important means of efficient, high-precision acquisition of 3D spatial data for large scenes, with important application value in digital cities and location-based services. The classification and identification of point clouds is the basis of these applications, and it is also a hot and difficult problem in the field of geographic information science. LiDAR point cloud classification in large-scale urban scenes is difficult for two reasons. On the one hand, urban LiDAR point clouds contain rich and complex features: many object types with different shapes and complex structures occlude one another, resulting in large data loss. On the other hand, the LiDAR scanner is far from the urban objects, and some of them, such as cars and pedestrians, are in motion during scanning, which causes a certain degree of noise and uneven density in the point cloud. Addressing these characteristics, this paper implements a LiDAR point cloud classification method based on a saliency dictionary and the Latent Dirichlet Allocation (LDA) model. The method uses the label information of the training data and the label source of each dictionary item to construct a saliency dictionary learning model in sparse coding that expresses the features of a point set more accurately. It then applies a multi-path AdaBoost classifier to the features of the multi-level point sets, realizing supervised classification of the point clouds. Experimental results show that the feature set extracted by this method, combined with the multi-path classifier, can significantly improve point cloud classification accuracy in complex urban scenes.
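One way the LDA-plus-boosting stage could look, using scikit-learn on synthetic features: each point set is treated as a "document" of dictionary-atom counts, LDA topic proportions serve as compact features, and a boosted classifier labels the point sets. The saliency dictionary itself is replaced here by hand-made class profiles, and a single AdaBoost classifier stands in for the paper's multi-path variant, so this is only an illustration of the feature-then-classifier pipeline.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Synthetic atom-usage counts for two classes of point sets (e.g.
# building-like vs. tree-like), drawn from two distinct profiles.
n_sets, n_atoms = 200, 30
labels = rng.integers(0, 2, n_sets)
profile = np.where(labels[:, None] == 0,
                   np.linspace(5, 1, n_atoms),
                   np.linspace(1, 5, n_atoms))
counts = rng.poisson(profile)

# Topic proportions from LDA act as compact per-point-set features.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
topic_features = lda.fit_transform(counts)

# Boosted classifier on the topic features (supervised step).
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(topic_features[:150], labels[:150])
acc = clf.score(topic_features[150:], labels[150:])
```

Because the two atom-usage profiles differ strongly, the learned topic proportions separate the classes well even with this crude stand-in dictionary.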


Author(s):  
E. Widyaningrum ◽  
B. G. H. Gorte

The integration of computer vision and photogrammetry to generate three-dimensional (3D) information from images has contributed to a wider use of point clouds for mapping purposes. Large-scale topographic map production requires 3D data with high precision and accuracy to represent the real conditions of the earth's surface. Apart from LiDAR point clouds, image-based matching is also believed to be able to generate reliable and detailed point clouds from multi-view images. In order to examine and analyze the possible fusion of LiDAR and image-based matching for large-scale detailed mapping, point clouds are generated by Semi-Global Matching (SGM) and by Structure from Motion (SfM). To allow a comprehensive and fair comparison, this study uses aerial photos and LiDAR data that were acquired at the same time. Qualitative and quantitative assessments have been applied to evaluate the LiDAR and image-matching point cloud data in terms of visualization, geometric accuracy, and classification results. The comparison concludes that LiDAR provides the best data for large-scale mapping.
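A common way to quantify the geometric agreement between two such clouds is a nearest-neighbour cloud-to-cloud distance. The sketch below uses synthetic clouds with an assumed 0.15 m systematic vertical offset between the image-matching and LiDAR data; the datasets and offset are illustrative, not the study's.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Reference "LiDAR" cloud and a matched "image-matching" cloud with a
# small vertical bias plus random matching noise (assumed values).
lidar = rng.uniform(0.0, 50.0, size=(5000, 3))
matched = lidar + np.array([0.0, 0.0, 0.15]) \
          + rng.normal(0.0, 0.02, (5000, 3))

# Cloud-to-cloud distance: for each image-matching point, the distance
# to its nearest LiDAR neighbour; summarized by mean and RMSE.
dist, _ = cKDTree(lidar).query(matched)
mean_c2c = dist.mean()
rmse_c2c = np.sqrt(np.mean(dist ** 2))
```

The mean distance recovers the injected ~0.15 m bias, the kind of geometric-accuracy figure such a comparison reports alongside visual and classification assessments.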


Author(s):  
E. Maset ◽  
B. Padova ◽  
A. Fusiello

Abstract. Nowadays, we are witnessing an increasing availability of large-scale airborne LiDAR (Light Detection and Ranging) data, which greatly improve our knowledge of urban areas and the natural environment. In order to extract useful information from these massive point clouds, appropriate data processing is required, including point cloud classification. In this paper we present a deep learning method to efficiently perform the classification of large-scale LiDAR data, ensuring a good trade-off between speed and accuracy. The algorithm projects the point cloud onto a two-dimensional image in which every pixel stores the height, intensity, and echo information of the points falling within it. The image is then segmented by a Fully Convolutional Network (FCN), assigning a label to each pixel and, consequently, to the corresponding points. The proposed approach is applied to a dataset of 7700 km2 covering the entire Friuli Venezia Giulia region (Italy), distinguishing among five classes (ground, vegetation, roof, overground object, and power line) with an overall accuracy of 92.9%.
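The projection step might be rasterized along these lines, shown here for the height channel only, with an assumed 1 m pixel size and synthetic points; the paper's exact gridding and per-pixel aggregation choices are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy LiDAR tile: planimetric coordinates plus height per point.
n = 10000
x = rng.uniform(0, 100, n)
y = rng.uniform(0, 100, n)
height = rng.uniform(0, 30, n)

# Rasterize at 1 m resolution: each pixel keeps the highest return
# falling in it (np.maximum.at handles several points per pixel).
res = 1.0
cols = (x / res).astype(int)
rows = (y / res).astype(int)
height_img = np.full((100, 100), -np.inf)
np.maximum.at(height_img, (rows, cols), height)
height_img[np.isinf(height_img)] = 0.0  # pixels with no returns
```

Intensity and echo channels would be gridded the same way and stacked, giving the multi-channel image that the FCN segments; the pixel labels are then mapped back to the contributing points.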


Author(s):  
G. Jóźków ◽  
B. Vander Jagt ◽  
C. Toth

The ideal mapping technology for transmission line inspection is airborne LiDAR executed from helicopter platforms. It allows full 3D geometry extraction in a highly automated manner. Large-scale aerial images can also be used for this purpose; however, automation is possible only for finding transmission line positions (2D geometry), and the sag needs to be estimated manually. For longer lines, these techniques are less expensive than ground surveys, yet they are still costly. UAS technology has the potential to reduce these costs further, especially when using inexpensive platforms with consumer-grade cameras. This study investigates the potential of high-resolution UAS imagery for automatic modeling of transmission line 3D geometry. The key point of the experiment was to apply dense matching algorithms to appropriately acquired UAS images so that points were created on the wires as well. This allowed modeling the 3D geometry of transmission lines similarly to LiDAR-acquired point clouds. Results showed that transmission line modeling is possible with high internal accuracy in both the horizontal and vertical directions, even when the wires were represented by a partial (sparse) point cloud.
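The vertical wire geometry recovered from a sparse matched point cloud can be summarized by a sag estimate. The sketch below fits a parabolic approximation of the catenary to synthetic wire points; the span length, curvature, and noise level are assumed values, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse points on one wire span, parabolic catenary approximation
# z = a*x^2 + b*x + c, with matching-style noise (assumed values).
span = 200.0
a_true, b_true, c_true = 4e-4, -0.08, 25.0   # gives ~4 m mid-span sag
x = np.sort(rng.uniform(0.0, span, 40))       # partial, sparse coverage
z = a_true * x**2 + b_true * x + c_true + rng.normal(0.0, 0.05, x.size)

# Least-squares parabola fit recovers the vertical wire geometry.
a, b, c = np.polyfit(x, z, 2)

# Sag: height of the line between the attachment points at mid-span
# minus the fitted wire height there.
mid = span / 2.0
z_ends = (np.polyval([a, b, c], 0.0) + np.polyval([a, b, c], span)) / 2.0
sag = z_ends - np.polyval([a, b, c], mid)
```

Even with only 40 noisy points on the wire, the fit recovers the ~4 m sag, consistent with the claim that a partial (sparse) point cloud suffices for internal accuracy in the vertical direction.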


2021 ◽  
Author(s):  
Michael Mahoney ◽  
Lucas Johnson ◽  
Eddie Bevilacqua ◽  
Colin Beier

Airborne LiDAR has become an essential data source for large-scale, high-resolution modeling of forest biomass and carbon stocks, enabling predictions with much higher resolution and accuracy than can be achieved using optical imagery alone. Ground noise filtering (that is, excluding returns from LiDAR point clouds based on simple height thresholds) is a common practice meant to improve the 'signal' content of LiDAR returns by preventing ground returns from masking useful information about tree size and condition contained within canopy returns. Although this procedure originated in LiDAR-based estimation of mean tree and canopy height, ground noise filtering has remained prevalent in LiDAR pre-processing, even as modelers have shifted focus to forest aboveground biomass (AGB) and related characteristics for which ground returns may actually contain useful information about stand density and openness. In particular, ground returns may be helpful for making accurate biomass predictions in heterogeneous landscapes that include a patchy mosaic of vegetation heights and land cover types. In this paper, we applied several ground noise filtering thresholds while mapping two study areas in New York (USA), one a forest-dominated area and the other a mixed-use landscape. We observed that removing ground noise via any height threshold systematically biases many of the LiDAR-derived variables used in AGB modeling. By fitting random forest models to each of these predictor sets, we found that ground noise filtering yields models of forest AGB with lower accuracy than models trained using predictors derived from unfiltered point clouds. The relative inferiority of AGB models based on filtered LiDAR returns was much greater for the mixed land-cover study area than for the contiguously forested study area.
Our results suggest that ground filtering should be avoided when mapping biomass, particularly when mapping heterogeneous and highly patchy landscapes, as ground returns are more likely to represent useful 'signal' than extraneous 'noise' in these cases.
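The bias mechanism can be illustrated with a toy mixed-cover tile: applying a height threshold removes the ground returns, inflating height-based metrics and collapsing the cover/openness signal. All numbers below (cover fraction, height ranges, the 2 m threshold) are assumed for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixed-cover tile: 60% canopy returns (5-25 m), 40% ground returns.
canopy = rng.uniform(5.0, 25.0, 600)
ground = np.abs(rng.normal(0.0, 0.1, 400))
heights = np.concatenate([canopy, ground])

def lidar_metrics(h):
    # Typical AGB predictors: mean height, 95th percentile, cover > 2 m.
    return h.mean(), np.percentile(h, 95), np.mean(h > 2.0)

unfiltered = lidar_metrics(heights)
filtered = lidar_metrics(heights[heights > 2.0])  # 2 m ground filter

# The threshold drops every ground return: mean height is inflated and
# the "cover" predictor saturates at 1.0, erasing openness information.
```

In the unfiltered tile the cover metric (0.6) still encodes the ground fraction; after filtering it is identically 1.0 for any tile, which is exactly the density/openness signal the abstract argues is lost.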


Author(s):  
Mathieu Turgeon-Pelchat ◽  
Samuel Foucher ◽  
Yacine Bouroubi
