High-performance parallel approaches for three-dimensional light detection and ranging point clouds gridding

2017 ◽  
Vol 11 (1) ◽  
pp. 016011 ◽  
Author(s):  
Permata Nur Miftahur Rizki ◽  
Heezin Lee ◽  
Minsu Lee ◽  
Sangyoon Oh


2017 ◽  
Vol 14 (5) ◽  
pp. 172988141773540 ◽  
Author(s):  
Robert A Hewitt ◽  
Alex Ellery ◽  
Anton de Ruiter

A classifier training methodology is presented for Kapvik, a micro-rover prototype. A simulated light detection and ranging scan is divided into a grid, with each cell having a variety of characteristics (such as number of points, point variance and mean height) which act as inputs to classification algorithms. The training step avoids the need for time-consuming and error-prone manual classification through the use of a simulation that provides training inputs and target outputs. This simulation randomly generates various terrains that could be encountered by a planetary rover, including untraversable ones. A sensor model for a three-dimensional light detection and ranging instrument is used with ray tracing to generate realistic noisy three-dimensional point clouds in which all points belonging to untraversable terrain are labelled explicitly. A neural network classifier and its training algorithm are presented, and its output, as well as that of other popular classifiers, shows high accuracy on test data sets after training. The network is then tested on outdoor data to confirm it can accurately classify real-world light detection and ranging data. The results show the network identifies terrain correctly, falsely classifying just 4.74% of untraversable terrain.
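The per-cell features described above (point count, mean height, height variance) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function name and cell size are hypothetical:

```python
import math

def grid_features(points, cell_size=0.5):
    """Bin 3D points (x, y, z) into a 2D grid and compute per-cell
    terrain features of the kind used as classifier inputs above:
    point count, mean height, and height variance."""
    cells = {}
    for x, y, z in points:
        key = (math.floor(x / cell_size), math.floor(y / cell_size))
        cells.setdefault(key, []).append(z)
    features = {}
    for key, zs in cells.items():
        n = len(zs)
        mean = sum(zs) / n
        var = sum((z - mean) ** 2 for z in zs) / n  # population variance
        features[key] = {"count": n, "mean_height": mean, "height_var": var}
    return features
```

Each cell's feature vector would then be fed to the neural network (or other classifier) with the simulation's traversability label as the training target.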


2009 ◽  
Vol 24 (2) ◽  
pp. 95-102 ◽  
Author(s):  
Hans-Erik Andersen

Abstract Airborne laser scanning (also known as light detection and ranging or LIDAR) data were used to estimate three fundamental forest stand condition classes (forest stand size, land cover type, and canopy closure) at 32 Forest Inventory Analysis (FIA) plots distributed over the Kenai Peninsula of Alaska. Individual tree crown segment attributes (height, area, and species type) were derived from the three-dimensional LIDAR point cloud, LIDAR-based canopy height models, and LIDAR return intensity information. The LIDAR-based crown segment and canopy cover information was then used to estimate condition classes at each 10-m grid cell on a 300 × 300-m area surrounding each FIA plot. A quantitative comparison of the LIDAR- and field-based condition classifications at the subplot centers indicates that LIDAR has potential as a useful sampling tool in an operational forest inventory program.
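The 10-m grid-cell estimation above rests on a canopy height model (CHM): per cell, the highest LIDAR return minus the bare-earth elevation. A toy sketch under simplifying assumptions (the function name is hypothetical, and the ground surface is taken as a known per-cell lookup rather than derived from last returns):

```python
def canopy_height_grid(returns, ground, cell=10.0):
    """Rasterize LIDAR returns (x, y, z) into a canopy height model:
    for each grid cell, CHM = max return elevation - ground elevation.
    `ground` maps a cell key to its bare-earth elevation."""
    top = {}
    for x, y, z in returns:
        key = (int(x // cell), int(y // cell))
        if z > top.get(key, float("-inf")):
            top[key] = z
    return {k: v - ground.get(k, 0.0) for k, v in top.items()}
```

Condition classes such as canopy closure could then be derived by thresholding the resulting per-cell heights.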


Author(s):  
A. W. Lyda ◽  
X. Zhang ◽  
C. L. Glennie ◽  
K. Hudnut ◽  
B. A. Brooks

Remote sensing via LiDAR (Light Detection And Ranging) has proven extremely useful in both Earth science and hazard-related studies. Surveys taken before and after an earthquake, for example, can provide decimeter-level, 3D near-field estimates of land deformation that offer better spatial coverage of the near-field rupture zone than other geodetic methods (e.g., InSAR, GNSS, or alignment arrays). In this study, we compare and contrast estimates of deformation obtained from different pre- and post-event airborne laser scanning (ALS) data sets of the 2014 South Napa Earthquake using two change detection algorithms, Iterative Closest Point (ICP) and Particle Image Velocimetry (PIV). The ICP algorithm is a closest-point-based registration algorithm that can iteratively acquire three-dimensional deformations from airborne LiDAR data sets. By employing a newly proposed partition scheme, a "moving window," to handle the large spatial scale of the point cloud over the earthquake rupture area, the ICP process applies a rigid registration of data sets within each overlapped window to enhance the change detection of the local, spatially varying surface deformation near the fault. The other algorithm, PIV, is a well-established, two-dimensional image co-registration and correlation technique developed in fluid mechanics research and later applied to geotechnical studies. Adapted here for an earthquake with little vertical movement, the 3D point cloud is interpolated into a 2D DTM image, and horizontal deformation is determined by assessing the cross-correlation of interrogation areas within the images to find the most likely deformation between two areas. Both the PIV process and the ICP algorithm further benefit from a novel use of urban geodetic markers presented here. Analogous to the persistent scatterer technique employed with differential radar observations, this new LiDAR application exploits a classified point cloud dataset to assist the change detection algorithms. 
Ground deformation results and statistics from these techniques are presented and discussed here, with supplementary analyses of the differences between the techniques and the effects of temporal spacing between LiDAR datasets. Results show that both change detection methods provide consistent near-field deformation comparable to field-observed offsets. The deformation estimates vary in quality, but estimated standard deviations are always below thirty-one centimeters. This variation in quality differentiates the methods and shows that factors such as geodetic markers and temporal spacing play major roles in the outcomes of ALS change detection surveys.
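The PIV step described above, cross-correlating interrogation areas of two DTM images to find the most likely horizontal offset, can be illustrated with a brute-force toy version (real PIV implementations use FFT-based correlation over many subpixel-refined windows; the function name and search radius here are hypothetical):

```python
def piv_displacement(img_a, img_b, max_shift=3):
    """Estimate the integer-pixel shift between two DTM patches by
    exhaustive cross-correlation: try every (dy, dx) offset within
    max_shift and return the one maximizing the correlation score."""
    rows, cols = len(img_a), len(img_a[0])
    best, best_shift = float("-inf"), (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for r in range(rows):
                for c in range(cols):
                    r2, c2 = r + dy, c + dx
                    if 0 <= r2 < rows and 0 <= c2 < cols:
                        score += img_a[r][c] * img_b[r2][c2]
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift
```

Applying this over a grid of interrogation windows yields the spatially varying horizontal deformation field; ICP, by contrast, registers the 3D point sets directly within each moving window.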


2015 ◽  
Vol 54 (4) ◽  
pp. 044106
Author(s):  
Lingbing Bu ◽  
Zujing Qiu ◽  
Haiyang Gao ◽  
Aizhen Gao ◽  
Xingyou Huang

2020 ◽  
Vol 12 (9) ◽  
pp. 1379 ◽  
Author(s):  
Yi-Ting Cheng ◽  
Ankit Patel ◽  
Chenglu Wen ◽  
Darcy Bullock ◽  
Ayman Habib

Lane markings are one of the essential elements of road information, useful for a wide range of transportation applications. Several studies have been conducted to extract lane markings through intensity thresholding of Light Detection and Ranging (LiDAR) point clouds acquired by mobile mapping systems (MMS). This paper proposes an intensity thresholding strategy using unsupervised intensity normalization and a deep learning strategy using automatically labeled training data for lane marking extraction. For comparative evaluation, two further strategies, original intensity thresholding and deep learning with manually established labels, are also implemented. A pavement surface-based assessment of lane marking extraction by the four strategies is conducted in asphalt and concrete pavement areas covered by an MMS equipped with multiple LiDAR scanners. Additionally, the extracted lane markings are used for lane width estimation and for reporting lane marking gaps along various highways. The normalized intensity thresholding leads to better lane marking extraction, with an F1-score of 78.9% in comparison to the original intensity thresholding with an F1-score of 72.3%. On the other hand, the deep learning model trained with automatically generated labels achieves a higher F1-score of 85.9% than the one trained on manually established labels with an F1-score of 75.1%. In the concrete pavement areas, the normalized intensity thresholding and both deep learning strategies obtain better lane marking extraction (i.e., lane markings along longer segments of the highway have been extracted) than the original intensity thresholding approach. For the lane width results, more estimates are observed using the two deep learning models than with the intensity thresholding strategies, especially in areas with poor edge lane marking, due to the higher recall rates of the former. 
The outcome of the proposed strategies is used to develop a framework for reporting lane marking gap regions, which can be subsequently visualized in RGB imagery to identify their cause.
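The core of the normalized-thresholding strategy, rescale intensities so one threshold works across scanners and surfaces, then keep high-intensity points, can be sketched as follows. This uses simple min-max normalization as a stand-in for the paper's unsupervised normalization; the function name and threshold are hypothetical:

```python
def extract_lane_points(points, threshold=0.8):
    """Toy intensity-threshold lane-marking extraction: normalize LiDAR
    intensities to [0, 1] (min-max, a stand-in for the unsupervised
    normalization above), then keep points at or above the threshold.
    `points` is a list of ((x, y, z), intensity) tuples."""
    intensities = [i for _, i in points]
    lo, hi = min(intensities), max(intensities)
    span = (hi - lo) or 1.0  # guard against constant intensity
    return [p for p, i in points if (i - lo) / span >= threshold]
```

Because retro-reflective lane paint returns much higher intensity than bare pavement, the surviving points trace the markings; lane width and gap reporting then operate on these extracted points.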


2021 ◽  
Vol 11 (13) ◽  
pp. 5941
Author(s):  
Mun-yong Lee ◽  
Sang-ha Lee ◽  
Kye-dong Jung ◽  
Seung-hyun Lee ◽  
Soon-chul Kwon

Computer-based data processing capabilities have evolved to handle ever-larger volumes of information. As such, the complexity of three-dimensional (3D) models (e.g., animations or real-time voxels) containing large volumes of information has increased exponentially. This rapid increase in complexity has led to problems with recording and transmission. In this study, we propose a method of efficiently managing and compressing animation information stored in a 3D point-cloud sequence. A compressed point cloud is created by reconfiguring the points based on their voxels. Compared with the original point cloud, error-induced noise is removed, and a preprocessing procedure is proposed that improves the performance of the redundancy-removal algorithm. The results of experiments and rendering demonstrate an average file-size reduction of 40% using the proposed algorithm. Moreover, 13% of the overlapping data are identified and removed, further reducing the file size.
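The voxel-based reconfiguration described above can be illustrated with a minimal sketch: quantize each point to a voxel key and keep one representative per voxel, which both suppresses sub-voxel noise and removes duplicate points. The function name and voxel size are hypothetical, and this is a simplification of the paper's method:

```python
def voxelize_dedup(points, voxel=0.05):
    """Quantize 3D points (x, y, z) to a voxel grid and keep the first
    point seen per voxel, discarding sub-voxel jitter and duplicates."""
    seen = {}
    for x, y, z in points:
        key = (round(x / voxel), round(y / voxel), round(z / voxel))
        seen.setdefault(key, (x, y, z))
    return list(seen.values())
```

Applied frame by frame to a point-cloud animation, this kind of reduction is what makes the reported file-size savings possible; points that recur across overlapping frames collapse into the same voxel keys.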

