Training a terrain traversability classifier for a planetary rover through simulation

2017 ◽  
Vol 14 (5) ◽  
pp. 172988141773540 ◽  
Author(s):  
Robert A Hewitt ◽  
Alex Ellery ◽  
Anton de Ruiter

A classifier training methodology is presented for Kapvik, a micro-rover prototype. A simulated light detection and ranging scan is divided into a grid, with each cell having a variety of characteristics (such as number of points, point variance and mean height) that act as inputs to classification algorithms. The training step avoids the need for time-consuming and error-prone manual classification through the use of a simulation that provides training inputs and target outputs. This simulation randomly generates various terrains that could be encountered by a planetary rover, including untraversable ones. A sensor model for a three-dimensional light detection and ranging instrument is used with ray tracing to generate realistic noisy three-dimensional point clouds in which all points that belong to untraversable terrain are labelled explicitly. A neural network classifier and its training algorithm are presented, and its outputs, along with those of other popular classifiers, show high accuracy on test data sets after training. The network is then tested on outdoor data to confirm that it can accurately classify real-world light detection and ranging data. The results show the network is able to identify terrain correctly, falsely classifying just 4.74% of untraversable terrain.
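As a rough illustration of the grid-based feature extraction described above, the following Python sketch bins a point cloud into square cells and computes per-cell point count, mean height and height variance as classifier inputs. The cell size, grid extent and synthetic scan are placeholder choices for illustration, not values from the paper.

```python
import numpy as np

def cell_features(points, cell_size=0.25, grid_extent=10.0):
    # Bin a LiDAR point cloud (N x 3: x, y, z in metres) into square cells and
    # compute per-cell features: point count, mean height, height variance.
    n_cells = int(grid_extent / cell_size)
    ix = np.clip((points[:, 0] / cell_size).astype(int), 0, n_cells - 1)
    iy = np.clip((points[:, 1] / cell_size).astype(int), 0, n_cells - 1)

    features = np.zeros((n_cells, n_cells, 3))  # count, mean z, var z
    for i in range(n_cells):
        for j in range(n_cells):
            z = points[(ix == i) & (iy == j), 2]
            if z.size:
                features[i, j] = (z.size, z.mean(), z.var())
    return features.reshape(-1, 3)  # one feature row per cell

# Example on a synthetic 10 m x 10 m scan; in practice these rows would be fed
# to a neural network or another classifier.
scan = np.random.rand(5000, 3) * np.array([10.0, 10.0, 0.3])
print(cell_features(scan).shape)  # (1600, 3)
```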

Author(s):  
A. W. Lyda ◽  
X. Zhang ◽  
C. L. Glennie ◽  
K. Hudnut ◽  
B. A. Brooks

Remote sensing via LiDAR (Light Detection And Ranging) has proven extremely useful in both Earth science and hazard-related studies. Surveys taken before and after an earthquake, for example, can provide decimeter-level, 3D near-field estimates of land deformation that offer better spatial coverage of the near-field rupture zone than other geodetic methods (e.g., InSAR, GNSS, or alignment arrays). In this study, we compare and contrast estimates of deformation obtained from different pre- and post-event airborne laser scanning (ALS) data sets of the 2014 South Napa Earthquake using two change detection algorithms, Iterative Closest Point (ICP) and Particle Image Velocimetry (PIV). The ICP algorithm is a closest-point-based registration algorithm that can iteratively acquire three-dimensional deformations from airborne LiDAR data sets. By employing a newly proposed partition scheme, the “moving window,” to handle the large spatial scale of the point cloud over the earthquake rupture area, the ICP process applies a rigid registration of the data sets within each overlapping window to enhance the change detection results for the local, spatially varying near-fault surface deformation. The other algorithm, PIV, is a well-established, two-dimensional image co-registration and correlation technique developed in fluid mechanics research and later applied to geotechnical studies. Adapted here for an earthquake with little vertical movement, the 3D point cloud is interpolated into a 2D DTM image, and horizontal deformation is determined by assessing the cross-correlation of interrogation areas within the images to find the most likely deformation between two areas. Both the PIV process and the ICP algorithm further benefit from a novel use of urban geodetic markers presented here. Analogous to the persistent scatterer technique employed with differential radar observations, this new LiDAR application exploits a classified point cloud dataset to assist the change detection algorithms. Ground deformation results and statistics from these techniques are presented and discussed, with supplementary analyses of the differences between techniques and the effects of temporal spacing between LiDAR datasets. Results show that both change detection methods provide consistent near-field deformation comparable to field-observed offsets. The deformation estimates vary in quality, but estimated standard deviations are always below thirty-one centimeters. This variation in quality differentiates the methods and shows that factors such as geodetic markers and temporal spacing play major roles in the outcomes of ALS change detection surveys.
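A minimal Python sketch of the moving-window ICP idea is given below, assuming NumPy and SciPy are available. The rigid ICP step, the window and step sizes, and the point-count cutoff are illustrative simplifications; the study's actual implementation (outlier rejection, convergence criteria, geodetic-marker assistance) is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_icp(src, dst, iterations=20):
    # Minimal rigid ICP: match each source point to its nearest destination
    # point and solve for the best-fit rotation/translation via SVD (Kabsch).
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iterations):
        _, idx = tree.query(cur)
        matched = dst[idx]
        cs, cm = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:        # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = cm - Ri @ cs
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti       # accumulate the total transform
    return R, t

def moving_window_offsets(pre, post, window=50.0, step=25.0, min_points=100):
    # Partition the pre-event cloud into overlapping square windows (in x/y)
    # and register each against the post-event cloud; the per-window
    # translation approximates the local surface deformation.
    offsets = []
    x0, y0 = pre[:, :2].min(axis=0)
    x1, y1 = pre[:, :2].max(axis=0)
    for wx in np.arange(x0, x1, step):
        for wy in np.arange(y0, y1, step):
            sel = lambda p: p[(p[:, 0] >= wx) & (p[:, 0] < wx + window) &
                              (p[:, 1] >= wy) & (p[:, 1] < wy + window)]
            a, b = sel(pre), sel(post)
            if len(a) >= min_points and len(b) >= min_points:
                _, t = rigid_icp(a, b)
                offsets.append(((wx + window / 2, wy + window / 2), t))
    return offsets  # list of (window centre, estimated 3D offset)
```

The per-window translation returned for each window centre can be read as a local 3D deformation estimate, analogous to the near-fault offsets discussed in the abstract.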


2012 ◽  
Vol 594-597 ◽  
pp. 2361-2366 ◽  
Author(s):  
Feng Li ◽  
Xi Min Cui ◽  
Ling Zhang ◽  
Shu Wei Shan ◽  
Kun Lun Song

Automatically identifying and removing above-ground laser points from the terrain surface has proved to be a challenging task in complicated and discontinuous scenes. Eight methods for filtering LiDAR (Light Detection and Ranging) data have been developed and contrasted with each other, yet any single approach on its own struggles to achieve high precision across varied landscapes. This paper presents a point cloud filtering method in which a binary quadric trend surface is first used to remove most non-terrain points with a defined height threshold, and a progressive morphological filter is subsequently employed to detect ground measurements. The experimental results on the ISPRS sample data sets demonstrate that this method yields fewer Type I and total errors than the other eight approaches.
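The following Python sketch illustrates the two-stage idea: a least-squares quadratic (quadric) trend surface removes points far above the ground trend, and a simplified progressive morphological filter refines the remaining ground points. The thresholds, cell size, window sizes and elevation-difference tolerances are placeholders, not the parameters evaluated on the ISPRS data sets.

```python
import numpy as np
from scipy.ndimage import grey_opening

def quadric_trend_filter(points, threshold=2.0):
    # Fit a quadratic (quadric) trend surface z = f(x, y) by least squares and
    # drop points more than `threshold` metres above it.
    x, y, z = points.T
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return points[(z - A @ coeffs) < threshold]

def progressive_morphological_ground(points, cell=1.0,
                                     windows=(3, 5, 9), tolerances=(0.3, 0.5, 1.0)):
    # Simplified progressive morphological filter: rasterise minimum heights,
    # apply grey openings with growing windows, and keep points whose height
    # stays within each stage's elevation-difference tolerance of the opened surface.
    x, y, z = points.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    surface = np.full((ix.max() + 1, iy.max() + 1), np.inf)
    np.minimum.at(surface, (ix, iy), z)       # minimum-height raster
    surface[np.isinf(surface)] = z.max()      # fill empty cells

    keep = np.ones(len(points), dtype=bool)
    for w, tol in zip(windows, tolerances):
        opened = grey_opening(surface, size=(w, w))
        keep &= (z - opened[ix, iy]) < tol    # reject points far above the opening
        surface = opened
    return points[keep]

# Typical use: ground = progressive_morphological_ground(quadric_trend_filter(cloud))
```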


2009 ◽  
Vol 24 (2) ◽  
pp. 95-102 ◽  
Author(s):  
Hans-Erik Andersen

Airborne laser scanning (also known as light detection and ranging, or LIDAR) data were used to estimate three fundamental forest stand condition classes (forest stand size, land cover type, and canopy closure) at 32 Forest Inventory and Analysis (FIA) plots distributed over the Kenai Peninsula of Alaska. Individual tree crown segment attributes (height, area, and species type) were derived from the three-dimensional LIDAR point cloud, LIDAR-based canopy height models, and LIDAR return intensity information. The LIDAR-based crown segment and canopy cover information was then used to estimate condition classes at each 10-m grid cell on a 300 × 300-m area surrounding each FIA plot. A quantitative comparison of the LIDAR- and field-based condition classifications at the subplot centers indicates that LIDAR has potential as a useful sampling tool in an operational forest inventory program.
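As a rough illustration of deriving per-cell canopy information from LIDAR returns, the sketch below computes a simple canopy height model and canopy closure on a 10-m grid, assuming point heights have already been normalised to height above ground. The 2-m canopy cutoff is a common convention rather than necessarily the study's choice, and crown segmentation and species typing are not reproduced.

```python
import numpy as np

def stand_condition_metrics(points, cell=10.0, canopy_cutoff=2.0):
    # Per-cell canopy metrics on a 10 m grid; the z column is assumed to be
    # height above ground. Returns a canopy height model (maximum height per
    # cell) and canopy closure (fraction of returns above the cutoff).
    x, y, h = points.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    shape = (ix.max() + 1, iy.max() + 1)

    chm = np.zeros(shape)
    np.maximum.at(chm, (ix, iy), h)                    # canopy height model

    total = np.zeros(shape)
    above = np.zeros(shape)
    np.add.at(total, (ix, iy), 1.0)
    np.add.at(above, (ix, iy), (h > canopy_cutoff).astype(float))
    closure = np.divide(above, total, out=np.zeros(shape), where=total > 0)
    return chm, closure
```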


2015 ◽  
Vol 54 (4) ◽  
pp. 044106 ◽
Author(s):  
Lingbing Bu ◽  
Zujing Qiu ◽  
Haiyang Gao ◽  
Aizhen Gao ◽  
Xingyou Huang

Author(s):  
T. Wakita ◽  
J. Susaki

In this study, we propose a method to accurately extract vegetation from terrestrial three-dimensional (3D) point clouds for estimating a landscape index in urban areas. Extraction of vegetation in urban areas is challenging because the light returned by vegetation does not show patterns as clear as those of man-made objects, and because urban areas contain a variety of other objects from which vegetation must be discriminated. The proposed method takes a multi-scale voxel approach to effectively extract different types of vegetation in complex urban areas. With two different voxel sizes, a process is repeated that calculates the eigenvalues of the local planar surface from a set of points, classifies voxels using the approximate curvature of the voxel of interest derived from those eigenvalues, and examines the connectivity of the valid voxels. We applied the proposed method to two data sets measured in a residential area in Kyoto, Japan. The validation results were acceptable, with F-measures of approximately 95% and 92%. It was also demonstrated that several types of vegetation were successfully extracted by the proposed method, whereas occluded vegetation was omitted. We conclude that the proposed method is suitable for extracting vegetation in urban areas from terrestrial light detection and ranging (LiDAR) data. In the future, the proposed method will be applied to mobile LiDAR data, and its performance on lower-density point clouds will be examined.
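A minimal Python sketch of the eigenvalue-based voxel classification step is shown below: for each occupied voxel, the covariance eigenvalues of its points give a surface-variation ("approximate curvature") measure, with high values suggesting scattered vegetation returns and low values suggesting planar, man-made surfaces. The voxel size, minimum point count and example threshold are assumptions for illustration; the multi-scale repetition and connectivity analysis described in the abstract are not reproduced.

```python
import numpy as np

def voxel_curvatures(points, voxel_size=0.5, min_points=10):
    # For each occupied voxel, compute the covariance eigenvalues of its points
    # and the surface-variation ("approximate curvature") measure
    # lambda_min / (lambda_1 + lambda_2 + lambda_3).
    keys = np.floor(points / voxel_size).astype(int)
    curvatures = {}
    for key in map(tuple, np.unique(keys, axis=0)):
        pts = points[np.all(keys == key, axis=1)]
        if len(pts) < min_points:
            continue
        eig = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))   # ascending eigenvalues
        curvatures[key] = eig[0] / eig.sum()
    return curvatures

# A voxel might be labelled as candidate vegetation when its curvature exceeds
# a threshold (e.g. 0.05); the multi-scale step repeats this with a second
# voxel size, and connected candidate voxels are then grouped.
```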


2020 ◽  
Vol 12 (9) ◽  
pp. 1379 ◽  
Author(s):  
Yi-Ting Cheng ◽  
Ankit Patel ◽  
Chenglu Wen ◽  
Darcy Bullock ◽  
Ayman Habib

Lane markings are one of the essential elements of road information and are useful for a wide range of transportation applications. Several studies have been conducted to extract lane markings through intensity thresholding of Light Detection and Ranging (LiDAR) point clouds acquired by mobile mapping systems (MMS). This paper proposes an intensity thresholding strategy using unsupervised intensity normalization and a deep learning strategy using automatically labeled training data for lane marking extraction. For comparative evaluation, strategies based on original intensity thresholding and on deep learning with manually established labels are also implemented. A pavement surface-based assessment of lane marking extraction by the four strategies is conducted in asphalt and concrete pavement areas covered by an MMS equipped with multiple LiDAR scanners. Additionally, the extracted lane markings are used for lane width estimation and for reporting lane marking gaps along various highways. The normalized intensity thresholding leads to better lane marking extraction, with an F1-score of 78.9%, compared with the original intensity thresholding, with an F1-score of 72.3%. On the other hand, the deep learning model trained with automatically generated labels achieves a higher F1-score of 85.9% than the one trained on manually established labels, with an F1-score of 75.1%. In the concrete pavement areas, the normalized intensity thresholding and both deep learning strategies obtain better lane marking extraction (i.e., lane markings along longer segments of the highway are extracted) than the original intensity thresholding approach. For the lane width results, the two deep learning models yield more estimates than the intensity thresholding strategies, especially in areas with poor edge lane markings, owing to their higher recall rates. The outcome of the proposed strategies is used to develop a framework for reporting lane marking gap regions, which can subsequently be visualized in RGB imagery to identify their cause.
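One simple way to mimic the normalized intensity thresholding described above is sketched below in Python: rescale raw intensities with robust percentiles so pavement background and retroreflective paint become comparable across scans, then keep returns above a fixed normalized threshold. The percentile choices, threshold and synthetic data are placeholders; the paper's unsupervised normalization per pavement surface and scanner is not reproduced.

```python
import numpy as np

def extract_lane_marking_candidates(points, intensity, threshold=0.8):
    # Rescale raw intensities with robust percentiles (pavement median vs.
    # bright returns), then keep points whose normalised intensity exceeds a
    # fixed threshold as lane marking candidates.
    lo, hi = np.percentile(intensity, [50, 99])
    normalised = np.clip((intensity - lo) / (hi - lo + 1e-9), 0.0, 1.0)
    mask = normalised > threshold
    return points[mask], normalised[mask]

# Example with synthetic data: dim pavement returns plus a brighter stripe
# standing in for retroreflective paint.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(1000, 3))
inten = rng.normal(20, 3, size=1000)
inten[:50] += 40
markings, _ = extract_lane_marking_candidates(pts, inten)
print(len(markings))
```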

