Roof Shape Classification from LiDAR and Satellite Image Data Fusion Using Supervised Learning

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3960 ◽  
Author(s):  
Jeremy Castagno ◽  
Ella Atkins

Geographic information systems (GIS) provide accurate maps of terrain, roads, waterways, and building footprints and heights. Aircraft, particularly small unmanned aircraft systems (UAS), can exploit this information, together with additional data such as building roof structure, to improve navigation accuracy and safely perform contingency landings, especially in urban regions. However, building roof structure is not fully provided in maps. This paper proposes a method to automatically label building roof shape from publicly available GIS data. Satellite imagery and airborne LiDAR data are processed and manually labeled to create a diverse annotated roof image dataset for small to large urban cities. Multiple convolutional neural network (CNN) architectures are trained and tested, with the best-performing networks providing a condensed feature set for support vector machine and decision tree classifiers. Satellite image and LiDAR data fusion is shown to provide greater classification accuracy than either data type alone. Adjusting model confidence thresholds leads to significant increases in model precision. Networks trained on roof data from Witten, Germany and Manhattan (New York City) are evaluated on independent data from these cities and from Ann Arbor, Michigan.
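The confidence-thresholding step mentioned above can be sketched as follows. This is a minimal illustration of the precision/coverage trade-off, not the authors' implementation; the predictions, confidences, and threshold values are hypothetical:

```python
def precision_at_threshold(predictions, confidences, labels, tau):
    """Precision over only those predictions whose confidence >= tau.

    Raising tau trades coverage for precision: low-confidence predictions
    are abstained from rather than counted toward the error rate.
    """
    kept = [(p, y) for p, c, y in zip(predictions, confidences, labels) if c >= tau]
    if not kept:
        return None, 0.0  # nothing survives the threshold
    correct = sum(1 for p, y in kept if p == y)
    coverage = len(kept) / len(predictions)
    return correct / len(kept), coverage

# Hypothetical roof-shape predictions ("gable", "hip", "flat"):
preds = ["gable", "hip", "flat", "gable", "hip"]
confs = [0.95, 0.55, 0.90, 0.40, 0.85]
truth = ["gable", "flat", "flat", "hip", "hip"]

p_all, _ = precision_at_threshold(preds, confs, truth, 0.0)   # precision 0.6, full coverage
p_hi, cov = precision_at_threshold(preds, confs, truth, 0.8)  # precision 1.0, coverage 0.6
```

Filtering out the two low-confidence (and here, incorrect) predictions raises precision from 0.6 to 1.0 at the cost of answering on only 60% of the roofs.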

Author(s):  
A. P. Dal Poz

This paper compares the paradigms of LiDAR and aerophotogrammetry in the context of building extraction and briefly discusses a photogrammetric strategy for refining building roof polyhedrons previously extracted from LiDAR data. In general, empirical and theoretical studies have confirmed that LiDAR-based methodologies are more suitable for extracting planar roof faces and roof ridges, whereas aerophotogrammetric methodologies are more suitable for extracting building roof outlines. To exemplify how these complementary properties can be exploited, a photogrammetric method is presented for refining 3D building roof contours extracted from airborne LiDAR data. Examples of the application of this refinement approach are provided.


Author(s):  
Evangelos Maltezos ◽  
Charalabos Ioannidis

This study aims to automatically extract building roof planes from airborne LiDAR data by applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps: detection of building points, plane detection, and refinement. For the detection of building points, vegetated areas are first segmented from the scene content, and the bare earth is extracted afterwards. Automatic plane detection for each building is performed by applying extensions of the RHT that add constraint criteria during the random selection of the three points, aiming at optimum adaptation to the building rooftops, together with a simple accumulator design that efficiently detects the prominent planes. The refinement of the plane detection is conducted based on the relationship between neighbouring planes, the locality of each point, and the use of additional information. An indicative experimental comparison is carried out to verify the advantages of the extended RHT over the 3D Standard Hough Transform (SHT), and the sensitivity of the proposed extensions and accumulator design is examined in terms of quality and computational time relative to the default RHT. Furthermore, a comparison between the extended RHT and RANSAC is carried out. The plane detection results illustrate the potential of the proposed extended RHT in terms of robustness and efficiency for several applications.
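The core RHT voting loop can be sketched as below: sample three points, fit a plane, and vote in a quantized parameter accumulator. This is the baseline randomized transform only; the paper's extensions (constraint criteria on the triplet selection, the specific accumulator design) are not reproduced, and the bin sizes here are illustrative:

```python
import math
import random

def plane_from_triplet(p1, p2, p3):
    """Unit normal and offset of the plane through three points, or None if collinear."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    if norm < 1e-9:
        return None
    n = [c / norm for c in n]
    if n[2] < 0:  # canonical orientation so (n, rho) is unique
        n = [-c for c in n]
    rho = sum(n[i] * p1[i] for i in range(3))
    return n, rho

def randomized_hough_plane(points, iterations=500, angle_bin=0.1, rho_bin=0.25, seed=1):
    """Vote for quantized (theta, phi, rho) plane parameters over random triplets."""
    rng = random.Random(seed)
    accumulator = {}
    for _ in range(iterations):
        result = plane_from_triplet(*rng.sample(points, 3))
        if result is None:
            continue
        n, rho = result
        theta = math.acos(n[2])  # tilt from vertical
        phi = math.atan2(n[1], n[0]) if theta > 1e-9 else 0.0  # azimuth undefined when horizontal
        cell = (round(theta / angle_bin), round(phi / angle_bin), round(rho / rho_bin))
        accumulator[cell] = accumulator.get(cell, 0) + 1
    return max(accumulator, key=accumulator.get)  # most-voted plane cell

# A flat "roof" at height z = 1: the winning cell is theta = 0, phi = 0, rho = 1/0.25 = 4.
roof = [(float(x), float(y), 1.0) for x in range(5) for y in range(5)]
```

The extended RHT in the paper constrains which triplets may vote (so votes concentrate on genuine rooftop planes) rather than sampling uniformly as this sketch does.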


Author(s):  
H. Rastiveis ◽  
N. Khodaverdi zahraee ◽  
A. Jouybari

The collapse of buildings during earthquakes is a major cause of human casualties. Furthermore, the threat of earthquakes will increase with growing urbanization, and millions of people will be vulnerable to them. Therefore, building damage detection has gained increasing attention from the scientific community. The advent of the Light Detection And Ranging (LiDAR) technique makes it possible to detect and assess building damage in the aftermath of earthquake disasters. The purpose of this paper is to propose and implement an object-based approach for mapping destroyed buildings after an earthquake using LiDAR data. For this purpose, buildings are first extracted from the pre-event building vector map, and multi-resolution segmentation of the post-event LiDAR data is performed. The image objects obtained from the post-event LiDAR data are then located on the pre-event satellite image. After that, appropriate features, which better discriminate between damaged and undamaged buildings, are calculated for all the image objects in both datasets. Finally, appropriate training samples are selected and imported into an object-based support vector machine (SVM) classification technique for detecting the damaged building areas. The proposed method was tested on a dataset from the 2010 earthquake in Port-au-Prince, Haiti. Quantitative evaluation shows an overall accuracy of 92% for this method.
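The object-level feature computation described above can be sketched as follows. The feature names, thresholds, and the rule-based stand-in for the trained SVM are all illustrative assumptions, not the paper's actual feature set or classifier:

```python
def object_features(pre_height, post_points_z):
    """Per-object features contrasting the pre-event building height with
    post-event LiDAR returns inside the object; names are illustrative."""
    mean_post = sum(post_points_z) / len(post_points_z)
    variance = sum((z - mean_post) ** 2 for z in post_points_z) / len(post_points_z)
    return {
        "height_loss": pre_height - mean_post,  # collapsed roofs lose height
        "roughness": variance,                  # rubble is rougher than an intact roof
    }

def label_damage(features, loss_threshold=2.0, roughness_threshold=1.0):
    """Toy stand-in for the trained SVM: flag an object as damaged when it
    shows marked height loss or high surface roughness."""
    return (features["height_loss"] > loss_threshold
            or features["roughness"] > roughness_threshold)

# Hypothetical objects: pre-event building height 10 m in both cases.
intact = object_features(10.0, [9.9, 10.1, 10.0, 9.8])
collapsed = object_features(10.0, [4.0, 6.5, 3.2, 5.8])
```

In the paper these per-object features feed training samples into an SVM rather than fixed thresholds; the point of the sketch is only that damage signatures are computed per segmented object, not per pixel.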


2020 ◽  
Vol 12 (21) ◽  
pp. 3506
Author(s):  
Nuria Sanchez-Lopez ◽  
Luigi Boschetti ◽  
Andrew T. Hudak ◽  
Steven Hancock ◽  
Laura I. Duncanson

Stand-level maps of past forest disturbances (expressed as time since disturbance, TSD) are needed to model forest ecosystem processes, but the conventional approaches based on remotely sensed satellite data can only extend as far back as the first available satellite observations. Stand-level analysis of airborne LiDAR data has been demonstrated to accurately estimate long-term TSD (~100 years), but large-scale coverage of airborne LiDAR remains costly. NASA’s spaceborne LiDAR Global Ecosystem Dynamics Investigation (GEDI) instrument, launched in December 2018, is providing billions of measurements of tropical and temperate forest canopies around the globe. GEDI is a spatial sampling instrument and, as such, does not provide wall-to-wall data. GEDI’s lasers illuminate ground footprints, which are separated by ~600 m across-track and ~60 m along-track, so new approaches are needed to generate wall-to-wall maps from the discrete measurements. In this paper, we studied the feasibility of a data fusion approach between GEDI and Landsat for wall-to-wall mapping of TSD. We tested the methodology on a ~52,500-ha area located in central Idaho (USA), where an extensive record of stand-replacing disturbances is available, starting in 1870. GEDI data were simulated over the nominal two-year planned mission lifetime from airborne LiDAR data and used for TSD estimation using a random forest (RF) classifier. Image segmentation was performed on Landsat-8 data, obtaining image-objects representing forest stands needed for the spatial extrapolation of estimated TSD from the discrete GEDI locations. We quantified the influence of (1) the forest stand map delineation, (2) the sample size of the training dataset, and (3) the number of GEDI footprints per stand on the accuracy of estimated TSD. 
The results show that GEDI-Landsat data fusion would allow for TSD estimation in stands covering ~95% of the study area, having the potential to reconstruct the long-term disturbance history of temperate even-aged forests with accuracy (median root mean square deviation = 22.14 years, median BIAS = 1.70 years, 60.13% of stands classified within 10 years of the reference disturbance date) comparable to the results obtained in the same study area with airborne LiDAR.
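The spatial-extrapolation step — carrying footprint-level TSD estimates out to the Landsat-segmented stands that contain them — can be sketched as below. The stand IDs and TSD values are hypothetical, and a simple per-stand majority vote stands in for the paper's RF-based estimation:

```python
from collections import Counter, defaultdict

def extrapolate_tsd(footprints):
    """Assign each stand the most frequent TSD class among its GEDI footprints.

    footprints: iterable of (stand_id, tsd_estimate) pairs, one per GEDI shot.
    Stands containing no footprints simply do not appear in the output,
    which is why footprint density per stand matters for coverage.
    """
    by_stand = defaultdict(list)
    for stand_id, tsd in footprints:
        by_stand[stand_id].append(tsd)
    return {stand: Counter(vals).most_common(1)[0][0]
            for stand, vals in by_stand.items()}

# Hypothetical footprint-level estimates (stand id, TSD in years):
shots = [(1, 30), (1, 30), (1, 40), (2, 100), (2, 100), (3, 10)]
stand_tsd = extrapolate_tsd(shots)  # {1: 30, 2: 100, 3: 10}
```

This also makes concrete why the authors quantify sensitivity to the number of GEDI footprints per stand: a stand with a single (possibly erroneous) footprint gets that footprint's TSD outright.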


2020 ◽  
Vol 6 (2) ◽  
pp. 122-138
Author(s):  
Oluibukun Gbenga Ajayi ◽  
Mark Palmer

This study presents the effect of image data sources on the topographic modelling of part of the National Trust site located at Weston-super-Mare, Bristol, United Kingdom, covering an approximate area of 1.82 hectares. The accuracy of the DEMs generated from 1 m resolution and 2 m resolution LiDAR data, together with the accuracy of the DEMs generated from UAV images acquired at different altitudes, is analysed using the 1 m LiDAR DEM as the reference for the accuracy assessment. Using the NSSDA methodology, the horizontal and vertical accuracies of the DEMs generated from each of the four sources were computed, while a paired-sample t-test was conducted to ascertain whether statistically significant differences exist between the means of the X, Y, and Z coordinates of the check points. The results show that, with an RMSE of -0.0101499 and a horizontal accuracy of -0.175674686 m, the planimetric coordinates extracted from the 2 m LiDAR DEM were more accurate than those extracted from the UAV-based DEMs, whereas the UAV-based DEMs proved more accurate than the 2 m LiDAR DEM in terms of altimetric coordinates. Among the UAV results, the DEM generated from images acquired at a 50 m altitude gave the most accurate result compared with the vertical accuracies obtained from the DEMs generated from images acquired at 30 m and 70 m flight heights. These findings are consistent with the results of the statistical analysis at the 95% confidence level.
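The NSSDA computation referenced above follows the FGDC standard: the 95%-confidence horizontal accuracy statistic is 1.7308 × RMSEr (valid when RMSEx and RMSEy are similar) and the vertical statistic is 1.9600 × RMSEz. A minimal sketch, with hypothetical check-point residuals:

```python
import math

def rmse(errors):
    """Root mean square error of a list of residuals."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def nssda_accuracies(dx, dy, dz):
    """NSSDA 95%-confidence accuracy statistics from check-point residuals (metres).

    Horizontal: 1.7308 * RMSE_r, assuming RMSE_x and RMSE_y are similar.
    Vertical:   1.9600 * RMSE_z (normal-error assumption).
    """
    rmse_r = math.sqrt(rmse(dx) ** 2 + rmse(dy) ** 2)
    return 1.7308 * rmse_r, 1.9600 * rmse(dz)

# Hypothetical residuals (DEM minus reference) at five check points:
dx = [0.10, -0.05, 0.08, -0.12, 0.03]
dy = [0.07, 0.09, -0.04, 0.02, -0.11]
dz = [0.20, -0.15, 0.18, -0.22, 0.10]
horizontal_acc, vertical_acc = nssda_accuracies(dx, dy, dz)
```

Note that an RMSE computed this way is non-negative by construction, since it is the square root of a mean of squares.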


2021 ◽  
Vol 13 (17) ◽  
pp. 3428
Author(s):  
Hangkai You ◽  
Shihua Li ◽  
Yifan Xu ◽  
Ze He ◽  
Di Wang

Tree information in urban areas plays a significant role in many fields of study, such as ecology and environmental management. Airborne LiDAR scanning (ALS) excels at the fast and efficient acquisition of spatial information over urban-scale areas. Tree extraction from ALS data is an essential part of tree structural studies. Current raster-based methods that use canopy height models (CHMs) suffer from the loss of 3D structure information, whereas existing point-based methods are not robust in complex environments. Aiming to make full use of the canopy’s 3D structure information provided by point cloud data, while ensuring suitability in complex scenes, this paper proposes a new point-based method for tree extraction based on 3D morphological features. Considering the elevation deviations of the ALS data, we propose a neighborhood search method to filter out the ground and flat-roof points. A coarse extraction method, combining planar projection with a point density-filtering algorithm, is applied to filter out distracting objects such as utility poles and cars. After that, a Euclidean cluster extraction (ECE) algorithm is used as an optimization strategy for the coarse extraction. In order to verify the robustness and accuracy of the method, airborne LiDAR data from Zhangye, Gansu, China and unmanned aerial vehicle (UAV) LiDAR data from Xinyang, Henan, China were tested in this study. The experimental results demonstrated that our method is suitable for extracting trees in complex urban scenes with either high or low point densities. The extraction accuracies obtained for the airborne LiDAR data and the UAV LiDAR data were 99.4% and 99.2%, respectively. In addition, a further study found that the aberrant vertical structure of artificially pruned canopies was the main cause of error. Our method achieved desirable results in different scenes with only one adjustable parameter, making it an easy-to-use method for urban area studies.
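The Euclidean cluster extraction (ECE) step used to optimize the coarse extraction can be sketched as a breadth-first grouping of points connected by chains of neighbours closer than a tolerance. This toy version (the points and tolerance are illustrative) omits the k-d tree acceleration a production implementation such as PCL's would use:

```python
import math
from collections import deque

def euclidean_clusters(points, tolerance):
    """Group 3D points into clusters: two points share a cluster if a chain
    of neighbours closer than `tolerance` connects them."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            # O(n) neighbour scan; a real implementation uses a k-d tree here.
            neighbours = [j for j in unvisited
                          if math.dist(points[i], points[j]) < tolerance]
            for j in neighbours:
                unvisited.remove(j)
                queue.append(j)
                cluster.append(j)
        clusters.append(sorted(cluster))
    return clusters

# Two well-separated "tree crowns" and one isolated point:
pts = [(0.0, 0.0, 5.0), (0.5, 0.0, 5.0), (0.0, 0.5, 5.5),  # crown A
       (10.0, 10.0, 6.0), (10.5, 10.0, 6.2),               # crown B
       (30.0, 30.0, 2.0)]                                   # isolated point
groups = euclidean_clusters(pts, tolerance=2.0)             # 3 clusters
```

The tolerance is the kind of single adjustable distance parameter that makes such clustering easy to tune per scene.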


Fire ◽  
2022 ◽  
Vol 5 (1) ◽  
pp. 5
Author(s):  
Michael J. Campbell ◽  
Philip E. Dennison ◽  
Matthew P. Thompson ◽  
Bret W. Butler

Safety zones (SZs) are critical tools that can be used by wildland firefighters to avoid injury or fatality when engaging a fire. Effective SZs provide safe separation distance (SSD) from surrounding flames, ensuring that a fire’s heat cannot cause burn injury to firefighters within the SZ. Evaluating SSD on the ground can be challenging, and underestimating SSD can be fatal. We introduce a new online tool for mapping SSD based on vegetation height, terrain slope, wind speed, and burning condition: the Safe Separation Distance Evaluator (SSDE). It allows users to draw a potential SZ polygon and estimate SSD and the extent to which that SZ polygon may be suitable, given the local landscape, weather, and fire conditions. We begin by describing the algorithm that underlies SSDE. Given the importance of vegetation height for assessing SSD, we then describe an analysis that compares LANDFIRE Existing Vegetation Height and a recent Global Ecosystem Dynamics Investigation (GEDI) and Landsat 8 Operational Land Imager (OLI) satellite image-driven forest height dataset to vegetation heights derived from airborne lidar data in three areas of the Western US. This analysis revealed that both LANDFIRE and GEDI/Landsat tended to underestimate vegetation heights, which translates into an underestimation of SSD. To rectify this underestimation, we performed a bias-correction procedure that adjusted vegetation heights to more closely resemble those of the lidar data. SSDE is a tool that can provide valuable safety information to wildland fire personnel who are charged with the critical responsibility of protecting the public and landscapes from increasingly intense and frequent fires in a changing climate. However, as it is based on data that possess inherent uncertainty, it is essential that all SZ polygons evaluated using SSDE are validated on the ground prior to use.
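As a rough illustration of the kind of rule a safe separation distance evaluator applies, earlier SSD guidance expresses SSD as a multiple of surrounding vegetation height scaled by a slope/wind adjustment factor. The sketch below assumes SSD ≈ 8 × vegetation height × Δ; the coefficient and the Δ values are illustrative assumptions, not the SSDE algorithm itself:

```python
def safe_separation_distance(veg_height_m, delta=1.0, coefficient=8.0):
    """Safe separation distance as a multiple of surrounding vegetation height.

    delta (>= 1) stands in for a combined slope/wind/burn-condition adjustment;
    both delta and the coefficient of 8 are illustrative assumptions here.
    """
    return coefficient * veg_height_m * delta

def zone_is_adequate(zone_radius_m, veg_height_m, delta=1.0):
    """A circular safety zone is adequate if its radius meets or exceeds SSD."""
    return zone_radius_m >= safe_separation_distance(veg_height_m, delta)

ssd_flat_calm = safe_separation_distance(10.0)         # 80 m for 10 m vegetation
ssd_windy = safe_separation_distance(10.0, delta=2.0)  # 160 m under the adjustment
```

This also shows why the height bias matters: a bias-corrected vegetation height feeds directly into `veg_height_m`, and an underestimated height shrinks the computed SSD, making a zone look safer than it is.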

