MODELLING MEAN ALBEDO OF INDIVIDUAL ROOFS IN COMPLEX URBAN AREAS USING SATELLITE IMAGES AND AIRBORNE LASER SCANNING POINT CLOUDS

Author(s):  
B. Kalantar ◽  
S. Mansor ◽  
Z. Khuzaimah ◽  
M. Ibrahim Sameen ◽  
B. Pradhan

Knowledge of surface albedo at the individual roof scale is important for mitigating urban heat islands and understanding urban climate change. This study presents a method for quantifying the surface albedo of individual roofs in a complex urban area by integrating Landsat 8 and airborne LiDAR data. First, individual roofs were extracted from airborne LiDAR data and orthophotos using optimized segmentation and supervised object-based image analysis (OBIA). A support vector machine (SVM) was used as the classifier in the OBIA process for extracting individual roofs. The user-defined parameters required by the SVM classifier were selected using the v-fold cross-validation method. After that, surface albedo was calculated for each individual roof from the Landsat images. Finally, thematic maps of the mean surface albedo of individual roofs were generated in GIS and the results were discussed. Results showed that buildings, varying in roofing material type and condition, cover 35% of the study area. The calculated surface albedo of buildings in the study area ranged from 0.16 to 0.65. More importantly, the results indicated that the type and condition of roofing materials significantly affect the mean surface albedo. The mean albedo of new concrete, old concrete, new steel, and old steel roofs was found to be 0.38, 0.26, 0.51, and 0.44, respectively. Replacing old roofing materials with new ones should therefore be highly prioritized.
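As a minimal sketch of the parameter-selection step described above, the snippet below tunes an RBF-kernel SVM's C and gamma with v-fold (here 5-fold) cross-validation in scikit-learn. The feature and label files and the candidate grid are hypothetical placeholders, not the study's actual configuration.

```python
# Hedged sketch: v-fold cross-validation for SVM hyperparameters.
# The .npy files and the candidate grid are hypothetical placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

X = np.load("object_features.npy")   # hypothetical: per-object features (height, intensity, spectral means)
y = np.load("object_labels.npy")     # hypothetical: roof vs. non-roof labels

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)  # v = 5 folds
search.fit(X, y)
print("Selected SVM parameters:", search.best_params_)
```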

2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Wuming Zhang ◽  
Shangshu Cai ◽  
Xinlian Liang ◽  
Jie Shao ◽  
Ronghai Hu ◽  
...  

Abstract. Background: The universal occurrence of randomly distributed dark holes (i.e., data pits appearing within the tree crown) in LiDAR-derived canopy height models (CHMs) negatively affects the accuracy of extracted forest inventory parameters. Methods: We develop an algorithm based on cloth simulation for constructing a pit-free CHM. Results: The proposed algorithm effectively fills data pits of various sizes whilst preserving canopy details. Our pit-free CHMs derived from point clouds with different proportions of data pits are remarkably better than those constructed using other algorithms, as evidenced by the lowest average root mean square error (0.4981 m) between the reference CHMs and the constructed pit-free CHMs. Moreover, our pit-free CHMs show the best overall performance in terms of maximum tree height estimation (average bias = 0.9674 m). Conclusion: The proposed algorithm can be adopted when working with LiDAR data of different quality and shows high potential in forestry applications.
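For illustration only, the sketch below fills CHM data pits by local-median replacement; this is a simple stand-in rather than the cloth-simulation algorithm the paper proposes, and the window size and pit-depth threshold are assumptions.

```python
# Illustrative pit filling in a CHM raster by local-median replacement.
# NOT the cloth-simulation method of the paper; thresholds are assumptions.
import numpy as np
from scipy import ndimage

def fill_pits(chm, window=3, pit_depth=2.0):
    """Replace CHM cells that sit far below their neighbourhood median."""
    local_median = ndimage.median_filter(chm, size=window)
    pits = (local_median - chm) > pit_depth      # cells much lower than their surroundings
    filled = chm.copy()
    filled[pits] = local_median[pits]
    return filled
```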


2021 ◽  
Vol 13 (18) ◽  
pp. 3766
Author(s):  
Zhenyang Hui ◽  
Zhuoxuan Li ◽  
Penggen Cheng ◽  
Yao Yevenyo Ziggah ◽  
JunLin Fan

Building extraction from airborne Light Detection and Ranging (LiDAR) point clouds is a significant step in the process of digital urban construction. Although existing building extraction methods perform well in simple urban environments, they cannot achieve satisfactory results in complicated city environments with irregular building shapes or varying building sizes. To address these challenges, a building extraction method for airborne LiDAR data based on multi-constraints graph segmentation was proposed in this paper. The proposed method converts point-based building extraction into object-based building extraction through multi-constraints graph segmentation. The initial building points were derived according to the spatial geometric features of the different object primitives. Finally, a multi-scale progressive growth optimization method was proposed to recover omitted building points and improve the completeness of building extraction. The proposed method was tested and validated using three datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS). Experimental results show that the proposed method achieves the best building extraction results, outperforming ten other investigated building extraction methods in terms of both average quality and average F1 score.
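As a simplified illustration of the object-forming idea, the sketch below builds a k-nearest-neighbour graph over the points, removes edges that violate a height-difference constraint, and takes connected components as object primitives. The single constraint and its threshold are illustrative assumptions, not the paper's full multi-constraints formulation.

```python
# Simplified constraint-based graph segmentation of a point cloud (illustrative only).
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def segment_points(xyz, k=8, max_dz=0.5):
    """Label points by connected components of a k-NN graph with a height constraint."""
    tree = cKDTree(xyz)
    _, idx = tree.query(xyz, k=k + 1)                 # first neighbour is the point itself
    rows, cols = [], []
    for i in range(xyz.shape[0]):
        for j in idx[i, 1:]:
            if abs(xyz[i, 2] - xyz[j, 2]) < max_dz:   # keep edge only if height difference is small
                rows.append(i)
                cols.append(j)
    graph = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(xyz.shape[0],) * 2)
    _, labels = connected_components(graph, directed=False)
    return labels                                     # one segment label per point
```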


Author(s):  
Shenman Zhang ◽  
Jie Shan ◽  
Zhichao Zhang ◽  
Jixing Yan ◽  
Yaolin Hou

A complete building model reconstruction needs data collected from both the air and the ground. The former often has sparse coverage on building façades, while the latter is usually unable to observe building rooftops. To address the missing-data issues of building reconstruction from a single data source, we describe an approach for complete building reconstruction that integrates airborne LiDAR data and ground smartphone imagery. First, by taking advantage of the GPS and digital compass information embedded in the image metadata of smartphones, we are able to find the airborne LiDAR point clouds of the buildings that correspond to the images. In the next step, Structure-from-Motion and dense multi-view stereo algorithms are applied to generate a building point cloud from multiple ground images. The third step extracts building outlines from the LiDAR point cloud and the ground image point cloud, respectively. An automated correspondence between these two sets of building outlines allows us to achieve a precise registration and combination of the two point clouds, which ultimately results in a complete and full-resolution building model. The developed approach overcomes the problem of sparse points on building façades in airborne LiDAR and the lack of rooftops in ground images, so that the merits of both datasets are utilized.
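As a minimal sketch of the registration step, assuming corresponding 2-D outline corner points have already been matched, the function below estimates a rigid transform between the two outlines with the standard Kabsch/Procrustes solution; it illustrates the idea rather than reproducing the paper's exact algorithm.

```python
# Rigid 2-D alignment of matched outline points (Kabsch/Procrustes sketch).
import numpy as np

def rigid_transform_2d(P, Q):
    """Return rotation R and translation t so that (R @ P.T).T + t approximates Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                    # cross-covariance of centred point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t
```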


2020 ◽  
Vol 12 (21) ◽  
pp. 3506
Author(s):  
Nuria Sanchez-Lopez ◽  
Luigi Boschetti ◽  
Andrew T. Hudak ◽  
Steven Hancock ◽  
Laura I. Duncanson

Stand-level maps of past forest disturbances (expressed as time since disturbance, TSD) are needed to model forest ecosystem processes, but the conventional approaches based on remotely sensed satellite data can only extend as far back as the first available satellite observations. Stand-level analysis of airborne LiDAR data has been demonstrated to accurately estimate long-term TSD (~100 years), but large-scale coverage of airborne LiDAR remains costly. NASA’s spaceborne LiDAR Global Ecosystem Dynamics Investigation (GEDI) instrument, launched in December 2018, is providing billions of measurements of tropical and temperate forest canopies around the globe. GEDI is a spatial sampling instrument and, as such, does not provide wall-to-wall data. GEDI’s lasers illuminate ground footprints, which are separated by ~600 m across-track and ~60 m along-track, so new approaches are needed to generate wall-to-wall maps from the discrete measurements. In this paper, we studied the feasibility of a data fusion approach between GEDI and Landsat for wall-to-wall mapping of TSD. We tested the methodology on a ~52,500-ha area located in central Idaho (USA), where an extensive record of stand-replacing disturbances is available, starting in 1870. GEDI data were simulated over the nominal two-year planned mission lifetime from airborne LiDAR data and used for TSD estimation using a random forest (RF) classifier. Image segmentation was performed on Landsat-8 data, obtaining image-objects representing forest stands needed for the spatial extrapolation of estimated TSD from the discrete GEDI locations. We quantified the influence of (1) the forest stand map delineation, (2) the sample size of the training dataset, and (3) the number of GEDI footprints per stand on the accuracy of estimated TSD. The results show that GEDI-Landsat data fusion would allow for TSD estimation in stands covering ~95% of the study area, having the potential to reconstruct the long-term disturbance history of temperate even-aged forests with accuracy (median root mean square deviation = 22.14 years, median BIAS = 1.70 years, 60.13% of stands classified within 10 years of the reference disturbance date) comparable to the results obtained in the same study area with airborne LiDAR.
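A hedged sketch of the footprint-to-stand workflow is given below: a random forest is trained on footprint-level canopy metrics to predict TSD classes, and each Landsat-derived stand then receives the majority vote of the footprints it contains. The file names, metrics, class encoding, and hyperparameters are placeholders, not the study's actual configuration.

```python
# Hedged sketch: footprint-level random forest + per-stand majority vote.
# All input files are hypothetical; TSD classes are assumed to be small non-negative integers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

train_metrics = np.load("gedi_train_metrics.npy")   # hypothetical: n_train x m canopy metrics
train_tsd = np.load("gedi_train_tsd.npy")           # hypothetical: TSD class per training footprint
test_metrics = np.load("gedi_test_metrics.npy")     # hypothetical: n_test x m canopy metrics
stand_id = np.load("footprint_stand_id.npy")        # hypothetical: stand label per test footprint

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(train_metrics, train_tsd)
footprint_pred = rf.predict(test_metrics)

# Majority vote of footprint predictions within each image-object (stand)
stand_tsd = {s: np.bincount(footprint_pred[stand_id == s]).argmax()
             for s in np.unique(stand_id)}
```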



2020 ◽  
Vol 12 (9) ◽  
pp. 1363 ◽  
Author(s):  
Li Li ◽  
Jian Yao ◽  
Jingmin Tu ◽  
Xinyi Liu ◽  
Yinxuan Li ◽  
...  

Roof plane segmentation is one of the key issues in constructing accurate three-dimensional building models from airborne light detection and ranging (LiDAR) data. Region growing is one of the most widely used methods to detect roof planes: it first selects one point or region as a seed, and then iteratively expands to neighboring points. However, region growing has two problems. The first is that it is hard to select robust seed points; the other is that it is difficult to detect accurate boundaries between two roof planes. In this paper, to solve these two problems, we propose a novel approach for segmenting roof planes from airborne LiDAR point clouds using hierarchical clustering and boundary relabeling. For the first problem, we first extract an initial set of robust planar patches via an octree-based method, and then apply hierarchical clustering to iteratively merge adjacent planar patches belonging to the same plane until the merging cost exceeds a predefined threshold. These merged planar patches are regarded as the robust seed patches for the subsequent region growing. The coarse roof planes are generated by adding the non-planar points to the seed patches in sequence using region growing. However, the boundaries of the coarse roof planes may be inaccurate. To solve the second problem, we refine the boundaries between adjacent coarse planes by relabeling the boundary points. As a result, we can effectively extract high-quality roof planes with smooth and accurate boundaries from airborne LiDAR data. We conducted experiments on two datasets captured over Vaihingen and Wuhan using a Leica ALS50 and a Trimble Harrier 68i, respectively. The experimental results show that our proposed approach outperforms several representative approaches in both visual quality and quantitative metrics.
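As a rough illustration of the patch-merging idea (not the paper's cost function), the sketch below groups planar patches by clustering their plane parameters, a unit normal plus the plane offset d, with a distance threshold standing in for the merging-cost threshold. The input files and the threshold value are assumptions.

```python
# Illustrative grouping of planar patches that likely share a roof plane.
# A distance threshold on plane parameters stands in for the paper's merging cost.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

normals = np.load("patch_normals.npy")              # hypothetical: n x 3 unit normals per patch
offsets = np.load("patch_offsets.npy")              # hypothetical: n plane offsets d per patch

features = np.hstack([normals, offsets.reshape(-1, 1)])
clustering = AgglomerativeClustering(n_clusters=None, distance_threshold=0.1,
                                     linkage="average")
plane_labels = clustering.fit_predict(features)     # patches sharing a label form one seed plane
```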


2019 ◽  
Vol 11 (19) ◽  
pp. 2256 ◽  
Author(s):  
Jorge Martínez Sánchez ◽  
Álvaro Váquez Álvarez ◽  
David López Vilariño ◽  
Francisco Fernández Rivera ◽  
José Carlos Cabaleiro Domínguez ◽  
...  

Over the last two decades, a wide range of applications have been developed from Light Detection and Ranging (LiDAR) point clouds. Most LiDAR-derived products require a distinction between ground and non-ground points. Because of this, ground filtering is one of the most studied topics in the literature, and robust methods are nowadays available. However, these methods have been designed to work with offline data and are generally not well suited for real-time scenarios. Aiming to address this issue, this paper proposes an efficient method for ground filtering of airborne LiDAR data based on scan-line processing. In our proposal, an iterative 1-D spline interpolation is performed on each scan line sequentially. The final spline knots of a scan line are taken into account for the next scan line, so that valuable 2-D information is also considered without compromising computational efficiency. Points are labelled as ground or non-ground by analysing their residuals to the final spline. When tested against synthetic ground truth, the method yields a mean kappa value of 88.59% and a mean total error of 0.50%. Experiments with real data also show satisfactory results under visual inspection. Performance tests on a workstation show that the method can process up to 1 million points per second. The original implementation was ported to a low-cost development board to demonstrate its feasibility for embedded systems, where throughput was improved by using programmable logic hardware acceleration. The analysis shows that real-time filtering is possible on a high-end board prototype, as it can process, with low energy consumption, the number of points per second that current lightweight scanners acquire.
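A minimal sketch of the per-scan-line idea follows: fit a smoothing spline to the elevations along one scan line, iteratively refit after discarding points that sit well above the spline, and label points by their residual to the final fit. The carry-over of spline knots between scan lines described above is omitted, and the smoothing factor and residual threshold are assumptions.

```python
# Iterative 1-D spline ground filtering for a single scan line (sketch; parameters are assumptions).
import numpy as np
from scipy.interpolate import UnivariateSpline

def filter_scan_line(x, z, smooth=10.0, max_residual=0.3, n_iter=3):
    """x: along-track coordinate, z: elevation; returns a boolean ground mask in input order."""
    order = np.argsort(x)
    xs, zs = x[order], z[order]
    keep = np.ones(len(xs), dtype=bool)
    for _ in range(n_iter):                               # refit on candidate ground points
        spline = UnivariateSpline(xs[keep], zs[keep], s=smooth)
        keep = (zs - spline(xs)) <= max_residual          # discard points well above the spline
    is_ground = np.zeros(len(xs), dtype=bool)
    is_ground[order] = np.abs(zs - spline(xs)) <= max_residual
    return is_ground
```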


2019 ◽  
Vol 11 (11) ◽  
pp. 1263 ◽  
Author(s):  
Wen Xiao ◽  
Aleksandra Zaforemska ◽  
Magdalena Smigaj ◽  
Yunsheng Wang ◽  
Rachel Gaulton

Airborne lidar has been widely used for forest characterization to facilitate forest ecological and management studies. With the availability of increasingly high point densities, individual tree delineation (ITD) from airborne lidar point clouds has become a popular yet challenging topic, due to the complexity and diversity of forests. One important step of ITD is segmentation, for which various methodologies have been studied. Among them, a long-proven image segmentation method, mean shift, has been applied directly to 3D points and has shown promising results. However, existing implementations of the algorithm vary in terms of kernel shape, adaptiveness, and weighting. This paper provides a detailed assessment of the mean shift algorithm for the segmentation of airborne lidar data, and of the effect of crown top detection on the validation of segmentation results. The results from three different datasets revealed that a crown-shaped kernel consistently generates better results (by up to 7 percent) than the other variants, whereas weighting and adaptiveness do not warrant improvements.
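For reference, a basic flat-kernel mean shift clustering of canopy points into crowns can be sketched with scikit-learn as below; the crown-shaped, adaptive, and weighted kernel variants assessed in the paper are not available in this implementation, and the bandwidth value is an assumption.

```python
# Basic (flat-kernel) mean shift clustering of canopy points into tree crowns.
import numpy as np
from sklearn.cluster import MeanShift

points = np.load("canopy_points.npy")          # hypothetical: n x 3 (x, y, z), ground points removed
ms = MeanShift(bandwidth=2.0, bin_seeding=True)
tree_labels = ms.fit_predict(points)           # one cluster label per point, i.e. one candidate crown
print("Detected crowns:", tree_labels.max() + 1)
```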


2019 ◽  
pp. 17 ◽  
Author(s):  
I. Borlaf-Mena ◽  
M. A. Tanase ◽  
A. Gómez-Sal

Dehesas are high-value agroecosystems that benefit from the effect tree cover has on pastures. Such an effect occurs when tree cover is incomplete and homogeneous. Tree cover may be characterized from field data or through visual interpretation of remote sensing data, both time-consuming tasks. An alternative is the extraction of tree cover from aerial imagery using automated methods applied to spectral derivative products (e.g., NDVI) or LiDAR point clouds. This study focuses on assessing and comparing methods for tree cover estimation from high-resolution orthophotos and airborne laser scanning (ALS). RGB image processing based on thresholding the 'Excess Green minus Excess Red' index with the Otsu method produced acceptable results (80%), lower than those obtained by thresholding the digital canopy model derived from the ALS data (87%) or by combining RGB and LiDAR data (87.5%). The RGB information was found to be useful for tree delineation, although very vulnerable to confusion with grass or shrubs. The ALS-based extraction suffered less confusion, as it differentiated between trees and the remaining vegetation using height. These results show that the analysis of historical orthophotographs may be successfully used to evaluate the effects of management changes, while LiDAR data may provide a substantial increase in accuracy for the latter period. Combining RGB and LiDAR data did not result in significant improvements over using LiDAR data alone.
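A short sketch of the RGB-only step, assuming a three-band orthophoto, computes the 'Excess Green minus Excess Red' index from chromatic coordinates and binarises it with Otsu's threshold; the file name is a placeholder.

```python
# ExG - ExR index thresholded with Otsu's method to obtain a tree/vegetation mask (sketch).
import numpy as np
from skimage import io
from skimage.filters import threshold_otsu

rgb = io.imread("orthophoto.tif").astype(float)          # hypothetical three-band orthophoto
total = rgb.sum(axis=2) + 1e-6                           # avoid division by zero
r, g, b = (rgb[..., i] / total for i in range(3))        # chromatic coordinates
exg_exr = (2 * g - r - b) - (1.4 * r - g)                # Excess Green minus Excess Red
mask = exg_exr > threshold_otsu(exg_exr)                 # True where vegetation/tree cover
```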

