Canopy Extraction and Height Estimation of Trees in a Shelter Forest Based on Fusion of an Airborne Multispectral Image and Photogrammetric Point Cloud

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Xuewen Wang ◽  
Qingzhan Zhao ◽  
Feng Han ◽  
Jianxin Zhang ◽  
Ping Jiang

To reduce data acquisition costs, this study proposed a novel method of individual tree height estimation and canopy extraction based on the fusion of an airborne multispectral image and a photogrammetric point cloud. A fixed-wing drone was deployed to acquire true color and multispectral images of a shelter forest. The Structure-from-Motion (SfM) algorithm was used to reconstruct the 3D point cloud of the canopy. The 3D point cloud was filtered to obtain the ground point cloud, which was then interpolated to a Digital Elevation Model (DEM) using a Radial Basis Function Neural Network (RBFNN). The DEM was subtracted from the Digital Surface Model (DSM) generated from the original point cloud to obtain the canopy height model (CHM). The CHM was processed for crown extraction using local maximum filters and watershed segmentation. Then, object-oriented methods were applied to the combination of 12 spectral bands and the CHM for image segmentation. The Support Vector Machine (SVM) algorithm was used to extract the tree crowns. The result of the object-oriented method was vectorized and superimposed on the CHM to estimate the tree heights. Experimental results demonstrated that employing the photogrammetric point cloud is efficient and that the proposed approach has great potential for tree height estimation. The proposed object-oriented method based on the fusion of a multispectral image and the CHM effectively reduced oversegmentation and undersegmentation, increasing the F-score by 0.12–0.17. Our findings also provide a reference for the health and change monitoring of shelter forests.
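A minimal sketch of the CHM and crown-extraction steps described above (not the authors' implementation): the CHM is computed as DSM minus DEM, treetops are detected with a local-maximum filter, and crowns are segmented by watershed. The arrays, thresholds, and window size are assumptions for illustration.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_crowns(dsm, dem, min_tree_height=2.0, min_distance=5):
    """dsm, dem: co-registered 2-D numpy arrays in metres (assumed inputs)."""
    chm = dsm - dem                       # canopy height model
    chm[chm < 0] = 0.0
    # treetops = local maxima above a minimum tree height
    tops = peak_local_max(chm, min_distance=min_distance,
                          threshold_abs=min_tree_height)
    markers = np.zeros(chm.shape, dtype=int)
    markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)
    # watershed on the inverted CHM, restricted to vegetated pixels
    crowns = watershed(-chm, markers, mask=chm >= min_tree_height)
    return chm, crowns

# example with synthetic rasters
dsm = np.random.rand(100, 100) * 3 + 10
dem = np.full((100, 100), 10.0)
chm, crowns = segment_crowns(dsm, dem)
print(crowns.max(), "crowns segmented")
```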

2020 ◽  
Vol 50 (10) ◽  
pp. 1012-1024
Author(s):  
Meimei Wang ◽  
Jiayuan Lin

Individual tree height (ITH) is one of the most important vertical structure parameters of a forest. Field measurement and laser scanning are very expensive for large forests. In this paper, we propose a cost-effective method to acquire ITHs in a forest using the optical overlapping images captured by an unmanned aerial vehicle (UAV). The data sets, including a point cloud, a digital surface model (DSM), and a digital orthorectified map (DOM), were produced from the UAV imagery. The canopy height model (CHM) was obtained by subtracting the digital elevation model (DEM) from the DSM with low vegetation removed. Object-based image analysis was used to extract individual tree crowns (ITCs) from the DOM, and ITHs were initially extracted by overlaying ITC outlines on the CHM. As the extracted ITHs were generally slightly shorter than the measured ITHs, a linear relationship was established between them. The final ITHs of the test site were retrieved by inputting the extracted ITHs into the linear regression model. As a result, the coefficient of determination (R2), the root mean square error (RMSE), the mean absolute error (MAE), and the mean relative error (MRE) of the retrieved ITHs against the measured ITHs were 0.92, 1.08 m, 0.76 m, and 0.08, respectively.
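A hedged sketch of the calibration step described above: fit a linear model between extracted and field-measured tree heights, then report R2, RMSE, MAE, and MRE. The array values here are illustrative only, not data from the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

extracted = np.array([12.1, 15.4, 9.8, 18.2, 14.0]).reshape(-1, 1)  # from CHM
measured  = np.array([12.9, 16.1, 10.5, 19.0, 14.8])                # field data

model = LinearRegression().fit(extracted, measured)   # calibration model
retrieved = model.predict(extracted)                   # final retrieved ITHs

r2   = r2_score(measured, retrieved)
rmse = np.sqrt(mean_squared_error(measured, retrieved))
mae  = mean_absolute_error(measured, retrieved)
mre  = np.mean(np.abs(retrieved - measured) / measured)
print(f"R2={r2:.2f} RMSE={rmse:.2f} m MAE={mae:.2f} m MRE={mre:.2f}")
```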


Forests ◽  
2019 ◽  
Vol 10 (10) ◽  
pp. 931 ◽  
Author(s):  
Hongyu Huang ◽  
Shaodong He ◽  
Chongcheng Chen

Tree height is an important vegetative structural parameter, and its accurate estimation is of significant ecological and commercial value. We collected UAV images of six tree species distributed throughout a subtropical campus during three periods from March to late May, during which some deciduous trees shed all of their leaves and then regrew, while other evergreen trees kept some of their leaves. The UAV imagery was processed by computer vision and photogrammetric software to generate a three-dimensional dense point cloud. Individual tree height information extracted from the dense photogrammetric point cloud was validated against the manually measured reference data. We found that the number of leaves in the canopy affected tree height estimation, especially for deciduous trees. During leaf-off conditions or the early season, when leaves were absent or sparse, it was difficult to reconstruct the 3D canopy structure fully from the UAV images, thus resulting in the underestimation of tree height; the accuracy improved considerably when there were more leaves. For Terminalia mantaly and Ficus virens, the root mean square errors (RMSEs) of tree height estimation reduced from 2.894 and 1.433 m (leaf-off) to 0.729 and 0.597 m (leaf-on), respectively. We provide direct evidence that leaf-on conditions have a positive effect on tree height measurements derived from UAV photogrammetric point clouds. This finding has important implications for forest monitoring, management, and change detection analysis.
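An illustrative sketch (assumptions, not the authors' code) of extracting an individual tree height from a dense photogrammetric point cloud: the height is taken as the highest point inside the crown footprint minus the local ground elevation. The circular crown footprint and synthetic points are simplifications.

```python
import numpy as np

def tree_height(points_xyz, crown_center, crown_radius, ground_z):
    """points_xyz: (N, 3) array; the crown is modelled as a circle for simplicity."""
    dx = points_xyz[:, 0] - crown_center[0]
    dy = points_xyz[:, 1] - crown_center[1]
    inside = dx * dx + dy * dy <= crown_radius ** 2
    if not inside.any():
        return None
    return points_xyz[inside, 2].max() - ground_z   # highest canopy point above ground

# synthetic example
pts = np.column_stack([np.random.uniform(0, 20, 5000),
                       np.random.uniform(0, 20, 5000),
                       np.random.uniform(100, 112, 5000)])
print(tree_height(pts, crown_center=(10, 10), crown_radius=3.0, ground_z=100.0))
```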


Author(s):  
N. Kerle ◽  
F. Nex ◽  
D. Duarte ◽  
A. Vetrivel

Abstract. Structural disaster damage detection and characterisation is one of the oldest remote sensing challenges, and the utility of virtually every type of active and passive sensor deployed on various air- and spaceborne platforms has been assessed. The proliferation and growing sophistication of UAVs in recent years have opened up many new opportunities for damage mapping, due to the high spatial resolution, the resulting stereo images and derivatives, and the flexibility of the platform. We have addressed the problem in the context of two European research projects, RECONASS and INACHUS. In this paper we synthesize and evaluate the progress of 6 years of research focused on advanced image analysis that was driven by progress in computer vision, photogrammetry and machine learning, but also by constraints imposed by the needs of first responders and other civil protection end users. The projects focused on damage to individual buildings caused by seismic activity but also by explosions, and our work centred on the processing of 3D point cloud information acquired from stereo imagery. Initially focusing on the development of both supervised and unsupervised damage detection methods built on advanced texture features and basic classifiers such as Support Vector Machine and Random Forest, the work moved on to the use of deep learning. In particular, the coupling of image-derived features and 3D point cloud information in a Convolutional Neural Network (CNN) proved successful in detecting even subtle damage features. In addition to the detection of standard rubble and debris, CNN-based methods were developed to detect typical façade damage indicators, such as cracks and spalling, with a focus on multi-temporal and multi-scale feature fusion. We further developed a processing pipeline and mobile app to facilitate near-real-time damage mapping. The solutions were tested in a number of pilot experiments and evaluated by a variety of stakeholders.
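A minimal two-branch network sketch in PyTorch, illustrating the general idea of coupling image-derived features with 3D point-cloud-derived features for damage classification. The layer sizes, the 8-dimensional point-cloud feature vector, and the fusion scheme are assumptions, not the architecture used in the projects described above.

```python
import torch
import torch.nn as nn

class FusionDamageNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # image branch: small CNN over RGB patches
        self.img_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # point-cloud branch: MLP over per-patch geometric features
        self.pc_branch = nn.Sequential(
            nn.Linear(8, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU())
        self.classifier = nn.Linear(32 + 32, n_classes)

    def forward(self, img_patch, pc_features):
        fused = torch.cat([self.img_branch(img_patch),
                           self.pc_branch(pc_features)], dim=1)
        return self.classifier(fused)

# example forward pass with dummy data
net = FusionDamageNet()
logits = net(torch.randn(4, 3, 64, 64), torch.randn(4, 8))
print(logits.shape)  # torch.Size([4, 2])
```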


2020 ◽  
Vol 12 (11) ◽  
pp. 1808 ◽  
Author(s):  
Miłosz Mielcarek ◽  
Agnieszka Kamińska ◽  
Krzysztof Stereńczak

The rapid developments in the field of digital aerial photogrammetry (DAP) in recent years have increased interest in the application of DAP data for extracting three-dimensional (3D) models of forest canopies. This technology, however, still requires further investigation to confirm its reliability in estimating forest attributes in complex forest conditions. The main purpose of this study was to evaluate the accuracy of tree height estimation based on a crown height model (CHM) generated from the difference between a DAP-derived digital surface model (DSM) and an airborne laser scanning (ALS)-derived digital terrain model (DTM). The tree heights determined based on the DAP-CHM were compared with ground-based measurements and heights obtained using ALS data only (ALS-CHM). Moreover, tree- and stand-related factors were examined to evaluate the potential influence on the obtained discrepancies between ALS- and DAP-derived heights. The obtained results indicate that the differences between the means of field-measured heights and DAP-derived heights were statistically significant. The root mean square error (RMSE) calculated in the comparison of field heights and DAP-derived heights was 1.68 m (7.34%). The results obtained for the CHM generated using only ALS data produced slightly lower errors, with RMSE = 1.25 m (5.46%) on average. Both ALS and DAP displayed the tendency to underestimate tree heights compared to those measured in the field; however, DAP produced a higher bias (1.26 m) than ALS (0.88 m). Nevertheless, DAP heights were highly correlated with the heights measured in the field (R2 = 0.95) and ALS-derived heights (R2 = 0.97). Tree species and height difference (the difference between the reference tree height and mean tree height in a sample plot) had the greatest influence on the differences between ALS- and DAP-derived heights. Our study confirms that a CHM computed based on the difference between a DAP-derived DSM and an ALS-derived DTM can be successfully used to measure the height of trees in the upper canopy layer.
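A hedged illustration with synthetic data (not the study's dataset): compute the bias and RMSE of DAP-derived heights against field measurements and break the height differences down by tree species with pandas, mirroring the error analysis described above.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "species": ["pine", "pine", "spruce", "spruce", "birch"],
    "h_field": [24.1, 26.3, 28.0, 22.4, 19.7],   # field-measured height (m)
    "h_dap":   [22.8, 25.1, 26.9, 21.0, 18.9],   # DAP-CHM height (m)
})
df["diff"] = df["h_dap"] - df["h_field"]

bias = df["diff"].mean()                       # negative values = underestimation
rmse = np.sqrt((df["diff"] ** 2).mean())
rel_rmse = 100 * rmse / df["h_field"].mean()
print(f"bias={bias:.2f} m  RMSE={rmse:.2f} m ({rel_rmse:.1f}%)")
print(df.groupby("species")["diff"].mean())    # species effect on the discrepancy
```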


Author(s):  
K. T. Chang ◽  
C. Lin ◽  
Y. C. Lin ◽  
J. K. Liu

Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm to delineate crown parameters, e.g. tree height and crown radii, based on the generated rasterized CHMs. An accuracy assessment of the extraction of volumetric parameters of a single tree is also performed via manual measurement using corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by a Leica ALS70-HP is used in this study. First, two algorithms are used to generate the CHMs: a traditional one based on subtracting a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach. Then two algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are implemented and used for individual tree delineation. Finally, the results of the two automatic estimation methods for individual trees are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by simple subtraction is full of empty pixels (called "pits") that severely affect the subsequent analysis for individual tree delineation. The experimental results indicated that, after the pit-free process, more individual trees could be extracted and tree crown shapes became more complete in the CHM data.
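A sketch of two of the steps mentioned above, under simplifying assumptions: filling CHM "pits" with a median filter (a stand-in for a true pit-free algorithm, not the one used in the study) and detecting treetops with a variable window filter whose radius grows with canopy height. The scaling factor and thresholds are illustrative.

```python
import numpy as np
from scipy import ndimage

def fill_pits(chm, size=3):
    # raise isolated low pixels toward the local median, keeping existing peaks
    smoothed = ndimage.median_filter(chm, size=size)
    return np.maximum(chm, smoothed)

def variable_window_maxima(chm, min_height=2.0):
    tops = []
    rows, cols = chm.shape
    for r in range(rows):
        for c in range(cols):
            h = chm[r, c]
            if h < min_height:
                continue
            w = max(1, int(round(0.1 * h)))   # window radius scales with height
            window = chm[max(0, r - w):r + w + 1, max(0, c - w):c + w + 1]
            if h >= window.max():
                tops.append((r, c, h))
    return tops

chm = np.random.rand(50, 50) * 15
tops = variable_window_maxima(fill_pits(chm))
print(len(tops), "treetops detected")
```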


Author(s):  
S. D. Jawak ◽  
S. N. Panditrao ◽  
A. J. Luis

This work uses a canopy height model (CHM)-based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy by using very high-resolution panchromatic (PAN) (spatial) and 8-band (multispectral) WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, were used to detect tree and building features by classifying point elevation values. The workflow employed includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model, generation of a bare earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in the ERDAS Leica Photogrammetry Suite (LPS) using the supplementary *.RPB file. In the second stage, ortho-rectification was carried out using ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 was estimated to be 0.25 m by using more than 10 well-distributed GCPs. Next, we generated the bare earth DEM from the LiDAR point cloud data. In most cases, a bare earth DEM does not represent the true ground elevation. Hence, the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud data were normalized based on the DTM in order to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM), or CHM, was calculated from the LiDAR data by subtracting the DEM from the DSM. The CHM, or normalized DSM, represents the absolute height of all aboveground urban features relative to the ground. After normalization, the elevation value of a point indicates its height above the ground. The above-ground points were used for tree feature and building footprint extraction. In individual tree extraction, first and last return point clouds were used along with the bare earth and building footprint models discussed above. In this study, scene-dependent extraction criteria were employed to improve the 3D feature extraction process. The LiDAR-based refining/filtering techniques used for bare earth layer extraction were crucial for improving the subsequent 3D feature (tree and building) extraction. The PAN-sharpened WV-2 image (with 0.5 m spatial resolution) was used to assess the accuracy of the LiDAR-based 3D feature extraction. Our analysis provided an accuracy of 98% for tree feature extraction and 96% for building feature extraction from the LiDAR data. This study extracted a total of 15,143 tree features using the CHM method, of which 14,841 were visually interpreted on the PAN-sharpened WV-2 image data. The extracted tree features included both shadowed (13,830) and non-shadowed (1,011) trees. We note that the CHM method overestimated a total of 302 tree features, which were not observed on the WV-2 image. One potential source of tree feature overestimation was tree features adjacent to buildings.
In the case of building feature extraction, the algorithm extracted a total of 6,117 building features that were interpreted on the WV-2 image, even capturing buildings under trees (605) and buildings under shadow (112). Overestimation of tree and building features was observed to be a limiting factor in the 3D feature extraction process. This is due to the incorrect filtering of the point cloud in these areas. One potential source of overestimation was man-made structures, including skyscrapers and bridges, which were confounded with and extracted as buildings. This can be attributed to the low point density at building edges and on flat roofs, and to occlusions, because of which LiDAR cannot provide planimetric accuracy as precise as photogrammetric techniques (in segmentation), as well as to the lack of optimal use of textural and contextual information (especially at walls away from the roof) in the automatic extraction algorithm. In addition, there were no separate classes for bridges or features lying inside water, and multiple water height levels were also not considered. Based on these inferences, we conclude that LiDAR-based 3D feature extraction supplemented by high-resolution satellite data is a promising application that can be used for understanding and characterizing an urban setting.
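A minimal sketch of the normalization step described above (assumed arrays, not the study's workflow code): ground-classified points are interpolated to a DEM surface and the ground elevation is subtracted from every LiDAR point, yielding heights above ground for the nDSM/CHM.

```python
import numpy as np
from scipy.interpolate import griddata

def normalize_heights(points_xyz, ground_xyz):
    """points_xyz, ground_xyz: (N, 3) arrays of x, y, z."""
    ground_z = griddata(ground_xyz[:, :2], ground_xyz[:, 2],
                        points_xyz[:, :2], method="linear")
    # fall back to nearest-neighbour where linear interpolation is undefined
    nearest = griddata(ground_xyz[:, :2], ground_xyz[:, 2],
                       points_xyz[:, :2], method="nearest")
    ground_z = np.where(np.isnan(ground_z), nearest, ground_z)
    return points_xyz[:, 2] - ground_z          # height above ground

# synthetic example
ground = np.column_stack([np.random.rand(200) * 100,
                          np.random.rand(200) * 100,
                          np.random.rand(200) * 2 + 50])
cloud = np.column_stack([np.random.rand(1000) * 100,
                         np.random.rand(1000) * 100,
                         np.random.rand(1000) * 30 + 50])
print(normalize_heights(cloud, ground)[:5])
```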


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5295 ◽  
Author(s):  
Guoxiang Sun ◽  
Yongqian Ding ◽  
Xiaochan Wang ◽  
Wei Lu ◽  
Ye Sun ◽  
...  

Measurement of plant nitrogen (N), phosphorus (P), and potassium (K) levels is important for determining precise fertilization management approaches for crops cultivated in greenhouses. To accurately, rapidly, stably, and nondestructively measure the NPK levels in tomato plants, a nondestructive determination method based on multispectral three-dimensional (3D) imaging was proposed. Multiview RGB-D images and multispectral images were synchronously collected, and the plant multispectral reflectance was registered to the depth coordinates according to Fourier transform principles. Based on Kinect sensor pose estimation and self-calibration, the unified transformation of the multiview point cloud coordinate system was realized. Finally, the iterative closest point (ICP) algorithm was used for the precise registration of multiview point clouds and the reconstruction of plant multispectral 3D point cloud models. Using the normalized grayscale similarity coefficient, the degree of spectral overlap, and the Hausdorff distance set, the accuracy of the reconstructed multispectral 3D point clouds was quantitatively evaluated; the average values were 0.9116, 0.9343, and 0.41 cm, respectively. The results indicated that the multispectral reflectance could be registered to the Kinect depth coordinates accurately based on Fourier transform principles, and that the reconstruction accuracy of the multispectral 3D point cloud model met the model reconstruction needs for tomato plants. Using back-propagation artificial neural network (BPANN), support vector machine regression (SVMR), and Gaussian process regression (GPR) methods, determination models for the NPK contents in tomato plants based on the reflectance characteristics of the plant multispectral 3D point cloud models were separately constructed. The relative errors (RE) of the N content predicted by the BPANN, SVMR, and GPR models were 2.27%, 7.46%, and 4.03%, respectively. The REs of the P content predicted by the BPANN, SVMR, and GPR models were 3.32%, 8.92%, and 8.41%, respectively. The REs of the K content predicted by the BPANN, SVMR, and GPR models were 3.27%, 5.73%, and 3.32%, respectively. These models provided highly efficient and accurate measurements of the NPK contents in tomato plants, and their NPK content determination performance was more stable than that of single-view models.
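A hedged sketch with synthetic numbers: regress a nutrient content from mean multispectral reflectance features of a plant point cloud using SVMR and GPR, then report the relative error, mirroring the model comparison described above. The feature dimensionality, hyperparameters, and values are illustrative, not the study's data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.random((40, 5))                  # mean reflectance in 5 bands per plant
y = 2.0 + 3.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.05, 40)  # N content (%)

X_train, X_test = X[:30], X[30:]
y_train, y_test = y[:30], y[30:]

for name, model in [("SVMR", SVR(kernel="rbf", C=10.0)),
                    ("GPR", GaussianProcessRegressor())]:
    pred = model.fit(X_train, y_train).predict(X_test)
    re = np.mean(np.abs(pred - y_test) / y_test) * 100   # relative error (%)
    print(f"{name}: relative error = {re:.2f}%")
```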


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
MingFang Zhang ◽  
Rui Fu ◽  
YingShi Guo ◽  
Li Wang

Moving object classification is essential for autonomous vehicles to complete high-level tasks like scene understanding and motion planning. In this paper, we propose a novel approach for classifying moving objects into four classes of interest using 3D point clouds in urban traffic environments. Unlike most existing work on object recognition, which involves dense point clouds, our approach combines extensive feature extraction with multiframe classification optimization to solve the classification task when partial occlusion occurs. First, the point cloud of a moving object is segmented by a data preprocessing procedure. Then, effective features are selected via the Gini index criterion applied to the extended feature set. Next, Bayes Decision Theory (BDT) is employed to incorporate the preliminary results from a posterior-probability Support Vector Machine (SVM) classifier at consecutive frames. Point cloud data acquired from our own LiDAR as well as the public KITTI dataset are used to validate the proposed moving object classification method in the experiments. The results show that the proposed SVM-BDT classifier based on 18 selected features can effectively recognize the moving objects.
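A minimal sketch of the multiframe idea described above, under assumptions: an SVM produces per-frame class posteriors, and a recursive Bayes update fuses them over consecutive frames of the same tracked object. The random feature values and uniform prior are illustrative, not the paper's 18-feature set or its exact BDT formulation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_train = rng.random((200, 18))
y_train = rng.integers(0, 4, 200)               # 4 classes of interest

svm = SVC(probability=True).fit(X_train, y_train)

def fuse_frames(frame_features):
    """Recursively combine per-frame SVM posteriors with a Bayes update."""
    belief = np.full(4, 0.25)                   # uniform prior over 4 classes
    for feats in frame_features:
        posterior = svm.predict_proba(feats.reshape(1, -1))[0]
        belief = belief * posterior             # Bayes update
        belief /= belief.sum()
    return belief.argmax(), belief

frames = rng.random((5, 18))                    # 5 consecutive observations
label, belief = fuse_frames(frames)
print(label, np.round(belief, 3))
```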


2020 ◽  
Vol 12 (5) ◽  
pp. 885 ◽  
Author(s):  
Juan Picos ◽  
Guillermo Bastos ◽  
Daniel Míguez ◽  
Laura Alonso ◽  
Julia Armesto

The present study addresses the tree counting of a Eucalyptus plantation; Eucalyptus is the most widely planted hardwood in the world. Unmanned aerial vehicle (UAV) light detection and ranging (LiDAR) was used to estimate the number of Eucalyptus trees. LiDAR-based estimation of Eucalyptus is challenging due to its irregular shape and multiple trunks. To overcome this difficulty, the layer of the point cloud containing the stems was automatically classified and extracted according to height thresholds, and those points were horizontally projected. Two different procedures were applied to these points. One is based on creating a buffer around each single point and combining the overlapping resulting polygons. The other consists of a two-dimensional raster calculated from a kernel density estimation with an axis-aligned bivariate quartic kernel. Results were assessed against a manual interpretation of the LiDAR point cloud. The two methods yielded detection rates (DR) of 103.7% and 113.6%, respectively. The results of applying a local maxima filter to the canopy height model (CHM) depend strongly on the algorithm and the CHM pixel size. Additionally, the height of each tree was calculated from the CHM. Estimates of tree height produced from the CHM were sensitive to spatial resolution. A resolution of 2.0 m produced an R2 of 0.99 and a root mean square error (RMSE) of 0.34 m. A finer resolution of 0.5 m produced a more accurate height estimation, with an R2 of 0.99 and an RMSE of 0.44 m. The quality of the results is a step toward precision forestry in eucalypt plantations.
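A hedged sketch of the buffer-and-merge counting procedure described above, using shapely on synthetic stem points (coordinates and buffer radius are illustrative): each projected stem point is buffered, overlapping buffers are dissolved, and the number of resulting polygons is taken as the tree count.

```python
import numpy as np
from shapely.geometry import Point
from shapely.ops import unary_union

rng = np.random.default_rng(2)
stem_points = rng.random((300, 2)) * 50          # horizontally projected stem returns (m)

buffers = [Point(x, y).buffer(0.3) for x, y in stem_points]   # 0.3 m buffer (assumed)
merged = unary_union(buffers)                    # dissolve overlapping polygons

# unary_union returns a Polygon or MultiPolygon; count the parts
n_trees = len(merged.geoms) if merged.geom_type == "MultiPolygon" else 1
print(f"{n_trees} stems detected")
```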

