AGB Estimation in a Tropical Mountain Forest (TMF) by Means of RGB and Multispectral Images Using an Unmanned Aerial Vehicle (UAV)

2019 ◽  
Vol 11 (12) ◽  
pp. 1413 ◽  
Author(s):  
Víctor González-Jaramillo ◽  
Andreas Fries ◽  
Jörg Bendix

The present investigation evaluates the accuracy of above-ground biomass (AGB) estimation by means of two different sensors installed onboard an unmanned aerial vehicle (UAV) platform (DJI Inspire I), because the high costs of very high-resolution satellite imagery or light detection and ranging (LiDAR) surveys often impede AGB estimation and the determination of other vegetation parameters. The sensors utilized were an RGB camera (ZENMUSE X3) and a multispectral camera (Parrot Sequoia), whose images were used for AGB estimation in a natural tropical mountain forest (TMF) in Southern Ecuador. The sensors covered a total of 80 ha at lower elevations, characterized by fast-changing topography and different vegetation covers. From this area, a core study site of 24 ha was selected for AGB calculation, applying two different methods. The first method used the RGB images and applied the structure from motion (SfM) process to generate point clouds for a subsequent individual tree classification. Based on the tree-level classification, tree height (H) and diameter at breast height (DBH) could be determined, which are the necessary input parameters to calculate AGB (Mg ha−1) by means of a specific allometric equation for wet forests. The second method used the multispectral images to calculate the normalized difference vegetation index (NDVI), which is the basis for AGB estimation applying an equation for tropical evergreen forests. The obtained results were validated against a previous AGB estimation for the same area based on LiDAR data. The study found two major results: (i) the NDVI-based AGB estimates obtained from the multispectral drone imagery were less accurate due to the saturation effect in dense tropical forests; (ii) the photogrammetric approach using RGB images provided reliable AGB estimates comparable to expensive LiDAR surveys (R² = 0.85). However, the latter is only possible if an auxiliary digital terrain model (DTM) in very high resolution is available, because in dense natural forests the terrain surface is hardly detectable by passive sensors due to the closed canopy layer, which impedes ground detection.
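Both estimation pipelines rest on standard formulas: the NDVI band ratio and a per-tree allometric equation combining DBH, height, and wood density. A minimal sketch follows; the paper's exact regression coefficients and wood-density values are not given in the abstract, so the Chave et al. (2005) wet-forest form and the rho = 0.6 placeholder below are illustrative stand-ins only.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    # Small epsilon guards against division by zero over dark pixels.
    return (nir - red) / (nir + red + 1e-12)

def agb_per_tree(dbh_cm, height_m, wood_density=0.6):
    """Per-tree AGB (kg) in the wet-forest form of Chave et al. (2005):
    AGB = 0.0776 * (rho * D^2 * H)^0.940.
    The study's own allometric equation is not reproduced in the abstract;
    this form and rho = 0.6 g/cm^3 are illustrative placeholders."""
    return 0.0776 * (wood_density * dbh_cm**2 * height_m) ** 0.940
```

Summing `agb_per_tree` over all classified trees in a plot and dividing by the plot area would yield the per-hectare figure (Mg ha−1) the study reports.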

2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Huawei Wan ◽  
Qiao Wang ◽  
Dong Jiang ◽  
Jingying Fu ◽  
Yipeng Yang ◽  
...  

Spartina alterniflora was introduced to Beihai, Guangxi (China), for ecological engineering purposes in 1979. However, the exceptional adaptability and reproductive ability of this species have led to its extensive dispersal into other habitats, where it has had a negative impact on native species and threatens the local mangrove and mudflat ecosystems. To map the distribution and spread of Spartina alterniflora, we collected HJ-1 CCD imagery from 2009 and 2011 and very high resolution (VHR) imagery from an unmanned aerial vehicle (UAV). The invasion area of Spartina alterniflora was 357.2 ha in 2011, an increase of 19.07% over the area in 2009. A field survey was conducted for verification, and the total accuracy was 94.0%. The results of this paper show that VHR imagery can provide details on the distribution, progress, and early detection of Spartina alterniflora invasion. Object-based image analysis (OBIA), a remote sensing (RS) detection method, can make control measures more effective, more accurate, and less expensive than a field survey of the invasive population.
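As a quick arithmetic check on the reported figures (the variable names are mine, not the study's), the 19.07% increase implies a 2009 baseline of roughly 300 ha:

```python
area_2011_ha = 357.2   # invasion area reported for 2011
increase_pct = 19.07   # reported growth relative to 2009

# Back-solve the implied 2009 baseline from new = old * (1 + p/100).
area_2009_ha = area_2011_ha / (1 + increase_pct / 100.0)
print(round(area_2009_ha, 1))  # → 300.0
```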


2020 ◽  
Vol 8 (4) ◽  
pp. 310-333
Author(s):  
Sowmya Natesan ◽  
Costas Armenakis ◽  
Udayalakshmi Vepakomma

Tree species identification at the individual tree level is crucial for forest operations and management, yet its automated mapping remains challenging. Emerging technology, such as high-resolution imagery from unmanned aerial vehicles (UAVs) that is now becoming part of every forester's surveillance kit, can potentially provide a solution to better characterize the tree canopy. To address this need, we have developed an approach based on a deep Convolutional Neural Network (CNN) that classifies forest tree species at the individual tree level using high-resolution RGB images acquired from a consumer-grade camera mounted on a UAV platform. This work explores the ability of the Dense Convolutional Network (DenseNet) to classify commonly available economic coniferous tree species in eastern Canada. The network was trained on multitemporal images captured under varying acquisition parameters to include seasonal, temporal, illumination, and angular variability. Validation of this model on a distinct set of images over a mixed-wood forest in Ontario, Canada, showed over 84% classification accuracy in distinguishing five predominant coniferous species. The model remains highly robust even when using images taken during different seasons and times, and with varying illumination and angles.


Author(s):  
S. Natesan ◽  
C. Armenakis ◽  
U. Vepakomma

<p><strong>Abstract.</strong> Tree species classification at the individual tree level is a challenging problem in forest management. Deep learning, a cutting-edge technology evolved from Artificial Intelligence, has been shown to outperform other techniques on complex problems such as image classification. In this work, we present a novel method to classify forest tree species using Residual Neural Networks applied to high-resolution RGB images acquired with a simple consumer-grade camera mounted on a UAV platform. We used UAV RGB images acquired over three years that varied in numerous acquisition parameters, such as season, time, illumination, and angle, to train the neural network. To begin with, we experimented with limited data towards identifying two pine species, namely red pine and white pine, against the remaining species. We performed two experiments: the first with images from all three acquisition years and the second with images from only one acquisition year. In the first experiment, we obtained 80% classification accuracy when the trained network was tested on a distinct set of images; in the second experiment, we obtained 51% classification accuracy. As part of this work, a novel dataset of high-resolution labelled tree species imagery was generated that can be used for further studies involving deep neural networks in forestry.</p>
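The two experiments differ only in which acquisition years feed the training set, with testing always done on a distinct set of held-out images. A small sketch of that split follows; the image records, labels, and years here are hypothetical, not the study's data.

```python
# Hypothetical records of (image_id, species_label, acquisition_year);
# the real dataset is the labelled one generated by the study.
records = [
    ("img_0", "red_pine",   2016),
    ("img_1", "white_pine", 2016),
    ("img_2", "red_pine",   2017),
    ("img_3", "other",      2017),
    ("img_4", "white_pine", 2018),
    ("img_5", "other",      2018),
]

def split_by_year(records, train_years):
    """Keep images from the given acquisition years for training and hold
    out the rest as a distinct test set. Passing all years mirrors
    experiment 1; passing a single year mirrors experiment 2."""
    train = [r for r in records if r[2] in train_years]
    test = [r for r in records if r[2] not in train_years]
    return train, test

train_set, test_set = split_by_year(records, train_years={2016, 2017})
```

The gap the authors report (80% vs. 51%) illustrates why training across all acquisition years generalizes better than training on a single year.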
