Tree Species Classification and Mapping Based on Deep Transfer Learning with Unmanned Aerial Vehicle High Resolution Images

2019, Vol. 56 (7), pp. 072801
Author(s): Teng Wenxiu (滕文秀), Wen Xiaorong (温小荣), Wang Ni (王妮), Shi Huihui (施慧慧)
2020
Author(s): Sarah Kentsch, Maximo Larry Lopez Caceres, Yago Diez Donoso

Forests are becoming more important in times of changing climate, rising demand for renewable energy and natural resources, and a growing need for information to support economic and management decisions. Several previous studies have addressed forest plantations, but a knowledge gap remains for natural mixed forests, which are ecologically complex due to the varying distribution and interaction of different species. Unmanned Aerial Vehicles (UAVs) combined with image analysis have become a common tool for forest applications because they are cost-efficient, time-saving and usable at large scales. In addition, technologies such as Deep Learning (DL) speed up the processing of large numbers of images. Deep learning is still a relatively new tool in forest applications, especially for the dense natural mixed forests of Japan. Our approach introduces the DL-based ResNet50 network for automatic tree species classification and segmentation, using transfer learning to reduce the amount of required data. A comparison between the ResNet50 algorithm and the widely used UNet algorithm, together with a quantitative analysis of model setups, is presented in this study. Furthermore, the data were analysed with respect to difficulties and opportunities. UNet outperformed ResNet50, with DICE coefficients of 0.6667 for deciduous trees and 0.892 for evergreen trees, compared with 0.733 and 0.855 for ResNet50. Refining the segmentation with the watershed algorithm increased the DICE coefficients to up to 0.777 and 0.873. The transfer learning analysis confirmed that accuracy increases when additional image classification data are added as a basis for model training, and we were able to reduce the number of images required for the application. The study therefore demonstrates the applicability and effectiveness of these techniques for classification tasks. Furthermore, training time was reduced by a factor of 16 with ResNet50 and by a factor of 3.6 with the watershed approach compared with the UNet algorithm. To the best of our knowledge, this is the first study applying deep learning to forestry research in Japan and the first dealing with images of natural dense mixed forests.
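The abstract names two evaluation and post-processing steps, DICE scoring and watershed refinement of the segmentation masks, that can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' pipeline; it assumes NumPy, SciPy and scikit-image, and uses synthetic masks in place of real UNet/ResNet50 output.

```python
# Minimal sketch (not the authors' pipeline): scoring a binary canopy mask with the
# DICE coefficient and splitting merged crowns with a marker-controlled watershed.
# Assumes NumPy, SciPy and scikit-image; the masks below are synthetic placeholders.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max


def dice_coefficient(pred, truth):
    """DICE = 2 * |A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + 1e-8)


def watershed_split(binary_mask, min_distance=10):
    """Split touching crowns into instances by seeding a watershed with
    local maxima of the distance transform."""
    distance = ndi.distance_transform_edt(binary_mask)
    peaks = peak_local_max(distance, min_distance=min_distance,
                           labels=ndi.label(binary_mask)[0])
    markers = np.zeros(binary_mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary_mask)  # labelled crown instances


# Toy example: two touching "crowns" and a slightly over-grown prediction.
truth = np.zeros((64, 64), dtype=bool)
truth[10:30, 10:30] = True
truth[28:50, 28:50] = True
pred = ndi.binary_dilation(truth, iterations=2)

print("DICE of prediction vs. ground truth:", round(dice_coefficient(pred, truth), 3))
print("crown instances found by watershed:", watershed_split(pred).max())
```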


Author(s): S. Natesan, C. Armenakis, U. Vepakomma

Abstract. Tree species classification at the individual tree level is a challenging problem in forest management. Deep learning, a cutting-edge technology that evolved from Artificial Intelligence research, has been shown to outperform other techniques on complex problems such as image classification. In this work, we present a novel method that uses Residual Neural Networks to classify forest tree species from high-resolution RGB images acquired with a simple consumer-grade camera mounted on a UAV platform. We used UAV RGB images acquired over three years that varied in numerous acquisition parameters, such as season, time of day, illumination and viewing angle, to train the neural network. As a first step, we experimented with limited data to distinguish two pine species, red pine and white pine, from the remaining species. We performed two experiments, the first with images from all three acquisition years and the second with images from only one acquisition year. In the first experiment, we obtained 80% classification accuracy when the trained network was tested on a distinct set of images; in the second experiment, we obtained 51% classification accuracy. As part of this work, a novel dataset of high-resolution labelled tree species imagery was generated that can be used for further studies involving deep neural networks in forestry.
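As an illustration of the transfer-learning setup described above (not the authors' implementation), the following sketch fine-tunes an ImageNet-pretrained ResNet-50 head on a small set of labelled UAV crown images; the dataset path, class count and hyperparameters are placeholder assumptions.

```python
# Minimal transfer-learning sketch (assumptions, not the authors' code): fine-tune an
# ImageNet-pretrained residual network (here ResNet-50) to separate a few tree species
# classes in UAV RGB crown images. Assumes PyTorch/torchvision and labelled images laid
# out for torchvision.datasets.ImageFolder ("data/train/<class_name>/*.jpg" is a placeholder).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 3        # e.g. red pine, white pine, other (hypothetical labels)
device = "cuda" if torch.cuda.is_available() else "cpu"

# Standard ImageNet preprocessing so the pretrained weights remain meaningful.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Load the pretrained backbone, freeze it, and replace the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model.to(device)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):                      # a few epochs suffice when only the head is trained
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```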


Author(s): Clément Dechesne, Clément Mallet, Arnaud Le Bris, Valérie Gouet, Alexandre Hervieu

Forest stands are the basic units for forest inventory and mapping. Stands are large forested areas (e.g., ≥ 2 ha) of homogeneous tree species composition. The accurate delineation of forest stands is usually performed by visual analysis of very high resolution (VHR) optical images by human operators. This work is highly time-consuming and should be automated for scalability. In this paper, a method based on the fusion of airborne laser scanning (lidar) data and very high resolution multispectral imagery is proposed for automatic forest stand delineation and forest land-cover database updating. The multispectral images give access to the tree species, whereas the 3D lidar point clouds provide geometric information on the trees. Multi-modal features are therefore computed at both the pixel and object levels, the objects being individual trees extracted from the lidar data. A supervised classification is performed at the object level on the computed features in order to coarsely discriminate the tree species present in the area of interest. The analysis at tree level is particularly relevant since it significantly improves the tree species classification. A probability map is generated from the tree species classification and combined with the pixel-based feature map in an energy-minimization framework. The proposed energy is then minimized using a standard graph-cut method (namely QPBO with α-expansion) in order to produce a segmentation map with a controlled level of detail. Comparison with an existing forest land-cover database shows that the method provides satisfactory results in terms of both stand labelling and delineation (matching rates between 94% and 99%).
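For readers unfamiliar with the object-level step, the sketch below illustrates, under stated assumptions, how fused lidar and multispectral features per tree object could be fed to a supervised classifier to obtain the per-class probabilities that the energy-minimization stage then regularizes. The random-forest classifier, the feature list and the synthetic data are illustrative choices, not the paper's.

```python
# Illustrative sketch only (the paper's feature set and classifier are not reproduced here):
# object-level supervised classification on fused lidar + multispectral features that yields
# per-class probabilities. Assumes scikit-learn and NumPy; all data below are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trees = 500  # individual tree objects extracted from the lidar point cloud

# Hypothetical per-object features: spectral statistics from the VHR multispectral image
# and geometric attributes (e.g. height, crown diameter, point density) from the lidar data.
spectral = rng.normal(size=(n_trees, 4))
geometric = rng.normal(size=(n_trees, 3))
X = np.hstack([spectral, geometric])
y = rng.integers(0, 3, size=n_trees)  # three placeholder tree species labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Per-object class probabilities: in the paper, these feed a probability map that is
# regularized by graph-cut (QPBO with alpha-expansion) to obtain the stand segmentation.
proba = clf.predict_proba(X_test)
print("object-level accuracy:", clf.score(X_test, y_test))
print("first object's class probabilities:", proba[0])
```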


2020, Vol. 12 (23), pp. 3892
Author(s): Sebastian Egli, Martin Höpke

Data on the distribution of tree species are often requested by forest managers, inventory agencies and foresters, as well as by private and municipal forest owners. However, the automated detection of tree species from passive remote sensing data acquired during aerial surveys is still not sufficiently developed to achieve reliable results independent of the phenological stage, time of day, season, tree vitality and prevailing atmospheric conditions. Here, we introduce a novel tree species classification approach based on high-resolution RGB image data gathered during automated UAV flights that overcomes these shortcomings. For the classification task, a computationally lightweight convolutional neural network (CNN) was designed. We show that with the chosen CNN architecture, average classification accuracies of 92% can be reached independently of the illumination conditions and the phenological stages of four different tree species. We also show that a minimum ground sampling density of 1.6 cm/px is needed for the classification model to be able to exploit the spatial-structural information in the data. Finally, to demonstrate the applicability of the presented approach for deriving spatially explicit tree species information, a gridded product is generated that yields an average classification accuracy of 88%.
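The abstract does not disclose the network architecture; as a rough illustration of what a computationally lightweight CNN for four-class RGB patch classification can look like, the following sketch defines a small PyTorch model. The patch size, layer widths and class count are assumptions, not the study's settings.

```python
# Minimal sketch of a lightweight CNN for four-species RGB patch classification, in the
# spirit of the approach above; the study's actual architecture and training settings are
# not reproduced here (assumptions: PyTorch, 64x64 px input patches, 4 output classes).
import torch
import torch.nn as nn


class LightweightTreeCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))


# Quick shape check with a dummy batch of 64x64 RGB patches.
model = LightweightTreeCNN()
dummy = torch.randn(8, 3, 64, 64)
print(model(dummy).shape)  # torch.Size([8, 4])
```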

