Tree species classification using deep learning and RGB optical images obtained by an unmanned aerial vehicle

Author(s): Chen Zhang, Kai Xia, Hailin Feng, Yinhui Yang, Xiaochen Du
Forests, 2019, Vol 10 (9), pp. 818

Author(s): Yanbiao Xi, Chunying Ren, Zongming Wang, Shiqing Wei, Jialing Bai, ...

The accurate characterization of tree species distribution in forest areas can significantly reduce uncertainties in the estimation of ecosystem parameters and forest resources. Deep learning algorithms have become a hot topic in recent years, but they have so far rarely been applied to tree species classification. In this study, a one-dimensional convolutional neural network (Conv1D), a popular deep learning architecture, was proposed to automatically identify tree species using OHS-1 hyperspectral images. Additionally, a random forest (RF) classifier was applied for comparison with the deep learning algorithm. Based on our experiments, we drew three main conclusions. First, the OHS-1 hyperspectral images used in this study have a high spatial resolution (10 m), which reduces the influence of the mixed-pixel effect and greatly improves classification accuracy. Second, given the limited amount of sample data, the Conv1D-based classifier does not need many layers to achieve high classification accuracy; in addition, the size of the convolution kernel has a strong influence on classification accuracy. Finally, the accuracy of Conv1D (85.04%) is higher than that of the RF model (80.61%). Especially for broadleaf species with similar spectral characteristics, such as Manchurian walnut and aspen, the accuracy of the Conv1D-based classifier is significantly higher than that of the RF classifier (87.15% and 71.77%, respectively). Thus, a Conv1D-based deep learning framework combined with hyperspectral imagery can efficiently improve the accuracy of tree species classification and has great application prospects.
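The abstract gives no implementation details, but the core idea of such a Conv1D pixel classifier can be illustrated with a framework-agnostic numpy sketch: a bank of 1-D kernels slides over a pixel's spectrum, followed by ReLU, global max pooling and a linear layer. All sizes here (32 bands, 8 filters, kernel size 5, 4 classes) are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def conv1d_valid(x, kernels):
    """Slide each kernel over the spectrum (valid padding).
    x: (bands,) spectrum; kernels: (n_filters, kernel_size)."""
    n_k, k = kernels.shape
    out_len = x.size - k + 1
    out = np.empty((n_k, out_len))
    for i, w in enumerate(kernels):
        for j in range(out_len):
            out[i, j] = np.dot(x[j:j + k], w)
    return out

def conv1d_classify(spectrum, kernels, weights):
    """Conv -> ReLU -> global max pool -> linear scores over classes."""
    feat = np.maximum(conv1d_valid(spectrum, kernels), 0.0)  # (n_filters, L)
    pooled = feat.max(axis=1)                                # (n_filters,)
    return weights @ pooled                                  # (n_classes,)

rng = np.random.default_rng(0)
spectrum = rng.random(32)               # e.g. 32 hyperspectral bands for one pixel
kernels = rng.standard_normal((8, 5))   # 8 filters, kernel size 5 (assumed)
weights = rng.standard_normal((4, 8))   # 4 tree-species classes (assumed)
scores = conv1d_classify(spectrum, kernels, weights)
print(scores.shape)  # (4,)
```

In a trained network the kernels and class weights would of course be learned; the sketch only shows how a spectrum is mapped to per-species scores.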


2020
Author(s): Sarah Kentsch, Maximo Larry Lopez Caceres, Yago Diez Donoso

Forests are becoming more important in times of changing climate, increasing demand for renewable energies and natural resources, and a high demand for information for economic and management purposes. Several previous studies have been carried out on forest plantations, but there is still a knowledge gap when it comes to natural mixed forests, which are ecologically complex due to the varying distributions and interactions of different species. Unmanned Aerial Vehicles (UAVs) combined with image analysis have become a common tool for forest applications because they are cost-efficient, time-saving and usable at large scales. Additionally, technologies like Deep Learning (DL) speed up the processing of large numbers of images. Deep learning is a relatively new tool in forest applications, especially in the case of the natural dense mixed forests of Japan. Our approach is to introduce the DL-based ResNet50 network for automatic tree species classification and segmentation, using transfer learning to reduce the amount of required data. A comparison between the ResNet50 algorithm and the common UNet algorithm, as well as a quantitative analysis of model setups, is presented in this study. Furthermore, the data were analysed regarding difficulties and opportunities. We showed that UNet outperformed, with DICE coefficients of 0.6667 for deciduous trees and 0.892 for evergreen trees, while ResNet50 reached 0.733 and 0.855. The segmentation was refined with the watershed algorithm, increasing the DICE coefficients to values of up to 0.777 and 0.873. The transfer learning analysis confirmed that accuracy increases when an image classification data basis is added to the model training, and we were able to reduce the number of images required for the application. The study thus showed the applicability and effectiveness of these techniques for classification tasks. Furthermore, we reduced the training time by a factor of 16 with ResNet50 and by a factor of 3.6 with the watershed approach in comparison to the UNet algorithm. To the best of our knowledge, this is the first study using deep learning for forestry research in Japan and the first dealing with images of natural dense mixed forests.
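The DICE coefficients quoted above measure the overlap between a predicted segmentation mask and a reference mask. A minimal numpy sketch of the metric (not the authors' code) is:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """DICE = 2|A ∩ B| / (|A| + |B|) for boolean segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

pred = [[1, 1], [0, 0]]   # predicted tree pixels
truth = [[1, 0], [0, 0]]  # reference tree pixels
print(dice_coefficient(pred, truth))  # 2 * 1 / (2 + 1) ≈ 0.667
```

A DICE of 1.0 means the predicted and reference crowns coincide exactly; 0.0 means no overlap at all.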


Sensors, 2019, Vol 19 (6), pp. 1284
Author(s): Sean Hartling, Vasit Sagan, Paheding Sidike, Maitiniyazi Maimaitijiang, Joshua Carron

Urban areas feature complex and heterogeneous land covers, which create challenging issues for tree species classification. The increased availability of high spatial resolution multispectral satellite imagery and LiDAR datasets, combined with the recent evolution of deep learning within remote sensing for object detection and scene classification, provides promising opportunities to map individual tree species with greater accuracy and resolution. However, there are knowledge gaps related to the contribution of WorldView-3 SWIR bands, the very high resolution PAN band and LiDAR data in detailed tree species mapping. Additionally, contemporary deep learning methods are hampered by a lack of training samples and the difficulty of preparing training data. The objective of this study was to examine the potential of a novel deep learning method, Dense Convolutional Network (DenseNet), to identify dominant individual tree species in a complex urban environment within a fused image of WorldView-2 VNIR, WorldView-3 SWIR and LiDAR datasets. DenseNet results were compared against two popular machine learning classifiers in remote sensing image analysis, Random Forest (RF) and Support Vector Machine (SVM). Our results demonstrated that: (1) utilizing a data fusion approach beginning with VNIR and adding SWIR, LiDAR, and panchromatic (PAN) bands increased the overall accuracy of the DenseNet classifier from 75.9% to 76.8%, 81.1% and 82.6%, respectively; (2) DenseNet significantly outperformed RF and SVM for the classification of eight dominant tree species, with an overall accuracy of 82.6%, compared to 51.8% and 52% for the SVM and RF classifiers, respectively; (3) DenseNet maintained superior performance over the RF and SVM classifiers under restricted training sample quantities, which are a major limiting factor for deep learning techniques. Overall, the study reveals that DenseNet is more effective for urban tree species classification, as it outperforms the popular RF and SVM techniques when working with highly complex image scenes regardless of training sample size.
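The data fusion step described above amounts to stacking co-registered rasters (VNIR, SWIR, PAN, LiDAR derivatives) into a single multi-channel input cube before classification. A hedged numpy sketch follows; the band counts and grid size are illustrative, and co-registration and resampling to a common grid are assumed to have been done beforehand.

```python
import numpy as np

def fuse_layers(*layers):
    """Stack co-registered rasters (each H x W or H x W x C) into one H x W x C_total cube."""
    stacked = []
    for layer in layers:
        a = np.asarray(layer, dtype=np.float32)
        if a.ndim == 2:          # single-band raster -> add a channel axis
            a = a[..., None]
        stacked.append(a)
    return np.concatenate(stacked, axis=-1)

# Illustrative placeholders on a shared 64 x 64 grid (band counts assumed):
vnir = np.zeros((64, 64, 8))   # WorldView-2 VNIR bands
swir = np.zeros((64, 64, 8))   # WorldView-3 SWIR bands
pan  = np.zeros((64, 64))      # panchromatic band
chm  = np.zeros((64, 64))      # LiDAR-derived canopy height model
cube = fuse_layers(vnir, swir, pan, chm)
print(cube.shape)  # (64, 64, 18)
```

The fused cube is then fed to the classifier as one multi-channel image, which is how adding SWIR, LiDAR and PAN layers incrementally raises accuracy in the experiments above.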


Measurement, 2021, pp. 109301
Author(s): Maohua Liu, Ziwei Han, Yiming Chen, Zhengjun Liu, Yanshun Han

2020, Vol 12 (7), pp. 1128
Author(s): Kaili Cao, Xiaoli Zhang

Tree species classification is important for the management and sustainable development of forest resources. Traditional object-oriented tree species classification methods, such as support vector machines, require manual feature selection and generally achieve low accuracy, whereas deep learning technology can automatically extract image features to achieve end-to-end classification. Therefore, a tree species classification method based on deep learning is proposed in this study. The method combines the semantic segmentation network U-Net and the feature extraction network ResNet into an improved Res-UNet network, in which the convolutional layers of U-Net are replaced by the residual units of ResNet, and linear interpolation is used instead of deconvolution in each upsampling layer. At the output of the network, conditional random fields are used for post-processing. The model is used to perform classification experiments on airborne orthophotos of the Nanning Gaofeng Forest Farm in Guangxi, China, and the results are compared with those of the U-Net and ResNet networks. The proposed method exhibits the highest classification accuracy, with an overall accuracy of 87%. Thus, the proposed model can effectively perform forest tree species classification and provides new opportunities for tree species classification in southern China.
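Two of the Res-UNet design choices named above, residual units in place of plain convolutions and linear interpolation instead of learned deconvolution for upsampling, can be sketched in plain numpy. This is a toy single-channel illustration of the two operations, not the authors' network.

```python
import numpy as np

def residual_unit(x, transform):
    """Residual unit: output = transform(x) + identity shortcut.
    In Res-UNet, `transform` would be a small stack of conv/BN/ReLU layers."""
    return transform(x) + x

def upsample2x_linear(img):
    """Double H and W by linear interpolation along each axis, a cheap,
    parameter-free alternative to a learned deconvolution layer."""
    h, w = img.shape
    rows = np.linspace(0, h - 1, 2 * h)
    cols = np.linspace(0, w - 1, 2 * w)
    # Interpolate down each column, then along each resulting row.
    tmp = np.stack([np.interp(rows, np.arange(h), img[:, j]) for j in range(w)], axis=1)
    return np.stack([np.interp(cols, np.arange(w), tmp[i]) for i in range(2 * h)], axis=0)

feat = np.array([[0.0, 1.0], [2.0, 3.0]])      # a tiny single-channel feature map
up = upsample2x_linear(feat)                   # (4, 4), corners preserved
res = residual_unit(feat, lambda x: 0.1 * x)   # shortcut keeps the input signal
```

The identity shortcut eases gradient flow through deep encoders, while interpolation-based upsampling avoids the checkerboard artifacts and extra parameters of deconvolution, which is presumably why the authors chose it.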


2021, Vol 13 (23), pp. 4750
Author(s): Jianchang Chen, Yiming Chen, Zhengjun Liu

We propose the Point Cloud Tree Species Classification Network (PCTSCN) to overcome the challenges of classifying tree species from laser scanning data with deep learning methods. The network consists of two main parts: a sampling component in the early stage and a feature extraction component in the later stage. We used geometric sampling to extract regions with local features from the tree contours, since these tend to be species-specific, and an improved Farthest Point Sampling method to extract features from a global perspective. The intensity of the tree point cloud was fed into the neural network as an additional feature dimension alongside the spatial information and mapped to higher dimensions for feature extraction. We used data obtained by Terrestrial Laser Scanning (TLS) and Unmanned Aerial Vehicle Laser Scanning (UAVLS) to conduct tree species classification experiments on white birch and larch. The experimental results showed that in both the TLS and UAVLS datasets, the input point cloud density and the highest feature dimensionality of the mapping affected the classification accuracy. When a single tree sample obtained by TLS consisted of 1024 points and the highest dimension of the network mapping was 512, the classification accuracy of the trained model reached 96%. For individual tree samples obtained by UAVLS consisting of 2048 points with a highest mapping dimension of 1024, the classification accuracy reached 92%. On TLS data, the classification accuracy of PCTSCN was 2–9% higher than that of other models using the same point density, amount of data and highest feature dimension; on UAVLS data, the accuracy was up to 8% higher. PCTSCN thus provides a new strategy for the intelligent classification of forest tree species.
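The abstract does not spell out the paper's improved Farthest Point Sampling, but the standard FPS baseline it builds on can be sketched as follows: repeatedly pick the point farthest from everything already selected, so the subsample spreads evenly over the crown. Starting deterministically from point 0 is a simplification of the usual random seed point.

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Greedy FPS: each iteration adds the point with the largest distance
    to the nearest already-selected point. points: (N, D) coordinates."""
    pts = np.asarray(points, dtype=np.float64)
    chosen = [0]  # simplification: start from the first point, not a random one
    dist = np.linalg.norm(pts - pts[0], axis=1)  # distance to the selected set
    for _ in range(n_samples - 1):
        idx = int(dist.argmax())                 # farthest remaining point
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(pts - pts[idx], axis=1))
    return np.array(chosen)

# Four collinear points: FPS first jumps to the far outlier at x = 10.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [10.0, 0.0]])
print(farthest_point_sampling(pts, 2))  # [0 3]
```

Downsampling each tree to a fixed budget this way (1024 or 2048 points in the experiments above) keeps the overall crown shape while equalizing the input size across samples.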


2021, pp. 319-324
Author(s): Elizaveta K. Sakharova, Dana D. Nurlyeva, Antonina A. Fedorova, Alexey R. Yakubov, Anton I. Kanev
