Mauritia flexuosa palm trees airborne mapping with deep convolutional neural network

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Luciene Sales Dagher Arce ◽  
Lucas Prado Osco ◽  
Mauro dos Santos de Arruda ◽  
Danielle Elis Garcia Furuya ◽  
Ana Paula Marques Ramos ◽  
...  

Abstract. Accurately mapping individual tree species in densely forested environments is crucial to forest inventory. When only RGB images are considered, this is a challenging task for many automatic photogrammetry processes, mainly because of the spectral similarity between species in RGB scenes, which hinders most automatic methods. This paper presents a deep learning-based approach to detect an important multi-use palm species (Mauritia flexuosa, known as Buriti) in aerial RGB imagery. In South America, this palm is essential to many indigenous and local communities, and it is also a valuable indicator of water resources, which makes mapping its location all the more useful. The method uses a Convolutional Neural Network (CNN) to identify and geolocate single tree species in a high-complexity forest environment. The results returned a mean absolute error (MAE) of 0.75 trees and an F1-measure of 86.9%, outperforming Faster R-CNN and RetinaNet under equal experimental conditions. In conclusion, the method handles a high-density forest scenario efficiently, can accurately map the location of single species such as the M. flexuosa palm, and may be useful for future frameworks.
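For readers who want to reproduce the two reported metrics, the sketch below shows how the per-image count MAE and the detection F1-measure could be computed; the function names and the toy numbers are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def count_mae(pred_counts, true_counts):
    """Mean absolute error between predicted and reference tree counts per image."""
    pred_counts = np.asarray(pred_counts, dtype=float)
    true_counts = np.asarray(true_counts, dtype=float)
    return np.mean(np.abs(pred_counts - true_counts))

def detection_f1(tp, fp, fn):
    """F1-measure from matched (true positive), spurious (false positive),
    and missed (false negative) tree detections."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: counts for three test scenes and pooled detection matches.
print(count_mae([12, 8, 5], [11, 9, 5]))   # ~0.67 trees
print(detection_f1(tp=87, fp=10, fn=16))   # ~0.87
```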



Author(s):  
Luciene Sales Dagher Arce ◽  
Mauro dos Santos de Arruda ◽  
Danielle Elis Garcia Furuya ◽  
Lucas Prado Osco ◽  
Ana Paula Marques Ramos ◽  
...  

Accurately mapping individual tree species in densely forested environments is crucial to forest inventory. When only RGB images are considered, this is a challenging task for many automatic photogrammetry processes, mainly because of the spectral similarity between species in RGB scenes, which hinders most automatic methods. State-of-the-art deep learning methods may be capable of identifying tree species in RGB images at an attractive cost, accuracy, and computational load. This paper presents a deep learning-based approach to detect an important multi-use palm species (Mauritia flexuosa, known as Buriti) in aerial RGB imagery. In South America, this palm is essential to many indigenous and local communities, and it is also a valuable indicator of water resources, which makes mapping its location all the more useful. The method uses a Convolutional Neural Network (CNN) to identify and geolocate single tree species in a high-complexity forest environment, and considers the likelihood of every pixel in the image being recognized as a possible tree by extracting a confidence map. This study compares the performance of the proposed method against state-of-the-art object detection networks, using a dataset of 1,394 airborne scenes in which 5,334 palm trees were manually labeled. The results returned a mean absolute error (MAE) of 0.75 trees and an F1-measure of 86.9%, better than both Faster R-CNN and RetinaNet under equal experimental conditions. The proposed network detected the palm trees quickly, with a per-image detection time of 0.073 seconds (standard deviation of 0.002) on the GPU. In conclusion, the method handles a high-density forest scenario efficiently, can accurately map the location of single species such as the M. flexuosa palm, and may be useful for future frameworks.
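The confidence-map formulation suggests that individual palms are recovered as local maxima of a per-pixel confidence surface. A minimal sketch of such peak extraction follows, using scipy's maximum_filter; the window size and threshold are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def peaks_from_confidence(conf_map, window=15, threshold=0.5):
    """Return (row, col) coordinates of local maxima in a per-pixel
    confidence map; each peak is treated as one detected palm tree."""
    # A pixel counts as a peak if it equals the local maximum of its window
    # and exceeds the detection threshold.
    local_max = maximum_filter(conf_map, size=window) == conf_map
    return np.argwhere(local_max & (conf_map > threshold))

# Toy confidence map with two bumps standing in for CNN output.
conf = np.zeros((100, 100))
conf[20, 30] = 0.9
conf[70, 65] = 0.8
print(peaks_from_confidence(conf))  # [[20 30] [70 65]]
```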


Author(s):  
T. Mizoguchi ◽  
A. Ishii ◽  
H. Nakamura

Abstract. In this paper, we propose a new method for identifying individual tree species based on depth and curvature images created from point clouds captured by a terrestrial laser scanner, combined with a Convolutional Neural Network (CNN). Given the point cloud of an individual tree, the proposed method first extracts the subset of points corresponding to the trunk at breast height. Branches and leaves are then removed from the extracted points by RANSAC-based circle fitting, and the depth image is created by globally fitting a cubic polynomial surface to the remaining trunk points. Furthermore, principal curvatures are estimated at each scanned point by locally fitting a quadratic surface to its neighbouring points. Depth images clearly capture the bark texture produced by splitting and tear-off, but their computation is unstable and may fail to capture the bark shape in the resulting images. In contrast, curvature estimation allows stable computation of surface concavity and convexity, so the curvature images represent the local geometry of the bark texture well. Compared with the depth image, the curvature image enables accurate classification of slanted trees with many branches and leaves. We also evaluated the effectiveness of a multi-modal approach to species classification in which depth and curvature images are analysed together using a CNN and a support vector machine. We verified the superior performance of the proposed method on point clouds of Japanese cedar and cypress trees.
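As a rough illustration of the per-point curvature step described above, the sketch below fits a quadratic surface to a point's neighbourhood by least squares and reads the principal curvatures off the Hessian; it assumes the neighbourhood is already expressed in a local frame with the z-axis along the surface normal, and it is a generic formulation, not the authors' implementation.

```python
import numpy as np

def principal_curvatures(neighbours):
    """Estimate principal curvatures at a scanned point from its neighbours,
    given as an (N, 3) array in a local frame: the query point at the origin
    and the z-axis along the (approximate) surface normal.

    Fits z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f by least squares and
    returns the eigenvalues of the Hessian [[2a, b], [b, 2c]] at the origin,
    which equal the principal curvatures when the tangent-plane slope is small."""
    x, y, z = neighbours[:, 0], neighbours[:, 1], neighbours[:, 2]
    A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    (a, b, c, d, e, f), *_ = np.linalg.lstsq(A, z, rcond=None)
    hessian = np.array([[2*a, b], [b, 2*c]])
    k1, k2 = np.linalg.eigvalsh(hessian)
    return k1, k2

# Toy check on a paraboloid z = 0.5*(x^2 + y^2): both curvatures are ~1.
u = np.random.uniform(-0.1, 0.1, size=(200, 2))
pts = np.column_stack([u, 0.5 * (u[:, 0]**2 + u[:, 1]**2)])
print(principal_curvatures(pts))  # approximately (1.0, 1.0)
```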


2017 ◽  
Author(s):  
Tomohiro Mizoguchi ◽  
Akira Ishii ◽  
Hiroyuki Nakamura ◽  
Tsuyoshi Inoue ◽  
Hisashi Takamatsu

Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3595 ◽  
Author(s):  
Anderson Aparecido dos Santos ◽  
José Marcato Junior ◽  
Márcio Santos Araújo ◽  
David Robledo Di Martini ◽  
Everton Castelão Tetila ◽  
...  

Detection and classification of tree species from remote sensing data have been performed mainly with multispectral and hyperspectral images and Light Detection And Ranging (LiDAR) data. Despite their comparatively lower cost and higher spatial resolution, few studies have focused on images captured by Red-Green-Blue (RGB) sensors. In addition, recent years have witnessed impressive progress in deep learning methods for object detection. Motivated by this scenario, we proposed and evaluated the use of Convolutional Neural Network (CNN)-based methods combined with Unmanned Aerial Vehicle (UAV) high-spatial-resolution RGB imagery for the detection of law-protected tree species. Three state-of-the-art object detection methods were evaluated: Faster Region-based Convolutional Neural Network (Faster R-CNN), YOLOv3, and RetinaNet. A dataset comprising 392 RGB images captured from August 2018 to February 2019 over a forested urban area in midwest Brazil was built to assess the selected methods. The target object is an important tree species threatened by extinction, Dipteryx alata Vogel (Fabaceae). The experimental analysis delivered an average precision of around 92%, with processing times below 30 milliseconds.
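For context, the snippet below shows how one of the evaluated detectors (Faster R-CNN) is commonly set up in torchvision for a single target species (background plus Dipteryx alata); it follows the standard torchvision fine-tuning recipe and is not the authors' exact configuration.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Two classes: background (0) and the target species Dipteryx alata (1).
num_classes = 2

# Start from a COCO-pretrained Faster R-CNN and swap in a new box predictor head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Training-step sketch: images is a list of CHW tensors in [0, 1];
# targets is a list of dicts with "boxes" (N, 4) and "labels" (N,).
model.train()
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100., 120., 180., 210.]]),
            "labels": torch.tensor([1])}]
loss_dict = model(images, targets)
loss = sum(loss_dict.values())
loss.backward()
```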


2021 ◽  
Vol 13 (14) ◽  
pp. 2787
Author(s):  
Mohamed Barakat A. Gibril ◽  
Helmi Zulhaidi Mohd Shafri ◽  
Abdallah Shanableh ◽  
Rami Al-Ruzouq ◽  
Aimrun Wayayok ◽  
...  

Large-scale mapping of date palm trees is vital for their consistent monitoring and sustainable management, considering their substantial commercial, environmental, and cultural value. This study presents an automatic approach for the large-scale mapping of date palm trees from very-high-spatial-resolution (VHSR) unmanned aerial vehicle (UAV) datasets, based on a deep learning approach. A U-shaped convolutional neural network (U-Net), based on a deep residual learning framework, was developed for the semantic segmentation of date palm trees. A comprehensive set of labeled data was established to enable the training and evaluation of the proposed segmentation model and increase its generalization capability. The performance of the proposed approach was compared with those of various state-of-the-art fully convolutional networks (FCNs) with different encoder architectures, including U-Net (with a VGG-16 backbone), the pyramid scene parsing network, and two variants of DeepLab V3+. Experimental results showed that the proposed model outperformed the other FCNs on the validation and testing datasets. The generalizability evaluation of the proposed approach on a comprehensive and complex testing dataset exhibited higher classification accuracy and showed that date palm trees could be automatically mapped from VHSR UAV images with an F-score, mean intersection over union, precision, and recall of 91%, 85%, 91%, and 92%, respectively. The proposed approach provides an efficient deep learning architecture for the automatic mapping of date palm trees from VHSR UAV-based images.
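A minimal sketch of the reported segmentation metrics for a binary date-palm mask is given below, assuming 0/1 prediction and reference rasters; averaging the palm and background IoU for the mean IoU is a common convention and an assumption here, not a detail taken from the paper.

```python
import numpy as np

def binary_seg_metrics(pred, ref):
    """Precision, recall, F-score, and mean IoU for a binary segmentation,
    where 1 marks date-palm pixels and 0 marks background."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.sum(pred & ref)
    fp = np.sum(pred & ~ref)
    fn = np.sum(~pred & ref)
    tn = np.sum(~pred & ~ref)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    iou_palm = tp / (tp + fp + fn)
    iou_background = tn / (tn + fp + fn)
    return precision, recall, f_score, (iou_palm + iou_background) / 2

# Toy 3x3 masks standing in for prediction and reference rasters.
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
ref  = np.array([[1, 1, 0], [0, 0, 1], [0, 0, 0]])
print(binary_seg_metrics(pred, ref))
```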


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Rongji Zhang ◽  
Feng Sun ◽  
Ziwen Song ◽  
Xiaolin Wang ◽  
Yingcui Du ◽  
...  

Traffic flow forecasting is key to an intelligent transportation system (ITS). Current deep learning-based short-term traffic flow forecasting methods still need improvement in both accuracy and computational efficiency. Therefore, this paper proposes GA-TCN, a short-term traffic flow forecasting model based on a temporal convolutional network (TCN) optimized by a genetic algorithm (GA). The prediction error was used as the fitness value, and the genetic algorithm optimized the filters, kernel size, batch size, and dilations hyperparameters of the TCN to determine the best-performing prediction model. Finally, the model was tested on the public PEMS dataset. The results showed that the average absolute error of the proposed GA-TCN decreased by 34.09%, 22.42%, and 26.33% compared with LSTM, GRU, and TCN on working days, while on weekends it decreased by 24.42%, 2.33%, and 3.92%, respectively. These results indicate that, compared with existing models, the proposed model adapts better and predicts short-term traffic flow more accurately, and it can provide important support for formulating dynamic traffic control schemes.
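The GA-TCN search loop can be pictured as below: a genetic algorithm evolves the four named hyperparameters with the validation prediction error as fitness. The operator choices (truncation selection, uniform crossover, random-reset mutation) and the toy surrogate standing in for TCN training are illustrative assumptions, not the paper's implementation.

```python
import random

# Hyperparameter search space over the four quantities named in the abstract.
SPACE = {
    "filters":     [16, 32, 64, 128],
    "kernel_size": [2, 3, 5, 7],
    "batch_size":  [16, 32, 64, 128],
    "dilations":   [[1, 2, 4], [1, 2, 4, 8], [1, 2, 4, 8, 16]],
}

def evaluate_tcn(genome):
    """Stand-in fitness: in GA-TCN this would train a temporal convolutional
    network with the given hyperparameters and return its validation prediction
    error (lower is better). A toy surrogate keeps the sketch runnable."""
    return abs(genome["filters"] - 64) + abs(genome["kernel_size"] - 3)

def random_genome():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(g, rate=0.2):
    # Random-reset mutation: occasionally resample a gene from its range.
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in g.items()}

def ga_search(pop_size=20, generations=30):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate_tcn)   # error as fitness
        parents = scored[: pop_size // 2]               # truncation selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=evaluate_tcn)

print(ga_search())
```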


2019 ◽  
Vol 11 (23) ◽  
pp. 2788 ◽  
Author(s):  
Uwe Knauer ◽  
Cornelius Styp von Rekowski ◽  
Marianne Stecklina ◽  
Tilman Krokotsch ◽  
Tuan Pham Minh ◽  
...  

In this paper, we evaluate different popular voting strategies for the fusion of classifier results. A convolutional neural network (CNN) and different variants of random forest (RF) classifiers were trained to discriminate between 15 tree species based on airborne hyperspectral imaging data. The spectral data were preprocessed with a multi-class linear discriminant analysis (MCLDA) to reduce dimensionality and obtain spatial–spectral features. The best individual classifier was a CNN with a classification accuracy of 0.73 ± 0.086. The classification performance increased to an accuracy of 0.78 ± 0.053 when precision-weighted voting was used for a hybrid ensemble of the CNN and two RF classifiers. This voting strategy clearly outperformed majority voting (0.74), accuracy-weighted voting (0.75), and presidential voting (0.75).
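A minimal sketch of precision-weighted voting over per-classifier class probabilities follows, assuming each model's per-class precision has been estimated on a validation split; the exact weighting used in the paper and its "presidential voting" variant are not reproduced here.

```python
import numpy as np

def precision_weighted_vote(probas, precisions):
    """Fuse classifier outputs by weighting each model's class probabilities
    with its validation precision for that class.

    probas:     (n_models, n_samples, n_classes) predicted probabilities
    precisions: (n_models, n_classes) per-class precision of each model
    Returns the fused class index per sample."""
    weighted = probas * precisions[:, None, :]   # broadcast over samples
    fused = weighted.sum(axis=0)                 # accumulate weighted votes
    return fused.argmax(axis=1)

# Toy fusion of a CNN and two RF classifiers on 2 samples and 3 tree species.
probas = np.array([
    [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]],   # CNN
    [[0.4, 0.4, 0.2], [0.1, 0.3, 0.6]],   # RF variant 1
    [[0.5, 0.2, 0.3], [0.3, 0.3, 0.4]],   # RF variant 2
])
precisions = np.array([
    [0.8, 0.7, 0.6],
    [0.6, 0.7, 0.8],
    [0.7, 0.6, 0.7],
])
print(precision_weighted_vote(probas, precisions))  # [0 2]
```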

