High-throughput analysis of the canopy traits in the worldwide olive germplasm bank of Córdoba using very high-resolution imagery acquired from unmanned aerial vehicle (UAV)

2021 · Vol 278 · pp. 109851
Author(s): Francisco J. Gómez-Gálvez, Daniel Pérez-Mohedano, Raúl de la Rosa-Navarro, Angjelina Belaj

2021 · Vol 13 (13) · pp. 2508
Author(s): Loredana Oreti, Diego Giuliarelli, Antonio Tomao, Anna Barbati

The importance of mixed forests is increasingly recognized at the scientific level, owing to their greater productivity and resource-use efficiency compared to pure stands. However, a reliable quantification of the actual spatial extent of mixed stands at a fine spatial scale is still lacking. Indeed, the classification and mapping of mixed stands, especially with semi-automatic procedures, has remained a challenging issue to date. The main objective of this study is to evaluate the potential of Object-Based Image Analysis (OBIA) and Very-High-Resolution (VHR) imagery to detect and map mixed forests of broadleaved and coniferous trees with a Minimum Mapping Unit (MMU) of 500 m2. The study evaluates segmentation-based classification paired with the non-parametric K-nearest-neighbors (K-NN) method, trained with a dataset independent of the one used for validation. The forest area mapped as mixed forest canopies in the study area amounts to 11%, with an overall accuracy of 85% and a K value of 0.78. Higher user's and producer's accuracies (85–93%) are reached in conifer- and broadleaf-dominated stands. The findings demonstrate that very-high-resolution images (0.20 m spatial resolution) can be reliably used to detect the fine-grained pattern of rare mixed forests, thus supporting the monitoring and management of forest resources at fine spatial scales as well.
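As a rough illustration of the segmentation-based K-NN classification described above, the sketch below uses scikit-image's SLIC and scikit-learn's KNeighborsClassifier as stand-ins for the study's own OBIA workflow; the segment count, band features, and class labels are assumptions, not the paper's actual parameters.

```python
# Minimal sketch of segmentation-based classification with K-NN,
# assuming a 3-band VHR orthophoto and per-segment training labels.
# SLIC is only a stand-in for the OBIA segmentation used in the study.
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import KNeighborsClassifier

def segment_features(image, n_segments=5000):
    """Segment the image and return per-segment mean band values."""
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    n = segments.max() + 1
    feats = np.zeros((n, image.shape[2]))
    for s in range(n):
        feats[s] = image[segments == s].mean(axis=0)
    return segments, feats

def classify_segments(train_feats, train_labels, feats, k=5):
    """Fit a K-NN classifier on labeled segments and predict the rest,
    e.g. into 'conifer', 'broadleaf', and 'mixed' canopy classes."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(train_feats, train_labels)
    return knn.predict(feats)
```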





2018 · Vol 10 (11) · pp. 1768
Author(s): Hui Yang, Penghai Wu, Xuedong Yao, Yanlan Wu, Biao Wang, ...

Building extraction from very high resolution (VHR) imagery plays an important role in urban planning, disaster management, navigation, the updating of geographic databases, and several other geospatial applications. Compared with traditional building extraction approaches, deep learning networks have recently shown outstanding performance in this task by using both high-level and low-level feature maps. However, current deep learning networks still struggle to exploit features from different levels in a rational way. To tackle this problem, a novel network based on DenseNets and the attention mechanism was proposed, called the dense-attention network (DAN). The DAN contains an encoder part and a decoder part, composed respectively of lightweight DenseNets and a spatial attention fusion module. The proposed encoder–decoder architecture strengthens feature propagation and effectively uses higher-level feature information to suppress low-level features and noise. Experimental results on the public International Society for Photogrammetry and Remote Sensing (ISPRS) datasets, using only red–green–blue (RGB) images, demonstrated that the proposed DAN outperformed other deep learning methods, achieving 96.16% overall accuracy (OA), a 92.56% F1 score, 90.56% mean intersection over union (MIoU), shorter training and response times, and a higher quality value.
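The spatial attention idea can be sketched as follows. This is not the authors' DAN; it is a minimal PyTorch module with assumed channel sizes and layer choices, showing how a high-level feature map can produce a spatial mask that re-weights (suppresses) low-level encoder features before fusion in a decoder.

```python
# Illustrative sketch (not the authors' exact DAN) of a spatial attention
# fusion step: a high-level decoder feature map produces a spatial mask
# that re-weights the low-level encoder features before fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionFusion(nn.Module):
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        # 1x1 convolution followed by a sigmoid yields a [0, 1] spatial mask
        self.mask = nn.Sequential(nn.Conv2d(high_ch, 1, kernel_size=1),
                                  nn.Sigmoid())
        self.fuse = nn.Conv2d(low_ch + high_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, low, high):
        # Upsample high-level features to the low-level spatial size
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                             align_corners=False)
        low = low * self.mask(high)   # suppress noisy low-level responses
        return self.fuse(torch.cat([low, high], dim=1))

# Example with assumed channel counts:
# fusion = SpatialAttentionFusion(low_ch=64, high_ch=256, out_ch=128)
```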





2019 · Vol 11 (12) · pp. 1413
Author(s): Víctor González-Jaramillo, Andreas Fries, Jörg Bendix

The present investigation evaluates the accuracy of estimating above-ground biomass (AGB) by means of two different sensors installed onboard an unmanned aerial vehicle (UAV) platform (DJI Inspire I), because the high cost of very high-resolution imagery provided by satellites or light detection and ranging (LiDAR) sensors often impedes AGB estimation and the determination of other vegetation parameters. The sensors used were an RGB camera (ZENMUSE X3) and a multispectral camera (Parrot Sequoia), whose images were used for AGB estimation in a natural tropical mountain forest (TMF) in Southern Ecuador. The sensors covered a total area of 80 ha at lower elevations, characterized by fast-changing topography and different vegetation covers. Within this area, a core study site of 24 ha was selected for AGB calculation, applying two different methods. The first method used the RGB images and applied the structure from motion (SfM) process to generate point clouds for a subsequent individual tree classification. From the classification at tree level, tree height (H) and diameter at breast height (DBH) could be determined, which are the necessary input parameters to calculate AGB (Mg ha−1) by means of a specific allometric equation for wet forests. The second method used the multispectral images to calculate the normalized difference vegetation index (NDVI), which is the basis for AGB estimation applying an equation for tropical evergreen forests. The obtained results were validated against a previous AGB estimation for the same area based on LiDAR data. The study found two major results: (i) the NDVI-based AGB estimates obtained from the multispectral drone imagery were less accurate due to the saturation effect in dense tropical forests; (ii) the photogrammetric approach using RGB images provided reliable AGB estimates, comparable to expensive LiDAR surveys (R2: 0.85). However, the latter is only possible if an auxiliary digital terrain model (DTM) at very high resolution is available, because in dense natural forests the terrain surface is hardly detectable by passive sensors, as the canopy layer impedes ground detection.
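A minimal sketch of the two estimation paths described in this abstract is given below. The allometric coefficients, the assumed wood density, and the NDVI-to-AGB regression are placeholders, not the equations used in the study; only the general forms (NDVI from NIR and red bands, per-tree AGB from DBH, height, and wood density) follow the text.

```python
# Minimal sketch of the two AGB estimation paths described above.
# Coefficients a, b, the wood density default, and the NDVI regression
# are illustrative placeholders, not the study's actual equations.
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from multispectral bands."""
    return (nir - red) / (nir + red + 1e-9)

def agb_allometric(dbh_cm, height_m, wood_density=0.6, a=0.0776, b=0.940):
    """Per-tree AGB from a generic wet-forest allometric form
    AGB = a * (rho * DBH^2 * H)^b; coefficients are illustrative only."""
    return a * (wood_density * dbh_cm**2 * height_m) ** b

def agb_from_ndvi(ndvi_mean, slope=250.0, intercept=-50.0):
    """Illustrative linear NDVI-to-AGB regression (Mg/ha); saturation in
    dense tropical canopies limits its accuracy, as noted in the abstract."""
    return slope * ndvi_mean + intercept
```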


