DEM modeling using RGB-based vegetation indices from UAV images

Author(s): Kyriacos Themistocleous


2015 ◽ Vol 7 (4) ◽ pp. 4026-4047
Author(s): Sebastian Candiago, Fabio Remondino, Michaela De Giglio, Marco Dubbini, Mario Gattelli

2020 ◽ Vol 12 (7) ◽ pp. 1207
Author(s): Jian Zhang, Chufeng Wang, Chenghai Yang, Tianjin Xie, Zhao Jiang, ...

The spatial resolution of in situ unmanned aerial vehicle (UAV) multispectral images has a crucial effect on crop growth monitoring and image acquisition efficiency. However, existing studies on the optimal spatial resolution for crop monitoring are mainly based on resampled images, so the resampled spatial resolutions reported in these studies might not be applicable to in situ UAV images. To determine the optimal spatial resolution of in situ UAV multispectral images for crop growth monitoring, a MicaSense RedEdge 3 camera was mounted on a DJI M600 UAV flying at heights of 22, 29, 44, 88, and 176 m to capture images of seedling rapeseed with ground sampling distances (GSDs) of 1.35, 1.69, 2.61, 5.73, and 11.61 cm, respectively. Meanwhile, the normalized difference vegetation index measured by a GreenSeeker (GS-NDVI) and the leaf area index (LAI) were collected to evaluate the performance of nine vegetation indices (VIs) and VI*plant height (PH) products at different GSDs for rapeseed growth monitoring. The results showed that the normalized difference red edge index (NDRE) performed better for estimating GS-NDVI (R2 = 0.812) and LAI (R2 = 0.717) than the other VIs. Moreover, when the GSD was less than 2.61 cm, the NDRE*PH derived from in situ UAV images outperformed NDRE alone for LAI estimation (R2 = 0.757). At coarser GSDs (≥5.73 cm), imprecise PH information and large within-pixel heterogeneity (revealed by semi-variogram analysis) resulted in large random errors in LAI estimation by NDRE*PH. Furthermore, image collection and processing at 1.35 cm GSD took about three times as long as at 2.61 cm. The results of this study suggest that NDRE*PH from UAV multispectral images with a spatial resolution of around 2.61 cm is a preferred choice for seedling rapeseed growth monitoring, while NDRE alone may perform better for low-spatial-resolution images.
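The NDRE and NDRE*PH products described above can be reproduced, in outline, with a few lines of array arithmetic. The sketch below is a minimal illustration rather than the authors' code: band names, array shapes, and the synthetic inputs are assumptions.

```python
import numpy as np

def ndre(nir: np.ndarray, red_edge: np.ndarray) -> np.ndarray:
    """Normalized difference red edge index: (NIR - RE) / (NIR + RE)."""
    denom = nir + red_edge
    return np.where(denom != 0, (nir - red_edge) / denom, 0.0)

# Synthetic reflectance and plant-height rasters standing in for co-registered
# UAV orthomosaic layers (real inputs would come from the multispectral mosaic
# and a crop-surface model).
rng = np.random.default_rng(0)
nir = rng.uniform(0.3, 0.6, (100, 100))
red_edge = rng.uniform(0.1, 0.3, (100, 100))
plant_height = rng.uniform(0.05, 0.4, (100, 100))   # metres

ndre_map = ndre(nir, red_edge)
ndre_ph = ndre_map * plant_height    # the NDRE*PH product used for LAI estimation
print(ndre_map.mean(), ndre_ph.mean())
```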


2017 ◽ Vol 8 (2) ◽ pp. 817-822
Author(s): A. Matese, S. F. Di Gennaro, C. Miranda, A. Berton, L. G. Santesteban

New remote sensing technologies have provided unprecedented results in vineyard monitoring. The aim of this work was to evaluate different sources of images and processing methodologies for describing the spatial variability of spectral-based and canopy-based vegetation indices within a vineyard, and their relationship with productive and qualitative vine parameters. Comparisons between image-derived indices (Sentinel-2 NDVI, unfiltered UAV NDVI, and filtered UAV NDVI) and agronomic features were performed. UAV images also allow the calculation of new non-spectral indices based on canopy architecture that provide growers with additional, useful information for within-vineyard management zone delineation.
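As an illustration of the filtering step mentioned above, the following sketch shows one common way to separate canopy from inter-row soil pixels before computing a vine-scale NDVI statistic. The threshold value and the synthetic raster are assumptions, not the authors' parameters.

```python
import numpy as np

# Synthetic NDVI orthomosaic standing in for a UAV-derived vineyard raster.
rng = np.random.default_rng(7)
ndvi = rng.uniform(0.1, 0.9, (500, 500))

# Assumed NDVI cutoff separating bare inter-row soil from vine canopy.
SOIL_THRESHOLD = 0.4
canopy_mask = ndvi > SOIL_THRESHOLD

unfiltered_mean = ndvi.mean()               # mixed soil + canopy signal
filtered_mean = ndvi[canopy_mask].mean()    # pure-canopy signal
print(f"unfiltered NDVI: {unfiltered_mean:.3f}, filtered NDVI: {filtered_mean:.3f}")
```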


2019 ◽ Vol 11 (21) ◽ pp. 2573
Author(s): Salvatore Di Gennaro, Riccardo Dainelli, Alberto Palliotti, Piero Toscano, Alessandro Matese

Several remote sensing technologies have been tested in precision viticulture to characterize vineyard spatial variability, from traditional aircraft and satellite platforms to recent unmanned aerial vehicles (UAVs). Image processing remains a challenge due to the traditional row-based architecture, in which the inter-row soil produces a high proportion of mixed pixels. In this context, UAV images combined with filtering techniques make it possible to analyze pure canopy pixels, and they were used to benchmark the performance of Sentinel-2 (S2) in overhead training systems. At harvest time, filtered and unfiltered UAV images and ground sampling data were used to validate the correlation between the S2 normalized difference vegetation indices (NDVIs) and vegetative and productive parameters in two vineyards (V1 and V2). Regarding the UAV vs. S2 NDVI comparison, satellite data in both vineyards showed a high correlation with both unfiltered and filtered UAV images (mean R2 = 0.80 for V1 and 0.60 for V2). Correlations between ground data and the NDVIs from both remote sensing platforms were strong for yield and biomass in both vineyards (R2 from 0.60 to 0.95). These results demonstrate the effectiveness of the spatial resolution provided by S2 for overhead trellis system viticulture, promoting precision viticulture even in areas currently managed without the support of innovative technologies.
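A hedged sketch of the UAV-versus-S2 comparison workflow: UAV NDVI is averaged to the 10 m Sentinel-2 grid and the coefficient of determination is computed between the two datasets. The grid size, the block factor, and the synthetic values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
uav_ndvi = rng.uniform(0.2, 0.8, (200, 200))          # fine-resolution UAV NDVI (synthetic)
block = 20                                             # UAV pixels per 10 m S2 cell (assumed)
uav_10m = uav_ndvi.reshape(200 // block, block, 200 // block, block).mean(axis=(1, 3))
s2_ndvi = uav_10m + rng.normal(0, 0.02, uav_10m.shape)  # stand-in for S2 NDVI values

r2 = np.corrcoef(uav_10m.ravel(), s2_ndvi.ravel())[0, 1] ** 2
print(f"R^2 between aggregated UAV NDVI and S2 NDVI: {r2:.2f}")
```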


2021 ◽ Vol 52 (3) ◽ pp. 601-610
Author(s): Qubaa et al.

Unmanned aerial vehicles (UAVs), or drones, have made great progress in the field of aerial surveys for studying vegetation and farmland. The research focuses on developing smart systems for managing agricultural fields, thus facilitating decision-making, increasing agricultural productivity, improving profitability, and protecting the environment. The paper highlights the ability of drones to classify agricultural land intended for cultivation as deserted, cultivated, or in the germination stage. For the first time in the Nineveh governorate, images from a DJI Phantom 4 UAV were used, together with the specialized Pix4Dfields software to process them. Four standard agricultural indices based on the visible spectrum were used (the Visible Atmospherically Resistant Index (VARI), Triangular Greenness Index (TGI), Synthetic Normalized Difference Vegetation Index (S-NDVI), and Visible Difference Vegetation Index (VDVI)) to test the UAV images and to categorize different types of agricultural land. The results showed that, in certain areas, S-NDVI and VDVI values of 0.16 and 0.14, respectively, indicated the presence of healthy, intact vegetation cover, whereas other areas returned values of 0.010 and -0.004, respectively, indicating poor plant condition or the complete absence of vegetation. All findings of this research confirm the validity of using UAV images for agricultural field management and development.
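The visible-band indices named above can be written out explicitly. The sketch below uses widely cited formulations for VARI, TGI (a common simplified set of coefficients), and VDVI; the exact S-NDVI formulation used in the paper is not reproduced here, and the synthetic reflectance values are placeholders.

```python
import numpy as np

def vari(r, g, b):
    """Visible Atmospherically Resistant Index: (G - R) / (G + R - B)."""
    denom = g + r - b
    return np.where(denom != 0, (g - r) / denom, 0.0)

def tgi(r, g, b):
    """Triangular Greenness Index, simplified form G - 0.39*R - 0.61*B."""
    return g - 0.39 * r - 0.61 * b

def vdvi(r, g, b):
    """Visible Difference Vegetation Index: (2G - R - B) / (2G + R + B)."""
    denom = 2 * g + r + b
    return np.where(denom != 0, (2 * g - r - b) / denom, 0.0)

# Synthetic RGB reflectance standing in for a Phantom 4 orthomosaic.
rng = np.random.default_rng(3)
r, g, b = (rng.uniform(0.0, 1.0, (50, 50)) for _ in range(3))
print(vari(r, g, b).mean(), tgi(r, g, b).mean(), vdvi(r, g, b).mean())
```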


Proceedings ◽ 2019 ◽ Vol 30 (1) ◽ pp. 17
Author(s): Brook, Micco, Battipaglia, Erbaggio, Ludeno, ...

Currently, the main goal of agriculture is to support food security in a sustainable way by improving the use efficiency of farm resources and increasing crop yield and quality under climate change conditions. Improving the use of farm resources, as well as reducing soil degradation processes, can be achieved through field crop monitoring at high spatial and temporal resolution, with the aim of managing local spatial variability. In the case of high-income crops, such as vineyards for high-quality wines, monitoring the spatial behavior of plants during the growing season represents an opportunity to improve plant management and farmer income and to preserve environmental health. However, because field monitoring is an additional cost for the farmer, its adoption is slow, and with it the achievement of sustainable agriculture. In recent decades, satellite multispectral images have been widely used for the management of large areas, but their pre-defined, fixed scale and relatively coarse spatial resolution restrict their application. This paper presents a modified multiscale fully connected convolutional neural network (CNN) as a practical tool for pan-sharpening Sentinel-2A images with UAV images. The reconstructed data are validated against independent multispectral UAV images and in situ spectral measurements, providing a multitemporal evaluation of plant responses through a set of selected vegetation indices. The proposed methodology was tested on plant measurements taken both in vivo and through the retrospective reconstruction of eco-physiological vine behavior, by evaluating water conductivity and water use efficiency indices derived from anatomical and isotopic traits recorded in vine stem wood. This methodology, which evaluates plant responses at high spatial and temporal resolution by combining the strengths of spaceborne and UAV data, was applied in a vineyard in southern Italy over the period from 2015 to 2018. The results show a good correspondence between the vegetation indices obtained from the reconstructed Sentinel-2A data and the plant measurements obtained from the tree-ring-based retrospective reconstruction of eco-physiological behavior.
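To make the pan-sharpening idea concrete, here is a deliberately small fully convolutional network in PyTorch that fuses upsampled Sentinel-2A bands with a co-registered UAV image. It is a sketch under stated assumptions (channel counts, layer widths, and the residual design are all illustrative), not the modified multiscale CNN described in the paper.

```python
import torch
import torch.nn as nn

class SimplePanSharpenCNN(nn.Module):
    def __init__(self, s2_bands: int = 4, uav_bands: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(s2_bands + uav_bands, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, s2_bands, kernel_size=3, padding=1),
        )

    def forward(self, s2_upsampled, uav):
        # Predict a residual correction on top of the bicubically upsampled S2 bands.
        x = torch.cat([s2_upsampled, uav], dim=1)
        return s2_upsampled + self.net(x)

# Synthetic tensors: 4 S2 bands upsampled to the UAV grid, 3 UAV RGB bands.
model = SimplePanSharpenCNN()
s2_up = torch.rand(1, 4, 128, 128)
uav = torch.rand(1, 3, 128, 128)
sharpened = model(s2_up, uav)   # shape: (1, 4, 128, 128)
```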


Author(s): J. Liu, M. D. Hossain, D. Chen

Abstract. Wild parsnip is an invasive plant that poses serious health risks to humans due to the toxin in its sap. Monitoring its presence has been a challenging task for conservation authorities because of its small size and irregular shape. Unmanned aerial vehicles (UAVs) can obtain ultra-high resolution (UHR) imagery and have been used for vegetation monitoring in recent years. In this study, UAV images captured at Lemoine Point Conservation Area in Kingston, Ontario, are used to test a methodology for distinguishing wild parsnip. The objective is to develop an efficient invasive wild parsnip classification workflow based on UHR digital UAV imagery. The image pre-processing workflow includes image orientation, digital elevation model (DEM) and digital surface model (DSM) extraction, and orthomosaicking using SimActive's software. Three vegetation indices and three texture features are calculated and added to the mosaicked images as additional bands. Two image analysis frameworks, namely pixel-based and object-based methods, and three classifiers are tested, and the object-based support vector machine (SVM) is selected to distinguish wild parsnip from other vegetation types. The optimal image resolution is determined by comparing accuracy assessments. The results provide an executable workflow for distinguishing wild parsnip and show that UAV images acquired with a simple digital camera are an appropriate and economical resource for detecting small and irregular vegetation. The method yields reliable and valid outcomes in detecting wild parsnip plants and demonstrates excellent performance in mapping small vegetation.
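An object-based SVM classification of this kind can be sketched with scikit-learn. The feature set (mean vegetation indices and GLCM texture statistics per image object) and the synthetic labels below are assumptions used only to show the workflow, not the study's exact pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row is one image object (segment); columns are hypothetical features:
# [mean ExG, mean VARI, mean VDVI, GLCM contrast, GLCM homogeneity, GLCM entropy]
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))
y = rng.integers(0, 2, size=500)   # 1 = wild parsnip, 0 = other vegetation

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```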


Agronomy ◽ 2021 ◽ Vol 12 (1) ◽ pp. 102
Author(s): José A. Martínez-Casasnovas, Leire Sandonís-Pozo, Alexandre Escolà, Jaume Arnó, Jordi Llorens

One of the challenges in orchard management, particularly in hedgerow tree plantations, is the delineation of management zones on the basis of high-precision data. Along this line, the present study analyses the applicability of vegetation indices derived from UAV images to estimate the key structural and geometric canopy parameters of an almond orchard. In addition, classes created on the basis of the vegetation indices were assessed for delineating potential management zones. The structural and geometric orchard parameters (width, height, cross-sectional area and porosity) were characterized by means of a LiDAR sensor, and the vegetation indices were derived from a UAV-acquired multispectral image. Both datasets were summarized every 0.5 m along the almond tree rows and used to interpolate continuous representations of the variables by means of geostatistical analysis. Linear and canonical correlation analyses were carried out to select the best-performing vegetation index for estimating the structural and geometric orchard parameters in each cross-section of the tree rows. The results showed that NDVI averaged in each cross-section and normalized by its projected area achieved the highest correlations and served to define potential management zones. These findings expand the possibilities of using multispectral images in orchard management, particularly in hedgerow plantations.
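The cross-section summary and correlation step can be illustrated as follows; the 0.5 m sections, the projected-area normalization, and the synthetic LiDAR-derived heights are assumptions standing in for the study's geostatistical workflow.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sections = 200                                       # 0.5 m cross-sections along a tree row
ndvi_sum = rng.uniform(5, 50, n_sections)              # summed NDVI per section (synthetic)
projected_area = rng.uniform(0.1, 1.0, n_sections)     # m^2 of canopy per section (synthetic)
canopy_height = rng.uniform(1.5, 3.5, n_sections)      # LiDAR-derived height per section (synthetic)

ndvi_norm = ndvi_sum / projected_area                  # area-normalized NDVI per cross-section
r = np.corrcoef(ndvi_norm, canopy_height)[0, 1]
print(f"Pearson r between normalized NDVI and canopy height: {r:.2f}")
```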


2020 ◽ Vol 12 (4) ◽ pp. 633
Author(s): Ming-Der Yang, Hsin-Hung Tseng, Yu-Chun Hsu, Hui Ping Tsai

A rapid and precise large-scale agricultural disaster survey is a basis for agricultural disaster relief and insurance but is labor-intensive and time-consuming. This study applies unmanned aerial vehicle (UAV) images and deep-learning image processing to estimate rice lodging in paddies over a large area. The study establishes an image semantic segmentation model employing two neural network architectures, FCN-AlexNet and SegNet, whose performance is explored in terms of interpreting objects of various sizes and computational efficiency. High-resolution visible images of rice paddies captured by commercial UAVs are used to calculate three vegetation indicators to improve the applicability of visible imagery. The proposed model was trained and tested on a set of UAV images from 2017 and validated on a set of UAV images from 2019. For the identification of rice lodging in the 2017 UAV images, the F1-score reaches 0.80 and 0.79 for FCN-AlexNet and SegNet, respectively. The F1-score of FCN-AlexNet using the RGB + ExGR combination also reaches 0.78 on the 2019 validation images. The proposed models adopting semantic segmentation networks prove more efficient, approximately 10 to 15 times faster, and have a lower misinterpretation rate than the maximum likelihood method.
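The ExGR indicator used in the RGB + ExGR combination can be computed directly from normalized RGB chromatic coordinates, as in the hedged sketch below, and stacked with the RGB bands as an extra input channel for a segmentation network. The preprocessing details of the study may differ.

```python
import numpy as np

def exgr(rgb: np.ndarray) -> np.ndarray:
    """Excess green minus excess red: ExG - ExR, from normalized r, g, b."""
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0                      # avoid division by zero on dark pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    exg = 2 * g - r - b
    exr = 1.4 * r - g
    return exg - exr

# Synthetic 8-bit RGB tile standing in for a UAV visible image of a rice paddy.
rgb = np.random.randint(0, 256, (256, 256, 3)).astype(np.float32)
features = np.dstack([rgb / 255.0, exgr(rgb)])   # (H, W, 4) input for segmentation
print(features.shape)
```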

