Estimation of leaf area index in onion (Allium cepa L.) using an unmanned aerial vehicle

2013 ◽  
Vol 115 (1) ◽  
pp. 31-42 ◽  
Author(s):  
Juan I. Córcoles ◽  
Jose F. Ortega ◽  
David Hernández ◽  
Miguel A. Moreno
2021 ◽  
Author(s):  
Shuang Wu ◽  
Lei Deng ◽  
Lijie Guo ◽  
Yanjie Wu

Abstract
Background: Leaf Area Index (LAI) is defined as half the total leaf area per unit horizontal ground surface area. Accurate vegetation extraction from remote sensing imagery is therefore critical for LAI estimation. However, most studies do not fully exploit the high spatial resolution of Unmanned Aerial Vehicle (UAV) imagery, for example by not removing the background (soil, shadow, etc.). Furthermore, advances in multi-sensor synchronous observation and integration technology allow canopy spectral, structural, and thermal data to be collected simultaneously, making data fusion possible.
Methods: To investigate the potential of high-resolution UAV imagery combined with multi-sensor data fusion for LAI estimation, high-resolution UAV imagery was acquired with a multi-sensor integrated MicaSense Altum camera to extract the wheat canopy's spectral, structural, and thermal features. After removing the soil background, all features were fused, and LAI was estimated using Random Forest and Support Vector Machine regression.
Results: The results show that: (1) the soil background reduced LAI prediction accuracy, and it could be removed effectively by exploiting the high resolution of the UAV imagery; after background removal, prediction accuracy improved significantly, with R2 increasing by about 0.27 and RMSE falling by about 0.476. (2) Fusing the multi-sensor synchronous observation data further improved LAI prediction and achieved the best accuracy (R2 = 0.815 and RMSE = 1.023). (3) Compared with the other variables, CHM, NRCT, NDRE, and BLUE were crucial for LAI estimation. Even a simple Multiple Linear Regression model achieved high prediction accuracy (R2 = 0.679 and RMSE = 1.231), suggesting a route to rapid and efficient LAI prediction.
Conclusions: The method of this study can be transferred to other sites with larger areas or similar agricultural structures, which will facilitate agricultural production and management.
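The abstract's observation that even a simple Multiple Linear Regression over the fused features performs well can be sketched in a few lines. The sketch below is illustrative only: the feature names (CHM, NRCT, NDRE, BLUE) come from the abstract, but the data, weights, and noise level are synthetic assumptions, not the study's dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fused per-plot features: CHM (canopy height model),
# NRCT, NDRE, and the BLUE band, scaled to [0, 1] (synthetic stand-ins).
n = 60
X = rng.uniform(0.0, 1.0, size=(n, 4))

# Synthetic LAI as a noisy linear mix of the features (illustration only).
true_w = np.array([3.0, 1.5, 2.0, -0.5])
y = X @ true_w + 0.8 + rng.normal(0.0, 0.1, size=n)

# Multiple Linear Regression via ordinary least squares.
A = np.column_stack([X, np.ones(n)])          # append intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # [w_CHM..w_BLUE, intercept]

pred = A @ coef
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
rmse = np.sqrt(np.mean((y - pred) ** 2))
print(f"R2={r2:.3f}, RMSE={rmse:.3f}")
```

On real plot-level data, `X` would hold the fused spectral, structural, and thermal features after soil-background removal, and `y` the field-measured LAI.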


2020 ◽  
Vol 13 (1) ◽  
pp. 84
Author(s):  
Tomoaki Yamaguchi ◽  
Yukie Tanaka ◽  
Yuto Imachi ◽  
Megumi Yamashita ◽  
Keisuke Katsura

Leaf area index (LAI) is a vital parameter for predicting rice yield. Unmanned aerial vehicle (UAV) surveillance with an RGB camera has been shown to have potential as a low-cost and efficient tool for monitoring crop growth. Simultaneously, deep learning (DL) algorithms have attracted attention as a promising tool for the task of image recognition. The principal aim of this research was to evaluate the feasibility of combining DL and RGB images obtained by a UAV for rice LAI estimation. In the present study, an LAI estimation model developed by DL with RGB images was compared to three other practical methods: a plant canopy analyzer (PCA); regression models based on color indices (CIs) obtained from an RGB camera; and vegetation indices (VIs) obtained from a multispectral camera. The results showed that the estimation accuracy of the model developed by DL with RGB images (R2 = 0.963 and RMSE = 0.334) was higher than those of the PCA (R2 = 0.934 and RMSE = 0.555) and the regression models based on CIs (R2 = 0.802–0.947 and RMSE = 0.401–1.13), and comparable to that of the regression models based on VIs (R2 = 0.917–0.976 and RMSE = 0.332–0.644). Therefore, our results demonstrated that the estimation model using DL with an RGB camera on a UAV could be an alternative to the methods using PCA and a multispectral camera for rice LAI estimation.
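The color-index regression baseline mentioned above can be sketched compactly. The abstract does not name which CIs were used, so the Excess Green index (ExG = 2g − r − b on chromatic coordinates) is assumed here as a representative example, and the per-plot RGB means and LAI values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def excess_green(rgb):
    """Excess Green index ExG = 2g - r - b, computed on chromatic
    (sum-normalized) coordinates of an (..., 3) RGB array."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0            # guard against all-black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2 * g - r - b

# Synthetic per-plot mean RGB values (stand-ins for UAV-mosaic plot means).
plots = rng.integers(40, 200, size=(30, 3))
exg = excess_green(plots)

# Simple linear regression of (synthetic) LAI against the index.
lai = 5.0 * exg + 1.0 + rng.normal(0.0, 0.05, size=30)
slope, intercept = np.polyfit(exg, lai, 1)
print(f"LAI ~ {slope:.2f}*ExG + {intercept:.2f}")
```

In practice, each CI would be regressed against field-measured LAI in the same way, and the best-performing index retained; the DL model in the study replaces this hand-crafted feature step with learned features.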


2021 ◽  
Author(s):  
Weiping Kong ◽  
Wenjiang Huang ◽  
Lingling Ma ◽  
Binbin Chen ◽  
Chuanrong Li ◽  
...  
