A Double Swath Configuration for Improving Throughput and Accuracy of Trait Estimate from UAV Images

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Wenjuan Li ◽  
Alexis Comar ◽  
Marie Weiss ◽  
Sylvain Jay ◽  
Gallian Colombeau ◽  
...  

Multispectral observations from unmanned aerial vehicles (UAVs) are currently used in precision agriculture and crop phenotyping to monitor a series of traits that characterize vegetation status. However, the limited autonomy of UAVs makes it difficult to complete flights over large areas. Increasing the throughput of data acquisition without degrading the ground sample distance (GSD) is therefore a critical issue. We propose here a new image acquisition configuration based on the combination of two focal lengths (f): a lens with f = 4.2 mm is added to the standard f = 8 mm lens of the multispectral camera, doubling the swath (DS: double swath) relative to the standard single-swath (SS) setup. Two flights were completed consecutively in 2018 over a maize field using the AIRPHEN multispectral camera at 52 m altitude. The DS flight plan was designed for 80% overlap with the 4.2 mm lens, while the SS plan was designed for 80% overlap with the 8 mm lens. As a result, the time required to cover the same area is halved for DS compared to SS. Georeferencing accuracy was improved for the DS configuration, particularly in the Z dimension, due to the larger view angles available with the short focal length. Application to plant height estimation demonstrates that the DS configuration provides results similar to the SS one. However, for both the DS and SS configurations, degrading the quality level used to generate the 3D point cloud significantly decreases the plant height estimates.
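The throughput gain follows directly from pinhole-camera geometry: at a fixed altitude, the across-track swath scales inversely with focal length, so the 4.2 mm lens covers roughly twice the ground of the 8 mm lens per pass, at the cost of a proportionally coarser GSD. A minimal sketch, where the sensor width and pixel pitch are illustrative assumptions rather than AIRPHEN specifications:

```python
def swath_width(altitude_m, sensor_width_mm, focal_mm):
    """Across-track ground footprint (m) of a nadir-pointing camera."""
    return altitude_m * sensor_width_mm / focal_mm

def gsd_cm(altitude_m, pixel_pitch_um, focal_mm):
    """Ground sample distance in cm/pixel for the same pinhole model."""
    return altitude_m * pixel_pitch_um / (focal_mm * 10.0)

# At 52 m, halving the focal length from 8 mm to 4.2 mm widens the swath
# by a factor of 8 / 4.2 ~= 1.9, which is what halves the flight time.
ratio = swath_width(52, 10.0, 4.2) / swath_width(52, 10.0, 8.0)
```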

Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6733
Author(s):  
Min-Joong Kim ◽  
Sung-Hun Yu ◽  
Tong-Hyun Kim ◽  
Joo-Uk Kim ◽  
Young-Min Kim

Today, much research on autonomous driving technology is being conducted, and various vehicles with autonomous driving functions, such as adaptive cruise control (ACC), are being released. An autonomous vehicle recognizes obstacles ahead by fusing data from various sensors, such as lidar, radar, and camera sensors. As the number of vehicles equipped with such autonomous driving functions increases, securing safety and reliability becomes a major issue. Recently, Mobileye proposed the responsibility-sensitive safety (RSS) model, a white-box mathematical model, to secure the safety of autonomous vehicles and to clarify responsibility in the case of an accident. In this paper, a method of applying the RSS model to a variable-focus camera that can cover the recognition range of both a lidar sensor and a radar sensor with a single camera sensor is considered. The variables of the RSS model suitable for the variable-focus camera were defined, their values were determined, and the safe distances for each velocity were derived by applying those values. In addition, after accounting for the time required to obtain the data and the time required to change the focal length of the camera, it was confirmed that the response time obtained using the derived safe distance is valid.
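For the longitudinal case, the RSS model gives the minimum safe following distance in closed form: d_min = max(0, v_r·ρ + ½·a_accel·ρ² + (v_r + ρ·a_accel)²/(2·b_min) − v_f²/(2·b_max)). A sketch of that formula, with illustrative (not vehicle-certified) values for the response time and acceleration bounds:

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=0.5,
                                   a_max_accel=2.0, a_min_brake=4.0,
                                   a_max_brake=8.0):
    """RSS minimum safe distance (m) between a rear and a front vehicle.

    v_rear, v_front: speeds in m/s; rho: response time in s;
    a_max_accel: worst-case rear acceleration during the response time;
    a_min_brake: guaranteed rear braking; a_max_brake: worst-case front
    braking. Parameter defaults are illustrative assumptions.
    """
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + (v_rear + rho * a_max_accel) ** 2 / (2.0 * a_min_brake)
         - v_front ** 2 / (2.0 * a_max_brake))
    return max(0.0, d)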


Author(s):  
M. Possoch ◽  
S. Bieker ◽  
D. Hoffmeister ◽  
A. Bolten ◽  
J. Schellberg ◽  
...  

Remote sensing of crop biomass is important for precision agriculture, which aims to improve nutrient use efficiency and to develop better stress and disease management. In this study, multi-temporal crop surface models (CSMs) were generated from UAV-based dense imaging in order to derive the plant height distribution and to determine forage mass. The low-cost UAV-based RGB imaging was carried out in a grassland experiment at the University of Bonn, Germany, in summer 2015. The test site comprised three consecutive growths with six different nitrogen fertilizer levels and three replicates, in total 324 plots of 1.5 × 1.5 m. Each growth consisted of six harvesting dates. RGB images and biomass samples were taken at twelve dates, nearly biweekly, within two growths between June and September 2015. Images were taken with a DJI Phantom 2 in combination with a 2D Zenmuse gimbal and a GoPro Hero 3 (black edition). Overlapping images were captured at 13 to 16 m height and overview images at approximately 60 m height, at 2 frames per second. The RGB vegetation index (RGBVI) was calculated from the non-calibrated images as the normalized difference of the squared green reflectance and the product of blue and red reflectance. Post-processing was done with Agisoft PhotoScan Professional (SfM-based) and Esri ArcGIS. Fourteen ground control points (GCPs), marked by 30 cm × 30 cm targets, were located in the field and measured with an RTK-GPS (Topcon HiPer Pro) with 0.01 m horizontal and vertical precision. The spatial errors in the x-, y-, and z-directions were on the order of 3-4 cm. From each survey, one distortion-corrected image was also georeferenced with the same GCPs and used for the RGBVI calculation. The results were used to analyse and evaluate the relationship between plant height estimated with this low-cost UAV system and forage mass. Results indicate that plant height is a suitable indicator for forage mass: there is a robust correlation between crop height and dry matter (R² = 0.6). The RGBVI does not appear to be a suitable indicator for forage mass in grassland, although combining plant height and RGBVI yielded a medium correlation with dry matter (R² = 0.5).
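The RGBVI described in this abstract, the normalized difference of the squared green band and the blue-red product, can be computed per pixel directly from the digital numbers; a minimal sketch:

```python
def rgbvi(r, g, b):
    """RGB vegetation index: (G^2 - B*R) / (G^2 + B*R).

    Works on raw (non-calibrated) digital numbers, as in the study;
    returns a value in [-1, 1], higher for green vegetation.
    """
    num = g * g - b * r
    den = g * g + b * r
    return num / den if den else 0.0
```

A green grass pixel (e.g. R=50, G=120, B=40) scores well above a soil-like pixel (e.g. R=90, G=80, B=70), which is what makes the index usable without radiometric calibration.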


Author(s):  
P. O. Mc’Okeyo ◽  
F. Nex ◽  
C. Persello ◽  
A. Vrieling

Abstract. The application of UAV-based aerial imagery has advanced rapidly over the past two decades. This can be attributed to the operational flexibility of UAVs, their ultra-high spatial resolution, their low cost, and enhancements in UAV-based sensors. Nonetheless, multitemporal series of multispectral UAV imagery still suffer from significant misregistration errors, which is a concern for applications such as precision agriculture. Direct image georeferencing and co-registration are commonly done using ground control points; this is usually costly and time consuming. This research proposes a novel approach for automatic co-registration of multitemporal UAV imagery using intensity-based keypoints. The Speeded Up Robust Features (SURF), Binary Robust Invariant Scalable Keypoints (BRISK), Maximally Stable Extremal Regions (MSER), and KAZE algorithms were tested and their parameters optimized. The image matching performance of these algorithms informed the decision to pursue further experiments with only SURF and KAZE. Optimally parametrized, the SURF and KAZE algorithms obtained co-registration accuracies of 0.1 and 0.3 pixels for intra-epoch and inter-epoch images, respectively. To obtain better intra-epoch co-registration accuracy, collective band processing is advised, whereas a one-to-one matching strategy is recommended for inter-epoch co-registration. The results were tested in a maize crop monitoring case, and a comparison of the spectral response of vegetation between the two UAV sensors, the Parrot Sequoia and the Micro MCA, was performed. Because the Micro MCA lacks an incidence sensor, spectral and radiometric calibration of its imagery is key to achieving an optimal response. Moreover, the two cameras have different specifications and thus differ in the quality of their respective photogrammetric outputs.
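Sub-pixel accuracies such as the quoted 0.1 and 0.3 pixels are residual statistics over matched keypoints after fitting a transform between the images. A minimal sketch of how such a figure can be computed, here for a translation-only model with a robust median shift (the paper's method presumably fits a richer transform, e.g. a homography):

```python
import math
from statistics import median

def coregistration_rmse(matches):
    """Residual RMSE (pixels) of matched keypoints under a median shift.

    matches: list of ((x1, y1), (x2, y2)) keypoint pairs between a
    reference image and a target image. The median offset is a robust
    estimate of the translation; the RMSE of what remains is the
    co-registration accuracy.
    """
    dx = median(x2 - x1 for (x1, _y1), (x2, _y2) in matches)
    dy = median(y2 - y1 for (_x1, y1), (_x2, y2) in matches)
    sq = [(x2 - x1 - dx) ** 2 + (y2 - y1 - dy) ** 2
          for (x1, y1), (x2, y2) in matches]
    return math.sqrt(sum(sq) / len(sq))
```

With perfectly consistent matches the residual is zero; outlier or drifting matches push it up, which is how intra-epoch and inter-epoch accuracies can be compared on a common scale.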


Agriculture ◽  
2019 ◽  
Vol 9 (11) ◽  
pp. 246 ◽  
Author(s):  
Baabak Mamaghani ◽  
M. Grady Saunders ◽  
Carl Salvaggio

With the advent of small unmanned aircraft systems (sUAS), remotely sensed images have been captured much closer to the ground, which has meant better resolution and smaller ground sample distances (GSDs). This has provided the precision agriculture community with the ability to analyze individual plants, and in certain cases individual leaves on those plants. It has also allowed for a dramatic increase in data acquisition for agricultural analysis. Because satellite and manned-aircraft remote sensing data collections had larger GSDs, self-shadowing was not seen as an issue for agricultural remote sensing. However, sUAS are able to image these shadows, which can cause issues in data analysis. This paper investigates the inherent reflectance variability of vegetation by analyzing six coneflower plants, as a surrogate for other cash crops, across different variables. These plants were measured under different sky conditions (cloudy and sunny), at different times (08:00 a.m., 09:00 a.m., 10:00 a.m., 11:00 a.m. and 12:00 p.m.), and at different GSDs (2, 4 and 8 cm) using a field-portable spectroradiometer (ASD FieldSpec). In addition, a leaf-clip spectrometer was utilized to measure individual leaves on each plant in a controlled lab environment. These spectra were analyzed to determine whether there was any significant difference in the health of the various plants measured. Finally, a MicaSense RedEdge-3 multispectral camera was utilized to capture images of the plants every hour to analyze the variability produced by a sensor designed for agricultural remote sensing. The RedEdge-3 was held stationary at 1.5 m above the plants while collecting all images, which produced a GSD of 0.1 cm/pixel. To produce 2, 4, and 8 cm GSDs, the MicaSense RedEdge-3 would need to be at altitudes of 30.5, 61, and 122 m, respectively. This study did not take background effects into consideration for either the ASD or the MicaSense.
Results showed that GSD produced a statistically significant difference (p < 0.001) in the Normalized Difference Vegetation Index (NDVI, a commonly used metric of vegetation health), R² values demonstrated a low correlation between time of day and NDVI, and a one-way ANOVA showed no statistically significant difference in the NDVI computed from the leaf-clip probe (p-value of 0.018). Ultimately, the best condition for measuring vegetation reflectance was found to be cloudy days near noon. Sunny days produced self-shadowing on the plants, which increased the variability of the measured reflectance values (higher standard deviations in all five RedEdge-3 channels), and the shadowing decreased as time approached noon. This high reflectance variability in the coneflower plants made it difficult to measure the NDVI accurately.
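Two relations used throughout this abstract are simple enough to sketch: the proportional scaling between altitude and GSD at fixed optics, and the NDVI itself. The reference pair (1.5 m altitude giving 0.1 cm/pixel) comes from the text; the purely linear model ignores lens details, which is presumably why it yields 30 m rather than the quoted 30.5 m for a 2 cm GSD:

```python
def altitude_for_gsd(target_gsd_cm, ref_altitude_m=1.5, ref_gsd_cm=0.1):
    """GSD scales linearly with altitude for fixed focal length and pixel pitch."""
    return ref_altitude_m * target_gsd_cm / ref_gsd_cm

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)
```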


2019 ◽  
Vol 11 (22) ◽  
pp. 2678 ◽  
Author(s):  
Zhu ◽  
Sun ◽  
Peng ◽  
Huang ◽  
Li ◽  
...  

Crop above-ground biomass (AGB) is a key parameter used for monitoring crop growth and predicting yield in precision agriculture. Estimating the crop AGB at a field scale through the use of unmanned aerial vehicles (UAVs) is promising for agronomic application, but the robustness of the methods used for estimation needs to be balanced with practical application. In this study, three UAV remote sensing flight missions (using a multiSPEC-4C multispectral camera, a Micasense RedEdge-M multispectral camera, and an Alpha Series AL3-32 Light Detection and Ranging (LiDAR) sensor onboard three different UAV platforms) were conducted above three long-term experimental plots with different tillage treatments in 2018. We investigated the performances of the multi-source UAV-based 3D point clouds at multi-spatial scales using the traditional multi-variable linear regression model (OLS), random forest (RF), backpropagation neural network (BP), and support vector machine (SVM) methods for accurate AGB estimation. Results showed that crop height (CH) was a robust proxy for AGB estimation, and that high spatial resolution in CH datasets helps to improve maize AGB estimation. Furthermore, the OLS, RF, BP, and SVM methods all maintained an acceptable accuracy for AGB estimation; however, the SVM and RF methods performed slightly more robustly. This study is expected to optimize UAV systems and algorithms for specific agronomic applications.
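Of the four estimators compared, the OLS baseline is simple enough to sketch from scratch. A single-variable version relating crop height (CH) to AGB, with synthetic data for illustration only:

```python
def fit_ols(heights, agb):
    """Fit agb ~ a * height + b by ordinary least squares.

    heights: crop heights (e.g. m, from a UAV point cloud);
    agb: above-ground biomass samples. Returns (slope, intercept).
    """
    n = len(heights)
    mean_h = sum(heights) / n
    mean_y = sum(agb) / n
    sxy = sum((h - mean_h) * (y - mean_y) for h, y in zip(heights, agb))
    sxx = sum((h - mean_h) ** 2 for h in heights)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_h
```

The RF, BP, and SVM models replace this linear map with nonlinear ones; the abstract's finding is that all four stay acceptably accurate, with SVM and RF slightly more robust across tillage treatments.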


Author(s):  
S. Brocks ◽  
G. Bareth

Crop surface models (CSMs) are a useful tool for monitoring in-field crop growth variability, thus enabling precision agriculture, which is necessary for achieving higher agricultural yields. This contribution provides a first assessment of the suitability of consumer-grade smart cameras as sensors for the stereoscopic creation of crop surface models from oblique imagery acquired at ground-based positions. An application that automates image acquisition and transmission was developed. Automated image acquisition took place throughout the growing period of barley in 2013. For three dates on which both automated image acquisition and manual measurements of plant height were available, CSMs were generated using a combination of Agisoft PhotoScan and Esri ArcGIS. The coefficient of determination R² between the average manually measured plant height per plot and the average height of the developed crop surface models was 0.61 (n = 24). The overall correlation between the manually measured heights and the CSM-derived heights is 0.78. The average manually measured plant heights per plot in the timeframe covered by the generated CSMs range from 19 to 95 cm, while the average plant heights per plot of the generated CSMs range from 2.1 to 69 cm. These first results show that the presented approach is feasible.
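The R² = 0.61 figure is a standard coefficient of determination between per-plot manual heights and CSM-derived heights; a minimal sketch of that computation (the measurement values below are made up):

```python
def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

Because the CSM heights systematically underestimate the manual ones (2.1-69 cm vs 19-95 cm), the correlation (0.78) can remain high while R² against the raw values stays lower.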


Author(s):  
Jayme Garcia Arnal Barbedo ◽  
Luciano Vieira Koenigkan ◽  
Thiago Teixeira Santos ◽  
Patrícia Menezes Santos

Unmanned Aerial Vehicles (UAVs) are increasingly viewed as valuable tools to aid farm management. This kind of technology can be particularly useful in the context of extensive cattle farming, as production areas tend to be expansive and animals tend to be loosely monitored. With the advent of deep learning, and of Convolutional Neural Networks (CNNs) in particular, extracting relevant information from aerial images has become more effective. Despite the technological advancements in drone, imaging, and machine learning technologies, the application of UAVs for cattle monitoring is far from thoroughly studied, and many research gaps remain. In this context, the objectives of this study were threefold: 1) to determine the highest accuracy achievable in the detection of animals of the Canchim breed, which is visually similar to the Nelore breed (Bos taurus indicus); 2) to determine the ideal ground sample distance (GSD) for animal detection; 3) to determine the most accurate CNN architecture for this specific problem. The experiments involved 1,853 images containing 8,629 animal samples, and 15 different CNN architectures were tested. A total of 900 models were trained (15 CNN architectures × 3 spatial resolutions × 2 datasets × 10-fold cross-validation), allowing for a deep analysis of the several aspects that impact the detection of cattle in aerial images captured by UAVs. Results revealed that many CNN architectures are robust enough to reliably detect animals in aerial images even under far-from-ideal conditions, indicating the viability of using UAVs for cattle monitoring.
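The 900-model figure is the full Cartesian product of the experimental factors; a quick sketch of the experiment enumeration (all names below are hypothetical placeholders, not the study's labels):

```python
from itertools import product

architectures = [f"cnn_{i}" for i in range(15)]  # 15 CNN architectures
resolutions = (1.0, 2.0, 4.0)                    # 3 spatial resolutions (illustrative)
datasets = ("set_a", "set_b")                    # 2 datasets
folds = range(10)                                # 10-fold cross-validation

# One training run per combination: 15 * 3 * 2 * 10 = 900 models.
runs = list(product(architectures, resolutions, datasets, folds))
```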

