Evapotranspiration Estimation with Small UAVs in Precision Agriculture

Sensors, 2020, Vol 20 (22), pp. 6427
Author(s): Haoyu Niu, Derek Hollenbeck, Tiebiao Zhao, Dong Wang, YangQuan Chen

Estimating evapotranspiration (ET) has been one of the most critical research areas in agriculture because of water scarcity, the growing population, and climate change. The accurate estimation and mapping of ET are necessary for crop water management. Traditionally, researchers use water balance, soil moisture, weighing lysimeters, or an energy balance approach, such as Bowen ratio or eddy covariance towers, to estimate ET. However, these methods are point-specific or area-weighted measurements and cannot be extended to a large scale. With the advent of satellite technology, remote sensing images became able to provide spatially distributed measurements. However, the spatial resolution of multispectral satellite images is in the range of meters, tens of meters, or hundreds of meters, which is often not enough for crops with clumped canopy structures, such as trees and vines. Unmanned aerial vehicles (UAVs) can mitigate these spatial and temporal limitations. Lightweight cameras and sensors can be mounted on UAVs to take high-resolution images. Unlike satellite imagery, the spatial resolution of UAV images can be at the centimeter level. UAVs can also fly on demand, which provides high temporal resolution. In this study, the authors first examined different UAV-based approaches to ET estimation. Models and algorithms, such as mapping evapotranspiration at high resolution with internalized calibration (METRIC), the two-source energy balance (TSEB) model, and machine learning (ML), are analyzed and discussed herein. Second, challenges and opportunities for UAVs in ET estimation are discussed, such as uncooled thermal camera calibration, UAV image collection, and image processing. Finally, the authors share views on ET estimation with UAVs for future research and draw concluding remarks.
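Both METRIC and TSEB are surface energy balance approaches; in the simplified core they share, the latent heat flux (and hence ET) is recovered as the residual of the surface energy balance (the models differ in how the component fluxes, especially H, are estimated):

```latex
LE = R_n - G - H
```

where $R_n$ is net radiation, $G$ the soil heat flux, $H$ the sensible heat flux, and $LE$ the latent heat flux; ET follows by dividing $LE$ by the latent heat of vaporization of water.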

Author(s): Haoyu Niu, Tiebiao Zhao, Dong Wang, YangQuan Chen

Estimating evapotranspiration (ET) has recently been one of the most important research topics in agriculture because of water scarcity, the growing population, and climate change. ET is the sum of evaporation from the soil and transpiration from the crops to the atmosphere. The accurate estimation and mapping of ET are necessary for crop water management. Traditionally, researchers use weighing lysimeters, the Bowen ratio, eddy covariance, and many other methods to estimate ET. However, these methods are point- or location-specific measurements and cannot be extended to large-scale ET estimation. With the advent of satellite technology, remote sensing images can provide spatially distributed measurements. The spatial resolution of satellite multispectral images, however, is in the range of meters, which is often not enough for crops with clumped canopy structures, such as trees and vines. Moreover, the timing or frequency of satellite overpasses is not always enough to meet research or water management needs. Unmanned aerial vehicles (UAVs), commonly referred to as drones, can help solve these spatial and temporal challenges. Lightweight cameras and sensors can be mounted on drones to capture high-resolution images over large fields. Compared with satellite imagery, the spatial resolution of UAV images can be as high as 1 cm per pixel, and a drone can be flown on demand whenever weather conditions allow. Cloud cover is also less of a concern than in satellite remote sensing. Both temporal and spatial resolution are thus greatly improved by drones. In this paper, a review of different UAV-based approaches to ET estimation is presented. Different modified models used with UAVs, such as Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC) and the two-source energy balance (TSEB) model, are also discussed.


2017
Author(s): Imme Benedict, Chiel C. van Heerwaarden, Albrecht H. Weerts, Wilco Hazeleger

Abstract. The hydrological cycle of river basins can be simulated by combining global climate models (GCMs) and global hydrological models (GHMs). The spatial resolution of these models is restricted by computational resources and therefore limits the processes and level of detail that can be resolved. To further improve simulations of precipitation and river runoff on a global scale, we assess and compare the benefits of an increased resolution for a GCM and a GHM. We focus on the Rhine and Mississippi basins. Increasing the resolution of the GCM (from 1.125° to 0.25°) results in more realistic large-scale circulation patterns over the Rhine and an improved precipitation budget. These improvements with increased resolution are not found for the Mississippi basin, most likely because precipitation there is strongly dependent on the representation of still-unresolved convective processes. Increasing the resolution of vegetation and orography in the high-resolution GHM (from 0.5° to 0.05°) shows no significant differences in discharge for either basin, because the hydrological processes depend strongly on other parameter values that are not readily available at high resolution. Therefore, increasing the resolution of the GCM provides the most straightforward route to better results. This route works best for basins driven by large-scale precipitation, such as the Rhine basin. For basins driven by convective processes, such as the Mississippi basin, improvements are expected with even higher-resolution convection-permitting models.


2015, Vol 6 (1), pp. 61-81
Author(s): L. Gerlitz, O. Conrad, J. Böhner

Abstract. The heterogeneity of precipitation rates in high-mountain regions is not sufficiently captured by state-of-the-art climate reanalysis products due to their limited spatial resolution. Thus there exists a large gap between the available data sets and the demands of climate impact studies. The presented approach aims to generate spatially high-resolution precipitation fields for a target area in central Asia, covering the Tibetan Plateau and the adjacent mountain ranges and lowlands. Based on the assumption that observed local-scale precipitation amounts are triggered by varying large-scale atmospheric situations and modified by local-scale topographic characteristics, the statistical downscaling approach estimates local-scale precipitation rates as a function of large-scale atmospheric conditions, derived from the ERA-Interim reanalysis, and high-resolution terrain parameters. Since the relationships of the predictor variables with local-scale observations are rather unknown and highly nonlinear, an artificial neural network (ANN) was utilized for the development of adequate transfer functions. Different ANN architectures were evaluated with regard to their predictive performance. The final downscaling model was used for the cellwise estimation of monthly precipitation sums, the number of rainy days, and the maximum daily precipitation amount with a spatial resolution of 1 km². The model was found to sufficiently capture the temporal and spatial variations in precipitation rates in the highly structured target area and allows for a detailed analysis of the precipitation distribution. A concluding sensitivity analysis of the ANN model reveals the effect of the atmospheric and topographic predictor variables on the precipitation estimates in the climatically diverse subregions.
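The transfer-function idea behind the ANN downscaling can be sketched with a toy one-hidden-layer network trained by stochastic gradient descent. Everything below is an illustrative stand-in, not the paper's configuration: the two predictors (one "atmospheric", one "topographic"), the synthetic target function, the network size, and the learning rate are all invented.

```python
import math
import random

random.seed(0)

# Synthetic stand-ins for the predictors described in the paper:
# a large-scale atmospheric variable and a local terrain parameter.
# Target: a nonlinear "local precipitation" response.
def target(atmo, elev):
    return math.tanh(2.0 * atmo) * (1.0 - 0.5 * elev)

samples = [((a, e), target(a, e))
           for a, e in ((random.uniform(-1, 1), random.uniform(0, 1))
                        for _ in range(200))]

H = 6  # hidden units (arbitrary)
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(w1[j][i] * x[i] for i in range(2)) + b1[j])
         for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in samples) / len(samples)

lr = 0.05
loss_before = mse()
for _ in range(300):
    for x, y in samples:
        h, out = forward(x)
        err = out - y
        for j in range(H):
            # Gradient through tanh: (1 - h^2); use w2 before updating it.
            grad_h = err * w2[j] * (1 - h[j] ** 2)
            w2[j] -= lr * err * h[j]
            for i in range(2):
                w1[j][i] -= lr * grad_h * x[i]
            b1[j] -= lr * grad_h
        b2 -= lr * err
loss_after = mse()
```

The real model maps many predictors to gridded monthly precipitation; the sketch only shows the shape of the learned nonlinear transfer function.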


Author(s): Pattabiraman V., Parvathi R.

Natural data emerging directly from various sources, such as text, image, video, audio, and sensor data, has an inherent property of very high dimensionality. While these features add richness and perspective to the data, the sparsity associated with them increases computational complexity during learning and makes the data difficult to visualize and interpret, thus requiring large-scale computational power to extract insights. This is famously called the "curse of dimensionality." This chapter discusses the methods by which the curse of dimensionality is mitigated using conventional techniques and analyzes their performance on complex datasets. It also discusses the advantages of nonlinear methods over linear ones, and of neural networks, which can be a better approach than other nonlinear methods. Finally, it discusses future research areas, such as the application of deep learning techniques as a cure for this curse.
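One concrete symptom of the curse can be shown numerically: as dimensionality grows, distances between random points concentrate, so the contrast between "near" and "far" points collapses and distance-based learning degrades. A minimal sketch with synthetic points in the unit hypercube (the dimensions and point counts are arbitrary choices for illustration):

```python
import math
import random

random.seed(1)

def distance_contrast(dim, n_points=100):
    """Relative spread of distances from random points to the cube centre:
    (max - min) / min. Large in low dimensions, tiny in high dimensions."""
    pts = [[random.random() for _ in range(dim)] for _ in range(n_points)]
    centre = [0.5] * dim
    dists = [math.sqrt(sum((p[i] - centre[i]) ** 2 for i in range(dim)))
             for p in pts]
    return (max(dists) - min(dists)) / min(dists)

low = distance_contrast(2)      # 2-D: distances vary a lot
high = distance_contrast(1000)  # 1000-D: distances concentrate
```

In high dimensions all points end up roughly equidistant, which is one reason the chapter's dimensionality-reduction methods are needed before distance-based learning.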


1997, Vol 35 (4), pp. 11-15
Author(s): Seyhmus Baloglu, David Brinberg

Destination image and positioning studies in tourism have been limited to those dealing with the image's perceptual or cognitive component. This study examined the applicability of Russell and his colleagues' proposed affective space structure to large-scale environments (i.e., tourism destination countries), as well as its potential as a positioning framework for studying the affective images of tourism destinations. A multidimensional scaling analysis of 11 Mediterranean countries, together with the proposed affective space structure, indicated that Russell and his colleagues' affective space can also be applied to places that are not perceived directly. It also showed potential for studying the affective image positioning of tourism destinations. The article concludes with some theoretical and practical implications and future research areas regarding tourism destination images.


2021, Vol 13 (21), pp. 4220
Author(s): Yu Tao, Jan-Peter Muller, Siting Xiong, Susan J. Conway

The High-Resolution Imaging Science Experiment (HiRISE) onboard the Mars Reconnaissance Orbiter provides remotely sensed imagery of the surface of Mars at the highest available spatial resolution, 25–50 cm/pixel. However, because the spatial resolution is so high, the total area covered by HiRISE targeted stereo acquisitions is very limited. This results in a lack of availability of high-resolution digital terrain models (DTMs) better than 1 m/pixel. Such high-resolution DTMs have always been considered desirable by the international community of planetary scientists for carrying out fine-scale geological analysis of the Martian surface. Recently, new deep learning-based techniques that are able to retrieve DTMs from single optical orbital images have been developed and applied to single HiRISE observations. In this paper, we improve upon a previously developed single-image DTM estimation system called MADNet (1.0). We propose optimisations, collectively called MADNet 2.0, based on a supervised image-to-height estimation network, multi-scale DTM reconstruction, and 3D co-alignment processes. In particular, we employ optimised single-scale inference and multi-scale reconstruction (in MADNet 2.0), instead of multi-scale inference and single-scale reconstruction (in MADNet 1.0), to produce more accurate large-scale topographic retrieval with boosted fine-scale resolution. We demonstrate the improvements of the MADNet 2.0 DTMs produced from HiRISE images in comparison with the MADNet 1.0 DTMs and the published Planetary Data System (PDS) DTMs over the ExoMars Rosalind Franklin rover's landing site at Oxia Planum. Qualitative and quantitative assessments suggest the proposed MADNet 2.0 system is capable of pixel-scale DTM retrieval at the same spatial resolution (25 cm/pixel) as the input HiRISE images.


2021, Vol 8
Author(s): Xue Liu, Temilola E. Fatoyinbo, Nathan M. Thomas, Weihe Wendy Guan, Yanni Zhan, ...

Coastal mangrove forests provide important ecosystem goods and services, including carbon sequestration, biodiversity conservation, and hazard mitigation. However, they are being destroyed at an alarming rate by human activities. To characterize mangrove forest changes, evaluate their impacts, and support relevant protection and restoration decision making, accurate and up-to-date mangrove extent mapping at large spatial scales is essential. Available large-scale mangrove extent data products use a single machine learning method, commonly with 30 m Landsat imagery, and significant inconsistencies remain among these data products. With huge amounts of satellite data involved and the heterogeneity of land surface characteristics across large geographic areas, finding the most suitable method for large-scale high-resolution mangrove mapping is a challenge. The objective of this study is to evaluate the performance of a machine learning ensemble for mangrove forest mapping at 20 m spatial resolution across West Africa using Sentinel-2 (optical) and Sentinel-1 (radar) imagery. The machine learning ensemble integrates three methods commonly used in land cover and land use mapping: Random Forest (RF), Gradient Boosting Machine (GBM), and Neural Network (NN). The cloud-based big geospatial data processing platform Google Earth Engine (GEE) was used for pre-processing the Sentinel-2 and Sentinel-1 data. Extensive validation has demonstrated that the machine learning ensemble can generate mangrove extent maps at high accuracies for all study regions in West Africa (92%–99% Producer's Accuracy, 98%–100% User's Accuracy, 95%–99% Overall Accuracy). This is the first time that mangrove extent has been mapped at 20 m spatial resolution across West Africa. The machine learning ensemble has the potential to be applied to other regions of the world and is therefore capable of producing high-resolution mangrove extent maps at global scales periodically.
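The abstract does not spell out how the RF, GBM, and NN outputs are integrated; one common fusion rule for such an ensemble is per-pixel majority voting, sketched below. The class labels and the per-model predictions are invented for illustration only:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-pixel class predictions from several models:
    for each pixel, keep the class most models agree on."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Hypothetical per-pixel outputs of the three ensemble members.
rf  = ["mangrove", "water", "mangrove", "other"]
gbm = ["mangrove", "mangrove", "water", "other"]
nn  = ["mangrove", "water", "water", "mangrove"]

fused = majority_vote([rf, gbm, nn])
# fused == ["mangrove", "water", "water", "other"]
```

Voting tends to suppress the idiosyncratic errors of any single model, which is one reason ensembles help across heterogeneous land surfaces.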


2021, Vol 13 (14), pp. 2658
Author(s): Shahab Jozdani, Dongmei Chen, Wenjun Chen, Sylvain G. Leblanc, Christian Prévost, ...

Lichen is an important food source for caribou in Canada. Lichen mapping using remote sensing (RS) images can be a challenging task, however, as lichens generally appear in unevenly distributed, small patches and can resemble other surficial features. Moreover, collecting labeled lichen data (reference data) is expensive, which restricts the application of many robust supervised classification models that generally demand a large quantity of labeled data. The goal of this study was to investigate the potential of using a very-high-spatial-resolution (1 cm) lichen map of a small sample site (e.g., generated from a single UAV scene and field data) to train a subsequent classifier to map caribou lichen over a much larger area (~0.04 km² vs. ~195 km²) and a lower-spatial-resolution image (in this case, a 50 cm WorldView-2 image). The limited labeled data from the sample site were also partially noisy due to spatial and temporal mismatching issues. For this, we deployed a recently proposed Teacher-Student semi-supervised learning (SSL) approach (based on U-Net and U-Net++ networks) that uses unlabeled data to help improve model performance. Our experiments showed that it was possible to scale up the UAV-derived lichen map to the WorldView-2 scale with reasonable accuracy (overall accuracy of 85.28% and F1-score of 84.38%) without collecting any samples directly in the WorldView-2 scene. We also found that our noisy labels were partially beneficial to the SSL robustness because they improved the false positive rate compared to the use of a cleaner training set collected directly within the same area in the WorldView-2 image. As a result, this research opens new insights into how current very-high-resolution, small-scale caribou lichen maps can be used to generate more accurate large-scale caribou lichen maps from high-resolution satellite imagery.
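Stripped of the U-Net/U-Net++ architectures, the Teacher-Student SSL idea reduces to: train a teacher on the scarce labels, pseudo-label unlabeled data where the teacher is confident, then train a student on the union. A toy nearest-centroid stand-in (all feature values, classes, and the confidence threshold are invented; the paper's actual networks and data are far richer):

```python
import statistics

# Toy 1-D "pixels": the feature could stand in for a spectral band value.
labeled   = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]  # scarce labeled data
unlabeled = [0.15, 0.25, 0.7, 0.85, 0.5]

def fit_centroids(data):
    """Nearest-centroid 'model': one mean feature value per class."""
    by_class = {}
    for x, y in data:
        by_class.setdefault(y, []).append(x)
    return {y: statistics.mean(xs) for y, xs in by_class.items()}

def predict(centroids, x):
    return min(centroids, key=lambda y: abs(x - centroids[y]))

# Teacher: trained on the labeled sample only.
teacher = fit_centroids(labeled)

# Pseudo-label unlabeled pixels, keeping only confident ones
# (far enough from the decision midpoint between the two centroids).
mid = (teacher[0] + teacher[1]) / 2
pseudo = [(x, predict(teacher, x)) for x in unlabeled if abs(x - mid) > 0.1]

# Student: trained on labeled + confident pseudo-labeled data.
student = fit_centroids(labeled + pseudo)
```

The confidence filter is what lets noisy or scarce labels still produce a usable student, echoing the paper's finding that imperfect labels can remain beneficial.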


F1000Research, 2016, Vol 5, pp. 2157
Author(s): Matthew B. Wall, David Birch, May Y. Yong

Neuroimaging experiments can generate impressive volumes of data and many images of the results. This is particularly true of multi-modal imaging studies that use more than one imaging technique, or when imaging is combined with other assessments. A challenge for these studies is appropriate visualisation of results in order to drive insights and guide accurate interpretations. Next-generation visualisation technology therefore has much to offer the neuroimaging community. One example is the Imperial College London Data Observatory: a high-resolution (132-megapixel) arrangement of 64 monitors in a 313-degree arc with a 6-metre diameter, powered by 32 rendering nodes. This system has the potential for high-resolution, large-scale display of disparate data types in a space designed to promote collaborative discussion by multiple researchers and/or clinicians. Opportunities for the use of the Data Observatory are discussed, with particular reference to applications in Multiple Sclerosis (MS) research and clinical practice. Technical issues and current work designed to optimise the use of the Data Observatory for neuroimaging are also discussed, as well as possible future research that could be enabled by the use of the system in combination with eye-tracking technology.

