AirSurf-Lettuce: an aerial image analysis platform for ultra-scale field phenotyping and precision agriculture using computer vision and deep learning

2019 ◽  
Author(s):  
Alan Bauer ◽  
Aaron George Bostrom ◽  
Joshua Ball ◽  
Christopher Applegate ◽  
Tao Cheng ◽  
...  

Abstract. Aerial imagery is regularly used by farmers and growers to monitor crops during the growing season. To extract meaningful phenotypic information from large-scale aerial images collected regularly from the field, high-throughput analytic solutions are required, which not only produce high-quality measures of key crop traits, but also support agricultural practitioners in making reliable crop management decisions. Here, we report AirSurf-Lettuce, an automated and open-source aerial image analysis platform that combines modern computer vision, up-to-date machine learning, and modular software engineering to measure yield-related phenotypes of millions of lettuces across the field. Utilising ultra-large normalized difference vegetation index (NDVI) images acquired by fixed-wing light aircraft together with a deep-learning classifier trained on over 100,000 labelled lettuce signals, the platform is capable of scoring and categorising iceberg lettuces with high accuracy (>98%). Furthermore, novel analysis functions have been developed to map lettuce size distribution across the field, from which global positioning system (GPS) tagged harvest regions can be derived, enabling growers and farmers to devise precise harvest strategies and estimate marketability before harvest.
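To make the patch-classification step concrete, the following is a minimal Keras sketch of a small CNN classifier over single-channel NDVI crops; the patch size, class count and layer sizes are illustrative assumptions, not the published AirSurf-Lettuce architecture.

```python
# Minimal sketch of a small NDVI patch classifier in the spirit of AirSurf-Lettuce
# (hypothetical patch size and class count; not the published architecture).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3   # e.g. small / medium / large lettuce signals (assumed)
PATCH_SIZE = 20   # pixels; the real sliding-window size may differ

def build_patch_classifier():
    model = models.Sequential([
        layers.Input(shape=(PATCH_SIZE, PATCH_SIZE, 1)),  # single NDVI channel
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_patch_classifier()
model.summary()
```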

2021 ◽  
Vol 13 (1) ◽  
pp. 49-57
Author(s):  
Brahim Jabir ◽  
Noureddine Falih ◽  
Asmaa Sarih ◽  
Adil Tannouche

Researchers in precision agriculture regularly use deep learning to help growers and farmers control and monitor crops during the growing season; these tools extract meaningful information from large-scale aerial images received from the field using several techniques, producing strategic analytics for decision making. The resulting information can be exploited for many purposes, such as sub-plot-specific weed control. Our focus in this paper is on weed identification and control in sugar beet fields, particularly the creation and optimization of a Convolutional Neural Network model trained on our dataset to predict and identify the most common weed strains in the region of Beni Mellal, Morocco. This can help select herbicides that work on the identified weeds. We explore a transfer learning approach to design the networks, using the well-known deep learning library TensorFlow and Keras, a high-level API built on TensorFlow.
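A minimal sketch of the transfer-learning setup described above, using Keras on top of TensorFlow; the choice of MobileNetV2 as the pretrained base, the input size and the number of weed classes are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged transfer-learning sketch: frozen pretrained backbone + new classification head.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_WEED_CLASSES = 5  # hypothetical number of weed strains

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze pretrained features, train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_WEED_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Typical usage: build datasets from labelled field-image folders, then
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```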


Author(s):  
J. Liu ◽  
S. Ji ◽  
C. Zhang ◽  
Z. Qin

Dense stereo matching has been extensively studied in photogrammetry and computer vision. In this paper we evaluate the application of deep-learning-based stereo methods, which emerged around 2016 and have spread rapidly, to aerial stereo pairs rather than the ground-level images commonly used in the computer vision community. Two popular methods are evaluated: one learns the matching cost with a convolutional neural network (MC-CNN); the other produces a disparity map in an end-to-end manner by exploiting both geometry and context (GC-Net). First, we evaluate the performance of the deep learning based methods on aerial stereo images by direct model reuse: models pre-trained separately on the KITTI 2012, KITTI 2015 and Driving datasets are applied to three aerial datasets, and we also report results from training directly on the target aerial datasets. Second, the deep learning based methods are compared to a classic stereo matching method, Semi-Global Matching (SGM), and a photogrammetric software package, SURE, on the same aerial datasets. Third, a transfer learning strategy is introduced to aerial image matching under the assumption that a few target samples are available for model fine-tuning. The experiments show that the conventional methods and the deep learning based methods perform similarly, while the latter have greater potential to be explored.
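For reference, an SGM-style baseline of the kind compared against here can be run with OpenCV's semi-global block matcher; the snippet below is a generic sketch with illustrative parameters and placeholder file names, not the configuration used in the paper.

```python
# Classic SGM-style baseline using OpenCV's semi-global block matcher.
import cv2
import numpy as np

left = cv2.imread("left.tif", cv2.IMREAD_GRAYSCALE)    # assumed file names
right = cv2.imread("right.tif", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,                # smoothness penalty for small disparity jumps
    P2=32 * 5 * 5,               # smoothness penalty for large disparity jumps
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# OpenCV returns fixed-point disparities scaled by 16
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
```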


2019 ◽  
Vol 11 (10) ◽  
pp. 1157 ◽  
Author(s):  
Jorge Fuentes-Pacheco ◽  
Juan Torres-Olivares ◽  
Edgar Roman-Rangel ◽  
Salvador Cervantes ◽  
Porfirio Juarez-Lopez ◽  
...  

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown in open fields under difficult circumstances: complex lighting conditions and non-ideal crop maintenance practices defined by local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, together with the aerial image dataset and a hand-made, pixel-precise ground truth segmentation, to facilitate comparison among different algorithms.
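A minimal encoder-decoder sketch in Keras for pixel-wise crop/non-crop segmentation of RGB images; the input resolution and layer sizes are assumptions for illustration and do not reproduce the authors' published network.

```python
# Minimal encoder-decoder sketch for binary (crop vs non-crop) segmentation.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_segmenter(height=256, width=256):
    inputs = layers.Input(shape=(height, width, 3))        # raw RGB input
    # Encoder: downsample while increasing feature depth
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    # Decoder: upsample back to the input resolution
    x = layers.Conv2DTranspose(64, 3, strides=2, activation="relu", padding="same")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same")(x)
    # One sigmoid unit per pixel: probability of "crop"
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_segmenter()
```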


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract. Background: Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The typically huge number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a clear need for software tools that can automatically identify visual phenotypic features of maize plants and run batch processing on image datasets. Results: On the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the maize phenotyping analysis workflow. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotypes and embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter and (VI) Leaves Counting. Taking an RGB image of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. For the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively. Conclusion: Maize-IAS is easy to use and demands no professional knowledge of computer vision or deep learning. All functions support batch processing, enabling automated and labor-reduced recording, measurement and quantitative analysis of maize growth traits on large datasets. We demonstrate the efficiency and potential of our techniques and software for image-based plant research, which also shows the feasibility of AI technology applied to agriculture and plant science.
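As a rough illustration of the kind of trait such software automates, the sketch below computes a simple projection-style measurement, the projected plant area, from an RGB image via an excess-green threshold; it is a generic example under assumed file names and thresholds, not the Maize-IAS implementation.

```python
# Rough sketch of a "projection"-style trait: projected plant area from an RGB
# image via an excess-green (ExG = 2G - R - B) threshold.
import cv2
import numpy as np

img = cv2.imread("maize.jpg").astype(np.float32)   # assumed file name
b, g, r = cv2.split(img / 255.0)

exg = 2.0 * g - r - b                # excess-green index highlights vegetation
plant_mask = exg > 0.1               # illustrative threshold

projected_area_px = int(plant_mask.sum())
print(f"Projected plant area: {projected_area_px} pixels")
```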


2021 ◽  
Vol 13 (3) ◽  
pp. 364
Author(s):  
Han Gao ◽  
Jinhui Guo ◽  
Peng Guo ◽  
Xiuwan Chen

Recently, deep learning has become the most innovative trend for a variety of high-spatial-resolution remote sensing imaging applications. However, large-scale land cover classification via traditional convolutional neural networks (CNNs) with sliding windows is computationally expensive and produces coarse results. Additionally, although such supervised learning approaches have performed well, collecting and annotating datasets for every task is extremely laborious, especially in fully supervised cases where dense pixel-level ground-truth labels are required. In this work, we propose a new object-oriented deep learning framework that leverages residual networks with different depths to learn adjacent feature representations by embedding a multibranch architecture in the deep learning pipeline. The idea is to exploit limited training data at different neighboring scales to make a tradeoff between weak semantics and strong feature representations for operational land cover mapping tasks. We draw on established geographic object-based image analysis (GEOBIA) as an auxiliary module to reduce the computational burden of spatial reasoning and to optimize the classification boundaries. We evaluated the proposed approach on two subdecimeter-resolution datasets covering both urban and rural landscapes. It achieved better classification accuracy (88.9%) than traditional object-based deep learning methods, with an excellent inference time (11.3 s/ha).
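A hedged sketch of the multibranch idea in Keras: residual backbones of different depths applied to object patches at two neighboring scales and then fused for classification; the backbones, patch sizes and number of classes are assumptions, not the authors' exact design.

```python
# Two-branch classifier: a shallower residual network for the small object patch,
# a deeper one for the larger context patch, with fused features.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6  # hypothetical number of land-cover classes

backbone_small = tf.keras.applications.ResNet50(
    include_top=False, weights=None, input_shape=(64, 64, 3), pooling="avg")
backbone_large = tf.keras.applications.ResNet101(
    include_top=False, weights=None, input_shape=(128, 128, 3), pooling="avg")

in_small = layers.Input(shape=(64, 64, 3), name="object_patch")
in_large = layers.Input(shape=(128, 128, 3), name="context_patch")

fused = layers.Concatenate()([backbone_small(in_small),
                              backbone_large(in_large)])
out = layers.Dense(NUM_CLASSES, activation="softmax")(fused)

model = models.Model([in_small, in_large], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```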


Author(s):  
Yi-Ta Hsieh ◽  
Shou-Tsung Wu ◽  
Chaur-Tzuhn Chen ◽  
Jan-Chang Chen

Shadows in optical remote sensing images are regarded as image nuisances in numerous applications. The classification and interpretation of shadow areas in a remote sensing image are challenging because of the reduction or total loss of spectral information in those areas. In recent years, airborne multispectral aerial imaging devices providing 12-bit or higher radiometric resolution data have been developed, including the Leica ADS-40 and Intergraph DMC. The increased radiometric resolution of digital imagery provides more radiometric detail of potential use in classifying or interpreting the land cover of shadow areas. Therefore, the objectives of this study are to analyze the spectral properties of land cover in shadow areas using ADS-40 high-radiometric-resolution aerial images, and to investigate the spectral and vegetation index differences between various shadowed and non-shadowed land covers. According to the spectral analysis of the ADS-40 images: (i) DN values in shadow areas are much lower than in non-shadow areas; (ii) DN values received from shadowed areas are also affected by the underlying land cover, which indicates the possibility of retrieving land cover properties as in non-shadow areas; (iii) DN values received from shadowed regions decrease in the visible bands from short to long wavelengths due to scattering; (iv) the NIR response of the vegetation category in shadow areas still shows strong reflection; and (v) in general, the vegetation index (NDVI) remains useful for separating vegetation from non-vegetation in shadow areas. The spectral data of high-radiometric-resolution images (ADS-40) thus have potential for extracting land cover information in shadow areas.
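The NDVI referred to in finding (v) is computed per pixel as (NIR − Red) / (NIR + Red); a minimal NumPy sketch, assuming the red and near-infrared bands have already been extracted to arrays (file names are placeholders):

```python
# NDVI from multispectral bands: NDVI = (NIR - Red) / (NIR + Red).
import numpy as np

nir = np.load("nir_band.npy").astype(np.float64)   # assumed pre-extracted bands
red = np.load("red_band.npy").astype(np.float64)

ndvi = (nir - red) / (nir + red + 1e-9)            # small epsilon avoids division by zero

# A simple vegetation / non-vegetation split, as in finding (v)
vegetation_mask = ndvi > 0.3                       # illustrative threshold
```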


EDIS ◽  
2021 ◽  
Vol 2021 (5) ◽  
Author(s):  
Amr Abd-Elrahman ◽  
Katie Britt ◽  
Vance Whitaker

This publication presents a guide to image analysis for researchers and farm managers who use ArcGIS software. Anyone with basic geographic information system analysis skills may follow along with the demonstration and learn to implement the Mask Region-based Convolutional Neural Network (Mask R-CNN) model, a widely used model for object detection and instance segmentation, to delineate strawberry canopies using the ArcGIS Pro Image Analyst Extension in a simple workflow. This process is useful for precision agriculture management.
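As an open-source counterpart to this workflow (and not the ArcGIS Pro Image Analyst steps themselves), a hedged sketch of Mask R-CNN inference with torchvision; in practice a model fine-tuned on canopy annotations would be needed, and the file name is a placeholder.

```python
# Mask R-CNN inference sketch with torchvision (>= 0.13 for the weights API).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("plot.jpg").convert("RGB"))   # assumed file name
with torch.no_grad():
    predictions = model([image])[0]

# Keep detections above a confidence threshold
keep = predictions["scores"] > 0.5
masks = predictions["masks"][keep]   # (N, 1, H, W) soft instance masks
boxes = predictions["boxes"][keep]   # corresponding bounding boxes
```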


Author(s):  
L. Madhuanand ◽  
F. Nex ◽  
M. Y. Yang

Abstract. Depth is an essential component of various scene understanding tasks and of reconstructing the 3D geometry of a scene. Estimating depth from stereo images requires multiple views of the same scene to be captured, which is often not possible when exploring new environments with a UAV. To overcome this, monocular depth estimation has become a topic of interest with recent advances in computer vision and deep learning techniques. This research has largely focused on indoor scenes or outdoor scenes captured at ground level. Single-image depth estimation from aerial images has been limited by the additional complexities arising from increased camera distance and wider area coverage with many occlusions. A new aerial image dataset is prepared specifically for this purpose, combining Unmanned Aerial Vehicle (UAV) images covering different regions, features and points of view. The single-image depth estimation is based on image reconstruction techniques that use stereo images to learn to estimate depth from single images. Among the various models available for ground-level single-image depth estimation, two models, (1) a Convolutional Neural Network (CNN) and (2) a Generative Adversarial Network (GAN), are used to learn depth from aerial UAV images. These models generate pixel-wise disparity images that can be converted into depth information. The generated disparity maps are evaluated for internal quality using various error metrics. The results show higher disparity ranges and smoother images generated by the CNN model, and sharper images with a smaller disparity range generated by the GAN model. The produced disparity images are converted to depth information and compared with point clouds obtained using Pix4D. We find that the CNN model performs better than the GAN and produces depth similar to that of Pix4D. This comparison helps streamline efforts to produce depth from a single aerial image.
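The disparity-to-depth conversion mentioned above follows the standard pinhole relation depth = f · B / d (focal length times baseline over disparity); a minimal sketch, with focal length and baseline as placeholder values for the UAV camera setup:

```python
# Standard pinhole conversion from disparity (pixels) to depth (metres).
import numpy as np

focal_px = 2200.0       # focal length in pixels (assumed)
baseline_m = 0.5        # stereo baseline in metres (assumed)

def disparity_to_depth(disparity):
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(d, np.inf)          # zero disparity -> infinite depth
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth
```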


Data ◽  
2018 ◽  
Vol 3 (3) ◽  
pp. 28 ◽  
Author(s):  
Kasthurirangan Gopalakrishnan

Deep learning, more specifically deep convolutional neural networks, is fast becoming a popular choice for computer-vision-based automated pavement distress detection. While pavement image analysis has been extensively researched over the past three decades or so, the recent ground-breaking achievements of deep learning algorithms in machine translation, speech recognition, and computer vision have sparked interest in applying deep learning to the automated detection of distresses in pavement images. This paper provides a narrative review of recently published studies in this field, highlighting current achievements and challenges. A comparison of the deep learning software frameworks, network architectures, hyper-parameters employed by each study, and crack detection performance is provided, which is expected to give a good foundation for driving further research on this important topic in the context of smart pavement and asset management systems. The review concludes with potential avenues for future research, especially the application of deep learning not only to detect, but also to characterize the type, extent, and severity of distresses from 2D and 3D pavement images.


Author(s):  
Gabriel da Silva Vieira ◽  
Bruno M. Rocha ◽  
Fabrizzio Soares ◽  
Junio Cesar Lima ◽  
Helio Pedrini ◽  
...  
