Automated Vigor Estimation on Vineyards

2021 ◽  
Vol 13 (2) ◽  
pp. 15
Author(s):  
Maria Pantoja

Estimating the balance, or vigor, of vines, as the yield-to-pruning-weight ratio, is a useful parameter that growers use to better prepare for the harvest season and to establish precision agriculture management of the vineyard, enabling site-specific planning of operations such as pruning, debriefing, or budding. Traditionally, growers obtain this parameter by first manually weighing the pruned canes during the vineyard dormant season (no leaves), then collecting the weight of the fruit at harvest for the vines evaluated in the first step, and finally correlating the two measures. Since this is a very manual and time-consuming task, growers usually obtain this number by taking just a couple of samples and extrapolating the value to the entire vineyard, losing all the variability present in their fields; this loss of information precludes site-specific management and the consequent improvements in grape quality and quantity. In this paper we develop a computer vision-based algorithm, robust to differences in trellis system, variety, and light conditions, that automatically estimates the pruning weight and consequently the variability of vigor inside the lot. The results will be used to improve the way local growers plan the annual winter pruning, advancing the transformation to precision agriculture. Our proposed solution does not require weighing the shoots (also called canes), and automatically creates prescription maps (detailed instructions for pruning, harvest, and other management decisions specific to the location) based on the estimated vigor. Our solution uses Deep Learning (DL) techniques to segment the vine trees directly from images captured in the field during the dormant season.
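The core idea (segment dormant canes, then estimate vigor without weighing) can be sketched in simplified form. The sketch below is a minimal illustration, not the paper's algorithm: it assumes a binary cane mask has already been produced by a DL segmentation model, and uses cane pixel area per vine as a proxy for pruning weight, binning vines into low/medium/high vigor classes for a prescription map (the percentile thresholds are hypothetical).

```python
import numpy as np

def vigor_from_mask(cane_mask, vine_columns):
    """Estimate a per-vine vigor class from a binary cane segmentation mask.

    cane_mask    : 2D array of 0/1 pixels (1 = dormant cane), e.g. DL output.
    vine_columns : list of (start, end) column ranges, one per vine in the row.
    Returns a list of 'low' / 'medium' / 'high' labels (hypothetical bins).
    """
    areas = [int(cane_mask[:, s:e].sum()) for s, e in vine_columns]
    lo, hi = np.percentile(areas, [33, 66])  # thresholds relative to the lot
    labels = []
    for a in areas:
        if a <= lo:
            labels.append("low")
        elif a <= hi:
            labels.append("medium")
        else:
            labels.append("high")
    return labels

# Toy example: three vines with increasing cane area.
mask = np.zeros((10, 30), dtype=int)
mask[:2, 0:10] = 1    # vine 1: 20 cane pixels
mask[:5, 10:20] = 1   # vine 2: 50 cane pixels
mask[:9, 20:30] = 1   # vine 3: 90 cane pixels
print(vigor_from_mask(mask, [(0, 10), (10, 20), (20, 30)]))
# ['low', 'medium', 'high']
```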

EDIS ◽  
2021 ◽  
Vol 2021 (5) ◽  
Author(s):  
Amr Abd-Elrahman ◽  
Katie Britt ◽  
Vance Whitaker

This publication presents a guide to image analysis for researchers and farm managers who use ArcGIS software. Anyone with basic geographic information system analysis skills may follow along with the demonstration and learn to implement the Mask Region-based Convolutional Neural Network (Mask R-CNN) model, a widely used model for object detection and instance segmentation, to delineate strawberry canopies using the ArcGIS Pro Image Analyst extension in a simple workflow. This process is useful for precision agriculture management.
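As a minimal sketch of what can follow delineation, the per-plant masks produced by a Mask R-CNN workflow can be converted to canopy areas. The function below is an illustrative assumption, not part of the publication's ArcGIS workflow; it takes binary instance masks and a hypothetical ground sampling distance:

```python
import numpy as np

def canopy_areas_cm2(instance_masks, gsd_cm=0.5):
    """Convert per-plant binary masks (e.g. Mask R-CNN output) to canopy areas.

    instance_masks : list of 2D boolean arrays, one per detected strawberry plant.
    gsd_cm         : ground sampling distance, i.e. the width of one pixel in cm
                     (hypothetical value; depends on flight altitude and camera).
    """
    pixel_area = gsd_cm ** 2          # cm^2 covered by one pixel
    return [float(m.sum()) * pixel_area for m in instance_masks]

# Toy example: a 20x20-pixel canopy at 0.5 cm/pixel -> 400 px * 0.25 cm^2 each
mask = np.ones((20, 20), dtype=bool)
print(canopy_areas_cm2([mask]))  # [100.0]
```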


2019 ◽  
Author(s):  
Alan Bauer ◽  
Aaron George Bostrom ◽  
Joshua Ball ◽  
Christopher Applegate ◽  
Tao Cheng ◽  
...  

Abstract. Aerial imagery is regularly used by farmers and growers to monitor crops during the growing season. To extract meaningful phenotypic information from large-scale aerial images collected regularly from the field, high-throughput analytic solutions are required, which not only produce high-quality measures of key crop traits, but also support agricultural practitioners in making reliable crop management decisions. Here, we report AirSurf-Lettuce, an automated and open-source aerial image analysis platform that combines modern computer vision, up-to-date machine learning, and modular software engineering to measure yield-related phenotypes of millions of lettuces across the field. Utilising ultra-large normalized difference vegetation index (NDVI) images acquired by fixed-wing light aircraft together with a deep-learning classifier trained on over 100,000 labelled lettuce signals, the platform is capable of scoring and categorising iceberg lettuces with high accuracy (>98%). Furthermore, novel analysis functions have been developed to map lettuce size distribution in the field, based on which global positioning system (GPS) tagged harvest regions can be derived to enable growers' and farmers' precise harvest strategies and marketability estimates before the harvest.
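Two of the platform's building blocks, NDVI computation and size categorisation, can be sketched as follows. The thresholds and array values are illustrative assumptions, not AirSurf-Lettuce's actual parameters:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from NIR and red reflectance bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

def categorise_sizes(lettuce_areas_px, small=200, large=800):
    """Bin per-lettuce pixel areas into size classes (thresholds hypothetical)."""
    return ["small" if a < small else "large" if a > large else "medium"
            for a in lettuce_areas_px]

nir = np.array([[0.8, 0.6], [0.2, 0.1]])
red = np.array([[0.1, 0.2], [0.2, 0.1]])
print(np.round(ndvi(nir, red), 2))        # higher values indicate vegetation
print(categorise_sizes([150, 500, 900]))  # ['small', 'medium', 'large']
```

In the full pipeline, the size classes would be joined with GPS coordinates to produce the harvest-region maps the abstract describes.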


2021 ◽  
Vol 14 (1) ◽  
pp. 416
Author(s):  
Mostofa Ahsan ◽  
Sulaymon Eshkabilov ◽  
Bilal Cemek ◽  
Erdem Küçüktopcu ◽  
Chiwon W. Lee ◽  
...  

Deep learning (DL) and computer vision applications in precision agriculture have great potential to identify and classify plant and vegetation species. This study presents the applicability of DL modeling with computer vision techniques to analyze the nutrient levels of four hydroponically grown lettuce cultivars (Lactuca sativa L.), namely Black Seed, Flandria, Rex, and Tacitus. Four different nutrient concentrations (0, 50, 200, and 300 ppm nitrogen solutions) were prepared and used to grow these lettuce cultivars in the greenhouse, and RGB images of lettuce leaves were captured. The results showed that the developed DL models based on the Visual Geometry Group 16 (VGG16) and VGG19 architectures identified the nutrient levels of the lettuces with 87.5% to 100% accuracy across the four cultivars. Convolutional neural network models were also implemented to identify the nutrient levels of the studied lettuces for comparison purposes. The developed modeling techniques can be applied not only to collect real-time nutrient data from other lettuce cultivars grown in greenhouses but also in the field. Moreover, these modeling approaches can be applied for remote sensing purposes to various lettuce crops. To the best of the authors' knowledge, this is a novel study applying DL techniques to determine nutrient concentrations in lettuce cultivars.
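The premise behind the study, that nitrogen level shifts leaf appearance in RGB images, can be illustrated with a deliberately simplified toy stand-in. The sketch below is not the paper's VGG16/VGG19 pipeline: it uses a nearest-centroid classifier on mean leaf color, with entirely synthetic data, only to show the shape of the classification task (image in, nutrient level out).

```python
import numpy as np

LEVELS = [0, 50, 200, 300]  # ppm nitrogen, as in the study

def fit_centroids(images, labels):
    """images: list of HxWx3 RGB arrays; labels: nitrogen level per image.
    Returns one mean-color centroid per nutrient level."""
    feats = np.array([img.reshape(-1, 3).mean(axis=0) for img in images])
    return {lvl: feats[np.array(labels) == lvl].mean(axis=0) for lvl in set(labels)}

def predict(img, centroids):
    """Assign the nutrient level whose color centroid is nearest."""
    f = img.reshape(-1, 3).mean(axis=0)
    return min(centroids, key=lambda lvl: np.linalg.norm(f - centroids[lvl]))

def leaf(green):
    """Synthetic 8x8 leaf image with a given constant green intensity."""
    return np.dstack([np.full((8, 8), 60), np.full((8, 8), green),
                      np.full((8, 8), 40)])

# Synthetic training data: greener leaves for higher nitrogen (illustrative).
train = [leaf(g) for g in (80, 120, 180, 220)]
cents = fit_centroids(train, LEVELS)
print(predict(leaf(175), cents))  # 200 (closest to the 200 ppm centroid)
```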


2019 ◽  
Vol 35 (6) ◽  
pp. 1009-1014 ◽  
Author(s):  
Gensheng Hu ◽  
Lidong Qian ◽  
Dong Liang ◽  
Mingzhu Wan

Abstract. Phenotypic monitoring provides important data support for precision agriculture management. This study proposes a deep learning-based method to obtain an accurate count of wheat ears and spikelets. The deep learning networks incorporate self-adversarial training and an attention mechanism into stacked hourglass networks. Four stacked hourglass networks follow a holistic attention map to construct the generator of the self-adversarial networks. The holistic attention maps enable the networks to focus on the overall consistency of the whole wheat plant. The discriminator of the self-adversarial networks has the same structure as the generator and provides an adversarial loss to the generator. This process improves the generator's learning ability and prediction accuracy for occluded wheat ears. The method yields higher wheat ear counting accuracy on the Annotated Crop Image Database (ACID) dataset than the previous state-of-the-art algorithm. Keywords: Attention mechanism, Plant phenotype, Self-adversarial networks, Stacked hourglass.
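Stacked hourglass networks output heatmaps over the image, and a count is then typically obtained by detecting local maxima in the predicted heatmap. The sketch below illustrates only that generic final step, not the paper's self-adversarial pipeline; the threshold is a hypothetical tuning parameter.

```python
import numpy as np

def count_peaks(heatmap, thresh=0.5):
    """Count local maxima in a predicted heatmap (a common way to turn the
    output of keypoint networks such as stacked hourglasses into ear counts).
    A pixel counts as a detection if it exceeds `thresh` and is the maximum
    of its 3x3 neighbourhood."""
    h = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    count = 0
    for i in range(1, h.shape[0] - 1):
        for j in range(1, h.shape[1] - 1):
            window = h[i - 1:i + 2, j - 1:j + 2]
            if h[i, j] > thresh and h[i, j] == window.max():
                count += 1
    return count

# Toy heatmap with two clear peaks.
hm = np.zeros((6, 6))
hm[1, 1] = 0.9
hm[4, 4] = 0.8
print(count_peaks(hm))  # 2
```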



2021 ◽  
Vol 9 (01) ◽  
pp. 691-698
Author(s):  
Prakanshu Srivastava ◽  
Kritika Mishra ◽  
Vibhav Awasthi ◽  
Vivek Kumar Sahu ◽  
...  

When plants and crops suffer from pests and diseases, the agricultural production of the country is affected. Usually, farmers or experts observe plants with the naked eye to detect and identify disease, but this method is often time-consuming, expensive, and inaccurate. Automatic detection using image processing techniques provides fast and accurate results. This paper deals with a new approach to the development of a disease recognition model, based on leaf image classification, using deep convolutional networks. Advances in computer vision present an opportunity to expand and enhance the practice of precise plant protection and extend the market of computer vision applications within the field of precision agriculture. A novel way of training and the methodology used facilitate a fast and straightforward system implementation in practice. All essential steps required for implementing this disease recognition model are fully described throughout the paper, from gathering images to build a database, assessed by agricultural experts, to using a deep learning framework to perform the deep CNN training. This paper presents a new approach to detecting plant diseases using a deep convolutional neural network trained and fine-tuned to fit accurately the database of plant leaves that was gathered independently for diverse plant diseases. The advance and novelty of the developed model lie in its simplicity: healthy leaves and background images are included as additional classes, enabling the model to distinguish diseased leaves from healthy ones or from the environment. Plants are the source of food on earth; infections and diseases in plants are therefore a significant threat, while the most common diagnosis is primarily performed by examining the plant body for the presence of visual symptoms [1].
As an alternative to this traditionally time-consuming process, different research works aim to find feasible approaches towards protecting plants. In recent years, growth in technology has engendered several alternatives to traditional arduous methods [2]. Deep learning techniques have been very successful in image classification problems.
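The healthy-versus-diseased classification task described above can be illustrated with a deliberately simple toy stand-in: logistic regression on green-channel histograms of synthetic leaf images. Features, data, and model here are illustrative assumptions only; the paper itself trains a deep convolutional network on a curated, expert-assessed image database.

```python
import numpy as np

rng = np.random.default_rng(0)

def hist_feature(img):
    """8-bin histogram of the green channel, normalised to sum to 1."""
    h, _ = np.histogram(img[..., 1], bins=8, range=(0, 256))
    return h / h.sum()

def train_logreg(X, y, lr=1.0, steps=500):
    """Plain batch gradient descent on the logistic loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def leaf_img(lo, hi):
    """Synthetic 16x16 RGB leaf whose green channel lies in [lo, hi)."""
    g = rng.integers(lo, hi, (16, 16))
    r = rng.integers(0, 100, (16, 16))
    b = rng.integers(0, 100, (16, 16))
    return np.dstack([r, g, b])

# Healthy leaves: bright green channel; diseased: dull green channel.
X = np.array([hist_feature(leaf_img(150, 250)) for _ in range(20)]
             + [hist_feature(leaf_img(30, 130)) for _ in range(20)])
y = np.array([1] * 20 + [0] * 20)
w, b = train_logreg(X, y)
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("training accuracy:", (preds == y).mean())
```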


2021 ◽  
Vol 109 (5) ◽  
pp. 863-890
Author(s):  
Yannis Panagakis ◽  
Jean Kossaifi ◽  
Grigorios G. Chrysos ◽  
James Oldfield ◽  
Mihalis A. Nicolaou ◽  
...  

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract. Background: Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The generally huge number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a real need for software tools that can automatically identify visual phenotypic features of maize plants and implement batch processing on image datasets. Results: On the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the workflow of maize phenotyping analysis. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotype and embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter, and (VI) Leaves Counting. Taking an RGB image of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. In the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively. Conclusion: Maize-IAS is easy to use and demands no professional knowledge of computer vision or deep learning. All functions support batch processing, enabling automated and labor-reduced recording, measurement, and quantitative analysis of maize growth traits on large datasets.
We prove the efficiency and potential capability of our techniques and software for image-based plant research, which also demonstrates the feasibility and capability of AI technology in agriculture and plant science.
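The reported Leaves Counting metric (mean 1.60 and standard deviation 1.625 of the difference between predicted and ground-truth counts) can be computed for any evaluation set in a few lines. Whether the paper uses signed or absolute differences is not stated, so absolute differences are assumed here, and the counts below are made-up examples, not the paper's data:

```python
import numpy as np

def count_error_stats(predicted, ground_truth):
    """Mean and standard deviation of the absolute difference between
    predicted and ground-truth leaf counts."""
    diff = np.abs(np.array(predicted) - np.array(ground_truth))
    return float(diff.mean()), float(diff.std())

pred = [10, 12, 9, 11]    # hypothetical per-plant leaf counts from the model
truth = [9, 12, 11, 10]   # hypothetical manual counts
mean, std = count_error_stats(pred, truth)
print(mean, std)  # 1.0 and ~0.71 for this toy data
```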


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers the ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths represented by eight different classes, achieving a high validation F1-score of 0.93. The algorithm measured an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
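The classification and tracking F1-scores reported above combine precision and recall. A minimal sketch of the computation, with hypothetical true-positive, false-positive, and false-negative counts chosen to land on the reported tracking score of 0.71:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, the metric used to
    evaluate both the moth classifier and the tracker."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 71 correct tracks, 29 false tracks, 29 missed moths.
print(round(f1_score(71, 29, 29), 2))  # 0.71
```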

