Automated grapevine flower detection and quantification method based on computer vision and deep learning from on-the-go imaging using a mobile sensing platform under field conditions

2020 ◽  
Vol 178 ◽  
pp. 105796
Author(s):  
Fernando Palacios ◽  
Gloria Bueno ◽  
Jesús Salido ◽  
Maria P. Diago ◽  
Inés Hernández ◽  
...  
Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3799 ◽  
Author(s):  
Palacios ◽  
Diago ◽  
Tardaguila

Grapevine cluster compactness affects grape composition, fungal disease incidence, and wine quality. Thus far, cluster compactness assessment has been based on visual inspection performed by trained evaluators, with very scarce application in the wine industry. The goal of this work was to develop a new, non-invasive method based on the combination of computer vision and machine learning technology for cluster compactness assessment under field conditions from on-the-go red, green, blue (RGB) image acquisition. A mobile sensing platform was used to automatically capture RGB images of grapevine canopies and fruiting zones at night under artificial illumination. A set of 195 clusters of four red grapevine varieties from three commercial vineyards was photographed over several years, one week prior to harvest. After image acquisition, cluster compactness was evaluated by a group of 15 experts in the laboratory following the International Organization of Vine and Wine (OIV) 204 standard as the reference method. The developed algorithm comprises several steps: an initial, semi-supervised image segmentation, followed by automated cluster detection and automated compactness estimation using a Gaussian process regression model. Calibration (95 clusters as the training set and 100 clusters as the test set) and leave-one-out cross-validation (LOOCV; performed on the whole 195-cluster set) models were built. On the test set, a determination coefficient (R2) of 0.68 and a root mean squared error (RMSE) of 0.96 were obtained between the image-based compactness estimates and the average of the evaluators’ ratings (on a 1–9 scale). The LOOCV yielded an R2 of 0.70 and an RMSE of 1.11.
The results show that the newly developed computer vision-based method could be commercially applied by the wine industry for efficient cluster compactness estimation from on-the-go RGB image acquisition platforms in commercial vineyards.
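The final step of the pipeline above regresses a compactness rating from image-derived features with a Gaussian process. As a minimal, illustrative sketch (not the authors' implementation: the berry-coverage feature, training values, kernel length scale, and noise level below are all invented for the example), GP posterior-mean prediction with an RBF kernel can be written in pure Python:

```python
import math

def rbf(a, b, length_scale=0.3):
    # Squared-exponential kernel on a scalar image feature
    return math.exp(-0.5 * (a - b) ** 2 / length_scale ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (solves Ax = b)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_fit_predict(xs, ys, x_query, noise=0.1):
    # Posterior mean of a zero-mean GP: k(x*, X) @ (K + noise*I)^-1 @ y
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    return sum(rbf(x_query, xs[i]) * alpha[i] for i in range(n))

# Hypothetical training pairs: berry-coverage fraction -> OIV 204 rating (1-9)
xs = [0.20, 0.40, 0.60, 0.80]
ys = [2.0, 4.0, 6.0, 8.0]
print(gp_fit_predict(xs, ys, 0.50))  # roughly midway between neighboring ratings
```

A GP is a natural fit here because it learns a smooth nonlinear mapping from a small training set while the noise term absorbs the disagreement inherent in averaged subjective expert ratings.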


2021 ◽  
Vol 109 (5) ◽  
pp. 863-890
Author(s):  
Yannis Panagakis ◽  
Jean Kossaifi ◽  
Grigorios G. Chrysos ◽  
James Oldfield ◽  
Mihalis A. Nicolaou ◽  
...  

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract
Background: Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The typically huge number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a clear need for software tools that can automatically identify visual phenotypic features of maize plants and batch-process image datasets.
Results: On the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the workflow of maize phenotyping analysis. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotypes and embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter, and (VI) Leaves Counting. Taking an RGB image of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. In the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively.
Conclusion: Maize-IAS is easy to use and requires no professional knowledge of computer vision or deep learning. All functions support batch processing, enabling automated, labor-reduced recording, measurement, and quantitative analysis of maize growth traits on large datasets. These results demonstrate the efficiency and potential of our techniques and software for image-based plant research, and illustrate the feasibility of applying AI technology in agriculture and plant science.
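One elementary building block behind counting functions like these is connected-component labelling on a binary segmentation mask. The sketch below is a generic illustration of that idea in pure Python, not the Maize-IAS algorithm; the mask values are invented for the example:

```python
from collections import deque

def count_blobs(mask):
    """Count 4-connected foreground regions in a binary mask (list of rows)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                blobs += 1                      # found a new region
                q = deque([(y, x)])
                seen[y][x] = True
                while q:                        # flood-fill the whole region
                    cy, cx = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] \
                                and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return blobs

# Toy mask with three separate foreground regions
mask = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
]
print(count_blobs(mask))  # → 3
```

In practice a CNN produces the segmentation mask first; the counting step then reduces to labelling and filtering the resulting regions by size or shape.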


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often serviced only weekly, resulting in low temporal resolution of the monitoring data, which hampers ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images, with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths spanning eight classes, achieving a high validation F1-score of 0.93. The algorithm achieved an average classification-and-tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive, automatic monitoring of moths.
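Tracking individuals across consecutive frames, as MCC does, is commonly reduced to matching detections between frames. The sketch below shows one standard approach, greedy nearest-centroid matching with a distance gate; it is an illustration of the general technique, not the paper's method, and the distance threshold is an assumed value:

```python
import math

def match_detections(prev, curr, max_dist=50.0):
    """Greedily pair current detections with previous ones by centroid distance.

    prev, curr: lists of (x, y) centroids from two consecutive frames.
    Returns a dict mapping curr index -> prev index; unmatched current
    detections would start new tracks, unmatched previous ones end tracks.
    """
    # All candidate pairs, closest first
    pairs = sorted(
        ((math.dist(p, c), i, j)
         for i, p in enumerate(prev)
         for j, c in enumerate(curr)),
        key=lambda t: t[0],
    )
    used_prev, used_curr, matches = set(), set(), {}
    for d, i, j in pairs:
        if d <= max_dist and i not in used_prev and j not in used_curr:
            matches[j] = i          # accept the pairing
            used_prev.add(i)
            used_curr.add(j)
    return matches

# Toy example: one moth moved slightly, one detection is a new arrival
prev = [(0.0, 0.0), (100.0, 100.0)]
curr = [(5.0, 5.0), (300.0, 300.0)]
print(match_detections(prev, curr))  # → {0: 0}
```

Greedy matching is a simple stand-in for optimal assignment (e.g. the Hungarian algorithm); for the small per-frame detection counts typical of a single trap, the difference is usually negligible.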


Author(s):  
Romain Thevenoux ◽  
Van Linh LE ◽  
Heloïse Villessèche ◽  
Alain Buisson ◽  
Marie Beurton-Aimar ◽  
...  
