Combining computer vision and deep learning to classify varieties of Prunus dulcis for the nursery plant industry

2022 ◽  
Author(s):  
Sergio Borraz‐Martínez ◽  
Joan Simó ◽  
Anna Gras ◽  
Mariàngela Mestre ◽  
Ricard Boqué ◽  
...  
2020 ◽  
Author(s):  
Sergio Borraz‐Martínez ◽  
Francesc Tarrés ◽  
Ricard Boqué ◽  
Mariàngela Mestre ◽  
Joan Simó ◽  
...  


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Sergio Borraz-Martínez ◽  
Joan Simó ◽  
Anna Gras ◽  
Mariàngela Mestre ◽  
Ricard Boqué

Abstract The emergence of new almond tree (Prunus dulcis) varieties of agricultural interest is forcing the nursery plant industry to establish quality systems to maintain varietal purity during the production stage. The aim of this study is to assess the capability of near-infrared spectroscopy (NIRS) to classify different Prunus dulcis varieties as an alternative to more expensive methods. Fresh and dried-powdered leaves of six almond tree varieties of commercial interest (Avijor, Guara, Isabelona, Marta, Pentacebas and Soleta) were used. The most important variables for discriminating between these varieties were studied through three scientifically accepted indicators (variable importance in projection, selectivity ratio and the vector of regression coefficients). The results showed that the 7000 to 4000 cm−1 range contains the most useful variables, which made it possible to reduce the complexity of the data set. Regarding the classification models, a high percentage of correct classifications (90–100%) was obtained, with dried-powdered leaves giving better results than fresh leaves. However, the classification rate for both kinds of leaves confirms the capacity of near-infrared spectroscopy to discriminate Prunus dulcis varieties. These results demonstrate the capability of NIRS technology as a quality-control tool in the nursery plant industry.
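The variable-selection step described in this abstract can be illustrated with a short sketch. Below is a minimal NIPALS PLS1 implementation together with the variable-importance-in-projection (VIP) indicator the authors mention, run on synthetic data; the "spectra", the informative variable positions, and the number of components are assumptions for illustration, not the study's actual data or software.

```python
import numpy as np

def nipals_pls1(X, y, n_comp):
    """Minimal NIPALS PLS1: returns scores T, weights W, y-loadings q."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    T, W, Q = [], [], []
    for _ in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)          # unit weight vector
        t = X @ w                       # scores for this component
        p_load = X.T @ t / (t @ t)      # X-loadings
        q = (y @ t) / (t @ t)           # y-loading
        X = X - np.outer(t, p_load)     # deflate X
        y = y - q * t                   # deflate y
        T.append(t); W.append(w); Q.append(q)
    return np.array(T).T, np.array(W).T, np.array(Q)

def vip(T, W, q):
    """Variable importance in projection; mean(VIP**2) == 1 by construction."""
    p, A = W.shape
    ssy = (T ** 2).sum(axis=0) * q ** 2       # y-variance explained per component
    wn = W / np.linalg.norm(W, axis=0)        # column-normalised weights
    return np.sqrt(p * (wn ** 2 @ ssy) / ssy.sum())

# Synthetic "spectra": 60 samples x 120 wavenumber variables (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 120))
y = X[:, 10] + 0.5 * X[:, 50] + 0.1 * rng.normal(size=60)  # two informative variables
T, W, q = nipals_pls1(X, y, n_comp=3)
scores = vip(T, W, q)
# Informative variables should score above the VIP = 1 average
print(scores[10], scores[50])
```

Variables with VIP above 1 are conventionally kept, which is one way the wavenumber range could be narrowed as the abstract describes.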


2021 ◽  
Vol 109 (5) ◽  
pp. 863-890
Author(s):  
Yannis Panagakis ◽  
Jean Kossaifi ◽  
Grigorios G. Chrysos ◽  
James Oldfield ◽  
Mihalis A. Nicolaou ◽  
...  

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract Background Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The typically large number of samples produces an enormous amount of high-resolution image data. As high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a clear need for software tools that can automatically identify visual phenotypic features of maize plants and implement batch processing on image datasets. Results On the boundary between computer vision and plant science, we apply advanced deep learning methods based on convolutional neural networks to support the maize phenotyping analysis workflow. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotype and embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter and (VI) Leaves Counting. Taking RGB images of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. In the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively. Conclusion Maize-IAS is easy to use and requires professional knowledge of neither computer vision nor deep learning. All functions support batch processing, enabling automated, labor-saving recording, measurement and quantitative analysis of maize growth traits on large datasets. We demonstrate the efficiency and potential of our techniques and software for image-based plant research, which also shows the feasibility of AI technology applied in agriculture and plant science.
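The error statistics reported for the Leaves Counting function (mean and standard deviation of the difference between predicted and ground-truth counts) can be computed in a few lines. The counts below are hypothetical, not the paper's data, and taking the absolute difference is an assumption here:

```python
import numpy as np

def count_error_stats(predicted, ground_truth):
    """Mean and std of the absolute difference between predicted and
    ground-truth leaf counts (absolute difference is an assumption)."""
    diff = np.abs(np.asarray(predicted) - np.asarray(ground_truth))
    return diff.mean(), diff.std()

# Hypothetical per-plant leaf counts (not from the paper)
pred  = [10, 12, 9, 11, 8]
truth = [10, 11, 10, 13, 8]
mean_err, std_err = count_error_stats(pred, truth)  # mean 0.8, std ~0.748
```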


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers the ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths represented by eight different classes, achieving a high validation F1-score of 0.93. The algorithm measured an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
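The F1-scores quoted for the MCC algorithm combine precision and recall. A minimal sketch of how such a score is computed from raw detection counts (the counts here are made up for illustration, not taken from the paper):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from raw detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical tracking counts: 80 true positives, 20 false positives, 20 misses
score = f1_score(80, 20, 20)  # precision = recall = 0.8, so F1 = 0.8
```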

