DeepCob: precise and high-throughput analysis of maize cob geometry using deep learning with an application in genebank phenomics

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Lydia Kienbaum ◽  
Miguel Correa Abondano ◽  
Raul Blas ◽  
Karl Schmid

Abstract

Background: Maize cobs are an important component of crop yield that exhibit high diversity in size, shape and color in native landraces and modern varieties. Various phenotyping approaches have been developed to measure maize cob parameters in a high-throughput fashion. More recently, deep learning methods such as convolutional neural networks (CNNs) became available and have proven highly useful for high-throughput plant phenotyping. We aimed to compare classical image segmentation with deep learning methods for maize cob image segmentation and phenotyping, using a large image dataset of native maize landrace diversity from Peru.

Results: A comparison of three image analysis methods showed that a Mask R-CNN trained on a diverse set of maize cob images was highly superior to classical image analysis using the Felzenszwalb-Huttenlocher algorithm and to a Window-based CNN, owing to its robustness to image quality and its object segmentation accuracy (r = 0.99). We integrated Mask R-CNN into a high-throughput pipeline that segments both maize cobs and rulers in images and performs an automated quantitative analysis of eight phenotypic traits, including diameter, length, ellipticity, asymmetry, aspect ratio and the average values of the red, green and blue color channels for cob color. Statistical analysis identified key training parameters for efficient iterative model updating. We also show that a small number of 10-20 images is sufficient to update the initial Mask R-CNN model to process new types of cob images. To demonstrate an application of the pipeline, we analyzed phenotypic variation in 19,867 maize cobs extracted from 3,449 images of 2,484 accessions from the maize genebank of Peru to identify phenotypically homogeneous and heterogeneous genebank accessions using multivariate clustering.

Conclusions: A single Mask R-CNN model and the associated analysis pipeline are widely applicable tools for maize cob phenotyping in contexts such as genebank phenomics or plant breeding.
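To make the trait-extraction step concrete, the following is a minimal sketch of how such traits could be derived from one segmented cob. It assumes a binary mask from Mask R-CNN and a pixels-per-cm scale from the segmented ruler; it is an illustration, not the authors' code, and asymmetry is omitted for brevity.

```python
# Illustrative sketch only (not the DeepCob implementation): compute shape and
# color traits from a single binary cob mask produced by Mask R-CNN, assuming
# a pixel-to-cm scale derived from the segmented ruler.
import numpy as np
from skimage.measure import label, regionprops

def cob_traits(mask: np.ndarray, rgb: np.ndarray, px_per_cm: float) -> dict:
    """mask: (H, W) binary; rgb: (H, W, 3) image; px_per_cm: ruler scale."""
    props = regionprops(label(mask.astype(int)))[0]
    length = props.major_axis_length / px_per_cm    # cob length in cm
    diameter = props.minor_axis_length / px_per_cm  # cob diameter in cm
    mean_rgb = rgb[mask.astype(bool)].mean(axis=0)  # average color per channel
    return {
        "length_cm": length,
        "diameter_cm": diameter,
        "aspect_ratio": diameter / length,
        "ellipticity": props.eccentricity,          # proxy for cob ellipticity
        "mean_red": mean_rgb[0],
        "mean_green": mean_rgb[1],
        "mean_blue": mean_rgb[2],
    }
```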


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract

Background: Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The typically large number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a clear need for software tools that can automatically identify visual phenotypic features of maize plants and batch-process image datasets.

Results: At the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the workflow of maize phenotyping analysis. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotype and embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter and (VI) Leaves Counting. Taking RGB images of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. For the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively.

Conclusion: Maize-IAS is easy to use and requires professional knowledge of neither computer vision nor deep learning. All functions support batch processing, enabling automated, labor-saving recording, measurement and quantitative analysis of maize growth traits on large datasets. We demonstrate the efficiency and potential of our techniques and software for image-based plant research, which also illustrates the feasibility of AI technology in agriculture and plant science.
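Since the abstract emphasizes one-click batch processing, the following hypothetical sketch shows the kind of driver loop such a tool implies: every RGB image in a folder is run through per-image analysis functions and the results are collected into one table. The `analyze` placeholder and all names are assumptions, not Maize-IAS code.

```python
# Hypothetical batch driver in the spirit of Maize-IAS (not its actual code):
# run per-image trait functions over a folder of RGB images, one CSV row each.
import csv
from pathlib import Path
import numpy as np
from PIL import Image

def analyze(image: np.ndarray) -> dict:
    # Placeholder for the real per-image functions (projection, color
    # analysis, internode length, height, stem diameter, leaf counting).
    return {"height_px": int(image.shape[0])}

with open("traits.csv", "w", newline="") as out:
    writer = None
    for path in sorted(Path("images").glob("*.jpg")):
        row = {"image": path.name, **analyze(np.asarray(Image.open(path)))}
        if writer is None:
            writer = csv.DictWriter(out, fieldnames=list(row))
            writer.writeheader()
        writer.writerow(row)
```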


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Augusto Souza ◽  
Yang Yang

Plant segmentation and trait extraction for individual organs are two of the key challenges in high-throughput phenotyping (HTP) operations. To address this challenge, the Ag Alumni Seed Phenotyping Facility (AAPF) at Purdue University utilizes chlorophyll fluorescence images (CFIs) to enable consistent and efficient automatic segmentation of plants of different species, ages and colors. A series of image analysis routines was also developed to facilitate quantitative measurements of key corn plant traits. A proof-of-concept experiment was conducted to demonstrate the utility of the extracted traits in assessing the drought stress reaction of corn plants. The image analysis routines successfully measured several corn morphological characteristics across plant sizes, such as plant height, area, top-node height and diameter, number of leaves, leaf area, and leaf angle relative to the stem. Data from the proof-of-concept experiment showed how corn plants behaved when treated with different water regimes or grown in pots of different sizes. High-throughput image segmentation and analysis based on a plant's fluorescence image proved to be efficient and reliable. Traits extracted from the segmented stem and leaves of a corn plant demonstrated the importance and utility of such trait data in evaluating the performance of corn plants under stress. Data collected from corn plants grown in pots of different volumes showed the importance of using pots of a standard size when conducting and reporting plant phenotyping experiments in a controlled-environment facility.
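The segmentation step rests on a simple physical fact: plant tissue fluoresces brightly while pots, soil and background stay dark. A minimal sketch of that idea, assuming a single-channel fluorescence image, is given below; it is an illustration, not the AAPF routines.

```python
# Minimal illustration (not the AAPF routines): in a chlorophyll fluorescence
# image, plant pixels are bright against a dark background, so a global
# threshold plus small-object removal yields a clean plant mask.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

def segment_plant(cfi: np.ndarray, min_size: int = 200) -> np.ndarray:
    """cfi: (H, W) fluorescence intensities; returns a boolean plant mask."""
    mask = cfi > threshold_otsu(cfi)  # plants fluoresce, pots and soil do not
    return remove_small_objects(mask, min_size=min_size)
```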


2021 ◽  
Author(s):  
Beatriz García Santa Cruz ◽  
Jan Sölter ◽  
Gemma Gomez Giro ◽  
Claudia Saraiva ◽  
Sonia Sabaté-Soler ◽  
...  

Abstract: The study of complex diseases relies on large amounts of data to build models toward precision medicine. Such data acquisition is feasible in the context of high-throughput screening, in which the quality of the results relies on the accuracy of the image analysis. Although state-of-the-art solutions for image segmentation employ deep learning approaches, the high cost of manually generating ground-truth labels for model training hampers day-to-day application in experimental laboratories. Alternatively, traditional computer vision-based solutions do not need expensive labels for their implementation. Our work combines both approaches by training a deep learning network using weak training labels automatically generated with conventional computer vision methods. Our network surpasses the conventional segmentation quality by generalising beyond the noisy labels, providing a 25% increase in mean intersection over union, while simultaneously reducing development and inference times. Our solution is embedded in an easy-to-use graphical user interface that allows researchers to assess the predictions and correct potential inaccuracies with minimal human input. To demonstrate the feasibility of training a deep learning solution on a large dataset of noisy labels automatically generated by a conventional pipeline, we compared our solution against the common approach of training a model on a small dataset manually curated by several experts. Our work suggests that humans perform better at context interpretation, such as error assessment, while computers outperform humans at pixel-by-pixel fine segmentation. Such pipelines are illustrated with a case study on image segmentation for autophagy events. This work aims for better translation of new technologies to real-world settings in microscopy image analysis.
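The core idea above is cheap to prototype: a conventional pipeline produces approximate ("weak") masks at no annotation cost, and a network trained on many such noisy masks can generalise beyond them. Below is a minimal sketch of the label generation and of the reported metric, mean intersection over union; the Otsu-based labeller and all names are assumptions, not the authors' pipeline.

```python
# Hedged sketch of the weak-label idea (not the authors' pipeline): classical
# computer vision produces approximate masks at no annotation cost, and those
# noisy masks become the training targets for a segmentation network.
import numpy as np
from skimage.filters import threshold_otsu

def weak_label(image: np.ndarray) -> np.ndarray:
    """Cheap, imperfect binary mask from conventional thresholding."""
    return (image > threshold_otsu(image)).astype(np.uint8)

def iou(pred: np.ndarray, ref: np.ndarray) -> float:
    """The reported metric: intersection over union of two binary masks."""
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return float(inter / union) if union else 1.0
```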


2021 ◽  
Vol 11 (4) ◽  
pp. 1965
Author(s):  
Raul-Ronald Galea ◽  
Laura Diosan ◽  
Anca Andreica ◽  
Loredana Popa ◽  
Simona Manole ◽  
...  

Despite the promising results obtained by deep learning methods in the field of medical image segmentation, a lack of sufficient data always hinders performance to a certain degree. In this work, we explore the feasibility of applying deep learning methods to a pilot dataset. We present a simple and practical approach to perform segmentation in a 2D, slice-by-slice manner, based on region of interest (ROI) localization, applying an optimized training regime to improve segmentation performance from regions of interest. We start from two popular segmentation networks: the preferred model for medical segmentation, U-Net, and a general-purpose model, DeepLabV3+. Furthermore, we show that ensembling these two fundamentally different architectures brings consistent benefits, by testing our approach on two different datasets, the publicly available ACDC challenge and the imATFIB dataset from our in-house clinical study. Results on the imATFIB dataset show that the proposed approach performs well with the provided training volumes, achieving an average whole-heart Dice Similarity Coefficient of 89.89% on the validation set. Moreover, our algorithm achieved a mean Dice value of 91.87% on the ACDC validation set, comparable to the second best-performing approach in the challenge. Our approach could serve as a building block of a computer-aided diagnostic system in a clinical setting.
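As a hedged illustration of the ensembling idea (not the paper's code), the per-pixel class probabilities of the two architecturally different networks can simply be averaged before taking the argmax; `unet` and `deeplab` below stand in for trained PyTorch models, and the Dice function shows the metric reported above.

```python
# Sketch of probability-averaging ensembling for 2D slice segmentation,
# assuming two trained PyTorch models with the same output classes.
import torch

@torch.no_grad()
def ensemble_predict(unet, deeplab, image: torch.Tensor) -> torch.Tensor:
    """image: (1, C, H, W) slice; returns an (H, W) integer label map."""
    probs = (torch.softmax(unet(image), dim=1)
             + torch.softmax(deeplab(image), dim=1)) / 2
    return probs.argmax(dim=1).squeeze(0)

def dice(pred: torch.Tensor, ref: torch.Tensor) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    inter = (pred.bool() & ref.bool()).sum().item()
    total = pred.bool().sum().item() + ref.bool().sum().item()
    return 2 * inter / total if total else 1.0
```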


2020 ◽  
Author(s):  
AmirAbbas Davari ◽  
Thorsten Seehaus ◽  
Matthias Braun ◽  
Andreas Maier

Glaciers and ice sheets currently contribute two thirds of the observed global sea level rise of about 3.2 mm a⁻¹. Many of these glaciated regions (Antarctica, the sub-Antarctic islands, Greenland, the Russian and Canadian Arctic, Alaska, Patagonia) often have ocean-calving ice fronts. Many glaciers in these regions already show considerable ice mass loss, with an observed acceleration in the last decade [1]. Most of this mass loss is caused by the dynamic adjustment of glaciers, with considerable glacier retreat and elevation change being the major observables. The continuous and precise extraction of glacier calving fronts is hence of paramount importance for monitoring rapid glacier changes. Detecting and monitoring ice shelves and glacier fronts in optical and Synthetic Aperture Radar (SAR) satellite images requires well-identified spectral and physical properties of glacier characteristics.

Earth Observation (EO) is producing massive amounts of data that are currently often processed either by expensive and slow manual digitization or with simple, unreliable methods such as heuristically derived rule-based systems. As mentioned above, due to the variable occurrence of sea ice and icebergs and the similarity of fronts to crevasses, exact mapping of the glacier front position poses considerable difficulties for existing algorithms. Deep learning techniques have been successfully applied to many tasks in image analysis [2]. Recently, Zhang et al. [3] adopted the state-of-the-art deep learning-based image segmentation method U-Net [4] on TerraSAR-X images for glacier front segmentation. The main motivation for using the SAR modality instead of optical aerial imagery is the capability of SAR waves to penetrate cloud cover and allow year-round acquisition.

We intend to bridge the gap toward fully automatic, end-to-end deep learning-based glacier front detection using time-series SAR imagery. U-Net has performed extremely well in image segmentation, particularly in the medical image processing community [5]. However, it is a large and complex model and is rather slow to train. The Fully Convolutional Network (FCN) [6] can be considered an architecturally less complex variant of U-Net with faster training and inference times. In this work, we investigate the suitability of the FCN for glacier front segmentation and compare its performance with U-Net. Our preliminary results on segmenting the glaciers show a Dice coefficient of 92.96% for the FCN and 93.20% for U-Net, which indicates the suitability of the FCN for this task and its comparable performance to U-Net.

References:

[1] Vaughan et al. "Observations: Cryosphere." Climate Change 2013 (2013): 317-382.

[2] LeCun et al. "Deep learning." Nature 521, no. 7553 (2015): 436.

[3] Zhang et al. "Automatically delineating the calving front of Jakobshavn Isbræ from multitemporal TerraSAR-X images: a deep learning approach." The Cryosphere 13, no. 6 (2019): 1729-1741.

[4] Ronneberger et al. "U-Net: Convolutional networks for biomedical image segmentation." MICCAI 2015.

[5] Vesal et al. "A multi-task framework for skin lesion detection and segmentation." OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis, 2018.

[6] Long et al. "Fully convolutional networks for semantic segmentation." CVPR 2015.


2011 ◽  
Vol 12 (1) ◽  
pp. 148 ◽  
Author(s):  
Anja Hartmann ◽  
Tobias Czauderna ◽  
Roberto Hoffmann ◽  
Nils Stein ◽  
Falk Schreiber
