Sashimi: A toolkit for facilitating high-throughput organismal image segmentation using deep learning

Author(s):  
Shawn T. Schwartz ◽  
Michael E. Alfaro
2021 ◽  


2021 ◽  
Vol 12 ◽  
Author(s):  
Yixin Guo ◽  
Shuai Li ◽  
Zhanguo Zhang ◽  
Yang Li ◽  
Zhenbang Hu ◽  
...  

The rice seed setting rate (RSSR) is an important component in calculating rice yields and a key phenotype for its genetic analysis. Automatic calculation of the RSSR through computer vision technology is of great significance for rice yield prediction. The basic premise for calculating the RSSR is an accurate, high-throughput identification of rice grains. In this study, we propose a method based on image segmentation and deep learning to automatically identify rice grains and calculate the RSSR. By collecting information on the rice panicle, our proposed automatic image segmentation method can detect full and empty grains, after which the RSSR is calculated by our proposed rice seed setting rate optimization algorithm (RSSROA). Finally, the proposed method was used to predict the RSSR, reaching an average identification accuracy of 99.43%. The method has therefore been proven to be an effective, non-invasive approach for high-throughput identification and calculation of the RSSR. It is also applicable to soybean, as well as wheat and other crops with similar characteristics.
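Once full and empty grains have been counted, the seed setting rate itself reduces to a simple ratio. A minimal sketch, assuming that definition (the function name and error handling are illustrative; the RSSROA post-processing described above is not reproduced here):

```python
def seed_setting_rate(full_grains: int, empty_grains: int) -> float:
    """Rice seed setting rate: share of full (filled) grains among all
    detected grains. A simplified stand-in for the RSSROA output."""
    total = full_grains + empty_grains
    if total == 0:
        raise ValueError("no grains detected in the panicle image")
    return full_grains / total

# e.g. 178 full and 22 empty grains detected on one panicle
print(seed_setting_rate(178, 22))  # 0.89
```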


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Lydia Kienbaum ◽  
Miguel Correa Abondano ◽  
Raul Blas ◽  
Karl Schmid

Abstract Background: Maize cobs are an important component of crop yield that exhibit a high diversity in size, shape and color in native landraces and modern varieties. Various phenotyping approaches have been developed to measure maize cob parameters in a high-throughput fashion. More recently, deep learning methods such as convolutional neural networks (CNNs) became available and were shown to be highly useful for high-throughput plant phenotyping. We aimed to compare classical image segmentation with deep learning methods for maize cob image segmentation and phenotyping, using a large image dataset of native maize landrace diversity from Peru. Results: A comparison of three image analysis methods showed that a Mask R-CNN trained on a diverse set of maize cob images was highly superior to classical image analysis using the Felzenszwalb-Huttenlocher algorithm and to a window-based CNN, owing to its robustness to image quality and its object segmentation accuracy (r = 0.99). We integrated Mask R-CNN into a high-throughput pipeline to segment both maize cobs and rulers in images and to perform an automated quantitative analysis of eight phenotypic traits, including diameter, length, ellipticity, asymmetry, aspect ratio and average values of the red, green and blue color channels for cob color. Statistical analysis identified key training parameters for efficient iterative model updating. We also show that as few as 10–20 images are sufficient to update the initial Mask R-CNN model to process new types of cob images. To demonstrate an application of the pipeline, we analyzed phenotypic variation in 19,867 maize cobs extracted from 3449 images of 2484 accessions from the maize genebank of Peru to identify phenotypically homogeneous and heterogeneous genebank accessions using multivariate clustering. Conclusions: A single Mask R-CNN model and the associated analysis pipeline are widely applicable tools for maize cob phenotyping in contexts such as genebank phenomics or plant breeding.
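After segmentation, most of the listed traits are simple geometry and color statistics over each cob mask. A rough numpy sketch of a few of them (the function, trait names, and `px_per_mm` scale are illustrative assumptions; the actual pipeline derives its scale from the segmented ruler):

```python
import numpy as np

def cob_traits(mask, rgb, px_per_mm=10.0):
    """Derive simple cob traits from a binary mask: length and diameter
    from the mask extent, aspect ratio, and mean RGB color inside the mask.
    A toy analogue of the diameter/length/aspect-ratio/color traits above."""
    ys, xs = np.nonzero(mask)
    length_mm = (ys.max() - ys.min() + 1) / px_per_mm    # vertical extent
    diameter_mm = (xs.max() - xs.min() + 1) / px_per_mm  # horizontal extent
    return {
        "length_mm": length_mm,
        "diameter_mm": diameter_mm,
        "aspect_ratio": length_mm / diameter_mm,
        "mean_rgb": rgb[mask].mean(axis=0),              # average color on the cob
    }

# toy example: a 40x20-pixel rectangular "cob" in a 100x100 image
mask = np.zeros((100, 100), dtype=bool)
mask[30:70, 40:60] = True
rgb = np.full((100, 100, 3), 120, dtype=np.uint8)
traits = cob_traits(mask, rgb)
print(traits["length_mm"], traits["diameter_mm"], traits["aspect_ratio"])  # 4.0 2.0 2.0
```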


2021 ◽  
Vol 37 (5) ◽  
pp. 929-940
Author(s):  
Liying Cao ◽  
Hongda Li ◽  
Helong Yu ◽  
Guifen Chen ◽  
Heshu Wang

Highlights: The U-Net segmentation network is modified to reduce the loss of segmentation accuracy. The number of layers in the U-Net is reduced, the loss function is modified, and dropout is added to the output layer. Leaf morphological and color features can be extracted well after segmentation. Abstract. From the perspective of computer vision, the shortcut to extracting phenotypic information from a single crop in the field is image segmentation. Plant segmentation is affected by the background environment and illumination. Using deep learning technology to combine depth maps with multi-view images can achieve high-throughput image processing. This article proposes an improved U-Net segmentation network based on small-sample data augmentation, and reconstructs the U-Net model by optimizing the model framework, activation function and loss function. It is used to realize automatic segmentation of plant leaf images and to extract relevant feature parameters. Experimental results show that the improved model provides reliable segmentation results across different leaf sizes, lighting conditions, backgrounds and plant types. The pixel-by-pixel segmentation accuracy reaches 0.94. Compared with traditional methods, this network achieves robust, high-throughput image segmentation. This method is expected to provide key technical support and practical tools for top-view image processing, unmanned aerial vehicle phenotype extraction, and phenotyping field platforms. Keywords: Deep learning, Fully convolutional neural network, Image segmentation, Phenotype analysis, U-Net.
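The two quantities this abstract leans on, a segmentation loss and pixel-by-pixel accuracy, can be sketched as follows. The paper does not state which loss replaced the original one; soft Dice is shown here purely as a common choice for segmentation networks:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - Dice coefficient over probability maps. An assumed loss,
    since the modified loss function is not specified in the abstract."""
    inter = float(np.sum(pred * target))
    return 1.0 - (2.0 * inter + eps) / (float(pred.sum() + target.sum()) + eps)

def pixel_accuracy(pred_mask, true_mask):
    """Fraction of pixels where prediction and ground truth agree
    (the metric reported as 0.94 above)."""
    return float(np.mean(pred_mask == true_mask))

# toy ground truth: a 4x4 leaf region inside an 8x8 image
truth = np.zeros((8, 8))
truth[2:6, 2:6] = 1.0
print(pixel_accuracy(truth > 0, truth > 0))  # 1.0 for a perfect prediction
print(soft_dice_loss(truth, truth))          # 0.0 for a perfect prediction
```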


2021 ◽  
Author(s):  
Beatriz García Santa Cruz ◽  
Jan Sölter ◽  
Gemma Gomez Giro ◽  
Claudia Saraiva ◽  
Sonia Sabaté-Soler ◽  
...  

Abstract The study of complex diseases relies on large amounts of data to build models toward precision medicine. Such data acquisition is feasible in the context of high-throughput screening, in which the quality of the results relies on the accuracy of the image analysis. Although state-of-the-art solutions for image segmentation employ deep learning approaches, the high cost of manually generating ground-truth labels for model training hampers day-to-day application in experimental laboratories. Alternatively, traditional computer-vision-based solutions do not need expensive labels for their implementation. Our work combines both approaches by training a deep learning network using weak training labels automatically generated with conventional computer vision methods. Our network surpasses the conventional segmentation quality by generalising beyond noisy labels, providing a 25% increase in mean intersection over union, while simultaneously reducing development and inference times. Our solution was embedded into an easy-to-use graphical user interface that allows researchers to assess the predictions and correct potential inaccuracies with minimal human input. To demonstrate the feasibility of training a deep learning solution on a large dataset of noisy labels automatically generated by a conventional pipeline, we compared our solution against the common approach of training a model from a small dataset manually curated by several experts. Our work suggests that humans perform better at context interpretation, such as error assessment, while computers outperform humans at pixel-by-pixel fine segmentation. Such pipelines are illustrated with a case study on image segmentation of autophagy events. This work aims to improve the translation of new technologies to real-world settings in microscopy image analysis.
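The reported gain is in mean intersection over union, which for multi-class label masks can be computed as below. This is a generic sketch, not the authors' evaluation code; classes absent from both masks are skipped rather than scored:

```python
import numpy as np

def mean_iou(pred, true, n_classes):
    """Mean IoU across classes present in the prediction or ground truth."""
    ious = []
    for c in range(n_classes):
        p, t = (pred == c), (true == c)
        union = np.logical_or(p, t).sum()
        if union == 0:       # class absent everywhere: skip it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# tiny 2x2 example with background (0) and foreground (1)
true = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
print(mean_iou(pred, true, n_classes=2))  # class 0: 1/2, class 1: 2/3
```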


2019 ◽  
Author(s):  
Seoin Back ◽  
Junwoong Yoon ◽  
Nianhan Tian ◽  
Wen Zhong ◽  
Kevin Tran ◽  
...  

We present an application of a deep-learning convolutional neural network to atomic surface structures, using atomic and Voronoi-polyhedra-based neighbor information to predict adsorbate binding energies for applications in catalysis.


2021 ◽  
Vol 118 (12) ◽  
pp. 123701
Author(s):  
Julie Martin-Wortham ◽  
Steffen M. Recktenwald ◽  
Marcelle G. M. Lopes ◽  
Lars Kaestner ◽  
Christian Wagner ◽  
...  

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract Background: Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The typically huge number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a clear need for software tools that can automatically identify visual phenotypic features of maize plants and run batch processing on image datasets. Results: On the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the workflow of maize phenotyping analysis. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotypes and embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter and (VI) Leaves Counting. Taking an RGB image of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. For the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively. Conclusion: Maize-IAS is easy to use and requires no professional knowledge of computer vision or deep learning. All functions support batch processing, enabling automated and labor-saving recording, measurement and quantitative analysis of maize growth traits on large datasets.
We demonstrate the efficiency and capability of our techniques and software for image-based plant research, which also shows the feasibility of AI technology applied in agriculture and plant science.
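The Leaves Counting error figures above are simply the mean and standard deviation of per-plant count differences. A minimal sketch with made-up counts (the numbers below are illustrative, not the paper's data):

```python
import numpy as np

def count_error_stats(predicted, ground_truth):
    """Mean and (population) standard deviation of prediction minus truth,
    the form in which the Leaves Counting error is reported."""
    diff = np.asarray(predicted, dtype=float) - np.asarray(ground_truth, dtype=float)
    return float(diff.mean()), float(diff.std(ddof=0))

# hypothetical leaf counts for four plants
pred = [12, 10, 9, 11]
truth = [11, 10, 8, 10]
mean_err, std_err = count_error_stats(pred, truth)
print(mean_err)  # 0.75
```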


Author(s):  
Xuesheng Bian ◽  
Gang Li ◽  
Cheng Wang ◽  
Weiquan Liu ◽  
Xiuhong Lin ◽  
...  
