Multi-feature data repository development and analytics for image cosegmentation in high-throughput plant phenotyping

PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257001
Author(s):  
Rubi Quiñones ◽  
Francisco Munoz-Arriola ◽  
Sruti Das Choudhury ◽  
Ashok Samal

Cosegmentation is an emerging computer vision technique that segments an object from the background by processing multiple images at the same time. Traditional plant phenotyping analysis uses thresholding segmentation methods, which result in high segmentation accuracy. Although machine learning and deep learning algorithms have been proposed for plant segmentation, their predictions rely on the specific features present in the training set. A multi-featured dataset and analytics for cosegmentation are therefore critical to better understand and predict plants’ responses to the environment. High-throughput phenotyping produces an abundance of data that can be leveraged to improve segmentation accuracy and plant phenotyping. This paper introduces four datasets covering two plant species, buckwheat and sunflower, each split into control and drought conditions. Each dataset has three modalities (fluorescence, infrared, and visible) with 7 to 14 temporal images collected in a high-throughput facility at the University of Nebraska-Lincoln. The four datasets (collected under the CosegPP data repository in this paper) are evaluated using three cosegmentation algorithms (Markov random field-based, clustering-based, and deep learning-based) and one segmentation approach commonly used in plant phenotyping. The integration of CosegPP with advanced cosegmentation methods will provide a new benchmark for comparing segmentation accuracy and identifying areas of improvement in cosegmentation methodology.
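As a concrete illustration of the thresholding baseline that the abstract contrasts with cosegmentation, here is a minimal sketch of color-index thresholding, a common single-image segmentation approach in plant phenotyping. The excess-green index and the tiny synthetic image are assumptions for the example, not the paper's code:

```python
# Minimal sketch of color-index thresholding (assumed example, not CosegPP code).
# Plant pixels are green-dominant, so the excess-green index ExG = 2G - R - B
# separates them from soil/background with a simple threshold.

def excess_green(pixel):
    """ExG = 2G - R - B for one (R, G, B) pixel."""
    r, g, b = pixel
    return 2 * g - r - b

def threshold_segment(image, thresh=20):
    """Return a binary mask: 1 where ExG exceeds the threshold (plant pixel)."""
    return [[1 if excess_green(px) > thresh else 0 for px in row]
            for row in image]

# Two green-dominant "plant" pixels and two brownish "background" pixels.
image = [[(40, 120, 30), (200, 180, 160)],
         [(50, 140, 45), (110, 90, 70)]]
mask = threshold_segment(image)
# mask → [[1, 0], [1, 0]]
```

Unlike cosegmentation, this baseline treats each image independently and cannot exploit shared appearance across the temporal sequence.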

2017 ◽  
Author(s):  
Zhikai Liang ◽  
Piyush Pandey ◽  
Vincent Stoerger ◽  
Yuhang Xu ◽  
Yumou Qiu ◽  
...  

ABSTRACT
Maize (Zea mays ssp. mays) is one of three crops, along with rice and wheat, responsible for more than half of all calories consumed around the world. Increasing the yield and stress tolerance of these crops is essential to meet the growing need for food. The cost and speed of plant phenotyping are currently the largest constraint on plant breeding efforts. Datasets linking new types of high-throughput phenotyping data collected from plants to the performance of the same genotypes under agronomic conditions across a wide range of environments are essential for developing new statistical approaches and computer vision-based tools. A set of maize inbreds, primarily recently off-patent lines, was phenotyped using a high-throughput platform at the University of Nebraska-Lincoln. These lines have previously been subjected to high-density genotyping and scored for a core set of 13 phenotypes in field trials across 13 North American states over two years by the Genomes to Fields consortium. A total of 485 GB of image data, including RGB, hyperspectral, fluorescence, and thermal infrared photos, has been released. Correlations between image-based measurements and manual measurements demonstrated the feasibility of quantifying variation in plant architecture using image data. However, naive approaches to measuring traits such as biomass can introduce nonrandom measurement errors confounded with genotype variation. Analysis of hyperspectral image data demonstrated unique signatures from stem tissue. Integrating heritable phenotypes from high-throughput phenotyping data with field data from different environments can reveal previously unknown factors influencing yield plasticity.


2019 ◽  
Author(s):  
Xu Wang ◽  
Hong Xuan ◽  
Byron Evers ◽  
Sandesh Shrestha ◽  
Robert Pless ◽  
...  

ABSTRACT
Background: Measuring plant traits with precision and speed on large populations has emerged as a critical bottleneck in connecting genotype to phenotype in genetics and breeding. This bottleneck limits advancements in understanding plant genomes and the development of improved, high-yielding crop varieties.
Results: Here we demonstrate the application of deep learning on proximal imaging from a mobile field vehicle to directly score plant morphology and developmental stages in wheat under field conditions. We developed and trained a convolutional neural network with image datasets labeled from expert visual scores and used this ‘breeder-trained’ network to directly score wheat morphology and developmental stages. For both morphological (awned) and phenological (flowering time) traits, we demonstrate high heritability and extremely high accuracy against the ‘ground-truth’ values from visual scoring. Using the traits scored by the network, we tested genotype-to-phenotype association with the deep learning phenotypes and uncovered novel epistatic interactions for flowering time. Enabled by the time-series high-throughput phenotyping, we describe a new phenotype, the rate of flowering, and show that it is under heritable genetic control.
Conclusions: We demonstrated a field-based high-throughput phenotyping approach using deep learning that can directly score morphological and developmental phenotypes in genetic populations. Most powerfully, the deep learning approach presented here offers a conceptual advance in high-throughput plant phenotyping, as it can potentially score any trait in any plant species by leveraging expert knowledge from breeders, geneticists, pathologists, and physiologists.


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract
Background: Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The generally huge number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a clear need for software tools that can automatically identify visual phenotypic features of maize plants and implement batch processing on image datasets.
Results: On the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the workflow of maize phenotyping analysis. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotype, embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter and (VI) Leaves Counting. Taking the RGB image of maize as input, the software provides a user-friendly graphical interaction interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. In the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively.
Conclusion: Maize-IAS is easy to use and demands professional knowledge of neither computer vision nor deep learning. All functions for batch processing are incorporated, enabling automated and labor-reduced recording, measurement, and quantitative analysis of maize growth traits on a large dataset. We demonstrate the efficiency and potential of our techniques and software for image-based plant research, which also shows the feasibility and capability of AI technology implemented in agriculture and plant science.


2020 ◽  
Author(s):  
Nicolás Gaggion ◽  
Federico Ariel ◽  
Vladimir Daric ◽  
Éric Lambert ◽  
Simon Legendre ◽  
...  

ABSTRACT
Deep learning methods have outperformed previous techniques in most computer vision tasks, including image-based plant phenotyping. However, massive data collection of root traits and the development of associated artificial intelligence approaches have been hampered by the inaccessibility of the rhizosphere. Here we present ChronoRoot, a system that combines 3D-printed open hardware with deep segmentation networks for high-temporal-resolution phenotyping of plant roots in agarized medium. We developed a novel deep learning-based root extraction method that leverages the latest advances in convolutional neural networks for image segmentation and incorporates temporal consistency into the root system architecture reconstruction process. Automatic extraction of phenotypic parameters from sequences of images allowed a comprehensive characterization of root system growth dynamics. Furthermore, novel time-associated parameters emerged from the analysis of spectral features derived from temporal signals. Altogether, our work shows that the combination of machine intelligence methods and a 3D-printed device expands the possibilities of root high-throughput phenotyping for genetics and natural variation studies as well as the screening of clock-related mutants, revealing novel root traits.


2016 ◽  
Author(s):  
Dijun Chen ◽  
Rongli Shi ◽  
Jean-Michel Pape ◽  
Christian Klukas

Abstract
Image-based high-throughput phenotyping technologies have developed rapidly in plant science recently, and they offer great potential to gain more valuable information than traditional destructive methods. Predicting plant biomass is regarded as a key goal for plant breeders and ecologists. However, it is a great challenge to find a suitable model to predict plant biomass in the context of high-throughput phenotyping. In the present study, we constructed several models to examine the quantitative relationship between image-based features and plant biomass accumulation. Our methodology was applied to three consecutive barley experiments with control and stress treatments. The results proved that plant biomass can be accurately predicted from image-based parameters using a random forest model. The high prediction accuracy of this model, in particular its cross-experiment performance, is promising for relieving the phenotyping bottleneck in biomass measurement in breeding applications. The relative contribution of individual features to biomass prediction was further quantified, revealing new insights into the phenotypic determinants of plant biomass outcome. Moreover, the methods can be used to determine the most important image-based features related to plant biomass accumulation, which would be promising for subsequent genetic mapping to uncover the genetic basis of biomass.
One-sentence Summary: We demonstrated that plant biomass can be accurately predicted from image-based parameters in the context of high-throughput phenotyping.
Footnotes: This work was supported by the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), the Robert Bosch Stiftung (32.5.8003.0116.0), the Federal Agency for Agriculture and Food (BEL, 15/12-13, 530-06.01-BiKo CHN), and the Federal Ministry of Education and Research (BMBF, 0315958A and 031A053B). This research was furthermore enabled with support of the European Plant Phenotyping Network (EPPN, grant agreement no. 284443) funded by the FP7 Research Infrastructures Programme of the European Union.
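Models like the one described above predict biomass from image-derived features. A minimal sketch of one widely used feature, projected plant area from binary segmentation masks, is shown below; the masks, views, and feature vector layout are invented for illustration, and the actual study fed many such features into a random forest model:

```python
# Hedged illustration (not the study's code): projected plant area, i.e. the
# count of plant pixels in a segmentation mask, is a standard image-based
# proxy feature for biomass. Multi-view features like these are the kind of
# input a random forest biomass model would be trained on.

def projected_area(mask):
    """Number of plant pixels (1s) in a binary segmentation mask."""
    return sum(sum(row) for row in mask)

def area_features(side_mask, top_mask):
    """Simple multi-view feature vector: side area, top area, their sum."""
    a_side = projected_area(side_mask)
    a_top = projected_area(top_mask)
    return [a_side, a_top, a_side + a_top]

# Tiny invented masks standing in for side-view and top-view segmentations.
side = [[0, 1, 1], [1, 1, 1]]
top = [[1, 1], [1, 0]]
features = area_features(side, top)
# features → [5, 3, 8]
```

In practice such features are computed per plant per imaging day, and the regression model learns the mapping from the feature vector to destructively measured biomass.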


Author(s):  
Daoliang Li ◽  
Chaoqun Quan ◽  
Zhaoyang Song ◽  
Xiang Li ◽  
Guanghui Yu ◽  
...  

Food scarcity, population growth, and global climate change have propelled crop yield growth driven by high-throughput phenotyping into the era of big data. However, access to large-scale phenotypic data has now become a critical barrier that phenomics urgently must overcome. Fortunately, the high-throughput plant phenotyping platform (HT3P), employing advanced sensors and data collection systems, can take full advantage of non-destructive and high-throughput methods to monitor, quantify, and evaluate specific phenotypes for large-scale agricultural experiments, and it can effectively perform phenotypic tasks that traditional phenotyping could not. HT3Ps are thus novel and powerful tools, and various commercial, customized, and even self-developed platforms have recently been introduced in rising numbers. Here, we review the HT3Ps of roughly the past seven years, from greenhouses and growth chambers to the field, and from ground-based proximal phenotyping to aerial large-scale remote sensing. Platform configurations, novelties, operating modes, current developments, and the strengths and weaknesses of diverse types of HT3Ps are thoroughly and clearly described. Then, miscellaneous combinations of HT3Ps for comparative validation and comprehensive analysis are systematically presented for the first time. Finally, we consider current phenotypic challenges and provide fresh perspectives on future development trends of HT3Ps. This review aims to provide ideas, thoughts, and insights for the optimal selection, exploitation, and utilization of HT3Ps, and thereby pave the way to breaking through current phenotyping bottlenecks in botany.


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3319
Author(s):  
Stuart A. Bagley ◽  
Jonathan A. Atkinson ◽  
Henry Hunt ◽  
Michael H. Wilson ◽  
Tony P. Pridmore ◽  
...  

High-throughput plant phenotyping in controlled environments (growth chambers and glasshouses) is often delivered via large, expensive installations, leading to limited access and the increased relevance of “affordable phenotyping” solutions. We present two robot vectors for automated plant phenotyping under controlled conditions. Using 3D-printed components and readily available hardware and electronic components, these designs are inexpensive, flexible, and easily modified for multiple tasks. We present a design for a thermal imaging robot for high-precision time-lapse imaging of canopies and a Plate Imager for high-throughput phenotyping of roots and shoots of plants grown on media plates. Phenotyping in controlled conditions requires multi-position spatial and temporal monitoring of environmental conditions. We also present a low-cost sensor platform for environmental monitoring based on inexpensive sensors, microcontrollers and internet-of-things (IoT) protocols.



PeerJ ◽  
2018 ◽  
Vol 6 ◽  
pp. e5727 ◽  
Author(s):  
Jeffrey C. Berry ◽  
Noah Fahlgren ◽  
Alexandria A. Pokorny ◽  
Rebecca S. Bart ◽  
Kira M. Veley

High-throughput phenotyping has emerged as a powerful method for studying plant biology. Large image-based datasets are generated and analyzed with automated image analysis pipelines. A major challenge associated with these analyses is variation in image quality that can inadvertently bias results. Images are made up of tuples of data called pixels, which consist of R, G, and B values arranged in a grid. Many factors, for example image brightness, can influence the quality of the captured image. These factors alter the values of the pixels within images and consequently can bias the data and downstream analyses. Here, we provide an automated method to adjust an image-based dataset so that brightness, contrast, and color profile are standardized. The correction method is a collection of linear models that adjust pixel tuples based on a reference panel of colors. We apply this technique to a set of images taken in a high-throughput imaging facility and successfully detect variance within the image dataset. In this case, the variation resulted from temperature-dependent light intensity throughout the experiment. Using this correction method, we were able to standardize images throughout the dataset, and we show that this correction enhanced our ability to accurately quantify morphological measurements within each image. We implement this technique in a high-throughput pipeline available with this paper, and it is also implemented in PlantCV.
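The core of the correction described above is fitting linear models that map observed color-chip values to their known reference values and applying those models to every pixel. Below is a minimal per-channel sketch of that idea; the chip values are invented for illustration, and this is not PlantCV's implementation:

```python
# Hedged sketch of reference-panel color correction (not PlantCV's code):
# fit a per-channel linear model (gain a, offset b) so that
# reference ≈ a * observed + b, using the color chips, then apply
# the fitted model to the channel values of every image pixel.

def fit_linear(observed, reference):
    """Least-squares fit of reference ≈ a * observed + b for one channel."""
    n = len(observed)
    mx = sum(observed) / n
    my = sum(reference) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(observed, reference))
    var = sum((x - mx) ** 2 for x in observed)
    a = cov / var
    return a, my - a * mx

def correct_channel(values, a, b):
    """Apply the fitted model and clamp results to the valid 0-255 range."""
    return [min(255, max(0, round(a * v + b))) for v in values]

# Observed red-channel values of three gray chips vs. their known reference
# values: the camera reported the scene uniformly too dark.
obs = [40, 120, 200]
ref = [60, 140, 220]
a, b = fit_linear(obs, ref)
corrected = correct_channel([40, 120, 200], a, b)
# corrected → [60, 140, 220]
```

Repeating the fit for the G and B channels standardizes brightness and color profile across the whole dataset against the same reference panel.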

