Automatic Traits Extraction and Fitting for Field High-throughput Phenotyping Systems

2020 ◽  
Author(s):  
Xingche Guo ◽  
Yumou Qiu ◽  
Dan Nettleton ◽  
Cheng-Ting Yeh ◽  
Zihao Zheng ◽  
...  

High-throughput phenotyping is a modern technology for measuring plant traits efficiently and at large scale with imaging systems that operate over the whole growing season. These images provide rich data for statistical analysis of plant phenotypes. We propose a pipeline to extract and analyze plant traits for field phenotyping systems. The proposed pipeline includes the following main steps: plant segmentation from field images, automatic calculation of plant traits from the segmented images, and functional curve fitting for the extracted traits. To deal with the challenging problem of plant segmentation in field images, we propose a novel approach to image pixel classification based on transform-domain neural network models, which utilizes plant pixels from greenhouse images to train a segmentation model for field images. Our results show that the proposed procedure accurately extracts plant heights and is more stable than the heights measured manually from the original images by Amazon Mechanical Turk workers.
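As a toy illustration of the trait-calculation step, plant height can be read off a binary segmentation mask by locating the topmost plant pixel. This is a minimal sketch, not the paper's implementation: the function name is hypothetical, and it assumes the bottom image row is the ground line.

```python
import numpy as np

def plant_height_pixels(mask):
    """Plant height in pixels from a binary segmentation mask.

    mask: (H, W) array with 1 for plant pixels, 0 for background.
    Height is measured from the bottom image row (assumed here to be
    the ground line) up to the topmost plant pixel.
    """
    plant_rows = np.where(mask.any(axis=1))[0]
    if plant_rows.size == 0:
        return 0  # no plant pixels detected
    return mask.shape[0] - int(plant_rows.min())
```

A pixel height like this would then be converted to physical units via camera calibration, and the per-day heights passed on to the functional curve-fitting step.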

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Xingche Guo ◽  
Yumou Qiu ◽  
Dan Nettleton ◽  
Cheng-Ting Yeh ◽  
Zihao Zheng ◽  
...  

High-throughput phenotyping enables the efficient collection of plant trait data at scale. One example involves using imaging systems over key phases of a crop growing season. Although the resulting images provide rich data for statistical analyses of plant phenotypes, image processing for trait extraction is required as a prerequisite. Current methods for trait extraction are mainly based on supervised learning with human-labeled data or semisupervised learning with a mixture of human-labeled data and unsupervised data. Unfortunately, preparing a sufficiently large training data set is both time- and labor-intensive. We describe a self-supervised pipeline (KAT4IA) that uses K-means clustering on greenhouse images to construct training data for extracting and analyzing plant traits from an image-based field phenotyping system. The KAT4IA pipeline includes these main steps: self-supervised training set construction, plant segmentation from images of field-grown plants, automatic separation of target plants, calculation of plant traits, and functional curve fitting of the extracted traits. To deal with the challenge of separating target plants from noisy backgrounds in field images, we describe a novel approach using row-cuts and column-cuts on images segmented by transform-domain neural network learning, which utilizes plant pixels identified from greenhouse images to train a segmentation model for field images. This approach is efficient and does not require human intervention. Our results show that KAT4IA is able to accurately extract plant pixels and estimate plant heights.
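The self-supervised training-set construction can be sketched with ordinary K-means on pixel colors. This is a minimal sketch with synthetic RGB pixels; the "greenness" heuristic used below to name the plant cluster is an assumption for illustration, not necessarily the paper's rule.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical greenhouse pixels, one RGB row per pixel (synthetic data).
rng = np.random.default_rng(0)
plant_px = rng.normal([60, 140, 60], 10, size=(200, 3))   # greenish
soil_px = rng.normal([120, 100, 80], 10, size=(200, 3))   # brownish
pixels = np.vstack([plant_px, soil_px])

# Step 1: cluster pixel colors with K-means (k=2), no human labels.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)

# Step 2: call the cluster with higher green excess (2G - R - B) "plant".
greenness = (2 * km.cluster_centers_[:, 1]
             - km.cluster_centers_[:, 0] - km.cluster_centers_[:, 2])
plant_cluster = int(np.argmax(greenness))
labels = (km.labels_ == plant_cluster).astype(int)  # 1 = plant, 0 = background
```

Pixel labels produced this way would then serve as training data for the segmentation model applied to field images.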


2021 ◽  
Vol 22 (15) ◽  
pp. 8266
Author(s):  
Minsu Kim ◽  
Chaewon Lee ◽  
Subin Hong ◽  
Song Lim Kim ◽  
Jeong-Ho Baek ◽  
...  

Drought is a major factor limiting crop yields. Modern agricultural technologies such as irrigation systems, ground mulching, and rainwater storage can mitigate the effects of drought, but these are only temporary solutions. Understanding the physiological, biochemical, and molecular reactions of plants to drought stress is therefore urgent. The recent rapid development of genomics tools has led to increasing interest in phenomics, i.e., the study of phenotypic plant traits. Among phenomic strategies, high-throughput phenotyping (HTP) is attracting increasing attention as a way to address the bottlenecks of genomic and phenomic studies. HTP provides researchers with a non-destructive, non-invasive, and yet accurate method for analyzing large-scale phenotypic data. This review describes plant responses to drought stress and introduces HTP methods that can detect changes in plant phenotypes in response to drought.


Agronomy ◽  
2019 ◽  
Vol 9 (5) ◽  
pp. 258 ◽  
Author(s):  
Aakash Chawade ◽  
Joost van Ham ◽  
Hanna Blomquist ◽  
Oscar Bagge ◽  
Erik Alexandersson ◽  
...  

High-throughput field phenotyping has garnered major attention in recent years, leading to the development of several new protocols for recording various plant traits of interest. Phenotyping of plants for breeding and for precision agriculture has different requirements owing to the differing sizes of the plots and fields, the differing purposes, and the urgency of the action required after phenotyping. While in plant breeding phenotyping is done on several thousand small plots mainly to evaluate them for various traits, in plant cultivation phenotyping is done in large fields to detect the occurrence of plant stresses and weeds at an early stage. The aim of this review is to highlight how various high-throughput phenotyping methods are used for plant breeding and farming and the key differences in the applications of such methods. Thus, various techniques for plant phenotyping are presented together with applications of these techniques for breeding and cultivation. Several examples from the literature using these techniques are summarized and the key technical aspects are highlighted.


1997 ◽  
pp. 931-935 ◽  
Author(s):  
Anders Lansner ◽  
Örjan Ekeberg ◽  
Erik Fransén ◽  
Per Hammarlund ◽  
Tomas Wilhelmsson

2020 ◽  
Vol 34 (05) ◽  
pp. 9282-9289
Author(s):  
Qingyang Wu ◽  
Lei Li ◽  
Hao Zhou ◽  
Ying Zeng ◽  
Zhou Yu

Many social media news writers are not professionally trained. Therefore, social media platforms have to hire professional editors to adjust amateur headlines to attract more readers. We propose to automate this headline editing process through neural network models to provide more immediate writing support for these social media news writers. To train such a neural headline editing model, we collected a dataset which contains articles with original headlines and professionally edited headlines. However, it is expensive to collect a large number of professionally edited headlines. To solve this low-resource problem, we design an encoder-decoder model which leverages large-scale pre-trained language models. We further improve the pre-trained model's quality by introducing a headline generation task as an intermediate task before the headline editing task. Also, we propose a Self Importance-Aware (SIA) loss to address the different levels of editing in the dataset by down-weighting the importance of easily classified tokens and sentences. With the help of pre-training, adaptation, and SIA, the model learns to generate headlines in the professional editor's style. Experimental results show that our method significantly improves the quality of headline editing compared with previous methods.
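The down-weighting idea behind the SIA loss can be illustrated with a focal-loss-style weight on per-token cross-entropy. This is only a sketch of the general mechanism: the exact SIA formulation in the paper may differ, the function name is hypothetical, and `gamma` is an assumed hyperparameter.

```python
import numpy as np

def sia_style_token_loss(probs, targets, gamma=2.0):
    """Cross-entropy per token, down-weighted for easily classified tokens.

    probs: (T, V) predicted token distributions; targets: (T,) gold ids.
    Tokens the model already predicts confidently receive a small weight
    (1 - p)^gamma, so training focuses on the harder, edited spans.
    """
    p = probs[np.arange(len(targets)), targets]
    ce = -np.log(np.clip(p, 1e-12, None))   # per-token cross-entropy
    weights = (1.0 - p) ** gamma            # small for confident tokens
    return float(np.mean(weights * ce))
```

Under this weighting, a token predicted with probability 0.99 contributes orders of magnitude less to the loss than one predicted at 0.5, which matches the stated goal of emphasizing the genuinely edited parts of a headline.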


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Charles Marks ◽  
Arash Jahangiri ◽  
Sahar Ghanipoor Machiani

Every year, over 50 million people are injured and 1.35 million die in traffic accidents. Risky driving behaviors are responsible for over half of all fatal vehicle accidents. Identifying risky driving behaviors within real-world driving (RWD) datasets is a promising avenue to reduce the mortality burden associated with these unsafe behaviors, but numerous technical hurdles must be overcome to do so. Herein, we describe the implementation of a multistage process for classifying unlabeled RWD data as potentially risky or not. In the first stage, data are reformatted and reduced in preparation for classification. In the second stage, subsets of the reformatted data are labeled as potentially risky (or not) using the Iterative-DBSCAN method. In the third stage, the labeled subsets are used to fit random forest (RF) classification models; RF models were chosen because they performed better than logistic regression and artificial neural network models. In the final stage, the RF models are used predictively to label the remaining RWD data as potentially risky (or not). The implementation of each stage is described and analyzed for the classification of RWD data from vehicles on public roads in Ann Arbor, Michigan. Overall, we identified 22.7 million observations of potentially risky driving out of 268.2 million observations. This study provides a novel approach for identifying potentially risky driving behaviors within RWD datasets. As such, this study represents an important step in the implementation of protocols designed to address and prevent the harms associated with risky driving.
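The label-then-classify stages can be sketched with standard scikit-learn components. Plain DBSCAN stands in here for the authors' Iterative-DBSCAN, the two-regime synthetic features are purely illustrative, and the "smaller cluster is risky" rule is an assumption for the sketch, not the paper's criterion.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Synthetic 2-D driving features with two behavior regimes (illustrative).
normal = rng.normal(0.0, 0.3, size=(150, 2))
risky = rng.normal(3.0, 0.3, size=(50, 2))
subset = np.vstack([normal, risky])

# Stage 2: label a subset by density clustering; treat the smaller
# cluster as "potentially risky" (an illustrative rule only).
db = DBSCAN(eps=0.5, min_samples=5).fit(subset)
clusters, counts = np.unique(db.labels_[db.labels_ >= 0], return_counts=True)
risky_cluster = clusters[np.argmin(counts)]
y = (db.labels_ == risky_cluster).astype(int)

# Stage 3: fit a random forest on the DBSCAN-labeled subset.
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(subset, y)

# Stage 4: predictively label held-out, unlabeled observations.
unlabeled = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
                       rng.normal(3.0, 0.3, size=(20, 2))])
pred = rf.predict(unlabeled)
```

The same pattern (cluster a manageable subset, then let a supervised model propagate the labels) is what makes it feasible to label the remaining hundreds of millions of RWD observations.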


Author(s):  
Sacha J. van Albada ◽  
Jari Pronold ◽  
Alexander van Meegen ◽  
Markus Diesmann

We are entering an age of ‘big’ computational neuroscience, in which neural network models are increasing in size and in numbers of underlying data sets. Consolidating the zoo of models into large-scale models simultaneously consistent with a wide range of data is only possible through the effort of large teams, which can be spread across multiple research institutions. To ensure that computational neuroscientists can build on each other’s work, it is important to make models publicly available as well-documented code. This chapter describes such an open-source model, which relates the connectivity structure of all vision-related cortical areas of the macaque monkey with their resting-state dynamics. We give a brief overview of how to use the executable model specification, which employs NEST as simulation engine, and show its runtime scaling. The solutions found serve as an example for organizing the workflow of future models from the raw experimental data to the visualization of the results, expose the challenges, and give guidance for the construction of an ICT infrastructure for neuroscience.

