Combining deep learning and automated feature extraction to analyze minirhizotron images: development and validation of a new pipeline

2021 ◽  
Author(s):  
Felix M. Bauer ◽  
Lena Lärm ◽  
Shehan Morandage ◽  
Guillaume Lobet ◽  
Jan Vanderborght ◽  
...  

Root systems of crops play a significant role in agro-ecosystems. The root system is essential for water and nutrient uptake, plant stability, symbiosis with microbes and good soil structure. Minirhizotrons, transparent tubes that create windows into the soil, have been shown to be effective for non-invasive investigation of the root system. Root traits, such as root length observed around the minirhizotron tubes, can therefore be obtained throughout the crop growing season. Analyzing minirhizotron datasets with common manual annotation methods and conventional software tools is time-consuming and labor-intensive. An objective method for high-throughput image analysis that provides data for field root phenotyping is therefore necessary. In this study we developed a pipeline combining state-of-the-art software tools, using deep neural networks and automated feature extraction. The pipeline consists of two major components and was applied to large root image datasets from minirhizotrons. First, segmentation is performed by a neural network model trained on a small image sample; training and segmentation are done with "RootPainter". Then, automated feature extraction from the segments is carried out by "RhizoVision Explorer". To validate the results of the automated analysis pipeline, root lengths from manually annotated and automatically processed data were compared across more than 58,000 images. The results show a high correlation (R = 0.81) between manually and automatically determined root lengths. With respect to processing time, the new pipeline outperforms manual annotation by 98.1-99.6%. Our pipeline, combining state-of-the-art software tools, significantly reduces the processing time for minirhizotron images. Thus, image analysis is no longer the bottleneck in high-throughput phenotyping approaches.
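The validation step described above reduces to computing a correlation coefficient between the manual and automatic root-length series. A minimal sketch of that arithmetic in plain Python, using hypothetical measurements rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical root lengths (mm) for the same images, measured two ways.
manual    = [12.1, 30.5, 7.8, 44.0, 21.3, 15.9]
automatic = [11.4, 28.9, 9.1, 41.7, 23.0, 14.2]

r = pearson_r(manual, automatic)
```

Applied to the study's 58,000-image comparison, the same computation is what yields the reported R = 0.81.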

2015 ◽  
Vol 22 (5) ◽  
pp. 993-1000 ◽  
Author(s):  
Sheng Yu ◽  
Katherine P Liao ◽  
Stanley Y Shaw ◽  
Vivian S Gainer ◽  
Susanne E Churchill ◽  
...  

Abstract Objective Analysis of narrative (text) data from electronic health records (EHRs) can improve population-scale phenotyping for clinical and genetic research. Currently, selection of text features for phenotyping algorithms is slow and laborious, requiring extensive and iterative involvement by domain experts. This paper introduces a method to develop phenotyping algorithms in an unbiased manner by automatically extracting and selecting informative features, which can be comparable to expert-curated ones in classification accuracy. Materials and methods Comprehensive medical concepts were collected from publicly available knowledge sources in an automated, unbiased fashion. Natural language processing (NLP) revealed the occurrence patterns of these concepts in EHR narrative notes, which enabled selection of informative features for phenotype classification. When combined with additional codified features, a penalized logistic regression model was trained to classify the target phenotype. Results We applied the method to develop algorithms that identify patients with rheumatoid arthritis (RA), and coronary artery disease (CAD) cases among those with RA, from a large multi-institutional EHR. The areas under the receiver operating characteristic curve (AUC) for classifying RA and CAD using models trained with automated features were 0.951 and 0.929, respectively, compared with AUCs of 0.938 and 0.929 for models trained with expert-curated features. Discussion Models trained with NLP text features selected through an unbiased, automated procedure achieved comparable or slightly higher accuracy than those trained with expert-curated features. The majority of the selected model features were interpretable. Conclusion The proposed automated feature extraction method, generating highly accurate phenotyping algorithms with improved efficiency, is a significant step toward high-throughput phenotyping.
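The reported AUCs summarize how well each model ranks cases above non-cases. A small, self-contained sketch of the AUC computation in its rank (Mann-Whitney) form, with hypothetical labels and classifier scores:

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive scores above a randomly chosen negative,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical gold-standard labels and model scores for six patients.
y_true  = [1, 1, 1, 0, 0, 0]
y_score = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]

roc_auc = auc(y_true, y_score)
```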


2016 ◽  
Vol 24 (e1) ◽  
pp. e143-e149 ◽  
Author(s):  
Sheng Yu ◽  
Abhishek Chakrabortty ◽  
Katherine P Liao ◽  
Tianrun Cai ◽  
Ashwin N Ananthakrishnan ◽  
...  

Objective: Phenotyping algorithms are capable of accurately identifying patients with specific phenotypes from within electronic medical records systems. However, developing phenotyping algorithms in a scalable way remains a challenge due to the extensive human resources required. This paper introduces a high-throughput unsupervised feature selection method, which improves the robustness and scalability of electronic medical record phenotyping without compromising its accuracy. Methods: The proposed Surrogate-Assisted Feature Extraction (SAFE) method selects candidate features from a pool of comprehensive medical concepts found in publicly available knowledge sources. The target phenotype’s International Classification of Diseases, Ninth Revision (ICD-9) code counts and natural language processing (NLP) mention counts, acting as noisy surrogates for the gold-standard labels, are used to create silver-standard labels. Candidate features highly predictive of the silver-standard labels are selected as the final features. Results: Algorithms were trained to identify patients with coronary artery disease, rheumatoid arthritis, Crohn’s disease, and ulcerative colitis using various numbers of labels to compare the performance of features selected by SAFE, a previously published automated feature extraction for phenotyping procedure, and domain experts. The out-of-sample area under the receiver operating characteristic curve and F-score from SAFE algorithms were remarkably higher than those from the other two, especially at small label sizes. Conclusion: SAFE advances high-throughput phenotyping methods by automatically selecting a succinct set of informative features for algorithm training, which in turn reduces overfitting and the needed number of gold-standard labels. SAFE also potentially identifies important features missed by automated feature extraction for phenotyping or experts.
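A toy sketch of the surrogate idea: derive silver-standard labels from the two noisy counts, then screen candidate features against them. The threshold and the crude mean-difference screen below are illustrative assumptions, not SAFE's actual statistical machinery:

```python
def silver_labels(icd_counts, nlp_counts, threshold=3):
    """Noisy surrogate labels: a patient is silver-positive when both
    the ICD-9 code count and the NLP mention count reach a threshold."""
    return [int(i >= threshold and n >= threshold)
            for i, n in zip(icd_counts, nlp_counts)]

def select_features(feature_matrix, silver, min_assoc=0.5):
    """Keep features whose mean differs between silver-positive and
    silver-negative patients by at least min_assoc (a crude stand-in
    for 'highly predictive of the silver-standard labels')."""
    selected = []
    for j in range(len(feature_matrix[0])):
        col = [row[j] for row in feature_matrix]
        pos = [v for v, s in zip(col, silver) if s == 1]
        neg = [v for v, s in zip(col, silver) if s == 0]
        if pos and neg and abs(sum(pos) / len(pos) - sum(neg) / len(neg)) >= min_assoc:
            selected.append(j)
    return selected

# Hypothetical patients: ICD-9 counts, NLP counts, three candidate features.
silver = silver_labels([5, 0, 7, 1], [4, 1, 6, 0])
selected = select_features(
    [[1, 0, 0.2], [0, 0, 0.3], [1, 1, 0.1], [0, 1, 0.4]], silver)
```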


2017 ◽  
Vol 27 (3) ◽  
pp. 319-324 ◽  
Author(s):  
David H. Suchoff ◽  
Christopher C. Gunter ◽  
Frank J. Louws

At its most basic, grafting is the replacement of one root system with another containing more desirable traits. Grafting of tomato (Solanum lycopersicum) onto disease-resistant rootstocks is an increasingly popular alternative for managing economically damaging soilborne diseases. Although certain rootstocks have demonstrated ancillary benefits in the form of improved tolerance to edaphic abiotic stress, the mechanisms behind the enhanced stress tolerance are not well understood. Specific traits within root system morphology (RSM), in both field crops and vegetables, can improve growth under abiotic stress. A greenhouse study was conducted to compare the RSM of 17 commercially available tomato rootstocks and one commercial field cultivar (Florida-47). Plants were grown in containers filled with a mixture of clay-based soil conditioner and pool filter sand (2:1 v/v) and harvested at 2, 3, or 4 weeks after emergence. At harvest, roots were cleaned, scanned, and analyzed with an image analysis system. Data collected included total root length (TRL), average root diameter, specific root length (SRL), and relative diameter class. The main effect of cultivar was significant (P ≤ 0.05) for all response variables, and the main effect of harvest date was significant (P ≤ 0.01) only for TRL. ‘RST-106’ rootstock had the longest TRL, whereas ‘Beaufort’ had the shortest. ‘BHN-1088’ had the thickest average root diameter, which was 32% thicker than the thinnest, observed in ‘Beaufort’. SRL in ‘Beaufort’ was 60% larger than in ‘BHN-1088’. This study demonstrated that gross differences exist in RSM of tomato rootstocks and that, when grown in a solid porous medium, these differences can be determined using an image analysis system.
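The root-morphology metrics named above are simple ratios over the scanned root segments. A sketch of the arithmetic, with hypothetical measurements and diameter-class bin edges (the study's actual class boundaries may differ):

```python
def specific_root_length(total_length_cm, dry_mass_g):
    """Specific root length (SRL): root length per unit dry mass (cm/g)."""
    return total_length_cm / dry_mass_g

def diameter_class_fractions(segment_lengths_cm, segment_diams_mm,
                             edges=(0.5, 1.0)):
    """Relative diameter classes: fraction of total root length falling
    into fine (< edges[0] mm), medium, and coarse (>= edges[1] mm) bins."""
    bins = [0.0, 0.0, 0.0]
    for length, d in zip(segment_lengths_cm, segment_diams_mm):
        if d < edges[0]:
            bins[0] += length
        elif d < edges[1]:
            bins[1] += length
        else:
            bins[2] += length
    total = sum(bins)
    return [b / total for b in bins]

# Hypothetical scan: four segments with lengths (cm) and diameters (mm).
srl = specific_root_length(1500.0, 0.5)
fractions = diameter_class_fractions([10, 20, 30, 40], [0.3, 0.6, 0.6, 1.2])
```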


Plant Methods ◽  
2016 ◽  
Vol 12 (1) ◽  
Author(s):  
Chantal Le Marié ◽  
Norbert Kirchgessner ◽  
Patrick Flütsch ◽  
Johannes Pfeifer ◽  
Achim Walter ◽  
...  

2018 ◽  
Author(s):  
Jonathan Arias-Fuenzalida ◽  
Javier Jarazo ◽  
Jonas Walter ◽  
Gemma Gomez-Giro ◽  
Julia I. Forster ◽  
...  

Abstract Autophagy and mitophagy play a central role in cellular homeostasis. In pathological conditions, the flow of autophagy and mitophagy can be affected at multiple and distinct steps of the pathways. Unfortunately, the level of detail of current state-of-the-art analyses does not allow detection or dissection of pathway intermediates. Moreover, analysis is typically conducted in a low-throughput manner on bulk cell populations. Defining autophagy and mitophagy pathway intermediates in a high-throughput manner is technologically challenging and has not been addressed so far. Here, we overcome those limitations and develop a novel high-throughput phenotyping platform with automated high-content image analysis to assess autophagy and mitophagy pathway intermediates.


2020 ◽  
Author(s):  
Vricha Chavan ◽  
Jimit Shah ◽  
Mrugank Vora ◽  
Mrudula Vora ◽  
Shubhashini Verma

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract Background Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding study. The generally huge number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a clear need for software tools that can automatically identify visual phenotypic features of maize plants and implement batch processing on image datasets. Results On the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the workflow of maize phenotyping analysis. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotype, embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode length, (IV) Height, (V) Stem Diameter and (VI) Leaves Counting. Taking the RGB image of maize as input, the software provides a user-friendly graphical interaction interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. In the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively. Conclusion Maize-IAS is easy to use and demands neither professional knowledge of computer vision nor deep learning. All functions for batch processing are incorporated, enabling automated and labor-reduced tasks of recording, measurement and quantitative analysis of maize growth traits on a large dataset. We demonstrate the efficiency and potential of our techniques and software for image-based plant research, which also shows the feasibility of AI technology implemented in agriculture and plant science.
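The reported leaf-count accuracy figures are the mean and standard deviation of per-plant prediction errors; a minimal sketch of that computation, with hypothetical leaf counts rather than the paper's data:

```python
import math

def error_stats(predicted, ground_truth):
    """Mean and population standard deviation of (prediction - truth)."""
    diffs = [p - t for p, t in zip(predicted, ground_truth)]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return mean, math.sqrt(var)

# Hypothetical predicted vs. manually counted leaves for four plants.
mean_err, std_err = error_stats([10, 12, 9, 11], [9, 10, 10, 11])
```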


Author(s):  
Xabier Rodríguez-Martínez ◽  
Enrique Pascual-San-José ◽  
Mariano Campoy-Quiles

This review article presents the state-of-the-art in high-throughput computational and experimental screening routines with application in organic solar cells, including materials discovery, device optimization and machine-learning algorithms.


Author(s):  
Yunfei Fu ◽  
Hongchuan Yu ◽  
Chih-Kuo Yeh ◽  
Tong-Yee Lee ◽  
Jian J. Zhang

Brushstrokes are viewed as the artist’s “handwriting” in a painting. In many applications such as style learning and transfer, mimicking painting, and painting authentication, it is highly desirable to quantitatively and accurately identify brushstroke characteristics from old masters’ pieces using computer programs. However, because a painting contains hundreds or thousands of intermingling brushstrokes, this remains challenging. This article proposes an efficient algorithm for brush Stroke extraction based on a Deep neural network, i.e., DStroke. Compared to the state-of-the-art research, the main merit of the proposed DStroke is to automatically and rapidly extract brushstrokes from a painting without manual annotation, while accurately approximating the real brushstrokes with high reliability. Notably, recovering the faithful soft transitions between brushstrokes is often ignored by other methods. In fact, the details of brushstrokes in a masterpiece (e.g., shapes, colors, texture, overlaps) are highly desired by artists, since they hold promise to enhance and extend the artists’ powers, just as microscopes extend biologists’ powers. To demonstrate the high efficiency of the proposed DStroke, we apply it to a set of real scans of paintings and a set of synthetic paintings, respectively. Experiments show that the proposed DStroke is noticeably faster and more accurate at identifying and extracting brushstrokes, outperforming the other methods.

