Automatic glomerular identification and quantification of histological phenotypes using image analysis and machine learning

2018 ◽  
Vol 315 (6) ◽  
pp. F1644-F1651 ◽  
Author(s):  
Susan M. Sheehan ◽  
Ron Korstanje

Current methods of scoring histological kidney samples, specifically glomeruli, do not allow for collection of quantitative data in a high-throughput and consistent manner. Neither untrained individuals nor computers are presently capable of identifying glomerular features, so expert pathologists must do the identification and score using a categorical matrix, complicating statistical analysis. Critical information regarding overall health and physiology is encoded in these samples. Rapid, comprehensive histological scoring could be used, in combination with other physiological measures, to significantly advance renal research. We therefore used machine learning to develop a high-throughput method to automatically identify glomeruli and collect quantitative data from them. Our method requires minimal human interaction between steps, uses free existing software, and is usable without extensive image analysis training. Validation of the classifier and feature scores in mice is highlighted in this work and shows the power of applying this method in murine research. Preliminary results indicate that the method can be applied to data sets from different species after training on relevant data, allowing for fast glomerular identification and quantitative measurements of glomerular features. The resulting data are free from user bias, and because they are continuous rather than categorical, statistical analysis can be performed, allowing for more precise and comprehensive interrogation of samples. These data can then be combined with other physiological data to broaden our overall understanding of renal function.
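The abstract does not include code, so the following is only a minimal sketch of this kind of identify-and-quantify pipeline, using scikit-image and scikit-learn rather than the authors' actual software; the segmentation threshold, size filter, and feature set are illustrative assumptions.

```python
# Minimal sketch of a glomerulus-style identify-and-quantify pipeline.
# Not the authors' pipeline: library choices (scikit-image, scikit-learn),
# features, and thresholds here are illustrative assumptions.
import numpy as np
from skimage import filters, measure, morphology
from sklearn.ensemble import RandomForestClassifier

def candidate_regions(image):
    """Segment candidate objects from a grayscale kidney section."""
    mask = image > filters.threshold_otsu(image)
    mask = morphology.remove_small_objects(mask, min_size=200)
    return measure.regionprops(measure.label(mask), intensity_image=image)

def region_features(region):
    """Quantitative, continuous features for one candidate object."""
    return [region.area, region.eccentricity, region.solidity,
            region.mean_intensity, region.perimeter]

# Training: regions from expert-annotated sections,
# labels 1 = glomerulus, 0 = other structure.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
# clf.fit(X_train, y_train)   # X_train/y_train from annotated images

# Scoring a new section: keep the regions the classifier accepts and
# report their continuous feature values instead of a categorical grade.
# glomeruli = [r for r in candidate_regions(img)
#              if clf.predict([region_features(r)])[0] == 1]
```

The point of the design is the output: each accepted region yields continuous measurements rather than a pathologist-assigned category, which is what permits downstream statistical analysis.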

2009 ◽  
Vol 2 ◽  
pp. BII.S2222 ◽  
Author(s):  
David E. Axelrod ◽  
Naomi Miller ◽  
Judith-Anne W. Chapman

Information about tumors is usually obtained from a single assessment of a tumor sample, performed at some point in the course of the development and progression of the tumor, with patient characteristics being surrogates for natural history context. Differences between cells within individual tumors (intratumor heterogeneity) and between tumors of different patients (intertumor heterogeneity) may mean that a small sample is not representative of the tumor as a whole, particularly for solid tumors which are the focus of this paper. This issue is of increasing importance as high-throughput technologies generate large multi-feature data sets in the areas of genomics, proteomics, and image analysis. Three potential pitfalls in statistical analysis are discussed (sampling, cut-points, and validation) and suggestions are made about how to avoid these pitfalls.
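One of the pitfalls the authors name, cut-points, is easy to demonstrate numerically: dichotomizing a continuous biomarker discards information and weakens its association with an outcome. The data below are synthetic, purely for illustration, not from the paper.

```python
# Illustration of the cut-point pitfall on synthetic data (not from the paper):
# splitting a continuous marker at its median weakens its association with an
# outcome compared with using the continuous values directly.
import numpy as np

rng = np.random.default_rng(0)
marker = rng.normal(size=5000)                   # continuous biomarker
outcome = 0.5 * marker + rng.normal(size=5000)   # correlated outcome

r_continuous = np.corrcoef(marker, outcome)[0, 1]
dichotomized = (marker > np.median(marker)).astype(float)
r_cutpoint = np.corrcoef(dichotomized, outcome)[0, 1]

print(f"continuous r   = {r_continuous:.2f}")   # ~0.45
print(f"median-split r = {r_cutpoint:.2f}")     # ~0.36, information lost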


Inventions ◽  
2019 ◽  
Vol 4 (4) ◽  
pp. 72
Author(s):  
Ryota Sawaki ◽  
Daisuke Sato ◽  
Hiroko Nakayama ◽  
Yuki Nakagawa ◽  
Yasuhito Shimada

Background: Zebrafish are efficient animal models for conducting whole-organism drug testing and toxicological evaluation of chemicals. They are frequently used for high-throughput screening owing to their high fecundity. Zebrafish screening requires peripheral experimental equipment and analytical software, both of which need further development. Machine learning has emerged as a powerful tool for large-scale image analysis and has been applied in zebrafish research as well. However, its use by individual researchers is limited by the cost and the effort of tailoring machine learning to specific research purposes. Methods: We developed a simple and easy method for zebrafish image analysis, particularly of fluorescently labelled fish, using the free machine learning program Google AutoML. We performed machine learning using vascular- and macrophage-Enhanced Green Fluorescent Protein (EGFP) fish under normal and abnormal conditions (treated with anti-angiogenesis drugs or wounded at the caudal fin). We then tested the system using a new set of zebrafish images. Results: Machine learning detected abnormalities in both strains with more than 95% accuracy, although the images of the macrophage-EGFP fish required pre-processing before training. In addition, we developed batch-uploading software, ZF-ImageR, for Windows (.exe) and macOS (.app) to enable high-throughput analysis using AutoML. Conclusions: We established a protocol for utilizing conventional machine learning platforms to analyze zebrafish phenotypes, which enables fluorescence-based, phenotype-driven zebrafish screening.
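The abstract notes that the macrophage-EGFP images needed pre-processing before training but does not show the steps. Below is a hedged sketch of the kind of pre-processing a fluorescence image commonly needs before cloud-based training: isolating the EGFP (green) channel and stretching its contrast. The function name and percentile cutoffs are assumptions, not the paper's pipeline.

```python
# Sketch of typical fluorescence pre-processing before cloud training:
# isolate the EGFP (green) channel and stretch its contrast.
# Illustrative only; the paper's exact pre-processing steps are not shown here.
import numpy as np
from PIL import Image

def preprocess_egfp(path_in, path_out):
    """Keep only the green fluorescence signal and normalize it to 0-255."""
    rgb = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    green = rgb[:, :, 1]                     # EGFP signal lives in the G channel
    lo, hi = np.percentile(green, (1, 99))   # robust contrast stretch
    stretched = np.clip((green - lo) / (hi - lo + 1e-6), 0.0, 1.0)
    Image.fromarray((stretched * 255).astype(np.uint8)).save(path_out)

# preprocess_egfp("raw/fish_001.png", "processed/fish_001.png")
```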


2019 ◽  
Author(s):  
Simon Artzet ◽  
Tsu-Wei Chen ◽  
Jérôme Chopard ◽  
Nicolas Brichet ◽  
Michael Mielewczik ◽  
...  

In the era of high-throughput visual plant phenotyping, it is crucial to design fully automated and flexible workflows able to derive quantitative traits from plant images. Over the last years, several software tools have supported the extraction of architectural features of shoot systems. Yet no current end-to-end system can automatically extract both the 3D shoot topology and the geometry of plants from images on large datasets and across a large range of species. In particular, these tools essentially deal with dicotyledons, whose architecture is comparatively easier to analyze than that of monocotyledons. To tackle these challenges, we designed the Phenomenal software, which features: (i) a completely automatic workflow system including data import, reconstruction of 3D plant architecture for a range of species, and quantitative measurements on the reconstructed plants; (ii) an open-source library for the development and comparison of new algorithms for 3D shoot reconstruction; and (iii) an integration framework to couple workflow outputs with existing models towards model-assisted phenotyping. Phenomenal handles a large variety of data sets and species, from images produced by high-throughput phenotyping platform experiments to published data obtained under different conditions and provided in different formats. Phenomenal has been validated against both manual measurements and synthetic data simulated by 3D models. It has also been tested on other published datasets, reproducing a published semi-automatic reconstruction workflow in a fully automatic way. Phenomenal is available as open-source software in a public repository.
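The abstract does not spell out the reconstruction algorithm. A common technique for recovering 3D shoot geometry from calibrated multi-view images is silhouette-based voxel carving, sketched below in plain numpy; the projection callable and the views are assumptions for illustration, not Phenomenal's API.

```python
# Minimal voxel-carving sketch: keep the voxels whose projections fall inside
# the plant silhouette in every calibrated view. This illustrates the general
# technique only; Phenomenal's actual algorithms and API may differ.
import numpy as np

def carve(voxels, views):
    """voxels: (N, 3) world coordinates; views: list of (project, mask) pairs,
    where project maps (N, 3) points to (N, 2) pixel coords and mask is a
    binary silhouette image. Returns the voxels consistent with all views."""
    keep = np.ones(len(voxels), dtype=bool)
    for project, mask in views:
        px = np.round(project(voxels)).astype(int)
        h, w = mask.shape
        inside = (px[:, 0] >= 0) & (px[:, 0] < w) & \
                 (px[:, 1] >= 0) & (px[:, 1] < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[px[inside, 1], px[inside, 0]]
        keep &= hit                  # a voxel must be "plant" in every view
    return voxels[keep]

# Example grid: a 1 m cube at 1 cm resolution around the pot.
axis = np.arange(-0.5, 0.5, 0.01)
grid = np.stack(np.meshgrid(axis, axis, axis), axis=-1).reshape(-1, 3)
# plant_voxels = carve(grid, calibrated_views)   # views supplied by the user
```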


2020 ◽  
Author(s):  
Miguel de la Varga ◽  
Florian Wellmann

As the number of underground activities increases, the need for a better understanding of geospatial properties becomes more and more essential for correct engineering design and optimal decision making. However, gathering subsurface data is still an extremely costly and imprecise endeavour. Geological modelling has for years played a crucial role in understanding and correlating the complex geometries encountered underground, but single deterministic models fail to capture all possible configurations given the limited data. Probabilistic machine learning allows domain knowledge and observations of the physical world to be integrated in a rigorous and consistent manner. Inference on the probabilistic model implements an automatic learning-from-observations process.

In this work, we show how, by embedding state-of-the-art implicit interpolants into probabilistic frameworks, we can integrate the information of distinct data sets in one single common earth model. We present results ranging from a minimal working example introducing Bayesian statistics to full 3-D probabilistic inversions. All the models used for this demonstration are implemented in the open-source library GemPy (www.gempy.org), allowing full reproducibility of the results.
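As a toy illustration of the learning-from-observations process described here (deliberately not GemPy code): a normal prior on a horizon depth updated by noisy borehole picks has a closed-form posterior. The depths and variances below are invented for the example; full 3-D inversions replace this conjugate update with MCMC.

```python
# Toy illustration of Bayesian learning-from-observations (not GemPy code):
# a normal prior on a horizon depth, updated with noisy borehole picks.
# With a normal prior and normal likelihood the posterior is available in
# closed form (conjugacy).
import numpy as np

prior_mean, prior_var = 120.0, 15.0**2        # geologist's prior: ~120 m depth
obs = np.array([131.0, 128.5, 133.2])         # borehole picks (m), assumed data
obs_var = 5.0**2                              # measurement noise variance

# Conjugate normal update: precision-weighted combination of prior and data.
post_precision = 1.0 / prior_var + len(obs) / obs_var
post_var = 1.0 / post_precision
post_mean = post_var * (prior_mean / prior_var + obs.sum() / obs_var)

print(f"posterior depth: {post_mean:.1f} +/- {np.sqrt(post_var):.1f} m")
# ~130.5 +/- 2.8 m: the data pull the estimate away from the prior.
```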


PLoS ONE ◽  
2018 ◽  
Vol 13 (4) ◽  
pp. e0196615 ◽  
Author(s):  
Unseok Lee ◽  
Sungyul Chang ◽  
Gian Anantrio Putra ◽  
Hyoungseok Kim ◽  
Dong Hwan Kim

2021 ◽  
Vol 33 (17) ◽  
pp. 6918-6924
Author(s):  
Ye Sheng ◽  
Tingting Deng ◽  
Pengfei Qiu ◽  
Xun Shi ◽  
Jinyang Xi ◽  
...  

Author(s):  
Aaron Bivins ◽  
Devrim Kaya ◽  
Kyle Bibby ◽  
Stuart Simpson ◽  
Stephen Bustin ◽  
...  

The coronavirus disease 2019 (COVID-19) pandemic has led to wastewater surveillance becoming an important tool for monitoring the spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) within communities. As a result, molecular methods, in particular reverse transcription-quantitative PCR (RT-qPCR), have been employed to generate large data sets aimed at the detection and quantification of SARS-CoV-2 in wastewater. Although RT-qPCR is rapid and sensitive, there is no standard method that fits all use cases; there are no certified quantification standards; and experiments are carried out using numerous different assays, reagents, instruments, and data analysis protocols. These variations can lead to the reporting of erroneous quantitative data, resulting in potentially misleading interpretations and conclusions. We have reviewed the SARS-CoV-2 wastewater surveillance literature, focusing on the variability of RT-qPCR data as revealed by inconsistent standard curves and associated parameters. We find that variation in these parameters, and deviations from best practices as described in the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines, suggest a lack of reproducibility and reliability in quantitative measurements of SARS-CoV-2 RNA in wastewater.
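The standard-curve parameters the review scrutinizes follow directly from a linear fit of Cq against log10 template copies, with amplification efficiency derived from the slope. A quick worked check (the Cq values below are synthetic, for illustration only):

```python
# Computing the standard-curve parameters examined in the review from a
# dilution series: slope, intercept, R^2, and amplification efficiency
# E = 10**(-1/slope) - 1. The Cq values below are synthetic, for illustration.
import numpy as np

log10_copies = np.array([5.0, 4.0, 3.0, 2.0, 1.0])   # serial 10x dilutions
cq = np.array([18.1, 21.5, 24.9, 28.4, 31.8])        # measured Cq (assumed)

slope, intercept = np.polyfit(log10_copies, cq, 1)
predicted = slope * log10_copies + intercept
r_squared = 1 - np.sum((cq - predicted)**2) / np.sum((cq - cq.mean())**2)
efficiency = 10**(-1.0 / slope) - 1                  # 1.0 means 100%

print(f"slope = {slope:.2f} (ideal ~ -3.32 for 100% efficiency)")
print(f"R^2 = {r_squared:.4f}, efficiency = {efficiency:.1%}")
```

Reporting exactly these quantities per assay, as the MIQE guidelines require, is what makes quantitative results comparable across laboratories.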

