Automation in Cytomics: A Modern RDBMS Based Platform for Image Analysis and Management in High-Throughput Screening Experiments

Author(s):  
E. Larios ◽  
Y. Zhang ◽  
K. Yan ◽  
Z. Di ◽  
S. LeDévédec ◽  
...  
2013 ◽  
Vol 19 (3) ◽  
pp. 344-353 ◽  
Author(s):  
Keith R. Shockley

Quantitative high-throughput screening (qHTS) experiments can simultaneously produce concentration-response profiles for thousands of chemicals. In a typical qHTS study, a large chemical library is subjected to a primary screen to identify candidate hits for secondary screening, validation studies, or prediction modeling. Different algorithms, usually based on the Hill equation logistic model, have been used to classify compounds as active or inactive (or inconclusive). However, observed concentration-response activity relationships may not adequately fit a sigmoidal curve. Furthermore, it is unclear how to prioritize chemicals for follow-up studies given the large uncertainties that often accompany parameter estimates from nonlinear models. Weighted Shannon entropy can address these concerns by ranking compounds according to profile-specific statistics derived from estimates of the probability mass distribution of response at the tested concentration levels. This strategy can be used to rank all tested chemicals in the absence of a prespecified model structure, or the approach can complement existing activity call algorithms by ranking the returned candidate hits. The weighted entropy approach was evaluated here using data simulated from the Hill equation model. The procedure was then applied to a chemical genomics profiling data set interrogating compounds for androgen receptor agonist activity.
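The entropy ranking described above can be sketched in a few lines. This is an illustrative reading of the approach, not the author's code: the choice of log base, the use of absolute baseline-corrected responses as the mass distribution, and the default uniform weights are all assumptions.

```python
import numpy as np

def weighted_entropy(responses, weights=None, eps=1e-12):
    """Profile-specific weighted Shannon entropy of a concentration-response
    profile. Responses are treated as baseline-corrected magnitudes and
    normalized into a probability mass distribution over the tested
    concentrations: a flat (inactive) profile yields maximal entropy,
    a sharp response concentrated at few levels yields low entropy."""
    r = np.abs(np.asarray(responses, dtype=float))
    if weights is None:
        weights = np.ones_like(r)          # uniform weights by default
    p = r / max(r.sum(), eps)              # probability mass per concentration
    return -(weights * p * np.log2(p + eps)).sum()

# A sharp, sigmoidal-like profile ranks below (lower entropy than) a flat one,
# so compounds can be ordered without assuming a Hill-model structure:
flat = [1, 1, 1, 1]
sharp = [0, 0, 0.1, 10]
assert weighted_entropy(sharp) < weighted_entropy(flat)
```

Because the score needs no fitted curve, it remains defined even for profiles that a sigmoidal model would fit poorly, which is the motivation given in the abstract.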


Inventions ◽  
2019 ◽  
Vol 4 (4) ◽  
pp. 72
Author(s):  
Ryota Sawaki ◽  
Daisuke Sato ◽  
Hiroko Nakayama ◽  
Yuki Nakagawa ◽  
Yasuhito Shimada

Background: Zebrafish are efficient animal models for whole-organism drug testing and toxicological evaluation of chemicals, and their high fecundity makes them well suited to high-throughput screening. However, zebrafish screening requires peripheral experimental equipment and analytical software that still need further development. Machine learning has emerged as a powerful tool for large-scale image analysis and has been applied in zebrafish research as well, but its use by individual researchers is restricted by the cost and the procedure of tailoring machine learning to specific research purposes. Methods: We developed a simple and easy method for zebrafish image analysis, particularly of fluorescently labelled fish, using the free machine learning program Google AutoML. We performed machine learning on vascular- and macrophage-Enhanced Green Fluorescent Protein (EGFP) fish under normal and abnormal conditions (treated with anti-angiogenesis drugs or wounded at the caudal fin), then tested the system on a new set of zebrafish images. Results: Machine learning detected abnormalities in both strains with more than 95% accuracy, although the learning procedure required image pre-processing for the images of the macrophage-EGFP fish. In addition, we developed batch-uploading software, ZF-ImageR, for Windows (.exe) and macOS (.app) to enable high-throughput analysis using AutoML. Conclusions: We established a protocol for utilizing conventional machine learning platforms to analyze zebrafish phenotypes, which enables fluorescence-based, phenotype-driven zebrafish screening.
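The abstract does not specify what the pre-processing for the macrophage-EGFP images consists of; a generic sketch of the kind of step commonly applied to fluorescent images before upload to an image classifier is shown below. The green-channel extraction and the percentile values are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def preprocess_egfp(rgb, low_pct=1, high_pct=99):
    """Generic pre-processing sketch for fluorescent (EGFP) zebrafish
    images: keep the green channel, where the EGFP signal lives, and
    stretch contrast between two percentiles before saving for upload.
    Percentile cutoffs are illustrative, not taken from the paper."""
    g = np.asarray(rgb, dtype=float)[..., 1]          # green channel only
    lo, hi = np.percentile(g, [low_pct, high_pct])
    g = np.clip((g - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
    return (g * 255).astype(np.uint8)                 # 8-bit grayscale out
```

A batch tool like the ZF-ImageR uploader described above would apply such a step per image before sending the set to the classification service.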


2013 ◽  
Vol 19 (4) ◽  
pp. 855-866 ◽  
Author(s):  
Pierre-Marc Juneau ◽  
Alain Garnier ◽  
Carl Duchesne

Acquiring and processing phase-contrast microscopy images in wide-field, long-term live-cell imaging and high-throughput screening applications remains a challenge, as the methodology and algorithms used must be fast, simple to use and tune, and as minimally intrusive as possible. In this paper, we developed a simple and fast algorithm to compute the cell-covered surface (degree of confluence) in phase-contrast microscopy images. This segmentation algorithm is based on a range filter of a specified size, a minimum range threshold, and a minimum object-size threshold. These parameters were adjusted to maximize the F-measure on a calibration set of 200 hand-segmented images, and the algorithm's performance was compared with other algorithms proposed in the literature. A set of one million images from 37 myoblast cell cultures under different conditions was processed to obtain their cell-covered surface against time. The data were used to fit exponential and logistic models, and the analysis showed a linear relationship between the kinetic parameters and passage number and highlighted the effect of culture medium quality on cell growth kinetics. This algorithm could be used for real-time monitoring of cell cultures and for high-throughput screening experiments upon adequate tuning.
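The three-parameter segmentation described above (range filter size, minimum range threshold, minimum object size) can be sketched as follows. This is a plausible reconstruction built on SciPy's image filters, not the authors' implementation, and the default parameter values are placeholders to be tuned as the abstract describes.

```python
import numpy as np
from scipy import ndimage

def confluence(image, filter_size=5, range_thresh=10, min_obj_size=50):
    """Cell-covered surface fraction of a phase-contrast image.
    1) local range filter: max minus min over a window of filter_size,
       which is high in textured (cell) regions and low on background;
    2) threshold the range image at range_thresh;
    3) drop connected components smaller than min_obj_size pixels.
    Returns the fraction of pixels kept, i.e. the degree of confluence."""
    img = np.asarray(image, dtype=float)
    local_range = (ndimage.maximum_filter(img, size=filter_size)
                   - ndimage.minimum_filter(img, size=filter_size))
    mask = local_range > range_thresh
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_obj_size))
    return keep.mean()
```

Since only two filters, one threshold, and one labeling pass are involved, the per-image cost is low enough for the real-time monitoring use case mentioned above.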


Author(s):  
Renata Rachide Nunes ◽  
Amanda Luisa da Fonseca ◽  
Ana Claudia de Souza Pinto ◽  
Eduardo Habib Bechelane Maia ◽  
Alisson Marques da Silva ◽  
...  

2021 ◽  
Author(s):  
Carolina Nunes ◽  
Jasper Anckaert ◽  
Fanny De Vloed ◽  
Jolien De Wyn ◽  
Kaat Durinck ◽  
...  

Biomedical researchers are moving towards high-throughput screening, as this allows for automation, better reproducibility, and more and faster results. High-throughput screening experiments encompass drug, drug combination, genetic perturbagen, or combined genetic-chemical perturbagen screens, conducted either as real-time assays monitored over time or as endpoint assays. The data analysis consists of data cleaning and structuring, followed by further processing and visualisation, which, given the amount of data, easily becomes laborious, time-consuming, and error-prone. Several tools have been developed to aid researchers in this analysis, but they focus on specific experimental set-ups and cannot process data from several time points or from genetic-chemical perturbagen screens together. To meet these needs, we developed HTSplotter, available as a web tool and Python module, which performs automatic data analysis and visualisation of either endpoint or real-time assays from different high-throughput screening experiments: drug, drug combination, genetic perturbagen, and genetic-chemical perturbagen screens. HTSplotter implements an algorithm based on conditional statements to identify the experiment type and controls. After appropriate data normalization, HTSplotter executes downstream analyses such as dose-response relationships and drug synergism by the Bliss independence method. All results are exported as a text file, and plots are saved in a PDF file. The main advantage of HTSplotter over other available tools is the automatic analysis of genetic-chemical perturbagen screens and of real-time assays where results are plotted over time. In conclusion, HTSplotter allows for automatic end-to-end data processing, analysis, and visualisation of various high-throughput in vitro cell culture screens, offering major improvements in versatility, convenience, and time over existing tools.
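The Bliss independence method named above has a standard closed form: two drugs acting independently with fractional effects fa and fb are expected to combine to fa + fb - fa*fb, and the observed combination effect is compared against that expectation. A minimal sketch (the function name and sign convention are illustrative, not HTSplotter's API):

```python
def bliss_excess(fa, fb, fab):
    """Bliss independence scoring for a drug pair.
    fa, fb: fractional effects (e.g. inhibition in [0, 1]) of each drug
    alone; fab: observed fractional effect of the combination.
    Returns observed minus expected: > 0 suggests synergy, < 0 antagonism,
    ~0 independence."""
    expected = fa + fb - fa * fb      # expected effect under independence
    return fab - expected

# 30% and 40% single-drug inhibition -> 58% expected under independence
print(bliss_excess(0.3, 0.4, 0.58))  # ~0: drugs act independently
print(bliss_excess(0.3, 0.4, 0.75))  # positive: observed beats expectation
```

Applied per concentration pair and per time point, this excess is the quantity a tool like the one described would plot over time for real-time combination assays.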


2002 ◽  
Vol 7 (4) ◽  
pp. 341-351 ◽  
Author(s):  
Michael F.M. Engels ◽  
Luc Wouters ◽  
Rudi Verbeeck ◽  
Greet Vanhoof

A data mining procedure for the rapid scoring of high-throughput screening (HTS) compounds is presented. The method is particularly useful for monitoring the quality of HTS data and tracking outliers in automated pharmaceutical or agrochemical screening, thus providing more complete and thorough structure-activity relationship (SAR) information. The method is based on the assumed relationship between the structure of the screened compounds and their biological activity on a given screen, expressed on a binary scale. By means of a data mining method, a SAR description of the data is developed that assigns to each compound of the screen a probability of being a hit. Then, an inconsistency score is computed, expressing the degree of deviation between the SAR description and the actual biological activity. The inconsistency score enables the identification of potential outliers that can be primed for validation experiments. The approach is particularly useful for detecting false-negative outliers and for identifying SAR-compliant hit/nonhit borderline compounds, both of which are classes of compounds that can contribute substantially to the development and understanding of robust SARs. In a first implementation of the method, one- and two-dimensional descriptors are used for encoding molecular structure information, and logistic regression is used for calculating hit/nonhit probability scores. The approach was validated on three data sets, the first from a publicly available screening data set and the second and third from in-house HTS screening campaigns. Because of its simplicity, robustness, and accuracy, the procedure is suitable for automation.
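The two-stage scheme above (fit hit probabilities from descriptors, then score each compound by how strongly its observed label disagrees with the model) can be sketched with plain gradient-descent logistic regression. The per-compound negative log-likelihood used here as the inconsistency score is one natural choice; the paper does not publish its exact formula, so treat both functions as illustrative stand-ins.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-descent logistic regression: descriptors X (n x d),
    binary activity y. Stand-in for the SAR model of the abstract."""
    X1 = np.hstack([np.ones((len(X), 1)), np.asarray(X, float)])  # intercept
    w = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        w -= lr * X1.T @ (p - y) / len(y)   # gradient of the log-loss
    return w

def inconsistency(X, y, w, eps=1e-12):
    """Per-compound deviation between SAR-predicted hit probability and
    observed activity: negative log-likelihood of the observed label.
    Large values flag potential outliers such as false negatives."""
    X1 = np.hstack([np.ones((len(X), 1)), np.asarray(X, float)])
    p = 1.0 / (1.0 + np.exp(-X1 @ w))
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
```

For example, a compound whose descriptors resemble the hit cluster but which was called inactive receives the largest inconsistency score, which is exactly the false-negative pattern the procedure is meant to surface.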


2006 ◽  
Vol 7 (3) ◽  
pp. 299-309 ◽  
Author(s):  
Xiaohua Douglas Zhang ◽  
Xiting Cindy Yang ◽  
Namjin Chung ◽  
Adam Gates ◽  
Erica Stec ◽  
...  
