Increasing the Content of High-Content Screening

2014 ◽  
Vol 19 (5) ◽  
pp. 640-650 ◽  
Author(s):  
Shantanu Singh ◽  
Anne E. Carpenter ◽  
Auguste Genovesio

Target-based high-throughput screening (HTS) has recently been critiqued for its relatively poor yield compared to phenotypic screening approaches. One type of phenotypic screening, image-based high-content screening (HCS), has been seen as particularly promising. In this article, we assess whether HCS is as high content as it can be. We analyze HCS publications and find that although the number of HCS experiments published each year continues to grow steadily, the information content lags behind. We find that a majority of high-content screens published so far (60−80%) made use of only one or two image-based features measured from each sample and disregarded the distribution of those features among each cell population. We discuss several potential explanations, focusing on the hypothesis that data analysis traditions are to blame. This includes practical problems related to managing large and multidimensional HCS data sets as well as the adoption of assay quality statistics from HTS to HCS. Both may have led to the simplification or systematic rejection of assays carrying complex and valuable phenotypic information. We predict that advanced data analysis methods that enable full multiparametric data to be harvested for entire cell populations will enable HCS to finally reach its potential.
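
As a hedged illustration of the contrast drawn above, the following Python sketch compares a conventional readout (the well-level mean of a single feature) with a distribution-aware, multiparametric readout (a per-feature Kolmogorov–Smirnov statistic between treated and control cell populations). The feature names, array shapes, and effect sizes are hypothetical; this is a minimal sketch, not the authors' pipeline.

```python
# Minimal sketch: single-feature well means vs. distribution-aware,
# multiparametric scoring of an HCS well (hypothetical data).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
features = ["nucleus_area", "dna_intensity", "actin_texture"]  # hypothetical names

# Per-cell measurements: (n_cells, n_features) for a control and a treated well.
control = rng.normal(loc=1.0, scale=0.2, size=(2000, len(features)))
treated = rng.normal(loc=1.0, scale=0.2, size=(1800, len(features)))
treated[:, 1] += 0.3 * rng.random(1800)  # shift part of the DNA-intensity distribution

# Conventional readout: mean of one feature per well (discards the distribution).
single_feature_score = treated[:, 1].mean() - control[:, 1].mean()

# Distribution-aware readout: KS statistic per feature, over whole cell populations.
ks_profile = {f: ks_2samp(treated[:, i], control[:, i]).statistic
              for i, f in enumerate(features)}

print(f"single-feature mean shift: {single_feature_score:.3f}")
print("multiparametric KS profile:", ks_profile)
```

The KS profile retains information about how each feature's distribution changes across the whole cell population, which a single well-level mean cannot capture.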

2011 ◽  
Vol 29 (3) ◽  
pp. 467-491 ◽  
Author(s):  
H. Vanhamäki ◽  
O. Amm

Abstract. We present a review of selected data-analysis methods that are frequently applied in studies of ionospheric electrodynamics and magnetosphere-ionosphere coupling using ground-based and space-based data sets. Our focus is on methods that are data driven (not simulations or statistical models) and can be used in mesoscale studies, where the analysis area is typically some hundreds or thousands of km across. The selection of reviewed methods is such that most combinations of measured input data (electric field, conductances, magnetic field and currents) that occur in practical applications are covered. The techniques are used to solve the unmeasured parameters from Ohm's law and Maxwell's equations, possibly with the help of some simplifying assumptions. In addition to reviewing existing data-analysis methods, we also briefly discuss possible extensions that may be used for upcoming data sets.
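
As a concrete anchor for the relations mentioned above, one commonly used form of the height-integrated ionospheric Ohm's law and of current continuity is sketched below. Sign conventions for the Hall term and for the field-aligned current vary between authors and hemispheres, so this should be read as an illustrative form rather than the exact convention of any particular method.

\[
\vec{J}_\perp = \Sigma_P \vec{E}_\perp + \Sigma_H\,\hat{b}\times\vec{E}_\perp,
\qquad
\nabla_\perp \cdot \vec{J}_\perp = j_\parallel ,
\]

where \(\Sigma_P\) and \(\Sigma_H\) are the Pedersen and Hall conductances, \(\vec{E}_\perp\) is the horizontal electric field, \(\hat{b}\) is the unit vector along the magnetic field, and \(j_\parallel\) is the field-aligned current density closing the divergence of the horizontal current. The reviewed methods combine such relations with whichever subset of \((\vec{E}_\perp, \Sigma_P, \Sigma_H, \vec{J}_\perp, j_\parallel)\) has been measured in order to solve for the remaining quantities.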


2022 ◽  
pp. 590-621
Author(s):  
Obinna Chimaobi Okechukwu

In this chapter, a discussion is presented on the latest tools and techniques available for Big Data visualization. These tools and techniques need to be properly understood in order to analyze Big Data. Big Data is a new paradigm in which huge data sets are generated and analyzed in terms of their volume, velocity, and variety. Conventional data analysis methods are incapable of processing data at this scale; hence, it is fundamentally important to be familiar with new tools and techniques capable of processing these data sets. This chapter illustrates tools available for analysts to process and present Big Data sets in ways that support sound decision making. Some of these tools (e.g., Tableau, RapidMiner, R Studio) have powerful capabilities to visualize processed data in ways traditional tools cannot. The chapter also explains the differences between these tools and their utility in different scenarios.


2011 ◽  
Vol 16 (3) ◽  
pp. 338-347 ◽  
Author(s):  
Anne Kümmel ◽  
Paul Selzer ◽  
Martin Beibel ◽  
Hanspeter Gubler ◽  
Christian N. Parker ◽  
...  

High-content screening (HCS) is increasingly used in biomedical research, generating multivariate, single-cell data sets. Before scoring a treatment, the complex data sets are processed (e.g., normalized, reduced to a lower dimensionality) to help extract valuable information. However, there has been no published comparison of the performance of these methods. This study comparatively evaluates unbiased approaches to reduce dimensionality as well as to summarize cell populations. To evaluate these different data-processing strategies, the prediction accuracies and the Z′ factors of control compounds of an HCS cell cycle data set were monitored. As expected, dimension reduction led to a lower degree of discrimination between control samples. A high degree of classification accuracy was achieved when the cell population was summarized at the well level using percentile values. In conclusion, the generic data analysis pipeline described here enables a systematic review of alternative strategies to analyze multiparametric results from biological systems.
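
To make the two ingredients concrete, the sketch below summarizes single-cell features at the well level with percentile values and then computes the standard Z′ factor, Z′ = 1 − 3(σ_pos + σ_neg)/|μ_pos − μ_neg|, for positive- and negative-control wells. The data are simulated and the percentile choices are hypothetical; this is not the authors' pipeline.

```python
# Minimal sketch: percentile-based well summaries and the Z' factor
# for control wells (hypothetical single-cell data).
import numpy as np

rng = np.random.default_rng(1)
PERCENTILES = (10, 25, 50, 75, 90)  # hypothetical choice of summary percentiles

def well_profile(cells: np.ndarray) -> np.ndarray:
    """Summarize a (n_cells, n_features) array by per-feature percentiles."""
    return np.percentile(cells, PERCENTILES, axis=0).ravel()

def z_prime(pos: np.ndarray, neg: np.ndarray) -> float:
    """Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg| for a 1-D readout."""
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

# Simulated per-well readouts of one feature for 16 negative and 16 positive controls.
neg_wells = np.array([rng.normal(1.0, 0.15, 1500).mean() for _ in range(16)])
pos_wells = np.array([rng.normal(1.8, 0.15, 1500).mean() for _ in range(16)])
print(f"Z' of controls: {z_prime(pos_wells, neg_wells):.2f}")

# Percentile profile of one well with 3 hypothetical features.
cells = rng.normal(size=(1500, 3))
print("well profile length:", well_profile(cells).shape[0])  # 5 percentiles x 3 features
```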


2005 ◽  
Vol 33 (6) ◽  
pp. 1427-1429 ◽  
Author(s):  
P. Mendes ◽  
D. Camacho ◽  
A. de la Fuente

The advent of large data sets, such as those produced in metabolomics, presents a considerable challenge in terms of their interpretation. Several mathematical and statistical methods have been proposed to analyse these data, and new ones continue to appear. However, these methods often disagree in their analyses, and their results are hard to interpret. A major contributing factor for the difficulties in interpreting these data lies in the data analysis methods themselves, which have not been thoroughly studied under controlled conditions. We have been producing synthetic data sets by simulation of realistic biochemical network models with the purpose of comparing data analysis methods. Because we have full knowledge of the underlying ‘biochemistry’ of these models, we are better able to judge how well the analyses reflect true knowledge about the system. Another advantage is that the level of noise in these data is under our control and this allows for studying how the inferences are degraded by noise. Using such a framework, we have studied the extent to which correlation analysis of metabolomics data sets is capable of recovering features of the biochemical system. We were able to identify four major metabolic regulatory configurations that result in strong metabolite correlations. This example demonstrates the utility of biochemical simulation in the analysis of metabolomics data.
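
As a toy illustration of this approach (not the authors' actual biochemical network models), the sketch below simulates "metabolite" levels whose correlations arise from a shared regulatory driver, adds noise at a controllable level, and computes the Pearson correlation matrix that a metabolomics-style correlation analysis would recover.

```python
# Toy sketch: synthetic "metabolite" data with a shared regulatory driver,
# controllable noise, and the resulting correlation matrix.
import numpy as np

rng = np.random.default_rng(42)
n_samples, noise_level = 200, 0.1  # noise_level is under our control

# A hidden regulatory signal (e.g., enzyme activity) drives metabolites A and B;
# metabolite C is independent of it.
regulator = rng.normal(size=n_samples)
met_a = 2.0 * regulator + noise_level * rng.normal(size=n_samples)
met_b = -1.5 * regulator + noise_level * rng.normal(size=n_samples)
met_c = rng.normal(size=n_samples) + noise_level * rng.normal(size=n_samples)

data = np.column_stack([met_a, met_b, met_c])
corr = np.corrcoef(data, rowvar=False)
print(np.round(corr, 2))  # strong A-B correlation, weak correlations with C
```

Increasing noise_level degrades the recovered A-B correlation, which is the kind of degradation the authors study under controlled conditions.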


2019 ◽  
pp. 357-385
Author(s):  
Eric Guérin ◽  
Orhun Aydin ◽  
Ali Mahdavi-Amiri

Abstract In this chapter, we provide an overview of different artificial intelligence (AI) and machine learning (ML) techniques and discuss how these techniques have been employed in managing geospatial data sets as they pertain to Digital Earth. We introduce statistical ML methods that are frequently used in spatial problems and their applications. We discuss generative models, one of the hottest topics in ML, to illustrate the possibility of generating new data sets that can be used to train data analysis methods or to create new possibilities for Digital Earth such as virtual reality or augmented reality. We finish the chapter with a discussion of deep learning methods that have high predictive power and have shown great promise in data analysis of geospatial data sets provided by Digital Earth.
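
As a hedged, generic illustration of the deep-learning methods mentioned above (not any specific model from the chapter), the sketch below defines a small convolutional network that classifies multiband raster patches, e.g., for land-cover mapping; the band count, patch size, and class count are hypothetical.

```python
# Minimal sketch: a small CNN that classifies raster patches (e.g., land cover),
# with hypothetical patch size, band count, and number of classes.
import torch
import torch.nn as nn

N_BANDS, PATCH, N_CLASSES = 4, 32, 5  # e.g., RGB+NIR patches, 5 land-cover classes

class PatchClassifier(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(N_BANDS, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, N_CLASSES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = PatchClassifier()
dummy_batch = torch.randn(8, N_BANDS, PATCH, PATCH)  # 8 hypothetical patches
print(model(dummy_batch).shape)  # torch.Size([8, 5]) class logits per patch
```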


2010 ◽  
Vol 41 (01) ◽  
Author(s):  
HP Müller ◽  
A Unrath ◽  
A Riecker ◽  
AC Ludolph ◽  
J Kassubek

2020 ◽  
Vol 15 (1) ◽  
pp. 217-230
Author(s):  
Untung Widodo

This research was conducted to test how much influence product quality, price, and brand have on sales volume at PT. Gemilang Jaya Bella bracelets Spring Bed Semarang. The independent variables are product quality, price, and brand, while the dependent variable is sales volume. The sampling technique used was a census, in which all members of the population are used as the sample. The respondents selected were consumers of the PT. Gemilang Jaya Bella bracelets Spring Bed Semarang stores, giving a sample of 50 respondents. The data analysis method used for hypothesis testing was multiple linear regression analysis. Based on the research conducted on all the data obtained, the findings are that: 1) product quality (X1) has a positive and significant effect on sales volume (Y); 2) price (X2) has a positive and significant effect on sales volume (Y); 3) brand (X3) has a positive and significant effect on sales volume (Y); and 4) distribution channels (X4) have a positive and significant effect on sales volume (Y).
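
A minimal sketch of the stated analysis method, multiple linear regression for hypothesis testing, is shown below using simulated data and hypothetical variable names for X1-X4 and Y; it is not the study's actual data set.

```python
# Minimal sketch: multiple linear regression for hypothesis testing
# (simulated data, hypothetical variable names X1-X4 and Y).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 50  # matches the study's sample size; the values here are simulated

df = pd.DataFrame({
    "product_quality": rng.normal(3.5, 0.5, n),  # X1
    "price": rng.normal(3.0, 0.6, n),            # X2
    "brand": rng.normal(3.8, 0.4, n),            # X3
    "distribution": rng.normal(3.2, 0.5, n),     # X4
})
df["sales_volume"] = (0.4 * df.product_quality + 0.3 * df.price
                      + 0.2 * df.brand + 0.3 * df.distribution
                      + rng.normal(0, 0.3, n))   # Y

X = sm.add_constant(df[["product_quality", "price", "brand", "distribution"]])
model = sm.OLS(df["sales_volume"], X).fit()
print(model.summary())  # coefficients, t-statistics, and p-values for each predictor
```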


2020 ◽  
Vol 5 (2) ◽  
pp. 219-228
Author(s):  
Edy Sudaryanto

This study aims to identify and analyze the opportunities, challenges, constraints, and efforts of vocational high schools (Sekolah Menengah Kejuruan, SMK) in producing graduates, especially from accounting programs, who are able to manage village funds. The object of the study is the accounting program students of SMK PGRI 2 Cibinong. The data used in this study are primary and secondary data, collected through interviews, observation, and documentation. The data analysis methods are data reduction, data display, and conclusion drawing/verification. The results show that SMK PGRI 2 Cibinong Bogor is aware of the opportunity for SMK accounting graduates to fill the scarcity of skilled human resources for managing village funds. However, teachers have little practical experience with village fund accounting and therefore lack confidence in teaching it. Other constraints are the limited coverage of government accounting in the accounting syllabus and the absence of a standard handbook/module for teachers of accounting subjects.

