A flexible pipeline for experimental design, processing, and analysis of microarray data

Author(s):  
S. Osborn ◽  
S. Kennedy ◽  
D. Chin
2006 ◽  
Vol 3 (2) ◽  
pp. 77-89
Author(s):  
Y. E. Pittelkow ◽  
S. R. Wilson

Summary Various statistical models have been proposed for detecting differential gene expression in data from microarray experiments. Given such detection, we are usually interested in describing the differential expression patterns. Because of the large number of genes typically analysed in microarray experiments, possibly more than ten thousand, the tasks of interpreting and communicating all the corresponding statistical models pose a considerable challenge, except perhaps in the simplest experiment involving only two groups. A further challenge is to find methods to summarize the resulting models. These challenges increase with experimental complexity.

Biologists often wish to sort genes into ‘classes’ with similar response profiles/patterns. In this paper we therefore describe a likelihood approach for assigning genes to these different class patterns for data from a replicated experimental design. The number of potential patterns increases very quickly as the number of combinations in the experimental design increases. In a two-group experimental design only three patterns are required to describe the mean response: up, down and no difference. For a factorial design with three treatments there are 13 different patterns, with four levels there are 75 potential patterns to be considered, and so on.

The approach is applied to the identification of differential response patterns in gene expression from a microarray experiment using RNA extracted from the leaves of Arabidopsis thaliana plants. We compare patterns of response found using additive and multiplicative models. A multiplicative model is more commonly used in the statistical analysis of microarray data because of the variance-stabilizing properties of the logarithmic function; the error structure of the model is then taken to be log-Normal. For the additive model, on the other hand, the gene expression value is modeled directly as being from a gamma distribution, which successfully accounts for the constant coefficient of variation often observed.

Appropriate visualization displays for microarray data are important as a way of communicating the patterns of response amongst the genes. Here we use graphical ‘icons’ to represent the patterns of up/down and no response, and two alternative displays, the Gene-plot and a grid layout, to provide rapid overall summaries of the gene expression patterns.
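The pattern counts quoted above (3 for two groups, 13 for three, 75 for four) match the ordered Bell (Fubini) numbers, which count the weak orderings of a set of group means; the abstract does not name them as such, so this framing is an inference. A minimal Python sketch reproducing those counts under that assumption:

```python
from math import comb

def pattern_count(k: int) -> int:
    """Number of distinct up/down/no-difference patterns among k group means.

    Counts weak orderings (ordered Bell / Fubini numbers) via the recurrence
    a(k) = sum_{j=1..k} C(k, j) * a(k - j), with a(0) = 1.
    """
    a = [1]  # a(0) = 1
    for n in range(1, k + 1):
        a.append(sum(comb(n, j) * a[n - j] for j in range(1, n + 1)))
    return a[k]

if __name__ == "__main__":
    # Reproduces the counts quoted in the abstract: 3, 13, 75 for 2, 3, 4 groups.
    for k in (2, 3, 4, 5):
        print(k, "groups ->", pattern_count(k), "patterns")
```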


Author(s):  
Antonio Federico ◽  
Laura Aliisa Saarimäki ◽  
Angela Serra ◽  
Giusy del Giudice ◽  
Pia Anneli Sofia Kinaret ◽  
...  

Author(s):  
B.M. Bolstad ◽  
F. Collin ◽  
K.M. Simpson ◽  
R.A. Irizarry ◽  
T.P. Speed

2005 ◽  
Vol 44 (03) ◽  
pp. 423-430 ◽  
Author(s):  
J. Landgrebe ◽  
E. Brunner ◽  
F. Bretz

Summary Objectives: A variety of linear models have recently been proposed for the design and analysis of microarray experiments. This article gives an introduction to the most common models and describes their respective characteristics.

Methods: We focus on the application of linear models to logarithmized and normalized microarray data from two-color arrays. Linear models can be applied at different stages of evaluating microarray experiments, such as experimental design, background correction, normalization and hypothesis testing. Both one-stage and two-stage linear models, including technical and possibly biological replicates, are described. Issues related to selecting robust and efficient microarray designs are also discussed.

Results: Linear models provide flexible and powerful tools, which are easily implemented and interpreted. The methods are illustrated with an experiment performed in our laboratory, which demonstrates the value of using linear models for the evaluation of current microarray experiments.

Conclusions: Linear models provide a flexible approach to properly account for variability, both across and within genes. This allows the experimenter to adequately model the sources of variability that are assumed to have a major influence on the final measurements. In addition, design considerations essential for any well-planned microarray experiment are best incorporated using linear models. Results from such experimental design investigations show that the widely used common reference design is often substantially less efficient than alternative designs, and its use is therefore not recommended.
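As an illustration of gene-wise linear modelling of normalized two-color data, the following is a minimal sketch, not the authors' formulation: an ordinary least-squares fit of a hypothetical gene's log-ratios with a treatment effect and a dye-swap effect. The array layout, effect names, and simulated values are invented for illustration only.

```python
import numpy as np

# Hypothetical layout: 6 two-color arrays, responses are normalized log-ratios
# M = log2(Cy5/Cy3); 3 arrays per treatment group, with dye swaps (illustrative only).
treatment = np.array([0, 0, 0, 1, 1, 1])   # 0 = control, 1 = treated
dye_swap  = np.array([0, 1, 0, 1, 0, 1])   # 1 = dye-swapped array

# Design matrix: intercept, treatment effect, dye effect.
X = np.column_stack([np.ones(6), treatment, dye_swap])

def fit_gene(m: np.ndarray) -> np.ndarray:
    """Ordinary least-squares fit of one gene's normalized log-ratios."""
    beta, *_ = np.linalg.lstsq(X, m, rcond=None)
    return beta  # [baseline, treatment effect, dye effect]

# Example: a simulated gene whose log-ratio rises by ~1 (about 2-fold) under treatment.
rng = np.random.default_rng(0)
m = 0.1 + 1.0 * treatment - 0.05 * dye_swap + rng.normal(0, 0.2, 6)
print(fit_gene(m))
```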


Author(s):  
Claire H. Wilson ◽  
Anna Tsykin ◽  
Christopher R. Wilkinson ◽  
Catherine A. Abbott

2018 ◽  
Vol 41 ◽  
Author(s):  
Wei Ji Ma

Abstract Given the many types of suboptimality in perception, I ask how one should test for multiple forms of suboptimality at the same time, or, more generally, how one should compare process models that can differ in any or all of their components. In analogy to factorial experimental design, I advocate for factorial model comparison.
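A minimal sketch of what factorial model comparison could look like in code: every on/off combination of hypothesized model components is enumerated, and the resulting candidate models are ranked by an information criterion. The component names and the placeholder fitting routine below are hypothetical, not taken from the commentary.

```python
from itertools import product

# Hypothetical components of a perceptual process model; each may be "on" or "off".
COMPONENTS = ["prior_bias", "noise_misestimate", "decision_noise"]

def enumerate_models():
    """Full factorial space of candidate models: every on/off combination."""
    return [dict(zip(COMPONENTS, flags))
            for flags in product([False, True], repeat=len(COMPONENTS))]

def fit_model(spec, data):
    """Placeholder fit: returns (log-likelihood, number of free parameters).

    A real analysis would maximize the likelihood of the behavioral data
    under the specified combination of components; this is a stand-in.
    """
    n_params = 1 + sum(spec.values())             # baseline + one per active component
    log_likelihood = -len(data) + 0.5 * n_params  # toy value, illustration only
    return log_likelihood, n_params

def rank_by_aic(data):
    """Rank all factorial combinations by AIC = 2k - 2*logL (lower is better)."""
    scored = []
    for spec in enumerate_models():
        ll, k = fit_model(spec, data)
        scored.append((2 * k - 2 * ll, spec))
    return sorted(scored, key=lambda pair: pair[0])

if __name__ == "__main__":
    fake_data = list(range(100))  # stand-in for trial-level responses
    for aic, spec in rank_by_aic(fake_data)[:3]:
        print(round(aic, 1), spec)
```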


2019 ◽  
Vol 42 ◽  
Author(s):  
J. Alfredo Blakeley-Ruiz ◽  
Carlee S. McClintock ◽  
Ralph Lydic ◽  
Helen A. Baghdoyan ◽  
James J. Choo ◽  
...  

Abstract The Hooks et al. review of microbiota-gut-brain (MGB) literature provides a constructive criticism of the general approaches encompassing MGB research. This commentary extends their review by: (a) highlighting capabilities of advanced systems-biology “-omics” techniques for microbiome research and (b) recommending that combining these high-resolution techniques with intervention-based experimental design may be the path forward for future MGB research.


1978 ◽  
Vol 48 ◽  
pp. 7-29
Author(s):  
T. E. Lutz

This review paper deals with the use of statistical methods to evaluate the systematic and random errors associated with trigonometric parallaxes. First, the systematic errors that arise when trigonometric parallaxes are used to calibrate luminosity systems are discussed. Next, the determination of the external errors of parallax measurement is reviewed. Observatory corrections are discussed, and Schilt’s point is emphasized: because the causes of these systematic differences between observatories are not known, the computed corrections cannot be applied appropriately. However, modern parallax work is sufficiently accurate that observatory corrections must be determined if full use is to be made of the potential precision of the data. To this end, it is suggested that an experimental design planned in advance is required; past experience has shown that accidental overlap of observing programs will not suffice to determine observatory corrections that are meaningful.
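The point that observatory corrections can only be determined from deliberately planned overlap can be illustrated with a toy least-squares estimate of per-observatory zero-point offsets from stars measured by more than one observatory. The measurements, number of observatories, and choice of reference below are entirely hypothetical.

```python
import numpy as np

# Hypothetical overlap data: (observatory index, star index, measured parallax in mas).
# Estimating per-observatory zero-point corrections requires deliberate overlap
# between observing programs; this sketch assumes such overlap exists.
measurements = [
    (0, 0, 10.2), (0, 1, 5.1), (0, 2, 2.4),
    (1, 0, 10.6), (1, 1, 5.5), (1, 2, 2.9),
    (2, 1, 5.0), (2, 2, 2.3),
]
n_obs, n_stars = 3, 3

# Model: measurement = true parallax of star + zero-point offset of observatory.
# Fix observatory 0 as the reference (offset 0) to make the system identifiable.
rows, rhs = [], []
for obs, star, plx in measurements:
    row = np.zeros(n_stars + n_obs - 1)
    row[star] = 1.0
    if obs > 0:
        row[n_stars + obs - 1] = 1.0
    rows.append(row)
    rhs.append(plx)

solution, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
true_parallaxes, offsets = solution[:n_stars], solution[n_stars:]
print("star parallaxes:", np.round(true_parallaxes, 2))
print("observatory offsets (relative to obs 0):", np.round(offsets, 2))
```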

