Monte Carlo tests and randomization.

Author(s):  
Donald Quicke ◽  
Buntika A. Butcher ◽  
Rachel Kruft Welton

Abstract: This chapter focuses on Monte Carlo tests and randomization. The approach involves randomizing the observed values many times and comparing the randomized results with the original observed data. The chapter also shows how randomization can be used in experimental design and sampling.
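The procedure the abstract describes can be sketched as a simple permutation test; the data and the number of randomizations below are illustrative, not from the chapter.

```python
import numpy as np

# Randomization (permutation) test: is the observed difference in group
# means larger than expected under random relabeling of the data?
rng = np.random.default_rng(42)

group_a = np.array([12.1, 13.4, 11.8, 14.2, 12.9])
group_b = np.array([10.2, 11.1, 10.8, 11.9, 10.5])
observed = group_a.mean() - group_b.mean()

pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
n_rand = 9999

count = 0
for _ in range(n_rand):
    rng.shuffle(pooled)                              # random relabeling
    diff = pooled[:n_a].mean() - pooled[n_a:].mean()
    if abs(diff) >= abs(observed):
        count += 1

# p-value: proportion of randomizations at least as extreme as observed
# (+1 in numerator and denominator counts the observed arrangement itself)
p_value = (count + 1) / (n_rand + 1)
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")
```

A small p-value indicates the observed difference is unlikely to arise from random relabeling alone.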

Author(s):  
Michael Laver ◽  
Ernest Sergenti

This chapter develops the methods for designing, executing, and analyzing large suites of computer simulations that generate stable and replicable results. It starts with a discussion of the different methods of experimental design, such as grid sweeping and Monte Carlo parameterization. Next, it demonstrates how to calculate mean estimates of output variables of interest. It does so by first discussing stochastic processes, Markov chain representations, and model burn-in. It focuses on three stochastic process representations: nonergodic deterministic processes that converge on a single state; nondeterministic stochastic processes for which a time average provides a representative estimate of the output variables; and nondeterministic stochastic processes for which a time average does not provide a representative estimate of the output variables. The estimation strategy employed depends on which stochastic process the simulation follows. Finally, the chapter presents a set of diagnostic checks used to establish an appropriate sample size for the estimation of the means.
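The burn-in and time-averaging idea for the second class of processes can be sketched with a toy ergodic chain; the AR(1) model and parameters below are illustrative stand-ins, not the authors' agent-based simulations.

```python
import numpy as np

# Burn-in and time-averaging for an ergodic stochastic process.  The
# chain is deliberately started far from equilibrium; early iterations
# are discarded as burn-in, and the mean of the remaining draws
# estimates the stationary mean, which for an AR(1) is c / (1 - phi).
rng = np.random.default_rng(0)

c, phi, sigma = 2.0, 0.8, 1.0   # illustrative parameters
true_mean = c / (1 - phi)        # stationary mean = 10.0

x = 100.0                        # deliberately bad starting state
draws = []
for t in range(20_000):
    x = c + phi * x + rng.normal(scale=sigma)
    draws.append(x)

burn_in = 1_000                  # discard pre-convergence iterations
time_average = np.mean(draws[burn_in:])
print(f"true mean = {true_mean:.2f}, time average = {time_average:.2f}")
```

For the third class of processes, where a single run's time average is not representative, one would instead average across many independent runs.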


1992 ◽  
Vol 19 (3) ◽  
pp. 188-189 ◽  
Author(s):  
John F. Walsh

Courses in statistics and experimental design can be enhanced through use of crafted data sets. The use of examples highlights the interface between data and statistical routines. FORTRAN programs utilizing the International Mathematical and Statistical Library subroutines permit the user to control the variance-covariance structure of multivariate normal variables and build data sets that have instructional value. Scale transformations and Monte Carlo simulations of the data can be performed as well.
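The article's FORTRAN/IMSL routines are not reproduced here, but a modern analogue of the same idea, generating instructional data with a user-specified variance-covariance structure, can be sketched with NumPy; the covariance matrix below is illustrative.

```python
import numpy as np

# Generate multivariate normal data with a controlled
# variance-covariance structure, as in the article's crafted data sets.
rng = np.random.default_rng(1)

cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.5],
                [0.3, 0.5, 1.0]])   # target covariance structure
mean = np.zeros(3)

data = rng.multivariate_normal(mean, cov, size=5_000)

# The sample covariance should closely match the target structure
sample_cov = np.cov(data, rowvar=False)
print(np.round(sample_cov, 2))
```

Scale transformations (e.g., rescaling each column to a desired mean and standard deviation) can then be applied to the generated columns directly.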


Author(s):  
Russell S. Vaught

Random assignment to treatment is not always possible in evaluative research. The semi-experimental design discussed here has aspects of both full experimental and quasi-experimental designs. Monte Carlo studies are used to explore and exemplify the strengths and weaknesses of the design. The analysis suggested is found to give unbiased estimates of treatment effects and error mean squares but biased estimates of assignment effects. Some further aspects of the design and its use are discussed.


2020 ◽  
Author(s):  
Kristen A. McLaurin ◽  
Amanda J. Fairchild ◽  
Dexin Shi ◽  
Rosemarie M. Booze ◽  
Charles F. Mactutus

Abstract: The translation of preclinical studies to human applications is associated with a high failure rate, which may be exacerbated by limited training in experimental design and statistical analysis. Nested experimental designs, which occur when data have a multilevel structure (e.g., in vitro: cells within a culture dish; in vivo: rats within a litter), often violate the independent observation assumption underlying many traditional statistical techniques. Although previous studies have empirically evaluated the analytic challenges associated with multilevel data, existing work has not focused on key parameters and design components typically observed in preclinical research. To address this knowledge gap, a Monte Carlo simulation study was conducted to systematically assess the effects of inappropriately modeling multilevel data via a fixed effects ANOVA in studies with sparse observations, no between-group comparison within a single cluster, and interactive effects. Simulation results revealed a dramatic increase in the probability of Type I error and relative bias of the standard error as the number of level-1 (e.g., cells; rats) units per cell increased in the fixed effects ANOVA; these effects were largely attenuated when the nesting was appropriately accounted for via a random effects ANOVA. Thus, failure to account for a nested experimental design may lead to reproducibility challenges and inaccurate conclusions. Appropriately accounting for multilevel data, however, may enhance statistical reliability, thereby leading to improvements in translatability. Valid analytic strategies are provided for a variety of design scenarios.
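The Type I error inflation described in the abstract can be sketched with a small simulation; the cluster counts, effect sizes, and the use of t-tests below are illustrative assumptions, not the authors' actual design, which used fixed versus random effects ANOVA.

```python
import numpy as np
from scipy import stats

# Data are nested (rats within litters) and there is NO true treatment
# effect, yet an analysis that ignores litter treats every rat as
# independent and rejects far too often.
rng = np.random.default_rng(7)

n_sims, n_litters, rats_per_litter = 2_000, 4, 10
litter_sd, rat_sd = 1.0, 1.0   # sizeable litter (cluster) effect

naive_rejects = aggregated_rejects = 0
for _ in range(n_sims):
    groups = []
    for _ in range(2):          # two treatment groups, identical in truth
        litter_effects = rng.normal(0, litter_sd, n_litters)
        y = (litter_effects[:, None]
             + rng.normal(0, rat_sd, (n_litters, rats_per_litter)))
        groups.append(y)
    # Naive: pool all rats, ignoring the nesting
    _, p_naive = stats.ttest_ind(groups[0].ravel(), groups[1].ravel())
    # Valid alternative: analyze litter means (one value per cluster)
    _, p_agg = stats.ttest_ind(groups[0].mean(axis=1),
                               groups[1].mean(axis=1))
    naive_rejects += p_naive < 0.05
    aggregated_rejects += p_agg < 0.05

print(f"naive Type I error:       {naive_rejects / n_sims:.3f}")
print(f"litter-mean Type I error: {aggregated_rejects / n_sims:.3f}")
```

The naive analysis rejects the true null far more often than the nominal 5%, while analyzing at the cluster level holds the error rate near nominal, mirroring the attenuation the abstract reports for the random effects ANOVA.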

