A Monte Carlo Comparison of Seven ε-Adjustment Procedures in Repeated Measures Designs with Small Sample Sizes

1994 ◽  
Vol 19 (1) ◽  
pp. 57-71 ◽  
Author(s):  
Stephen M. Quintana ◽  
Scott E. Maxwell

The purpose of this study was to evaluate seven univariate procedures for testing omnibus null hypotheses for data gathered from repeated measures designs. Five alternate approaches are compared to the two more traditional adjustment procedures (Geisser and Greenhouse’s ε̂ and Huynh and Feldt’s ε̃), neither of which may be entirely adequate when sample sizes are small and the number of levels of the repeated factor is large. Empirical Type I error rates and power levels were obtained by simulation for conditions in which small samples occur in combination with many levels of the repeated factor. Results suggested that the alternate univariate approaches were improvements over the traditional approaches. One alternate approach in particular was found to be most effective in controlling Type I error rates without unduly sacrificing power.
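The ε adjustments discussed here all descend from Box's ε, which measures how far a covariance matrix departs from sphericity and is used to deflate the F test's degrees of freedom. A minimal sketch of the ε̂ computation, assuming a known k × k covariance matrix of the repeated measures (the function names are illustrative, not from the paper):

```python
import numpy as np

def orthonormal_contrasts(k):
    """Orthonormal basis (k-1 rows) of the space orthogonal to the ones vector."""
    A = np.eye(k) - np.ones((k, k)) / k   # centering matrix, rank k-1
    vals, vecs = np.linalg.eigh(A)        # eigenvalues are 0 (once) and 1 (k-1 times)
    return vecs[:, vals > 0.5].T          # keep the k-1 contrast directions

def gg_epsilon(S):
    """Box's epsilon for a k x k covariance matrix S of the repeated measures."""
    k = S.shape[0]
    C = orthonormal_contrasts(k)
    M = C @ S @ C.T                       # covariance of the orthonormal contrasts
    # eps = (tr M)^2 / ((k-1) tr M^2); bounded by 1/(k-1) <= eps <= 1
    return np.trace(M) ** 2 / ((k - 1) * np.trace(M @ M))
```

Under exact sphericity (e.g. `S = np.eye(k)`) ε equals 1 and no correction is applied; the adjusted test refers the usual F statistic to ε(k − 1) and ε(k − 1)(n − 1) degrees of freedom, which is where the small-sample, many-levels difficulties examined in the study arise.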


2020 ◽  
pp. 096228022097022 ◽  
Author(s):  
Frank Konietschke ◽  
Karima Schwab ◽  
Markus Pauly

In many experiments, and especially in translational and preclinical research, sample sizes are (very) small. In addition, data designs are often high dimensional, i.e. more dependent replications (measurements per subject) than independent replications (subjects) are observed. The present paper discusses the applicability of max t-test-type statistics (multiple contrast tests) in high-dimensional designs (repeated measures or multivariate) with small sample sizes. A randomization-based approach is developed to approximate the distribution of the maximum statistic. Extensive simulation studies confirm that the new method is particularly suitable for analyzing data sets with small sample sizes. A real data set illustrates the application of the methods.
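A generic randomization scheme of this kind can be sketched with sign flipping: under the null of a zero mean vector, each subject's whole vector of contrasts can have its sign flipped, and the max |t| over coordinates is recomputed to build a reference distribution. This is only an illustrative sketch of the general idea, not the authors' exact procedure, and the function name is hypothetical:

```python
import numpy as np

def max_t_sign_flip(X, n_perm=2000, seed=0):
    """Randomization p-value for H0: mean vector = 0, via the max |t| statistic.

    X: (n, d) array of within-subject contrasts/differences; d may exceed n.
    """
    rng = np.random.default_rng(seed)
    n, _ = X.shape

    def max_abs_t(Y):
        m = Y.mean(axis=0)
        se = Y.std(axis=0, ddof=1) / np.sqrt(n)
        return np.max(np.abs(m / se))

    obs = max_abs_t(X)
    null = np.empty(n_perm)
    for b in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n, 1))  # flip whole subject vectors
        null[b] = max_abs_t(signs * X)
    # add-one correction keeps the p-value strictly positive
    return obs, (1 + np.sum(null >= obs)) / (1 + n_perm)
```

Because the resampling conditions on the observed data, the approximation does not require estimating the full d × d covariance matrix, which is what makes max-type statistics attractive when d is large relative to n.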


2002 ◽  
Vol 95 (3) ◽  
pp. 837-842 ◽  
Author(s):  
M. T. Bradley ◽  
D. Smith ◽  
G. Stoica

A Monte Carlo study was done with true effect sizes in deviation units ranging from 0 to 2 and a variety of sample sizes. The purpose was to assess the amount of bias created by considering only effect sizes that passed a statistical cut-off criterion of α = .05. The deviation values obtained at the .05 level, jointly determined by the set of effect sizes and sample sizes, are presented. This table is useful when summarizing sets of studies to judge whether published results reflect an accurate appraisal of an underlying effect or the distorted estimate expected when significant studies are published and nonsignificant results are not. The table shows that the magnitudes of error are substantial with small sample sizes and inherently small effect sizes. Thus, reviews based on the published literature could be misleading, especially if true effect sizes were close to zero. A researcher should be particularly cautious of large effect sizes obtained from small samples when larger samples indicate diminishing, smaller effects.
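The truncation mechanism behind this bias is easy to reproduce. The sketch below, a simplified stand-in for the study's design (one-sample studies with unit variance and a two-sided z-test rather than the authors' exact setup), simulates many studies and averages the effect-size estimates only among those that clear the α = .05 filter:

```python
import numpy as np

def mean_published_effect(true_d, n, n_studies=20000, z_crit=1.96, seed=0):
    """Mean effect-size estimate among 'published' (two-sided significant) studies.

    One-sample design with unit variance, so d_hat is the sample mean
    and the test statistic is z = d_hat * sqrt(n).
    """
    rng = np.random.default_rng(seed)
    d_hat = true_d + rng.standard_normal((n_studies, n)).mean(axis=1)
    published = d_hat[np.abs(d_hat) * np.sqrt(n) > z_crit]  # alpha = .05 filter
    return published.mean()
```

With a true effect of 0.2 deviation units, the published mean is substantially inflated at n = 10 but stays close to 0.2 at n = 200: only estimates large enough to reach significance survive the filter, and with small n that threshold sits far above the true effect, which is exactly the pattern the study's table quantifies.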

