Analysis of Type I Error Rates of Univariate and Multivariate Procedures in Repeated Measures Designs

2010 ◽ Vol 39 (3) ◽ pp. 624-640
Author(s): Pablo Livacic-Rojas, Guillermo Vallejo, Paula Fernández


1994 ◽ Vol 19 (1) ◽ pp. 57-71
Author(s): Stephen M. Quintana, Scott E. Maxwell

The purpose of this study was to evaluate seven univariate procedures for testing omnibus null hypotheses for data gathered from repeated measures designs. Five alternate approaches are compared with the two more traditional adjustment procedures (Geisser and Greenhouse’s ε̂ and Huynh and Feldt’s ε̃), neither of which may be entirely adequate when sample sizes are small and the number of levels of the repeated factor is large. Empirical Type I error rates and power levels were obtained by simulation for conditions in which small samples occur in combination with many levels of the repeated factor. Results suggested that the alternate univariate approaches were improvements over the traditional approaches. One alternate approach in particular was found to be most effective in controlling Type I error rates without unduly sacrificing power.
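For readers unfamiliar with the traditional adjustments named above, the sketch below (a generic Python illustration, not taken from the paper) shows how the Greenhouse-Geisser ε̂ and the single-group Huynh-Feldt ε̃ are commonly computed from an n × k data matrix and used to shrink the degrees of freedom of the omnibus F test; the alternate procedures evaluated in the study are not reproduced here.

```python
import numpy as np
from scipy.linalg import qr
from scipy.stats import f

def gg_epsilon(Y):
    """Greenhouse-Geisser epsilon-hat for an n-subjects x k-occasions matrix Y."""
    n, k = Y.shape
    S = np.cov(Y, rowvar=False)              # k x k sample covariance matrix
    Q, _ = qr(np.ones((k, 1)), mode="full")  # first column spans the unit vector
    C = Q[:, 1:].T                           # orthonormal contrasts (k-1 x k)
    V = C @ S @ C.T
    return np.trace(V) ** 2 / ((k - 1) * np.trace(V @ V))

def hf_epsilon(eps_gg, n, k):
    """Huynh-Feldt epsilon-tilde, single-group form, truncated at 1."""
    return min(1.0, (n * (k - 1) * eps_gg - 2) /
                    ((k - 1) * (n - 1 - (k - 1) * eps_gg)))

def adjusted_p(F_obs, eps, n, k):
    """p-value of the within-subject omnibus F test with epsilon-adjusted df."""
    return f.sf(F_obs, eps * (k - 1), eps * (k - 1) * (n - 1))
```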


1994 ◽ Vol 19 (2) ◽ pp. 119-126
Author(s): Ru San Chen, William P. Dunlap

Lecoutre (1991) pointed out an error in the Huynh and Feldt (1976) formula for ε̃, which is used to adjust the degrees of freedom for an approximate test in repeated measures designs with two or more independent groups. The present simulation study confirms that Lecoutre’s corrected ε̃ yields less biased estimation of the population ε and reduces Type I error rates compared with Huynh and Feldt’s (1976) ε̃. The improvement in Type I error accuracy for group-by-treatment interactions may become substantial when sample sizes are close to the number of treatment levels.
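As a point of reference (our addition, hedged against possible notational differences in the original sources), the multi-group formulas at issue are commonly written as follows, with N the total sample size, g the number of groups, and k the number of treatment levels; Lecoutre’s correction replaces N with N − g + 1 in the numerator.

```latex
\tilde{\varepsilon}_{\text{HF, 1976}} =
  \frac{N(k-1)\hat{\varepsilon} - 2}{(k-1)\bigl[N - g - (k-1)\hat{\varepsilon}\bigr]},
\qquad
\tilde{\varepsilon}_{\text{Lecoutre}} =
  \frac{(N-g+1)(k-1)\hat{\varepsilon} - 2}{(k-1)\bigl[N - g - (k-1)\hat{\varepsilon}\bigr]}.
```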


2004 ◽ Vol 3 (3) ◽ pp. 171-186
Author(s): Craig H. Mallinckrodt, Christopher J. Kaiser, John G. Watkin, Michael J. Detke, Geert Molenberghs, ...

2011 ◽ Vol 14 (2) ◽ pp. 1023-1049
Author(s): Miguel A. García-Pérez, Rocío Alcalá-Quintana

Solving theoretical or empirical issues sometimes involves establishing the equality of two variables with repeated measures. This defies the logic of null hypothesis significance testing, which aims at assessing evidence against the null hypothesis of equality, not for it. In some contexts, equivalence is assessed through regression analysis by testing for a zero intercept and unit slope (or simply for a unit slope when regression is forced through the origin). This paper shows that this approach yields highly inflated Type I error rates under the sampling models most commonly implied in studies of equivalence. We propose an alternative approach based on omnibus tests of equality of means and variances and on subject-by-subject analyses (where applicable), and we show that these tests have adequate Type I error rates and power. The approach is illustrated with a re-analysis of published data from a signal detection theory experiment in which several hypotheses of equivalence had been tested using only regression analysis. Some further errors and inadequacies of the original analyses are described, and further scrutiny of the data contradicts the conclusions reached through the inadequate application of regression analysis.
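As a rough illustration of the kind of paired-data checks involved (a minimal Python sketch assuming roughly bivariate-normal data; not the authors’ exact omnibus procedure), equality of means can be examined with a paired t test and equality of variances with the Morgan-Pitman test, which tests for zero correlation between the sums and the differences of the paired scores.

```python
import numpy as np
from scipy import stats

def paired_equality_tests(x, y):
    """Paired t test of equal means and Morgan-Pitman test of equal variances.
    For paired data, Cov(x + y, x - y) = Var(x) - Var(y), so zero correlation
    between sums and differences corresponds to equal variances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    t_mean, p_mean = stats.ttest_rel(x, y)
    r_var, p_var = stats.pearsonr(x + y, x - y)
    return {"means": (t_mean, p_mean), "variances": (r_var, p_var)}

# Example: correlated pairs whose marginals both have mean 0 and variance 1
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 40)
y = 0.8 * x + rng.normal(0.0, 0.6, 40)
print(paired_equality_tests(x, y))
```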


2019 ◽ Vol 3
Author(s): Nicolas Haverkamp, André Beauducel

To derive recommendations on how to analyze longitudinal data, we examined Type I error rates of Multilevel Linear Models (MLM) and repeated measures Analysis of Variance (rANOVA) using SAS and SPSS. We performed a simulation with the following specifications: to explore the effects of a high number of measurement occasions and small sample sizes on Type I error, m = 9 and 12 measurement occasions were investigated, along with sample sizes of n = 15, 20, 25, and 30. Effects of non-sphericity in the population on Type I error were also inspected: 5,000 random samples were drawn from two populations containing neither a within-subject nor a between-group effect. They were analyzed using the most common options to correct rANOVA and MLM results: the Huynh-Feldt correction for rANOVA (rANOVA-HF) and the Kenward-Roger correction for MLM (MLM-KR), which could help correct the progressive bias of MLM with an unstructured covariance matrix (MLM-UN). Moreover, uncorrected rANOVA and MLM assuming a compound symmetry covariance structure (MLM-CS) were also taken into account. The results showed a progressive bias of MLM-UN for small samples, which was stronger in SPSS than in SAS. Moreover, an appropriate bias correction of the Type I error via rANOVA-HF and an insufficient correction by MLM-UN-KR for n < 30 were found. These findings suggest using MLM-CS or rANOVA when sphericity holds and correcting for violations of sphericity via rANOVA-HF. If an analysis requires MLM, SPSS yields more accurate Type I error rates for MLM-CS and SAS yields more accurate Type I error rates for MLM-UN.
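The Monte Carlo logic can be sketched for the simplest case covered above, a single-group within-subject design without the between-group factor or the SAS/SPSS MLM runs (a generic Python illustration under assumed settings n = 15, m = 9, and a non-spherical, null-true population; not the authors’ code): draw samples with no true effect, compute the uncorrected and Huynh-Feldt-corrected rANOVA p-values, and count rejections at α = .05.

```python
import numpy as np
from scipy.linalg import qr
from scipy.stats import f

def rm_anova_pvalues(Y):
    """Uncorrected and Huynh-Feldt-corrected p-values for the within-subject
    effect in a one-group repeated measures ANOVA (Y: n subjects x k occasions)."""
    n, k = Y.shape
    grand = Y.mean()
    occ = Y.mean(axis=0)
    subj = Y.mean(axis=1, keepdims=True)
    ss_occ = n * np.sum((occ - grand) ** 2)
    ss_err = np.sum((Y - subj - occ + grand) ** 2)
    F = (ss_occ / (k - 1)) / (ss_err / ((n - 1) * (k - 1)))
    # Greenhouse-Geisser epsilon-hat, then the Huynh-Feldt epsilon-tilde
    S = np.cov(Y, rowvar=False)
    Q, _ = qr(np.ones((k, 1)), mode="full")
    V = Q[:, 1:].T @ S @ Q[:, 1:]
    eps_gg = np.trace(V) ** 2 / ((k - 1) * np.trace(V @ V))
    eps_hf = min(1.0, (n * (k - 1) * eps_gg - 2) /
                      ((k - 1) * (n - 1 - (k - 1) * eps_gg)))
    p_unc = f.sf(F, k - 1, (n - 1) * (k - 1))
    p_hf = f.sf(F, eps_hf * (k - 1), eps_hf * (n - 1) * (k - 1))
    return p_unc, p_hf

# Null-true, non-spherical population: equal correlations but unequal variances
rng = np.random.default_rng(7)
n, k, reps, alpha = 15, 9, 2000, 0.05
sd = np.linspace(1.0, 3.0, k)
Sigma = 0.5 * np.outer(sd, sd) + np.diag(0.5 * sd ** 2)
rej = np.zeros(2)
for _ in range(reps):
    Y = rng.multivariate_normal(np.zeros(k), Sigma, size=n)
    rej += np.array(rm_anova_pvalues(Y)) < alpha
print("empirical Type I error (uncorrected, HF):", rej / reps)
```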

