A Monte Carlo Study on the Performance of a Corrected Formula for ɛ̃ Suggested by Lecoutre

1994, Vol 19(2), pp. 119-126
Author(s): Ru San Chen, William P. Dunlap

Lecoutre (1991) has pointed out an error in the Huynh and Feldt (1976) formula for ɛ̃, which is used to adjust the degrees of freedom for an approximate test in repeated measures designs with two or more independent groups. The present simulation study confirms that Lecoutre's corrected ɛ̃ yields less biased estimates of the population ε and reduces Type I error rates compared with Huynh and Feldt's (1976) ɛ̃. The gain in accuracy of Type I error rates for group-by-treatment interactions can become substantial when sample sizes are close to the number of treatment levels.
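For reference, the competing formulas are easy to state in code. The sketch below is a minimal illustration, assuming eps_hat is the Greenhouse-Geisser ε̂ estimate, N the total number of subjects, g the number of independent groups, and k the number of repeated-measures levels; Lecoutre's correction replaces N with N − g + 1 in the numerator.

```python
def hf_epsilon(eps_hat, n_total, n_groups, k):
    """Huynh and Feldt's (1976) epsilon-tilde as originally printed."""
    num = n_total * (k - 1) * eps_hat - 2
    den = (k - 1) * (n_total - n_groups - (k - 1) * eps_hat)
    return min(num / den, 1.0)  # truncated at 1 by convention

def lecoutre_epsilon(eps_hat, n_total, n_groups, k):
    """Lecoutre's (1991) corrected epsilon-tilde: N is replaced by
    N - g + 1 in the numerator; the denominator is unchanged."""
    num = (n_total - n_groups + 1) * (k - 1) * eps_hat - 2
    den = (k - 1) * (n_total - n_groups - (k - 1) * eps_hat)
    return min(num / den, 1.0)
```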

1994, Vol 19(1), pp. 57-71
Author(s): Stephen M. Quintana, Scott E. Maxwell

The purpose of this study was to evaluate seven univariate procedures for testing omnibus null hypotheses for data gathered from repeated measures designs. Five alternate approaches are compared to the two more traditional adjustment procedures (Geisser and Greenhouse's ε̂ and Huynh and Feldt's ε̃), neither of which may be entirely adequate when sample sizes are small and the number of levels of the repeated factor is large. Empirical Type I error rates and power levels were obtained by simulation for conditions where small samples occur in combination with many levels of the repeated factor. Results suggested that the alternate univariate approaches were improvements over the traditional approaches. One alternate approach in particular was found to be most effective in controlling Type I error rates without unduly sacrificing power.
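Both traditional adjustments are driven by the Greenhouse-Geisser ε̂, which can be computed directly from the sample covariance matrix of the repeated measures. The sketch below shows one standard way to do so (illustrative, not the authors' code):

```python
import numpy as np

def gg_epsilon(data):
    """Greenhouse-Geisser epsilon-hat from an (n_subjects x k) data
    matrix: eps = tr(Sc)^2 / ((k - 1) * tr(Sc @ Sc)), where Sc is
    the double-centered sample covariance matrix."""
    n, k = data.shape
    s = np.cov(data, rowvar=False)       # k x k covariance matrix
    c = np.eye(k) - np.ones((k, k)) / k  # centering matrix
    sc = c @ s @ c
    return np.trace(sc) ** 2 / ((k - 1) * np.trace(sc @ sc))
```

The adjusted univariate test then refers the usual F statistic to an F distribution with (k − 1)ε̂ and (n − 1)(k − 1)ε̂ degrees of freedom.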


Stats, 2019, Vol 2(2), pp. 174-188
Author(s): Yoshifumi Ukyo, Hisashi Noma, Kazushi Maruo, Masahiko Gosho

The mixed-effects model for repeated measures (MMRM) approach has been widely applied in longitudinal clinical trials. Many of the standard inference methods for MMRM can inflate Type I error rates in tests of the treatment effect when the longitudinal dataset is small and involves missing measurements. We propose two improved inference methods for MMRM analyses: (1) a Bartlett correction with the adjustment term approximated by the bootstrap, and (2) a Monte Carlo test using a null distribution estimated by the bootstrap. These methods can be implemented regardless of model complexity and missing-data patterns via a unified computational framework. In simulation studies, the proposed methods maintained the Type I error rate properly, even in small and incomplete longitudinal clinical trial settings. An application to a postnatal depression clinical trial is also presented.
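Once bootstrap replicates of the likelihood-ratio statistic are available, both proposals reduce to simple post-processing. The sketch below is a generic illustration, not the authors' implementation: the MMRM fitting and the resampling of null datasets are assumed to be supplied by a mixed-model routine elsewhere.

```python
import numpy as np
from scipy import stats

def bartlett_corrected_pvalue(t_obs, df, boot_stats):
    """Bartlett-type correction: rescale the observed LR statistic so
    that its mean matches the chi-square reference, using bootstrap
    replicates of the statistic generated under the null."""
    t_corr = t_obs * df / np.mean(boot_stats)
    return stats.chi2.sf(t_corr, df)

def monte_carlo_pvalue(t_obs, boot_stats):
    """Monte Carlo test: p-value taken from the bootstrap-estimated
    null distribution of the statistic itself."""
    boot_stats = np.asarray(boot_stats)
    return (1 + np.sum(boot_stats >= t_obs)) / (1 + boot_stats.size)
```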


1980, Vol 5(3), pp. 269-287
Author(s): Scott E. Maxwell

Five methods of performing pairwise multiple comparisons in repeated measures designs were investigated. Tukey's Wholly Significant Difference (WSD) test, recommended by most experimental design texts, requires that all differences between pairs of means have a common variance. However, this assumption is equivalent to the sphericity condition that is necessary and sufficient for the validity of the mixed-model approach to the omnibus test. Monte Carlo methods revealed that Tukey's WSD leads to an inflated alpha level when the sphericity assumption is not met. Consideration of both the Type I and Type II error rates observed across the simulated conditions suggests that a Bonferroni method utilizing a separate error term for each comparison should be employed.
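The recommended procedure is simple to carry out: each pairwise comparison gets its own paired-samples error term, and the familywise alpha is divided across comparisons. A minimal scipy sketch (illustrative, not the paper's code):

```python
from itertools import combinations
from scipy import stats

def bonferroni_pairwise(data, alpha=0.05):
    """Pairwise repeated-measures comparisons with a separate error
    term per comparison (paired t-tests) at a Bonferroni-adjusted
    alpha.  data: (n_subjects x k) array of repeated measures."""
    k = data.shape[1]
    pairs = list(combinations(range(k), 2))
    alpha_pc = alpha / len(pairs)  # per-comparison alpha
    results = []
    for i, j in pairs:
        t, p = stats.ttest_rel(data[:, i], data[:, j])
        results.append((i, j, t, p, p < alpha_pc))
    return results
```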


1988, Vol 13(3), pp. 215-226
Author(s): H. J. Keselman, Joanne C. Keselman

Two Tukey multiple comparison procedures, as well as a Bonferroni and a multivariate approach, were compared for their rates of Type I error and any-pairs power when multisample sphericity was not satisfied and the design was unbalanced. Pairwise comparisons of unweighted and weighted repeated measures means were computed. Results indicated that heterogeneous covariance matrices in combination with unequal group sizes resulted in substantially inflated rates of Type I error for all MCPs involving comparisons of unweighted means. For tests of weighted means, both the Bonferroni and the multivariate critical value limited the number of Type I errors; however, the Bonferroni procedure provided a more powerful test, particularly when the number of repeated measures treatment levels was large.
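The weighted/unweighted distinction only arises when group sizes differ; a small numpy sketch of the two kinds of marginal means (illustrative only):

```python
import numpy as np

def marginal_means(groups):
    """Weighted vs. unweighted repeated-measures means.
    groups: list of (n_g x k) arrays, one per independent group.
    Weighted means pool all subjects (so larger groups count more);
    unweighted means average the group means, ignoring group size."""
    group_means = np.array([g.mean(axis=0) for g in groups])  # g x k
    sizes = np.array([g.shape[0] for g in groups])
    weighted = (group_means * sizes[:, None]).sum(axis=0) / sizes.sum()
    unweighted = group_means.mean(axis=0)
    return weighted, unweighted
```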


1997, Vol 85(1), pp. 193-194
Author(s): Peter Hassmén

Violation of the sphericity assumption in repeated-measures analysis of variance can lead to positively biased tests, i.e., the likelihood of a Type I error exceeds the alpha level set by the user. Two widely applicable solutions exist: an epsilon-corrected univariate analysis of variance, or a multivariate analysis of variance. It is argued that the latter method offers advantages over the former.
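The multivariate alternative sidesteps sphericity by treating the k − 1 difference scores as a vector outcome and testing whether their mean vector is zero with Hotelling's T². One common way to carry out the test (a sketch, not code from the note):

```python
import numpy as np
from scipy import stats

def multivariate_rm_test(data):
    """Hotelling T^2 test of equal repeated-measures means.
    data: (n x k) array; requires n > k - 1 subjects."""
    n, k = data.shape
    d = data[:, 1:] - data[:, :-1]  # n x (k-1) difference scores
    dbar = d.mean(axis=0)
    s = np.atleast_2d(np.cov(d, rowvar=False))
    t2 = n * dbar @ np.linalg.solve(s, dbar)
    # exact F conversion for Hotelling's T^2
    f = t2 * (n - k + 1) / ((n - 1) * (k - 1))
    return f, stats.f.sf(f, k - 1, n - k + 1)
```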


1981, Vol 48(1), pp. 19-22
Author(s): James D. Church, Edward L. Wike

A Monte Carlo study was done to find the Type I error rates of three nonparametric procedures for making k − 1 many-one comparisons in a one-way design. The tests were the Silverstein and Steel many-one rank tests and the two-sample Wilcoxon rank-sum test. k = 3, 5, 7, and 10 treatments were crossed with n = 7, 10, and 15 replicates, with 1,000 simulations per (k, n) combination. Analyses of four Type I error rates showed that: (1) the Wilcoxon test had the best comparisonwise error rates; (2) none of the tests functioned well as protected tests; and (3) the Silverstein test had the best experimentwise error rates and was the recommended procedure for many-one tests in a one-way layout.
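Of the three procedures, the Wilcoxon rank-sum comparisons are the easiest to reproduce, since scipy implements the two-sample rank-sum test directly; the Silverstein and Steel procedures rely on joint many-one rankings and are not shown. A minimal sketch:

```python
from scipy import stats

def manyone_wilcoxon(control, treatments):
    """k - 1 many-one comparisons: each treatment group vs. the
    control group via the two-sample Wilcoxon rank-sum test.
    Returns one (statistic, p-value) pair per treatment."""
    return [stats.ranksums(t, control) for t in treatments]
```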


2016, Vol 77(1), pp. 104-118
Author(s): Mengyang Cao, Louis Tay, Yaowu Liu

This study examined the performance of a proposed iterative Wald approach for detecting differential item functioning (DIF) between two groups when preknowledge of anchor items is absent. The iterative approach uses the Wald-2 approach to identify anchor items and then iteratively tests for DIF items with the Wald-1 approach. A Monte Carlo simulation was conducted across several conditions, including the number of response options, test length, sample size, percentage of DIF items, DIF effect size, and type of cumulative DIF. Results indicated that the iterative approach performed well for polytomous data in all conditions, with well-controlled Type I error rates and high power. For dichotomous data, the iterative approach also exhibited better control over Type I error rates than the Wald-2 approach without sacrificing power to detect DIF. However, inflated Type I error rates were found for the iterative approach in conditions with dichotomous data, noncompensatory DIF, a large percentage of DIF items, and medium to large DIF effect sizes. Nevertheless, the Type I error rates in those conditions were substantially less inflated than for the Wald-2 approach.
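The iterative logic itself is compact, whatever IRT engine supplies the model fits. In the sketch below, wald2_scan and wald1_test are hypothetical stand-ins for such an engine's Wald-2 and Wald-1 tests (they are not a real library API); the loop structure follows the description above.

```python
def iterative_wald_dif(n_items, wald2_scan, wald1_test, max_iter=10):
    """Iterative Wald DIF detection (sketch).  wald2_scan() returns
    item indices flagged by an initial Wald-2 scan with all items
    constrained; wald1_test(anchors) re-fits using the given anchor
    items and returns indices flagged by Wald-1.  Both callables are
    hypothetical placeholders for an IRT routine."""
    flagged = set(wald2_scan())  # initial DIF candidates
    for _ in range(max_iter):
        anchors = [i for i in range(n_items) if i not in flagged]
        new_flagged = set(wald1_test(anchors))
        if new_flagged == flagged:  # flagged set has stabilized
            break
        flagged = new_flagged
    return sorted(flagged)
```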


2004, Vol 3(3), pp. 171-186
Author(s): Craig H. Mallinckrodt, Christopher J. Kaiser, John G. Watkin, Michael J. Detke, Geert Molenberghs, ...
