Simple F Tests for Main Effects and Interactions in 2 × 2 × k Contingency Tables

1982 ◽  
Vol 51 (3) ◽  
pp. 683-692
Author(s):  
John E. Overall ◽  
Stephen J. O'Keefe ◽  
Robert R. Starbuck

A method of controlling for the effects of a nuisance variable in testing the significance of treatment effects on a discrete binary response is described. Proportions of “success” responses in two treatment groups are standardized relative to an estimate of the sampling variance at each level of the concomitant variable, and an unweighted-means analysis of variance is used to test the main effect for treatments and the interaction of treatments × levels. Exact calculations and Monte Carlo results are presented which show the proposed F tests to have actual Type I error probabilities that are closer to the nominal alpha level than is true for alternative tests. The actual Type I error rates are less seriously affected by differences in marginal probabilities of “success” and “failure” responses than is true for other tests, and in the face of small cell frequencies the standardized-means analysis of variance appears to have substantially greater power than the other tests most commonly used with 2 × 2 × k contingency tables.
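
As a rough illustration of the idea only (not the paper's exact formulas), the sketch below standardizes each treatment proportion at each level of the concomitant variable by a pooled variance estimate for that level and then runs an unweighted-means two-factor analysis on the standardized values. The standardization form, the harmonic-mean error term, and the example counts are all assumptions made for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical 2 (treatments) x 2 (success/failure) x k (levels) data:
# successes[j, i] and n[j, i] for treatment j at level i of the nuisance variable.
successes = np.array([[18, 12,  9,  5],
                      [11,  8,  6,  3]])
n = np.array([[30, 22, 15, 10],
              [28, 20, 14,  9]])
k = n.shape[1]

p = successes / n                                  # observed success proportions
p_bar = successes.sum(axis=0) / n.sum(axis=0)      # pooled proportion at each level
z = p / np.sqrt(p_bar * (1 - p_bar))               # standardized proportions (assumed form)

# Unweighted-means two-factor layout on the standardized cell values
grand = z.mean()
trt_means = z.mean(axis=1)                         # marginal means over levels
lev_means = z.mean(axis=0)                         # marginal means over treatments
ss_trt = k * np.sum((trt_means - grand) ** 2)
ms_trt = ss_trt / 1.0                              # treatment df = 1
ss_int = np.sum((z - trt_means[:, None] - lev_means[None, :] + grand) ** 2)
ms_int = ss_int / (k - 1)                          # interaction df = k - 1

# Error mean square: a standardized proportion has variance roughly 1/n_cell,
# approximated here with the harmonic mean cell size (unweighted-means convention).
n_tilde = z.size / np.sum(1.0 / n)
ms_err = 1.0 / n_tilde
df_err = n.sum() - z.size

F_trt, F_int = ms_trt / ms_err, ms_int / ms_err
print("treatment F:", F_trt, "p =", stats.f.sf(F_trt, 1, df_err))
print("interaction F:", F_int, "p =", stats.f.sf(F_int, k - 1, df_err))
```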

1994 ◽  
Vol 19 (2) ◽  
pp. 91-101 ◽  
Author(s):  
Ralph A. Alexander ◽  
Diane M. Govern

A new approximation is proposed for testing the equality of k independent means in the face of heterogeneity of variance. Monte Carlo simulations show that the new procedure has Type I error rates that are very nearly nominal and Type II error rates that are quite close to those produced by James’s (1951) second-order approximation. In addition, it is computationally the simplest approximation yet to appear, and it is easily applied to Scheffé (1959)-type multiple contrasts and to the calculation of approximate tail probabilities.
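
The statistic itself is compact enough to sketch. The Python function below follows the published recipe as commonly described: weights from inverse squared standard errors, a one-sample t statistic per group against the weighted grand mean, Hill's normalizing transformation of each t, and a chi-square reference distribution with k − 1 degrees of freedom. The function name and example data are illustrative only.

```python
import numpy as np
from scipy import stats

def alexander_govern(*groups):
    """Alexander-Govern (1994) test for equality of k means under
    unequal variances.  Returns the chi-square statistic and p value."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n = np.array([len(g) for g in groups])
    m = np.array([g.mean() for g in groups])
    se = np.array([g.std(ddof=1) / np.sqrt(len(g)) for g in groups])

    # Weights proportional to inverse squared standard errors
    w = (1.0 / se**2) / np.sum(1.0 / se**2)
    m_plus = np.sum(w * m)              # variance-weighted grand mean

    t = (m - m_plus) / se               # one t statistic per group
    nu = n - 1.0                        # degrees of freedom per group

    # Hill's normalizing transformation of each t statistic
    a = nu - 0.5
    b = 48.0 * a**2
    c = np.sqrt(a * np.log1p(t**2 / nu))
    z = (c + (c**3 + 3*c) / b
           - (4*c**7 + 33*c**5 + 240*c**3 + 855*c)
             / (10*b**2 + 8*b*c**4 + 1000*b))

    A = np.sum(z**2)                    # approx. chi-square with k-1 df under H0
    p = stats.chi2.sf(A, k - 1)
    return A, p

# Example: three heteroscedastic samples
rng = np.random.default_rng(1)
g1 = rng.normal(0, 1, 15)
g2 = rng.normal(0, 4, 10)
g3 = rng.normal(0, 9, 25)
print(alexander_govern(g1, g2, g3))
```

Recent versions of SciPy ship a comparable implementation as scipy.stats.alexandergovern, which can serve as a cross-check for the sketch above.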


2019 ◽  
Author(s):  
Axel Mayer ◽  
Felix Thoemmes

The analysis of variance (ANOVA) is still one of the most widely used statistical methods in the social sciences. This paper is about stochastic group weights in ANOVA models, an aspect neglected in the literature. Stochastic group weights are present whenever the experimenter does not determine the exact group sizes before conducting the experiment. We show that classic ANOVA tests based on estimated marginal means can have an inflated Type I error rate when stochastic group weights are not taken into account, even in randomized experiments. We propose two new ways to incorporate stochastic group weights into tests of average effects: one based on the general linear model and one based on multigroup structural equation models (SEMs). We show in simulation studies that our methods have nominal Type I error rates in experiments with stochastic group weights, while classic approaches show an inflated Type I error rate. The SEM approach can additionally handle heteroscedastic residual variances and latent variables. An easy-to-use software package with a graphical user interface is provided.


2020 ◽  
Vol 29 (9) ◽  
pp. 2569-2582
Author(s):  
Miguel A García-Pérez ◽  
Vicente Núñez-Antón

Controversy over the validity of significance tests in the analysis of contingency tables is motivated by the disagreement between asymptotic and exact p values and its dependence on the magnitude of expected frequencies. Variants of Pearson’s X2 statistic and their asymptotic distributions have been proposed to overcome these difficulties, and several approaches also exist for conducting exact tests. This paper shows that discrepancies between asymptotic and exact results can arise whether expected frequencies are large or small: eventual inaccuracy of asymptotic p values is instead caused by idiosyncrasies of the discrete distribution of X2. More importantly, discrepancies are also artificially created by the hypergeometric sampling model used to perform exact tests. Exact computations under the alternative full-multinomial or product-multinomial models require eliminating nuisance parameters, and we propose a novel method that integrates them out. The resulting exact distributions are approximated very accurately by the asymptotic distribution, which eliminates concerns about the accuracy of the latter. We also argue that the two-stage approach that tests for significance of residuals conditional on a significant X2 test is inadvisable, and that an alternative single-stage test preserves Type I error rates and further eliminates concerns about asymptotic accuracy.
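
For readers who want to see the kind of disagreement at issue, the short sketch below contrasts the asymptotic Pearson X² p value with the exact p value from the conditional (hypergeometric) model for a small table. It only illustrates the discrepancy being analyzed; it does not implement the authors' proposal of integrating out nuisance parameters under multinomial sampling, and the example counts are arbitrary.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# A small 2x2 table with modest expected frequencies
table = np.array([[3, 9],
                  [7, 4]])

# Asymptotic Pearson X^2 (no continuity correction)
x2, p_asym, dof, expected = chi2_contingency(table, correction=False)

# Exact p value under the conditional (hypergeometric) sampling model
_, p_exact = fisher_exact(table)

print(f"X^2 = {x2:.3f}, asymptotic p = {p_asym:.4f}, exact (hypergeometric) p = {p_exact:.4f}")
print("smallest expected frequency:", expected.min().round(2))
```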


Author(s):  
Abdullah A. Ameen ◽  
Osama H. Abbas

The classical Wilks' statistic is most often used to test hypotheses in one-way multivariate analysis of variance (MANOVA), but it is highly sensitive to outliers. The non-robustness of test statistics based on normal theory has led many authors to examine alternatives. In this paper, we present a robust version of the Wilks' statistic and construct its approximate distribution. The proposed statistic is compared with several existing Wilks' statistics, using Monte Carlo studies to assess the performance of the test statistics on different data sets; Type I error rates and power are used as the criteria for comparison. The study reveals that, under normality, the Type I error rates of the classical and the proposed Wilks' statistics are close to the nominal significance levels and their powers are very similar, whereas under contaminated distributions the proposed statistic performs best.
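
A minimal sketch of the classical statistic follows, with one common robustification (MCD location and scatter from scikit-learn in place of sample means and covariances) swapped in for comparison. The robust variant shown here is an illustration, not necessarily the statistic proposed in the paper, and Bartlett's chi-square approximation is used in place of the authors' approximate distribution.

```python
import numpy as np
from scipy import stats
from sklearn.covariance import MinCovDet

def wilks_lambda(groups, robust=False):
    """One-way MANOVA Wilks' lambda with Bartlett's chi-square approximation.
    If robust=True, group locations/scatters come from the MCD estimator
    (one common robustification; not necessarily the statistic in the paper)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    p = groups[0].shape[1]
    n = np.array([g.shape[0] for g in groups])
    N, g = n.sum(), len(groups)

    if robust:
        fits = [MinCovDet(random_state=0).fit(x) for x in groups]
        locs = np.array([f.location_ for f in fits])
        scats = [f.covariance_ for f in fits]
    else:
        locs = np.array([x.mean(axis=0) for x in groups])
        scats = [np.cov(x, rowvar=False) for x in groups]

    grand = np.average(locs, axis=0, weights=n)
    H = sum(nj * np.outer(lj - grand, lj - grand) for nj, lj in zip(n, locs))
    E = sum((nj - 1) * Sj for nj, Sj in zip(n, scats))

    lam = np.linalg.det(E) / np.linalg.det(E + H)
    chi2 = -(N - 1 - (p + g) / 2.0) * np.log(lam)   # Bartlett's approximation
    df = p * (g - 1)
    return lam, stats.chi2.sf(chi2, df)

rng = np.random.default_rng(0)
a = rng.normal(0, 1, size=(30, 3))
b = rng.normal(0.5, 1, size=(30, 3))
print("classical:", wilks_lambda([a, b]))
print("robust   :", wilks_lambda([a, b], robust=True))
```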


1985 ◽  
Vol 10 (4) ◽  
pp. 345-367 ◽  
Author(s):  
Samuel L. Seaman ◽  
James Algina ◽  
Stephen F. Olejnik

Empirical type I error rates and the power of the parametric and rank transform ANCOVA were compared for situations involving conditional distributions that differed between groups in skew and/or scale. For the conditions investigated in the study, the parametric ANCOVA was typically the procedure of choice both as a test of equality of conditional means and as a test of equality of conditional distributions. For those conditions in which rank ANCOVA was the procedure of choice, the power advantages were usually quite small.
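
The rank transform ANCOVA referred to here is usually implemented in the style attributed to Conover and Iman: rank the response and the covariate over the pooled sample, then fit an ordinary ANCOVA to the ranks. A brief sketch with simulated data follows (the data-generating model and names are illustrative, not taken from the study).

```python
import numpy as np
import pandas as pd
from scipy.stats import rankdata
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 40
df = pd.DataFrame({
    "group": np.repeat(["A", "B"], n),
    "x": rng.normal(size=2 * n),              # covariate
})
df["y"] = 0.8 * df["x"] + (df["group"] == "B") * 0.5 + rng.normal(size=2 * n)

# Parametric ANCOVA on the raw scores
par = smf.ols("y ~ x + C(group)", data=df).fit()

# Rank-transform ANCOVA: rank y and x over the pooled sample, then refit
df["ry"] = rankdata(df["y"])
df["rx"] = rankdata(df["x"])
rank = smf.ols("ry ~ rx + C(group)", data=df).fit()

print(sm.stats.anova_lm(par, typ=2))
print(sm.stats.anova_lm(rank, typ=2))
```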


1978 ◽  
Vol 46 (2) ◽  
pp. 387-392 ◽  
Author(s):  
Michael E. Campbell

A variety of statistical procedures have been recommended for evaluating treatment effects in correlated sequential measures. Analysis of covariance is generally favored and recognized as an appropriate procedure, although other techniques have been suggested as simpler and essentially equivalent. This paper examines several of these techniques and reports the results of a Monte Carlo study comparing the Type I error rate per experiment for the following procedures: (1) analysis of variance of difference scores, (2) analysis of variance of final scores, and (3) analysis of covariance. It is shown that, of these procedures, only analysis of covariance produces error rates close to the nominal level. Analysis of covariance is recommended for assessing treatment effects in data with pretreatment-posttreatment measures of the dependent variable when experimental control is not possible.
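
The three procedures compared are straightforward to reproduce; the sketch below fits each one to simulated pretest-posttest data with statsmodels. The data-generating model and all names are illustrative, not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 30
pre = rng.normal(50, 10, size=2 * n)
group = np.repeat(["control", "treated"], n)
post = 0.7 * pre + rng.normal(0, 5, size=2 * n)     # no true treatment effect

df = pd.DataFrame({"group": group, "pre": pre, "post": post,
                   "change": post - pre})

# (1) ANOVA of difference (change) scores
m1 = smf.ols("change ~ C(group)", data=df).fit()
# (2) ANOVA of final (posttest) scores only
m2 = smf.ols("post ~ C(group)", data=df).fit()
# (3) ANCOVA: posttest adjusted for pretest
m3 = smf.ols("post ~ pre + C(group)", data=df).fit()

for label, m in [("difference-score ANOVA", m1),
                 ("final-score ANOVA", m2),
                 ("ANCOVA", m3)]:
    print(label)
    print(sm.stats.anova_lm(m, typ=2), "\n")
```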


2014 ◽  
Vol 53 (05) ◽  
pp. 343-343

We have to report marginal changes in the empirical Type I error rates for the cut-offs 2/3 and 4/7 in Tables 4, 5, and 6 of the paper “Influence of Selection Bias on the Test Decision – A Simulation Study” by M. Tamm, E. Cramer, L. N. Kennes, and N. Heussen (Methods Inf Med 2012; 51: 138–143). In a small number of cases, the way numeric values are represented in SAS led to wrong categorization because of a numeric representation error in the computed differences. We corrected the simulation by applying the SAS round function in the calculation process, using the same seeds as before. The corrected values are: Table 4, cut-off 2/3: 0.180323 becomes 0.153494; Table 5, cut-off 4/7: 0.144729 becomes 0.139626; Table 5, cut-off 2/3: 0.114885 becomes 0.101773; Table 6, cut-off 4/7: 0.125528 becomes 0.122144; Table 6, cut-off 2/3: 0.099488 becomes 0.090828. The sentence on p. 141 “E.g. for block size 4 and q = 2/3 the type I error rate is 18% (Table 4).” has to be replaced by “E.g. for block size 4 and q = 2/3 the type I error rate is 15.3% (Table 4).”. All changes are smaller than 0.03 and do not affect the interpretation of the results or our recommendations.
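
The representation issue behind the correction is easy to reproduce in any double-precision arithmetic. The lines below are a Python analogue (not the original SAS code) of how a computed difference that is mathematically equal to the cut-off 2/3 can be categorized on the wrong side of it, and how rounding before the comparison avoids the spurious difference.

```python
# A value that is mathematically equal to the cut-off 2/3 can compare as
# strictly greater once it is produced by floating-point arithmetic,
# flipping its category.
cutoff = 2 / 3
value = 1 - 1 / 3          # mathematically exactly 2/3

print(value > cutoff)       # True  -> wrongly categorized as "above the cut-off"
print(value == cutoff)      # False -> representation error of the difference

# Rounding both sides before comparing (analogous to the fix with SAS's
# round function described in the erratum) removes the discrepancy.
print(round(value, 12) > round(cutoff, 12))   # False
```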


2021 ◽  
pp. 001316442199489
Author(s):  
Luyao Peng ◽  
Sandip Sinharay

Wollack et al. (2015) suggested the erasure detection index (EDI) for detecting fraudulent erasures for individual examinees. Wollack and Eckerly (2017) and Sinharay (2018) extended the index of Wollack et al. (2015) to suggest three EDIs for detecting fraudulent erasures at the aggregate or group level. This article follows up on the research of Wollack and Eckerly (2017) and Sinharay (2018) and suggests a new aggregate-level EDI by incorporating the empirical best linear unbiased predictor from the literature of linear mixed-effects models (e.g., McCulloch et al., 2008). A simulation study shows that the new EDI has larger power than the indices of Wollack and Eckerly (2017) and Sinharay (2018). In addition, the new index has satisfactory Type I error rates. A real data example is also included.
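
The empirical best linear unbiased predictor (EBLUP) ingredient can be illustrated with a simple random-intercept model: the sketch below fits a linear mixed-effects model with statsmodels and extracts the group-level EBLUPs. It only shows where the EBLUPs come from; it does not implement the erasure detection index proposed in the article, and the simulated data and variable names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: examinees nested in groups (e.g., classrooms), with "score"
# standing in for an erasure-based measure per examinee.
rng = np.random.default_rng(3)
groups = np.repeat(np.arange(20), 25)
group_effect = rng.normal(0, 0.5, size=20)[groups]
score = 1.0 + group_effect + rng.normal(0, 1.0, size=groups.size)
df = pd.DataFrame({"group": groups, "score": score})

# Random-intercept model: score_ij = mu + u_j + e_ij
model = smf.mixedlm("score ~ 1", data=df, groups=df["group"])
fit = model.fit()

# fit.random_effects holds the empirical best linear unbiased predictors
# (EBLUPs) of the group-level random intercepts u_j.
eblups = pd.Series({g: re.iloc[0] for g, re in fit.random_effects.items()})
print(eblups.sort_values(ascending=False).head())
```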

