Comparison of Some Robust Wilks’ Statistics for the One-Way Multivariate Analysis of Variance (MANOVA)

Author(s):  
Abdullah A. Ameen ◽  
Osama H. Abbas

The classical Wilks' statistic is most often used to test hypotheses in the one-way multivariate analysis of variance (MANOVA), but it is highly sensitive to the effects of outliers. The non-robustness of test statistics based on normal theory has led many authors to examine various alternatives. In this paper, we present a robust version of the Wilks' statistic and construct its approximate distribution. A comparison is made between the proposed statistic and some existing Wilks' statistics. Monte Carlo studies are used to assess the performance of the test statistics on different data sets. Moreover, the Type I error rate and the power of the test are used as statistical tools to compare the test statistics. The study reveals that, under normality, the Type I error rates of the classical and the proposed Wilks' statistics are close to the nominal significance levels, and the powers of the test statistics are very close. In the case of contaminated distributions, the proposed statistic performs best.
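For reference, the classical (non-robust) Wilks' statistic that the paper starts from is the ratio Λ = det(W)/det(W + B) of the within-group SSCP determinant to the total SSCP determinant. A minimal numpy sketch; the function name and test data are illustrative, not from the paper:

```python
import numpy as np

def wilks_lambda(groups):
    """Classical Wilks' lambda for one-way MANOVA.

    groups: list of (n_g x p) arrays, one per group.
    Returns det(W) / det(W + B), where W is the within-group
    and B the between-group SSCP matrix.
    """
    all_data = np.vstack(groups)
    grand_mean = all_data.mean(axis=0)
    p = all_data.shape[1]
    W = np.zeros((p, p))  # within-group SSCP (the "error" matrix)
    B = np.zeros((p, p))  # between-group SSCP (the "hypothesis" matrix)
    for g in groups:
        m = g.mean(axis=0)
        centered = g - m
        W += centered.T @ centered
        d = (m - grand_mean).reshape(-1, 1)
        B += len(g) * (d @ d.T)
    return float(np.linalg.det(W) / np.linalg.det(W + B))

# Identical group means give lambda near 1; separated means push it toward 0.
rng = np.random.default_rng(0)
g1 = rng.normal(0, 1, size=(30, 2))
g2 = rng.normal(0, 1, size=(30, 2))
g3 = rng.normal(5, 1, size=(30, 2))  # clearly shifted group
lam = wilks_lambda([g1, g2, g3])
```

Small values of Λ favor rejecting equality of the group mean vectors; a robust version would replace the sample means and SSCP matrices with outlier-resistant estimates.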

1997 ◽  
Vol 85 (1) ◽  
pp. 193-194
Author(s):  
Peter Hassmén

Violation of the sphericity assumption in repeated-measures analysis of variance can lead to positively biased tests, i.e., the likelihood of a Type I error exceeds the alpha level set by the user. Two widely applicable solutions exist: the use of an epsilon-corrected univariate analysis of variance or the use of a multivariate analysis of variance. It is argued that the latter method offers advantages over the former.
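As background for the epsilon-corrected univariate approach mentioned above, the Greenhouse–Geisser epsilon is the usual correction factor: it equals 1 under perfect sphericity and shrinks toward its lower bound 1/(k−1) as the assumption is violated. A sketch assuming the standard eigenvalue formula applied to the double-centered covariance matrix (function name and data are illustrative):

```python
import numpy as np

def greenhouse_geisser_epsilon(data):
    """Greenhouse-Geisser epsilon for a (subjects x conditions) data matrix.

    Computed from the eigenvalues of the double-centered covariance matrix
    of the repeated measures; equals 1 under sphericity and is always
    at least 1/(k-1) for k conditions.
    """
    S = np.cov(data, rowvar=False)
    k = S.shape[0]
    # double-center the covariance matrix
    Sc = S - S.mean(axis=0, keepdims=True) - S.mean(axis=1, keepdims=True) + S.mean()
    eig = np.linalg.eigvalsh(Sc)
    return float(eig.sum() ** 2 / ((k - 1) * (eig ** 2).sum()))

# Correlated conditions with unequal variances: sphericity is violated,
# so the estimated epsilon falls below 1.
rng = np.random.default_rng(5)
n, k = 50, 4
subject = rng.normal(size=(n, 1))                       # shared subject effect
noise = rng.normal(scale=[0.5, 1.0, 1.5, 2.0], size=(n, k))
eps = greenhouse_geisser_epsilon(subject + noise)
```

The corrected univariate test multiplies both ANOVA degrees of freedom by this epsilon; the multivariate alternative sidesteps the correction entirely by not assuming sphericity.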


1992 ◽  
Vol 17 (4) ◽  
pp. 315-339 ◽  
Author(s):  
Michael R. Harwell ◽  
Elaine N. Rubinstein ◽  
William S. Hayes ◽  
Corley C. Olds

Meta-analytic methods were used to integrate the findings of a sample of Monte Carlo studies of the robustness of the F test in the one- and two-factor fixed effects ANOVA models. Monte Carlo results for the Welch (1947) and Kruskal-Wallis (Kruskal & Wallis, 1952) tests were also analyzed. The meta-analytic results provided strong support for the robustness of the Type I error rate of the F test when certain assumptions were violated. The F test also showed excellent power properties. However, the Type I error rate of the F test was sensitive to unequal variances, even when sample sizes were equal. The error rate of the Welch test was insensitive to unequal variances when the population distribution was normal, but nonnormal distributions tended to inflate its error rate and to depress its power. Meta-analytic and exact statistical theory results were used to summarize the effects of assumption violations for the tests.
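The Welch (1947) statistic analyzed above replaces the pooled error variance with precision weights w_j = n_j/s_j². A sketch that also reproduces the qualitative finding about unequal variances under normality; the group sizes and variances below are illustrative, not conditions from the meta-analysis:

```python
import numpy as np
from scipy import stats

def welch_anova_pvalue(groups):
    """Welch's heteroscedasticity-robust one-way ANOVA p-value."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    means = np.array([g.mean() for g in groups])
    w = n / np.array([g.var(ddof=1) for g in groups])   # precision weights
    grand = (w * means).sum() / w.sum()
    A = (w * (means - grand) ** 2).sum() / (k - 1)
    tmp = ((1 - w / w.sum()) ** 2 / (n - 1)).sum()
    B = 2 * (k - 2) / (k ** 2 - 1) * tmp
    df2 = (k ** 2 - 1) / (3 * tmp)                      # approximate error df
    return float(stats.f.sf(A / (1 + B), k - 1, df2))

# Null Monte Carlo: equal means, unequal variances, with the small groups
# carrying the large variance -- the setting where the classical F is liberal.
rng = np.random.default_rng(1)
sizes, sds = (10, 10, 40), (3.0, 3.0, 1.0)
n_rep = 2000
rej_f = rej_w = 0
for _ in range(n_rep):
    groups = [rng.normal(0.0, s, size=m) for m, s in zip(sizes, sds)]
    if stats.f_oneway(*groups).pvalue < 0.05:
        rej_f += 1
    if welch_anova_pvalue(groups) < 0.05:
        rej_w += 1
rate_f, rate_w = rej_f / n_rep, rej_w / n_rep
```

Under these normal-but-heteroscedastic conditions the classical F rejection rate climbs well above the nominal 0.05 while the Welch test stays near it, consistent with the summarized results.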


1992 ◽  
Vol 17 (4) ◽  
pp. 297-313 ◽  
Author(s):  
Michael R. Harwell

Monte Carlo studies provide information that can assist researchers in selecting a statistical test when underlying assumptions of the test are violated. Effective use of this literature is hampered by the lack of an overarching theory to guide the interpretation of Monte Carlo studies. The problem is exacerbated by the impressionistic nature of the studies, which can lead different readers to different conclusions. These shortcomings can be addressed using meta-analytic methods to integrate the results of Monte Carlo studies. Quantitative summaries of the effects of assumption violations on the Type I error rate and power of a test can assist researchers in selecting the best test for their data. Such summaries can also be used to evaluate the validity of previously published statistical results. This article provides a methodological framework for quantitatively integrating Type I error rates and power values for Monte Carlo studies. An example is provided using Monte Carlo studies of Bartlett’s (1937) test of equality of variances. The importance of relating meta-analytic results to exact statistical theory is emphasized.
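The kind of Monte Carlo evidence being integrated can be illustrated with Bartlett's (1937) test itself: under normality its Type I error rate sits near the nominal level, while a skewed, heavy-tailed parent distribution inflates it sharply. An illustrative null simulation (sample sizes and distributions are assumptions for the sketch, not drawn from any particular study):

```python
import numpy as np
from scipy import stats

# Null Monte Carlo for Bartlett's test of equal variances: all groups share
# the same population variance, so every rejection is a Type I error.
rng = np.random.default_rng(3)
n_rep, k, n = 2000, 3, 20
rej_normal = rej_skewed = 0
for _ in range(n_rep):
    normal_groups = [rng.normal(0.0, 1.0, size=n) for _ in range(k)]
    # exponential: variance 1 like the normal case, but skewed with excess kurtosis 6
    skewed_groups = [rng.exponential(1.0, size=n) for _ in range(k)]
    if stats.bartlett(*normal_groups).pvalue < 0.05:
        rej_normal += 1
    if stats.bartlett(*skewed_groups).pvalue < 0.05:
        rej_skewed += 1
rate_normal, rate_skewed = rej_normal / n_rep, rej_skewed / n_rep
```

The gap between the two empirical rates is exactly the kind of effect-of-violation outcome that the proposed meta-analytic framework quantifies across studies.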


2019 ◽  
Vol 2019 ◽  
pp. 1-8 ◽  
Author(s):  
Can Ateş ◽  
Özlem Kaymaz ◽  
H. Emre Kale ◽  
Mustafa Agah Tekindal

In this study, we investigate how the Wilks’ lambda, Pillai’s trace, Hotelling’s trace, and Roy’s largest root test statistics are affected when the normality and homogeneous-variance assumptions of the MANOVA method are violated; in other words, the robustness of the tests is examined in these cases. For this purpose, a simulation study is conducted under different scenarios. For different numbers of variables and different sample sizes, considering group variances that are homogeneous (σ₁² = σ₂² = ⋯ = σ_g²) and heterogeneous, i.e., increasing (σ₁² < σ₂² < ⋯ < σ_g²), random numbers are generated from Gamma(4-4-4; 0.5), Gamma(4-9-36; 0.5), Student’s t(2), and Normal(0; 1) distributions. Furthermore, both balanced and unbalanced numbers of observations in the groups are taken into account. After 10000 repetitions, Type I error rates are calculated for each test at α = 0.05. For the Gamma distributions, Pillai’s trace gives the most robust results under both homogeneous and heterogeneous variances for 2 variables; for 3 variables, Roy’s largest root is most robust in balanced samples and Pillai’s trace in unbalanced samples. For Student’s t distribution, Pillai’s trace is most robust under homogeneous variances and Wilks’ lambda under heterogeneous variances. For the normal distribution, under homogeneous variances, Roy’s largest root gives relatively more robust results for 2 variables and Wilks’ lambda for 3 variables; under heterogeneous variances, Roy’s largest root gives robust results for both 2 and 3 variables. The test statistics used with MANOVA are affected by violations of the homogeneity of covariance matrices and normality assumptions, particularly when the numbers of observations are unbalanced.
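All four statistics compared in the study are functions of the eigenvalues λᵢ of E⁻¹H, where E and H are the error (within-group) and hypothesis (between-group) SSCP matrices: Wilks’ lambda = Π 1/(1+λᵢ), Pillai’s trace = Σ λᵢ/(1+λᵢ), Hotelling’s trace = Σ λᵢ, and Roy’s largest root = max λᵢ. A minimal sketch; the generated data are illustrative, not the simulation design of the paper:

```python
import numpy as np

def manova_stats(groups):
    """The four classical MANOVA statistics from the eigenvalues of E^{-1}H."""
    X = np.vstack(groups)
    gm = X.mean(axis=0)
    p = X.shape[1]
    E = np.zeros((p, p))  # error (within-group) SSCP
    H = np.zeros((p, p))  # hypothesis (between-group) SSCP
    for g in groups:
        m = g.mean(axis=0)
        E += (g - m).T @ (g - m)
        d = (m - gm).reshape(-1, 1)
        H += len(g) * (d @ d.T)
    lam = np.linalg.eigvals(np.linalg.solve(E, H)).real
    return {
        "wilks": float(np.prod(1.0 / (1.0 + lam))),
        "pillai": float(np.sum(lam / (1.0 + lam))),
        "hotelling": float(np.sum(lam)),
        "roy": float(np.max(lam)),
    }

# Three groups of 20 observations on 3 variables, one group mean shifted.
rng = np.random.default_rng(2)
groups = [rng.normal(mu, 1.0, size=(20, 3)) for mu in (0.0, 0.0, 1.0)]
s = manova_stats(groups)
```

Because all four are computed from the same eigenvalues, their differing robustness in the study comes entirely from how each functional weights large versus small eigenvalues.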


1984 ◽  
Vol 9 (3) ◽  
pp. 193-213
Author(s):  
M. Austin Betz ◽  
Steven D. Elliott

The method of unweighted means in the multivariate analysis of variance with unequal sample sizes is investigated. By approximating the distribution of the hypothesis SSCP with a Wishart distribution, multivariate test statistics are derived that are analogous to the usual ones except the eigenvalues and hypothesis degrees of freedom are adjusted in accordance with the discrepancies in sample size. Monte Carlo methods are used to show the approximate test statistics are accurate over a range of conditions. Conditions are given under which the method of unweighted means yields exact results. A numerical example illustrates the technique.


2016 ◽  
Vol 27 (3) ◽  
pp. 905-919
Author(s):  
Anne Buu ◽  
L Keoki Williams ◽  
James J Yang

We propose a new genome-wide association test for mixed binary and continuous phenotypes that uses an efficient numerical method to estimate the empirical distribution of Fisher’s combination statistic under the null hypothesis. Our simulation study shows that the proposed method controls the Type I error rate and also maintains power at the level of the permutation method. More importantly, the computational efficiency of the proposed method is much higher than that of the permutation method. The simulation results also indicate that the power of the test increases as the genetic effect increases, the minor allele frequency increases, and the correlation between responses decreases. The statistical analysis of the database of the Study of Addiction: Genetics and Environment demonstrates that the proposed method, by combining multiple phenotypes, can increase the power of identifying markers that might not otherwise be chosen using marginal tests.
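The building block of the proposed test, Fisher’s combination statistic, is T = −2 Σ log pᵢ; with m independent p-values it follows a χ² distribution with 2m degrees of freedom under the null, and it is the correlation between mixed phenotypes that makes the empirical-null estimation described above necessary. A sketch of the independent-case reference computation (function name is illustrative):

```python
import numpy as np
from scipy import stats

def fisher_combination(pvalues):
    """Fisher's combination statistic T = -2 * sum(log p_i).

    With m independent p-values, T ~ chi-square with 2m df under the null.
    With correlated responses that reference distribution no longer holds,
    which is why an empirical null must be estimated instead.
    """
    p = np.asarray(pvalues, dtype=float)
    T = -2.0 * np.log(p).sum()
    return float(T), float(stats.chi2.sf(T, df=2 * len(p)))

# Three marginal p-values, e.g. from per-phenotype association tests.
T, combined_p = fisher_combination([0.01, 0.20, 0.50])
```

Note how one strong marginal signal can drive the combined p-value below 0.05 even when the other tests are unremarkable, which is the motivation for combining phenotypes.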


2004 ◽  
Vol 3 (1) ◽  
pp. 1-69 ◽  
Author(s):  
Sandrine Dudoit ◽  
Mark J. van der Laan ◽  
Katherine S. Pollard

The present article proposes general single-step multiple testing procedures for controlling Type I error rates defined as arbitrary parameters of the distribution of the number of Type I errors, such as the generalized family-wise error rate. A key feature of our approach is the test statistics null distribution (rather than data generating null distribution) used to derive cut-offs (i.e., rejection regions) for these test statistics and the resulting adjusted p-values. For general null hypotheses, corresponding to submodels for the data generating distribution, we identify an asymptotic domination condition for a null distribution under which single-step common-quantile and common-cut-off procedures asymptotically control the Type I error rate, for arbitrary data generating distributions, without the need for conditions such as subset pivotality. Inspired by this general characterization of a null distribution, we then propose as an explicit null distribution the asymptotic distribution of the vector of null value shifted and scaled test statistics. In the special case of family-wise error rate (FWER) control, our method yields the single-step minP and maxT procedures, based on minima of unadjusted p-values and maxima of test statistics, respectively, with the important distinction in the choice of null distribution. Single-step procedures based on consistent estimators of the null distribution are shown to also provide asymptotic control of the Type I error rate. A general bootstrap algorithm is supplied to conveniently obtain consistent estimators of the null distribution. The special cases of t- and F-statistics are discussed in detail. 
The companion articles focus on step-down multiple testing procedures for control of the FWER (van der Laan et al., 2004b) and on augmentations of FWER-controlling methods to control error rates such as tail probabilities for the number of false positives and for the proportion of false positives among the rejected hypotheses (van der Laan et al., 2004a). The proposed bootstrap multiple testing procedures are evaluated by a simulation study and applied to genomic data in the fourth article of the series (Pollard et al., 2004).
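The single-step maxT idea from this series can be sketched with a permutation approximation to the null distribution of the test-statistic vector: each adjusted p-value is the probability, under the estimated null, that the maximal |t| across all hypotheses exceeds the observed statistic. The code below is a simplified Westfall–Young-style illustration under assumed two-sample t-statistics, not the authors’ bootstrap procedure:

```python
import numpy as np

def maxT_adjusted_pvalues(X, y, n_perm=500, seed=0):
    """Single-step maxT adjusted p-values (permutation sketch).

    X: (n_samples x m_hypotheses) data matrix; y: binary group labels.
    The null distribution of the test-statistic vector is approximated by
    permuting labels; each adjusted p-value is the fraction of permutations
    whose maximal |t| is at least the observed |t_j|.
    """
    rng = np.random.default_rng(seed)

    def tstats(labels):
        a, b = X[labels == 0], X[labels == 1]
        num = a.mean(axis=0) - b.mean(axis=0)
        den = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
        return np.abs(num / den)  # Welch-type two-sample |t| per column

    observed = tstats(y)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        max_null[i] = tstats(rng.permutation(y)).max()
    return np.array([(max_null >= t).mean() for t in observed])

# 20 hypotheses, with a real effect planted only in the first feature.
rng = np.random.default_rng(4)
n, m = 40, 20
X = rng.normal(size=(n, m))
y = np.repeat([0, 1], n // 2)
X[y == 1, 0] += 2.0
adj_p = maxT_adjusted_pvalues(X, y)
```

Using maxima of the statistics (rather than minima of unadjusted p-values, as in minP) is what makes this the maxT variant; FWER control comes from the single null quantile shared by all hypotheses.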


2019 ◽  
Author(s):  
Axel Mayer ◽  
Felix Thoemmes

The analysis of variance (ANOVA) is still one of the most widely used statistical methods in the social sciences. This paper is about stochastic group weights in ANOVA models, a neglected aspect in the literature. Stochastic group weights are present whenever the experimenter does not determine the exact group sizes before conducting the experiment. We show that classic ANOVA tests based on estimated marginal means can have an inflated type I error rate when stochastic group weights are not taken into account, even in randomized experiments. We propose two new ways to incorporate stochastic group weights in the tests of average effects: one based on the general linear model and one based on multigroup structural equation models (SEMs). We show in simulation studies that our methods have nominal type I error rates in experiments with stochastic group weights, while classic approaches show an inflated type I error rate. The SEM approach can additionally deal with heteroscedastic residual variances and latent variables. An easy-to-use software package with a graphical user interface is provided.

