Comparison of tests on covariance structures of normal populations

2020 ◽  
Vol 42 ◽  
pp. e44456
Author(s):  
Isabella Marianne Costa Campos ◽  
Denismar Alves Nogueira ◽  
Eric Batista Ferreira ◽  
Davi Butturi-Gomes

In some studies there is interest in testing the structure of the covariance matrix, as in the context of multivariate analysis or modelling techniques, which underscores the importance of hypothesis tests on covariance structures. The purpose of this study was to carry out a detailed performance study of the power and type I error rate of some existing identity and sphericity tests, considering scenarios with different numbers of variables (2 to 64) and sample sizes (5 to 100). The proposal of Ledoit and Wolf (2002) is the most appropriate for testing the identity structure. For the sphericity test, the version of John (1972) modified by Ledoit and Wolf (2002), followed by the proposal of Box (1949), showed the best performance.
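
A minimal sketch of the two best-performing statistics may help the reader: the Ledoit and Wolf (2002) identity statistic and John's (1972) sphericity statistic in its Ledoit-Wolf form. The function names are mine, and the chi-square scalings follow the usual textbook statements of these tests (an assumption, not code from the study); note also that `np.cov` uses the n−1 divisor, a detail on which implementations differ.

```python
import numpy as np
from scipy import stats

def lw_identity_test(X):
    """Ledoit-Wolf (2002) test of H0: Sigma = I (sketch).

    X is an (n, p) data matrix with observations in rows.
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    I = np.eye(p)
    W = (np.trace((S - I) @ (S - I)) / p
         - (p / n) * (np.trace(S) / p) ** 2
         + p / n)
    stat = n * p * W / 2                    # asymptotically chi-square under H0
    return stat, stats.chi2.sf(stat, p * (p + 1) // 2)

def john_sphericity_test(X):
    """John's (1972) test of H0: Sigma = sigma^2 * I, Ledoit-Wolf form (sketch)."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    R = S / (np.trace(S) / p)               # removes the unknown scale sigma^2
    U = np.trace((R - np.eye(p)) @ (R - np.eye(p))) / p
    stat = n * p * U / 2                    # asymptotically chi-square under H0
    return stat, stats.chi2.sf(stat, p * (p + 1) // 2 - 1)
```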

2009 ◽  
Vol 40 (1) ◽  
pp. 21-32 ◽  
Author(s):  
G. H. Muller ◽  
B. W. Steyn-Bruwer ◽  
W. D. Hamman

In 2006, Steyn-Bruwer and Hamman highlighted several deficiencies in previous research on the prediction of corporate failure (or financial distress) of companies. In their research, Steyn-Bruwer and Hamman used the population of companies for the period under review rather than only a sample of bankrupt versus successful companies. Here the sample of bankrupt versus successful companies is regarded as two extremes on the continuum of financial condition, while the population covers the entire continuum. The main objective of this research, which was based on the above-mentioned authors' work, was to test whether some modelling techniques would in fact provide better prediction accuracies than others. The modelling techniques considered were: Multiple discriminant analysis (MDA), Recursive partitioning (RP), Logit analysis (LA) and Neural networks (NN). The literature survey showed that existing studies did not readily consider the number of Type I and Type II errors made. As such, this study introduces a novel concept called the "Normalised Cost of Failure" (NCF), which takes cognisance of the fact that a Type I error typically costs 20 to 38 times as much as a Type II error. In terms of the main research objective, the results show that different analysis techniques produce markedly different predictive accuracies. The MDA and RP techniques correctly predict the most "failed" companies and consequently have the lowest NCF, while the LA and NN techniques provide the best overall predictive accuracy.
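
The abstract does not reproduce the NCF formula, but its logic (weight Type I errors much more heavily than Type II errors, then normalise) can be sketched as follows; the exact weighting and the normalising denominator are illustrative assumptions, not the authors' definition.

```python
def normalised_cost_of_failure(type1_errors, type2_errors,
                               n_failed, cost_ratio=20.0):
    """Hypothetical NCF: cost-weighted misclassifications per failed company.

    A Type I error (a failed company predicted as sound) is assumed to cost
    `cost_ratio` times a Type II error (a sound company predicted as failed);
    the paper cites ratios of 20 to 38. The normalisation by the number of
    failed companies is an illustrative choice.
    """
    return (cost_ratio * type1_errors + type2_errors) / n_failed

# Two hypothetical classifiers on the same population of 150 failed companies:
# the second misses fewer failed companies, so its NCF is far lower even
# though it raises more false alarms.
print(normalised_cost_of_failure(type1_errors=30, type2_errors=20, n_failed=150))
print(normalised_cost_of_failure(type1_errors=12, type2_errors=80, n_failed=150))
```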


Methodology ◽  
2013 ◽  
Vol 9 (4) ◽  
pp. 129-136 ◽  
Author(s):  
Pablo Livacic-Rojas ◽  
Guillermo Vallejo ◽  
Paula Fernández ◽  
Ellián Tuero-Herrero

We examined the selection of covariance structures and the Type I error rates of Akaike's Information Criterion (AIC) as a covariance-structure selector versus the correctly identified model (CIM). Data were analyzed for a split-plot design through the Monte Carlo simulation method using SAS 9.1. We manipulated the following variables: sample size, relation between group size and dispersion-matrix size, type of dispersion matrix, and form of the distribution. Our findings suggest that AIC selects heterogeneous covariance structures more frequently than the original covariance structure and displays slightly higher Type I error rates than the CIM, mostly associated with main and interaction effects for the ARH and RC structures and with a marked tendency toward liberality. Future research needs to assess the power levels exhibited by covariance-structure selectors.
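
The selection mechanism itself is simple; what the study evaluates is how often it lands on the data-generating structure. A sketch of AIC-based selection among fitted candidate structures (the log-likelihoods and parameter counts below are invented for illustration):

```python
def aic(loglik, n_params):
    """Akaike Information Criterion: 2k - 2 ln L, smaller is better."""
    return 2 * n_params - 2 * loglik

# Hypothetical maximized log-likelihoods and parameter counts for four
# covariance structures fitted to the same repeated-measures data set.
candidates = [
    ("CS",   -512.3,  2),   # compound symmetry
    ("AR1",  -498.7,  2),   # first-order autoregressive
    ("ARH1", -492.1,  5),   # heterogeneous AR(1)
    ("UN",   -489.9, 10),   # unstructured
]
best = min(candidates, key=lambda c: aic(c[1], c[2]))
print("AIC selects:", best[0])
```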


1980 ◽  
Vol 5 (4) ◽  
pp. 337-349 ◽  
Author(s):  
Philip H. Ramsey

It is noted that disagreements have arisen in the literature about the robustness of the t test in normal populations with unequal variances. Hsu's procedure is applied to determine exact Type I error rates for t. Employing fairly liberal but objective standards for assessing robustness, it is shown that the t test is not always robust to the assumption of equal population variances, even when sample sizes are equal. Several guidelines are suggested, including the point that applying t at α = .05 without regard for unequal variances would require equal sample sizes of at least 15 by one of the standards considered. In many cases, especially those with unequal N's, an alternative such as Welch's procedure is recommended.
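
Welch's procedure is a routine option in modern software; a minimal sketch of the comparison Ramsey draws, with simulated data (equal n = 15, unequal variances):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=15)
b = rng.normal(0.0, 3.0, size=15)   # same mean, much larger variance

student = stats.ttest_ind(a, b)                  # assumes equal variances
welch = stats.ttest_ind(a, b, equal_var=False)   # Welch's procedure
print(student.pvalue, welch.pvalue)
```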


2020 ◽  
Vol 18 (2) ◽  
pp. 2-30
Author(s):  
Diep Nguyen ◽  
Eunsook Kim ◽  
Yan Wang ◽  
Thanh Vinh Pham ◽  
Yi-Hsin Chen ◽  
...  

Although the Analysis of Variance (ANOVA) F test is one of the most popular statistical tools to compare group means, it is sensitive to violations of the homogeneity of variance (HOV) assumption. This simulation study examines the performance of thirteen tests in one-factor ANOVA models in terms of their Type I error rate and statistical power under numerous (82,080) conditions. The results show that when HOV was satisfied, the ANOVA F or the Brown-Forsythe test outperformed the other methods in terms of both Type I error control and statistical power even under non-normality. When HOV was violated, the Structured Means Modeling (SMM) with Bartlett or SMM with Maximum Likelihood was strongly recommended for the omnibus test of group mean equality.
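
SciPy's `stats.f_oneway` gives the classical ANOVA F; the Brown-Forsythe test for means (the F* statistic, not the Brown-Forsythe variance test that scipy exposes as `levene` with `center='median'`) is easy to code directly. A sketch following the standard Brown and Forsythe (1974) formulas with Satterthwaite denominator degrees of freedom:

```python
import numpy as np
from scipy import stats

def brown_forsythe_means(*groups):
    """Brown-Forsythe F* test for equality of group means (sketch)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    N = n.sum()
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    grand = np.sum(n * m) / N
    f_star = np.sum(n * (m - grand) ** 2) / np.sum((1 - n / N) * v)
    c = (1 - n / N) * v / np.sum((1 - n / N) * v)   # Satterthwaite weights
    df2 = 1.0 / np.sum(c ** 2 / (n - 1))
    return f_star, stats.f.sf(f_star, k - 1, df2)

rng = np.random.default_rng(0)
groups = [rng.normal(0.0, s, size=30) for s in (1.0, 2.0, 4.0)]  # HOV violated
print(stats.f_oneway(*groups))
print(brown_forsythe_means(*groups))
```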


2008 ◽  
Vol 32 (1) ◽  
pp. 157-166 ◽  
Author(s):  
Roberta Bessa Veloso Silva ◽  
Daniel Furtado Ferreira ◽  
Denismar Alves Nogueira

The present work emphasizes the importance of testing hypotheses on the homogeneity of covariance matrices from k multivariate normal populations. Violation of the assumption of homogeneity of covariance matrices affects the performance of tests and the coverage probability of confidence regions. This work applies two tests of homogeneity of covariance matrices and evaluates their type I error rates and power using Monte Carlo simulation in normal populations, and their robustness in non-normal populations. The multivariate Bartlett test (MBT) and its bootstrap version (MBTB) were used. Different configurations were tested, combining sample sizes, numbers of variates, correlations and numbers of populations. Results show that the bootstrap test was superior to the asymptotic test and robust, since it controlled the type I error rate.
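
The multivariate Bartlett statistic is usually written in the form of Box's M; below is a sketch of the statistic and of one plausible bootstrap scheme (centre each group so that H0 holds, pool, resample). The authors' exact resampling design may differ.

```python
import numpy as np

def box_m(groups):
    """Box's M statistic for H0: equal covariance matrices (sketch).

    groups: list of (n_i, p) arrays with observations in rows.
    """
    k = len(groups)
    n = np.array([g.shape[0] for g in groups])
    S = [np.cov(g, rowvar=False) for g in groups]
    N = n.sum()
    Sp = sum((ni - 1) * Si for ni, Si in zip(n, S)) / (N - k)
    return ((N - k) * np.log(np.linalg.det(Sp))
            - sum((ni - 1) * np.log(np.linalg.det(Si)) for ni, Si in zip(n, S)))

def bootstrap_box_m(groups, n_boot=999, seed=0):
    """Bootstrap p value for Box's M under H0 (sketch)."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([g - g.mean(axis=0) for g in groups])  # common covariance
    sizes = [g.shape[0] for g in groups]
    m_obs = box_m(groups)
    exceed = 0
    for _ in range(n_boot):
        idx = rng.integers(0, len(pooled), size=sum(sizes))
        resampled = np.split(pooled[idx], np.cumsum(sizes)[:-1])
        exceed += box_m(resampled) >= m_obs
    return m_obs, (exceed + 1) / (n_boot + 1)
```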


2000 ◽  
Vol 14 (1) ◽  
pp. 1-10 ◽  
Author(s):  
Joni Kettunen ◽  
Niklas Ravaja ◽  
Liisa Keltikangas-Järvinen

We examined the use of smoothing to enhance the detection of response coupling from the activity of different response systems. Three different types of moving average smoothers were applied both to simulated interbeat interval (IBI) and electrodermal activity (EDA) time series and to empirical IBI, EDA, and facial electromyography time series. The results indicated that progressive smoothing increased the efficiency of the detection of response coupling but did not increase the probability of Type I error. The power of the smoothing methods depended on the response characteristics. The benefits and use of smoothing methods to extract information from psychophysiological time series are discussed.
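
The simplest member of the moving-average family is the boxcar (equally weighted) smoother; the three smoother types compared in the paper are not specified in this abstract, so the sketch below shows only the boxcar, applied with progressively wider windows to a simulated IBI-like series:

```python
import numpy as np

def moving_average(x, width):
    """Equally weighted (boxcar) moving average smoother."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

# Noisy interbeat-interval-like series: slow oscillation plus measurement noise
rng = np.random.default_rng(42)
t = np.linspace(0, 10, 500)
ibi = 800 + 50 * np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 20, t.size)

# Progressive smoothing: wider windows remove more high-frequency noise
smoothed = {w: moving_average(ibi, w) for w in (3, 9, 27)}
```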


Methodology ◽  
2012 ◽  
Vol 8 (1) ◽  
pp. 23-38 ◽  
Author(s):  
Manuel C. Voelkle ◽  
Patrick E. McKnight

The use of latent curve models (LCMs) has increased almost exponentially during the last decade. Oftentimes, researchers regard the LCM as a "new" method for analyzing change, with little attention paid to the fact that the technique was originally introduced as an "alternative to standard repeated measures ANOVA and first-order auto-regressive methods" (Meredith & Tisak, 1990, p. 107). In the first part of the paper, this close relationship is reviewed, and it is demonstrated how "traditional" methods, such as repeated measures ANOVA and MANOVA, can be formulated as LCMs. Given that latent curve modeling is essentially a large-sample technique, compared to "traditional" finite-sample approaches, the second part of the paper addresses, by means of a Monte Carlo simulation, the question of the degree to which the more flexible LCMs can actually replace some of the older tests. In addition, a structural equation modeling alternative to Mauchly's (1940) test of sphericity is explored. Although "traditional" methods may be expressed as special cases of more general LCMs, we found that the equivalence holds only asymptotically. For practical purposes, however, no approach always outperformed the alternatives in terms of power and Type I error, so the best method depends on the situation. We provide detailed recommendations on when to use which method.
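
For reference, the classical test that the structural equation modeling alternative is measured against can be coded in a few lines. A sketch of Mauchly's (1940) W with the usual chi-square approximation, built on normalized Helmert contrasts (the correction factor follows the standard textbook statement, an assumption rather than the paper's code):

```python
import numpy as np
from scipy import stats

def helmert_contrasts(p):
    """(p-1) x p matrix of orthonormal contrasts (rows sum to zero)."""
    C = np.zeros((p - 1, p))
    for i in range(1, p):
        C[i - 1, :i] = 1.0 / i
        C[i - 1, i] = -1.0
        C[i - 1] /= np.linalg.norm(C[i - 1])
    return C

def mauchly_test(X):
    """Mauchly's (1940) sphericity test for repeated measures (sketch).

    X: (n, p) matrix, one row per subject, one column per occasion.
    """
    n, p = X.shape
    k = p - 1
    C = helmert_contrasts(p)
    T = C @ np.cov(X, rowvar=False) @ C.T    # covariance of the contrasts
    W = np.linalg.det(T) / (np.trace(T) / k) ** k
    rho = 1 - (2 * k**2 + k + 2) / (6 * k * (n - 1))
    chi2 = -(n - 1) * rho * np.log(W)
    return W, stats.chi2.sf(chi2, k * (k + 1) // 2 - 1)

rng = np.random.default_rng(3)
print(mauchly_test(rng.normal(size=(30, 4))))   # spherical data: large p value
```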


Methodology ◽  
2015 ◽  
Vol 11 (1) ◽  
pp. 3-12 ◽  
Author(s):  
Jochen Ranger ◽  
Jörg-Tobias Kuhn

In this manuscript, a new approach to the analysis of person fit is presented that is based on the information matrix test of White (1982). This test can be interpreted as a test of trait stability during the measurement situation. The test statistic approximately follows a χ²-distribution. In small samples, the approximation can be improved by a higher-order expansion. The performance of the test is explored in a simulation study, which suggests that the test adheres well to the nominal Type I error rate, although it tends to be conservative in very short scales. The power of the test is compared to that of four alternative tests of person fit, and this comparison corroborates that the power of the information matrix test is similar to that of the alternative tests. Advantages and areas of application of the information matrix test are discussed.
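
The general principle behind White's test is that, under a correctly specified model, the Hessian and the outer product of the scores cancel in expectation. The paper's person-fit version is specific to item response models; as a generic illustration only, here is the outer-product-of-gradients (n·R²) form of the information matrix test for a simple normal model, in the spirit of Chesher (1983) and Lancaster (1984). The indicator derivatives were worked out by hand for this sketch and should be checked before serious use.

```python
import numpy as np
from scipy import stats

def im_test_normal(x):
    """OPG (n * R^2) form of White's (1982) IM test for N(mu, v) (sketch)."""
    n = len(x)
    mu, v = x.mean(), x.var()                 # maximum likelihood estimates
    r = x - mu
    # Per-observation scores of the normal log-likelihood
    g1 = r / v
    g2 = -0.5 / v + r**2 / (2 * v**2)
    # IM indicators: unique elements of Hessian_i + score_i score_i'
    d1 = -1 / v + r**2 / v**2
    d2 = -1.5 * r / v**2 + r**3 / (2 * v**3)
    d3 = 0.75 / v**2 - 1.5 * r**2 / v**3 + r**4 / (4 * v**4)
    Z = np.column_stack([g1, g2, d1, d2, d3])
    ones = np.ones(n)
    beta, *_ = np.linalg.lstsq(Z, ones, rcond=None)
    T = n - np.sum((ones - Z @ beta) ** 2)    # n times the uncentred R^2
    return T, stats.chi2.sf(T, 3)             # df = number of IM indicators

rng = np.random.default_rng(5)
print(im_test_normal(rng.normal(size=300)))         # correctly specified
print(im_test_normal(rng.standard_t(3, size=300)))  # heavy tails: misspecified
```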


2019 ◽  
Vol 227 (4) ◽  
pp. 261-279 ◽  
Author(s):  
Frank Renkewitz ◽  
Melanie Keiner

Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
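
The six tools are not named in this abstract; as one representative example of the class of methods evaluated, here is a sketch of Egger's regression test applied to a literature simulated with selective publication (the selection rule and effect-size model below are illustrative choices, not the paper's simulation design):

```python
import numpy as np
from scipy import stats

def egger_test(effects, se):
    """Egger's regression test for funnel-plot asymmetry (sketch).

    Regress effect/se on 1/se; a nonzero intercept signals small-study bias.
    """
    z = effects / se
    X = np.column_stack([np.ones_like(se), 1.0 / se])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    df = len(z) - 2
    cov = (resid @ resid / df) * np.linalg.inv(X.T @ X)
    t_stat = beta[0] / np.sqrt(cov[0, 0])
    return t_stat, 2 * stats.t.sf(abs(t_stat), df)

# Simulate 200 studies of a small true effect; publish only significant ones
rng = np.random.default_rng(7)
n = rng.integers(20, 200, size=200)
se = np.sqrt(4.0 / n)          # rough SE of a standardized mean difference
d = rng.normal(0.2, se)        # homogeneous true effect of 0.2
published = d / se > 1.96      # crude one-sided significance filter
print(egger_test(d[published], se[published]))
```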

