Comparison of methods to account for autocorrelation in correlation analyses of fish data

1998 ◽  
Vol 55 (9) ◽  
pp. 2127-2140 ◽  
Author(s):  
Brian J Pyper ◽  
Randall M Peterman

Autocorrelation in fish recruitment and environmental data can complicate statistical inference in correlation analyses. To address this problem, researchers often either adjust hypothesis testing procedures (e.g., adjust degrees of freedom) to account for autocorrelation or remove the autocorrelation using prewhitening or first-differencing before analysis. However, the effectiveness of methods that adjust hypothesis testing procedures has not yet been fully explored quantitatively. We therefore compared several adjustment methods via Monte Carlo simulation and found that a modified version of these methods kept Type I error rates near α. In contrast, methods that remove autocorrelation control Type I error rates well but may in some circumstances increase Type II error rates (the probability of failing to detect an environmental effect) and hence reduce statistical power, in comparison with adjusting the test procedure. Specifically, our Monte Carlo simulations show that prewhitening and especially first-differencing decrease power in the common situations where low-frequency (slowly changing) processes are important sources of covariation in fish recruitment or in environmental variables. Conversely, removing autocorrelation can increase power when low-frequency processes account for only some of the covariation. We therefore recommend that researchers carefully consider the importance of different time scales of variability when analyzing autocorrelated data.
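To make the contrast concrete, here is a minimal Python sketch (assuming numpy and scipy) of the effective-degrees-of-freedom style of adjustment the abstract alludes to; the lag-truncation rule and the clamping of the effective sample size are illustrative simplifications, not the authors' exact estimator. The removal alternative (first-differencing) is noted in the closing comment.

```python
import numpy as np
from scipy import stats

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return np.sum(x[:-lag] * x[lag:]) / np.sum(x * x)

def adjusted_df_corr_test(x, y, max_lag=None):
    """Correlation test using an effective sample size adjusted for
    autocorrelation in both series (a sketch of the approach, not the
    authors' exact estimator)."""
    n = len(x)
    if max_lag is None:
        max_lag = n // 5  # illustrative truncation rule
    # 1/N* ~= 1/N + (2/N) * sum_j r_xx(j) * r_yy(j)
    s = sum(autocorr(x, j) * autocorr(y, j) for j in range(1, max_lag + 1))
    n_eff = 1.0 / (1.0 / n + (2.0 / n) * s)
    n_eff = min(max(n_eff, 3.0), n)  # clamp to a usable range (simplification)
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt((n_eff - 2) / (1 - r ** 2))
    p = 2 * stats.t.sf(abs(t), df=n_eff - 2)
    return r, n_eff, p

# The removal alternative: first-difference both series and run an
# ordinary correlation test on np.diff(x) and np.diff(y).
```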

1992 ◽  
Vol 17 (4) ◽  
pp. 297-313 ◽  
Author(s):  
Michael R. Harwell

Monte Carlo studies provide information that can assist researchers in selecting a statistical test when underlying assumptions of the test are violated. Effective use of this literature is hampered by the lack of an overarching theory to guide the interpretation of Monte Carlo studies. The problem is exacerbated by the impressionistic nature of the studies, which can lead different readers to different conclusions. These shortcomings can be addressed using meta-analytic methods to integrate the results of Monte Carlo studies. Quantitative summaries of the effects of assumption violations on the Type I error rate and power of a test can assist researchers in selecting the best test for their data. Such summaries can also be used to evaluate the validity of previously published statistical results. This article provides a methodological framework for quantitatively integrating Type I error rates and power values for Monte Carlo studies. An example is provided using Monte Carlo studies of Bartlett’s (1937) test of equality of variances. The importance of relating meta-analytic results to exact statistical theory is emphasized.
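As an illustration of the kind of per-condition result such a meta-analysis would pool, the following Python sketch (assuming numpy and scipy) estimates the empirical Type I error rate of Bartlett's test under normal and heavy-tailed data; the specific distributions, group counts, and replication numbers are illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1937)

def bartlett_type1_rate(dist, k=3, n=20, reps=5000, alpha=0.05):
    """Empirical Type I error rate of Bartlett's test when all k groups
    have equal variances but are drawn from `dist`."""
    rejections = 0
    for _ in range(reps):
        groups = [dist(size=n) for _ in range(k)]
        if stats.bartlett(*groups).pvalue < alpha:
            rejections += 1
    return rejections / reps

# Near the nominal 0.05 under normality, but inflated for heavy tails:
print(bartlett_type1_rate(rng.normal))
print(bartlett_type1_rate(lambda size: rng.standard_t(3, size=size)))
```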


Methodology ◽  
2009 ◽  
Vol 5 (2) ◽  
pp. 60-70 ◽  
Author(s):  
W. Holmes Finch ◽  
Teresa Davenport

Permutation testing has been suggested as an alternative to the standard F approximate tests used in multivariate analysis of variance (MANOVA). These approximate tests, such as Wilks’ Lambda and Pillai’s Trace, have been shown to perform poorly when the assumptions of normally distributed dependent variables and homogeneity of group covariance matrices are violated. Because Monte Carlo permutation tests do not rely on distributional assumptions, they may be expected to outperform their approximate counterparts when the data do not conform to these assumptions. The current simulation study compared the performance of four standard MANOVA test statistics with their Monte Carlo permutation-based counterparts under a variety of small-sample conditions, both when the assumptions were met and when they were not. Results suggest that for sample sizes of 50 subjects, power is very low for all the statistics. In addition, Type I error rates for both the approximate F and Monte Carlo tests were inflated under the condition of nonnormal data and unequal covariance matrices. In general, the Monte Carlo permutation tests performed slightly better in terms of Type I error rates and power when the assumptions of normality and homogeneous covariance matrices were both violated. Note that these simulations were based on the three-group case only, so the results generalize only to similar situations.
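The permutation logic is straightforward to sketch. The Python example below (assuming numpy) computes Wilks’ Lambda directly from SSCP matrices and obtains a Monte Carlo permutation p-value by shuffling group labels; it is a minimal illustration, not the exact procedure used in the study.

```python
import numpy as np

def wilks_lambda(X, labels):
    """Wilks' Lambda = det(W) / det(T): W is the pooled within-group
    SSCP matrix, T the total SSCP matrix."""
    centered = X - X.mean(axis=0)
    T = centered.T @ centered
    W = np.zeros_like(T)
    for g in np.unique(labels):
        Xg = X[labels == g]
        c = Xg - Xg.mean(axis=0)
        W += c.T @ c
    return np.linalg.det(W) / np.linalg.det(T)

def permutation_manova(X, labels, n_perm=2000, rng=None):
    """Monte Carlo permutation p-value for a one-way MANOVA.  Smaller
    Lambda means more group separation, so we count permuted values
    at or below the observed statistic."""
    rng = rng if rng is not None else np.random.default_rng()
    labels = np.asarray(labels)
    observed = wilks_lambda(X, labels)
    count = sum(wilks_lambda(X, rng.permutation(labels)) <= observed
                for _ in range(n_perm))
    return observed, (count + 1) / (n_perm + 1)
```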


1981 ◽  
Vol 48 (1) ◽  
pp. 19-22 ◽  
Author(s):  
James D. Church ◽  
Edward L. Wike

A Monte Carlo study was done to find the Type I error rates for three nonparametric procedures for making k − 1 many-one comparisons in a one-way design. The tests were the Silverstein and Steel many-one ranks tests and the two-sample Wilcoxon rank-sum test. k = 3, 5, 7, and 10 treatments were crossed with n = 7, 10, and 15 replicates, with 1000 simulations per k, n combination. Analyses of four Type I error rates showed that: (1) the Wilcoxon test had the best comparisonwise error rates; (2) none of the tests functioned well as protected tests; and (3) the Silverstein test had the best experimentwise error rates and was the recommended procedure for many-one tests in a one-way layout.
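For reference, the comparisonwise version of the many-one procedure is easy to express in code. The sketch below (Python, assuming numpy and scipy) runs k − 1 Wilcoxon rank-sum tests of each treatment against the control with no multiplicity correction; the sample values are illustrative.

```python
import numpy as np
from scipy import stats

def many_one_wilcoxon(control, treatments, alpha=0.05):
    """k - 1 many-one comparisons: each treatment vs. the control via the
    two-sample Wilcoxon rank-sum test, with no multiplicity correction
    (i.e., the comparisonwise error rate)."""
    results = []
    for i, t in enumerate(treatments, start=1):
        stat, p = stats.ranksums(t, control)
        results.append((i, stat, p, p < alpha))
    return results

rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, 10)
treatments = [rng.normal(0.8, 1.0, 10) for _ in range(4)]  # k = 5 groups
for i, stat, p, sig in many_one_wilcoxon(control, treatments):
    print(f"treatment {i}: W = {stat:.2f}, p = {p:.3f}, reject = {sig}")
```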


2016 ◽  
Vol 77 (1) ◽  
pp. 104-118 ◽  
Author(s):  
Mengyang Cao ◽  
Louis Tay ◽  
Yaowu Liu

This study examined the performance of a proposed iterative Wald approach for detecting differential item functioning (DIF) between two groups when preknowledge of anchor items is absent. The iterative approach utilizes the Wald-2 approach to identify anchor items and then iteratively tests for DIF items with the Wald-1 approach. Monte Carlo simulation was conducted across several conditions including the number of response options, test length, sample size, percentage of DIF items, DIF effect size, and type of cumulative DIF. Results indicated that the iterative approach performed well for polytomous data in all conditions, with well-controlled Type I error rates and high power. For dichotomous data, the iterative approach also exhibited better control over Type I error rates than the Wald-2 approach without sacrificing the power in detecting DIF. However, inflated Type I error rates were found for the iterative approach in conditions with dichotomous data, noncompensatory DIF, large percentage of DIF items, and medium to large DIF effect sizes. Nevertheless, the Type I error rates were substantially less inflated in those conditions compared with the Wald-2 approach.
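The Wald-1/Wald-2 machinery requires a fitted IRT model, but the iterative anchor-purification loop itself can be sketched with a simpler stand-in. The Python example below (assuming numpy and statsmodels) substitutes a logistic-regression Wald test for uniform DIF on dichotomous items; the function names, rest-score matching criterion, and cutoff are illustrative choices, not the authors' procedure.

```python
import numpy as np
import statsmodels.api as sm

def wald_dif_stat(item, rest_score, group):
    """Wald chi-square (1 df) for uniform DIF on one dichotomous item:
    regress the item response on the matching score and group, then
    test the group coefficient."""
    X = sm.add_constant(np.column_stack([rest_score, group]))
    fit = sm.Logit(item, X).fit(disp=0)
    return (fit.params[2] / fit.bse[2]) ** 2

def iterative_dif(responses, group, crit=3.84, max_iter=10):
    """Iterative purification: test every item, rebuild the anchor
    (the rest score) from items not currently flagged, and repeat
    until the flagged set stabilizes."""
    n_items = responses.shape[1]
    flagged = set()
    for _ in range(max_iter):
        anchors = [j for j in range(n_items) if j not in flagged]
        new_flags = set()
        for j in range(n_items):
            others = [a for a in anchors if a != j]
            if not others:            # degenerate case: keep prior flags
                continue
            rest = responses[:, others].sum(axis=1)
            if wald_dif_stat(responses[:, j], rest, group) > crit:
                new_flags.add(j)
        if new_flags == flagged:
            break
        flagged = new_flags
    return sorted(flagged)
```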


1996 ◽  
Vol 123 (4) ◽  
pp. 333-339 ◽  
Author(s):  
William P. Dunlap ◽  
Tammy Greer ◽  
Gregory O. Beatty


2020 ◽  
Vol 18 (1) ◽  
pp. 2-11
Author(s):  
Michael Harwell

Two common outcomes of Monte Carlo studies in statistics are bias and the Type I error rate. Several versions of bias statistics exist, but all employ arbitrary cutoffs for deciding when bias is ignorable or non-ignorable. This article argues that Type I error rates should be used instead when assessing bias.
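A toy Monte Carlo makes the argument tangible. The sketch below (Python, assuming numpy and scipy) reports both the bias of a variance estimator and the Type I error rate of a test that uses it: the error rate can be judged directly against the nominal level, whereas the bias figure still needs an arbitrary cutoff. All simulation settings are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, reps, alpha, mu0 = 10, 20000, 0.05, 0.0

bias_sum = 0.0
rejections = 0
for _ in range(reps):
    x = rng.normal(mu0, 1.0, n)
    v_mle = x.var()                  # ML variance estimate (divides by n)
    bias_sum += v_mle - 1.0          # true variance is 1
    # A t-type test that plugs in the biased variance estimate:
    t = (x.mean() - mu0) / np.sqrt(v_mle / n)
    if abs(t) > stats.t.ppf(1 - alpha / 2, df=n - 1):
        rejections += 1

print(f"mean bias of variance estimate: {bias_sum / reps:+.3f}")   # ~ -1/n
print(f"empirical Type I error rate:    {rejections / reps:.3f}")  # > 0.05
```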


1981 ◽  
Vol 49 (3) ◽  
pp. 931-934
Author(s):  
James D. Church ◽  
Edward L. Wike

A Monte Carlo study was done to find the Type I error rates for three nonparametric procedures for making k − 1 many-one comparisons in a two-way design. The tests were the Silverstein and Steel many-one tests and the two-sample step-down sign test. k = 3, 5, 7, and 10 treatments were crossed with n = 8, 11, and 15 blocks, with 1000 simulations per k, n combination. The Silverstein test had the best experimentwise error rates and is recommended for many-one comparisons in a two-way design.
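A plain (non-step-down) many-one sign test is simple to sketch for a blocked layout. The Python example below (assuming numpy and scipy) compares each treatment with the control within blocks; it illustrates the comparisonwise test only, not the step-down procedure studied here.

```python
import numpy as np
from scipy import stats

def many_one_sign_tests(data, alpha=0.05):
    """Many-one sign tests in a randomized-block (two-way) layout.
    `data` has shape (n_blocks, k); column 0 is the control.  Each
    treatment is compared with the control within blocks using the
    plain two-sided sign test."""
    control = data[:, 0]
    results = []
    for j in range(1, data.shape[1]):
        diffs = data[:, j] - control
        diffs = diffs[diffs != 0]     # drop tied blocks, as sign tests do
        wins = int((diffs > 0).sum())
        p = stats.binomtest(wins, len(diffs)).pvalue
        results.append((j, wins, len(diffs), p, p < alpha))
    return results
```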

