On Some Test Statistics for Testing the Regression Coefficients in Presence of Multicollinearity: A Simulation Study

Stats ◽  
2020 ◽  
Vol 3 (1) ◽  
pp. 40-55 ◽  
Author(s):  
Sergio Perez-Melo ◽  
B. M. Golam Kibria

Ridge regression is a popular method for addressing the multicollinearity problem in both linear and non-linear regression models. This paper studied forty different ridge regression t-type tests of the individual coefficients of a linear regression model. A simulation study was conducted to evaluate the performance of the proposed tests with respect to their empirical sizes and powers under different settings. Our simulation results demonstrated that many of the proposed tests have Type I error rates close to the 5% nominal level and that, among those, all tests except one achieve considerable gains in power over the standard ordinary least squares (OLS) t-type test. Seven tests based on certain ridge estimators performed better than the rest, achieving higher power gains while maintaining the 5% nominal size.
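As an illustration of the kind of comparison the paper makes, the following sketch contrasts an OLS t-statistic with one plug-in ridge t-type statistic for a single coefficient under strong collinearity. It is a minimal example with an arbitrary ridge constant k = 1 and invented data, not one of the paper's forty test variants:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 50, 3, 1.0               # sample size, predictors, assumed ridge constant

# Two nearly collinear predictors plus one independent predictor
z = rng.normal(size=(n, 1))
X = np.hstack([z + 0.05 * rng.normal(size=(n, 1)),
               z + 0.05 * rng.normal(size=(n, 1)),
               rng.normal(size=(n, 1))])
y = X @ np.array([1.0, 0.0, 0.5]) + rng.normal(size=n)

XtX = X.T @ X
# OLS estimate and t-statistic for the second coefficient (true value 0)
b_ols = np.linalg.solve(XtX, X.T @ y)
resid = y - X @ b_ols
s2 = resid @ resid / (n - p)
cov_ols = s2 * np.linalg.inv(XtX)
t_ols = b_ols[1] / np.sqrt(cov_ols[1, 1])

# Ridge estimate b(k) = (X'X + kI)^(-1) X'y and a plug-in t-type statistic
A = np.linalg.inv(XtX + k * np.eye(p))
b_ridge = A @ X.T @ y
cov_ridge = s2 * A @ XtX @ A       # first-order ridge covariance approximation
t_ridge = b_ridge[1] / np.sqrt(cov_ridge[1, 1])

print(t_ols, t_ridge)
```

Under collinearity the OLS variance of the affected coefficient explodes, while the ridge variance stays small; this is the mechanism behind the power gains reported above.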

2014 ◽  
Vol 53 (05) ◽  
pp. 343-343

We have to report marginal changes in the empirical Type I error rates for the cut-offs 2/3 and 4/7 in Tables 4, 5 and 6 of the paper “Influence of Selection Bias on the Test Decision – A Simulation Study” by M. Tamm, E. Cramer, L. N. Kennes, N. Heussen (Methods Inf Med 2012; 51: 138–143). In a small number of cases, the internal representation of numeric values in SAS led to wrong categorization, caused by numeric representation error in computed differences. We corrected the simulation by applying the round function of SAS in the calculation process, using the same seeds as before. For Table 4, the value for the cut-off 2/3 changes from 0.180323 to 0.153494. For Table 5, the value for the cut-off 4/7 changes from 0.144729 to 0.139626, and the value for the cut-off 2/3 changes from 0.114885 to 0.101773. For Table 6, the value for the cut-off 4/7 changes from 0.125528 to 0.122144, and the value for the cut-off 2/3 changes from 0.099488 to 0.090828. The sentence on p. 141, “E.g. for block size 4 and q = 2/3 the type I error rate is 18% (Table 4).”, has to be replaced by “E.g. for block size 4 and q = 2/3 the type I error rate is 15.3% (Table 4).”. All changes are minor, smaller than 0.03, and do not affect the interpretation of the results or our recommendations.
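The underlying numeric issue is easy to reproduce outside SAS. In IEEE 754 double precision, a computed difference that is mathematically equal to a cut-off such as 2/3 can compare as strictly greater; rounding both sides before comparison, as the correction does with SAS's round function, removes the artifact. A Python analogue (illustrative, not the original SAS code):

```python
q = 2 / 3
value = 1 - 1 / 3                         # mathematically exactly 2/3
naive = value > q                         # True: floating-point representation error
fixed = round(value, 10) > round(q, 10)   # False once both sides are rounded
print(naive, fixed)
```

Here `value` and `q` differ in their last binary digit even though they are equal as real numbers, which is exactly the kind of spurious strict inequality that shifted a handful of simulated cases across the cut-offs.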


2017 ◽  
Vol 41 (4) ◽  
pp. 243-263 ◽  
Author(s):  
Xi Wang ◽  
Yang Liu ◽  
Ronald K. Hambleton

Repeatedly using items in high-stakes testing programs gives test takers a chance to gain knowledge of particular items before a test administration. A predictive checking method is proposed to detect whether a person used preknowledge on repeatedly used items (i.e., possibly compromised items) by drawing on information from secure items that have zero or very low exposure rates. Responses on the secure items are first used to estimate the person’s proficiency distribution, and the corresponding predictive distribution for the person’s responses on the possibly compromised items is then constructed. Use of preknowledge is identified by comparing the observed responses to this predictive distribution. Different estimation methods for obtaining a person’s proficiency distribution and different choices of test statistic for predictive checking are considered. A simulation study was conducted to evaluate the empirical Type I error rate and power of the proposed method. The simulation results suggested that the Type I error of the method is well controlled and that the method is effective in detecting preknowledge when a large proportion of items are compromised, even with a short secure section. An empirical example is also presented to demonstrate its practical use.
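A minimal sketch of this predictive checking workflow, assuming a Rasch model, hypothetical item difficulties, a grid-approximated posterior, and the number-correct score as the test statistic (the paper considers several estimation methods and test statistics; none of these choices is claimed to be the authors' exact setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def p_correct(theta, b):
    # Rasch probability of a correct response (illustrative model choice)
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical difficulties: 10 secure items, 20 possibly compromised items
b_secure = rng.normal(size=10)
b_comp = rng.normal(size=20)

# Observed responses: moderate performance on the secure items, but every
# possibly compromised item answered correctly (the preknowledge signature)
resp_secure = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
resp_comp = np.ones(20, dtype=int)

# 1) Posterior for proficiency from the secure items (grid approximation, N(0,1) prior)
grid = np.linspace(-4, 4, 161)
prior = np.exp(-0.5 * grid ** 2)
p = p_correct(grid[:, None], b_secure)
like = np.prod(p ** resp_secure * (1 - p) ** (1 - resp_secure), axis=1)
post = prior * like
post /= post.sum()

# 2) Predictive distribution of the number-correct score on the compromised items
draws = rng.choice(grid, size=5000, p=post)
pred_scores = rng.binomial(1, p_correct(draws[:, None], b_comp)).sum(axis=1)

# 3) Flag preknowledge if the observed score is extreme under that distribution
obs = resp_comp.sum()
p_value = (pred_scores >= obs).mean()
print(obs, p_value, p_value < 0.05)
```

A perfect score on the compromised items is far out in the tail of the predictive distribution implied by the examinee's middling secure-item performance, so the check flags preknowledge.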


2019 ◽  
Vol 80 (3) ◽  
pp. 548-577
Author(s):  
William M. Murrah

Multiple regression is often used to compare the importance of two or more predictors. When the predictors being compared are measured with error, the estimated coefficients can be biased and Type I error rates can be inflated. This study examines the impact of measurement error on comparisons of predictors when one predictor is measured with error, and presents a simulation study to quantify the bias and Type I error rates in common research situations. Two methods for adjusting for measurement error are demonstrated using a real data example. This study adds to the literature documenting the impact of measurement error on regression modeling, identifies issues particular to the use of multiple regression for comparing predictors, and offers recommendations for researchers conducting such studies.
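The mechanism at issue can be sketched in a small simulation: with two correlated predictors of equal true effect, adding measurement error to one biases its coefficient toward zero and inflates the coefficient of its error-free rival, so a naive comparison would wrongly rank the predictors. All parameter values below are illustrative, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 500, 200
b1_hat = np.empty(reps)
b2_hat = np.empty(reps)

for r in range(reps):
    # Two correlated true predictors with equal effects of 0.5 (assumed setup)
    x1 = rng.normal(size=n)
    x2 = 0.5 * x1 + np.sqrt(0.75) * rng.normal(size=n)
    y = 0.5 * x1 + 0.5 * x2 + rng.normal(size=n)
    x1_obs = x1 + rng.normal(scale=0.7, size=n)   # only x1 is observed with error
    X = np.column_stack([np.ones(n), x1_obs, x2])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    b1_hat[r], b2_hat[r] = b[1], b[2]

m1, m2 = b1_hat.mean(), b2_hat.mean()
print(m1, m2)   # m1 attenuated below 0.5, m2 inflated above 0.5
```

The attenuation of `m1` and the upward spillover into `m2` follow directly from the classical errors-in-variables algebra for correlated regressors.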


2020 ◽  
Author(s):  
Georgia Ntani ◽  
Hazel Inskip ◽  
Clive Osmond ◽  
David Coggon

Abstract

Background: Clustering of observations is a common phenomenon in epidemiological and clinical research. Previous studies have highlighted the importance of using multilevel analysis to account for such clustering, but in practice methods that ignore clustering are often used. We used simulated data to explore the circumstances in which failure to account for clustering in linear regression analysis could lead to importantly erroneous conclusions.

Methods: We simulated data following the random-intercept model specification under different scenarios of clustering of a continuous outcome and a single continuous or binary explanatory variable. We fitted random-intercept (RI) and cluster-unadjusted ordinary least squares (OLS) models and compared the derived estimates of effect, as quantified by regression coefficients, and their estimated precision. We also assessed the extent to which coverage by 95% confidence intervals and rates of Type I error were appropriate.

Results: We found that effects estimated from OLS linear regression models that ignored clustering were on average unbiased. The precision of effect estimates from the OLS model was overestimated when both the outcome and the explanatory variable were continuous. By contrast, with a binary explanatory variable, the precision of effects was in most circumstances somewhat underestimated by the OLS model. The magnitude of bias, both in point estimates and in their precision, increased with greater clustering of the outcome variable and was also influenced by the amount of clustering in the explanatory variable. The cluster-unadjusted model produced poor coverage by 95% confidence intervals and high rates of Type I error, especially when the explanatory variable was continuous.

Conclusions: We identified situations in which ignoring clustering in an OLS regression model is likely to distort statistical inference, namely when the explanatory variable is continuous and its intraclass correlation coefficient exceeds 0.01. We also identified situations in which statistical inference is less likely to be affected.
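The headline result, inflated Type I error when OLS ignores clustering of both a continuous outcome and a continuous explanatory variable, can be sketched as follows. The cluster sizes and variance components are hypothetical choices for illustration, not the paper's simulation scenarios:

```python
import numpy as np

rng = np.random.default_rng(3)
n_clusters, m, reps = 30, 20, 500
n = n_clusters * m
cid = np.repeat(np.arange(n_clusters), m)
rejections = 0

for _ in range(reps):
    # Cluster random intercepts in both outcome and predictor (ICC ~ 0.5 each)
    u_y = rng.normal(size=n_clusters)
    u_x = rng.normal(size=n_clusters)
    x = u_x[cid] + rng.normal(size=n)
    y = u_y[cid] + rng.normal(size=n)      # true effect of x is zero
    # Cluster-unadjusted OLS slope and its naive t-test
    xc = x - x.mean()
    yc = y - y.mean()
    b = (xc @ yc) / (xc @ xc)
    resid = yc - b * xc
    se = np.sqrt(resid @ resid / (n - 2) / (xc @ xc))
    rejections += abs(b / se) > 1.96

type1 = rejections / reps
print(type1)    # well above the nominal 5% when clustering is ignored
```

With strong clustering in both variables, the naive standard error understates the true sampling variability of the slope, so the nominal 5% test rejects the true null far too often.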


2019 ◽  
Vol 16 (1) ◽  
Author(s):  
Chioneso Marange ◽  
Yongsong Qin

The application of goodness-of-fit (GoF) tests in linear regression modeling is common practice in the applied statistical sciences. In simple linear regression, for instance, the assumption of normality of the residuals must be tested before any further inferences are made. The growing popularity of powerful and efficient empirical likelihood ratio (ELR) based GoF tests for detecting departures from normality in various continuous distributions makes them well suited to checking the distributional assumptions of residuals in linear models. Motivated by the attractive properties of ELR-based GoF tests, the researchers conducted an extensive assessment of Type I error rates as well as a Monte Carlo power comparison of selected ELR GoF tests against well-known existing tests, under symmetric and asymmetric alternatives, for both OLS and BLUS residuals. Under the simulated scenarios, all of the studied tests had good control of Type I error rates. The Monte Carlo experiments revealed the superiority of the ELR GoF tests under certain alternatives for both OLS and BLUS residuals. Our findings also demonstrated the superiority of OLS over BLUS residuals when testing for normality in simple linear regression models. A real-data study further illustrated the applicability of the ELR-based GoF tests for testing the normality of residuals in linear regression models.
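The general workflow of testing residual normality can be sketched as follows. As a stand-in for the ELR-based GoF tests studied here, this example uses the familiar Shapiro-Wilk test on OLS residuals from a fit with deliberately heavy-tailed errors; the workflow (fit, extract residuals, test) is the same, and all parameter choices are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 100
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.standard_t(df=3, size=n)   # heavy-tailed errors

# Fit simple linear regression by OLS and extract residuals
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Stand-in GoF test on the residuals (Shapiro-Wilk, not the ELR tests)
stat, pval = stats.shapiro(resid)
print(stat, pval)     # a small p-value suggests non-normal residuals
```

An ELR-based test would slot in at the final step in place of `stats.shapiro`; the paper's comparison concerns exactly which test to use there.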


Methodology ◽  
2015 ◽  
Vol 11 (1) ◽  
pp. 3-12 ◽  
Author(s):  
Jochen Ranger ◽  
Jörg-Tobias Kuhn

In this manuscript, a new approach to the analysis of person fit is presented that is based on the information matrix test of White (1982). This test can be interpreted as a test of trait stability during the measurement situation. The test statistic approximately follows a χ²-distribution; in small samples, the approximation can be improved by a higher-order expansion. The performance of the test is explored in a simulation study, which suggests that the test adheres well to the nominal Type I error rate, although it tends to be conservative on very short scales. The power of the test is compared to that of four alternative tests of person fit; this comparison corroborates that the power of the information matrix test is similar to that of the alternatives. Advantages and areas of application of the information matrix test are discussed.
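White's information matrix test rests on the identity that, for a correctly specified model, the expected Hessian of the log-likelihood equals minus the expected outer product of the score; the test checks whether the two sample estimates agree. A minimal numeric check of that identity for a normal model with known variance (illustrative only; the manuscript applies the idea per person within an item response model):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(loc=1.0, scale=1.0, size=100_000)
mu_hat = x.mean()                  # MLE of the mean, sigma known and equal to 1

score = x - mu_hat                 # per-observation score: d/dmu log f = x - mu
hessian = -np.ones_like(x)         # per-observation Hessian: d2/dmu2 log f = -1

# Information matrix equality: E[Hessian] + E[score^2] = 0 under correct specification
d = hessian.mean() + (score ** 2).mean()
print(d)                           # close to zero for this correctly specified model
```

A systematic departure of `d` from zero signals misspecification; in the person-fit setting, instability of the trait during testing plays the role of the misspecification being detected.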

