1960 ◽  
Vol 106 (445) ◽  
pp. 1486-1492 ◽  
Author(s):  
James Inglis ◽  
Catherine Colwell ◽  
Felix Post

Few studies of psychological tests which have been shown to differentiate between psychiatric groups have attempted to follow up the patients originally tested so as to determine what the predictive power of the test might be in the light of subsequent events. One of these few studies has been reported by Walton (1958) and commented upon by Inglis (1959), who was able to show that the test Walton described was better able to predict the outcome of illness after a follow-up period of two years than was the original diagnostic label. The importance of this kind of study has been pointed out most clearly by Payne (1958). His argument is that, “Description is only one implication of the diagnostic label. Can the test score aid the doctor in making a prognosis? This need not be the case. Let us consider the original validation of the test again. The doctor who diagnosed the standardization group of patients might well be able to give a more or less accurate prognosis for this group of patients. In fact there might be a significant relationship between the presence or absence of the label he assigns, and prognosis. We also know that there is a significant correlation between presence or absence of his label and the test scores. This does not prove, however, that there is any relationship whatsoever between the diagnostic test score and prognosis. Two things which correlate with the same thing do not necessarily correlate with each other, unless the correlations concerned are greater than ·7” (1958, p. 27).
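Payne's “·7” threshold reflects a standard fact about correlation matrices: if A and B each correlate r with a common variable C, their mutual correlation is only forced to be positive when r exceeds 1/√2 ≈ 0.707. A minimal sketch illustrating the bound (the function name is illustrative, not from the source) follows from the positive semi-definiteness of the 3×3 correlation matrix:

```python
import math

def min_corr_ab(r_ac, r_bc):
    """Smallest corr(A, B) compatible with corr(A, C) = r_ac and
    corr(B, C) = r_bc, from positive semi-definiteness of the
    3x3 correlation matrix:
        r_ab >= r_ac * r_bc - sqrt((1 - r_ac**2) * (1 - r_bc**2))
    """
    return r_ac * r_bc - math.sqrt((1 - r_ac ** 2) * (1 - r_bc ** 2))

# Two correlations of 0.7 with the same third variable still permit a
# (slightly) negative correlation between A and B:
min_corr_ab(0.7, 0.7)  # ≈ -0.02
# Once both correlations exceed 1/sqrt(2), corr(A, B) must be positive:
min_corr_ab(0.8, 0.8)  # 0.64 - 0.36 = 0.28
```

So a test that correlates with diagnosis, and a diagnosis that correlates with prognosis, together guarantee nothing about a test–prognosis correlation unless both correlations clear this bound, which is the point of Payne's argument.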


1977 ◽  
Vol 31 (2) ◽  
pp. 137-140 ◽  
Author(s):  
S D Walter

1992 ◽  
Vol 2 (1) ◽  
pp. 25-42 ◽  
Author(s):  
Jeffrey M. Voas ◽  
Keith W. Miller

1992 ◽  
Vol 17 (4) ◽  
pp. 297-313 ◽  
Author(s):  
Michael R. Harwell

Monte Carlo studies provide information that can assist researchers in selecting a statistical test when underlying assumptions of the test are violated. Effective use of this literature is hampered by the lack of an overarching theory to guide the interpretation of Monte Carlo studies. The problem is exacerbated by the impressionistic nature of the studies, which can lead different readers to different conclusions. These shortcomings can be addressed using meta-analytic methods to integrate the results of Monte Carlo studies. Quantitative summaries of the effects of assumption violations on the Type I error rate and power of a test can assist researchers in selecting the best test for their data. Such summaries can also be used to evaluate the validity of previously published statistical results. This article provides a methodological framework for quantitatively integrating Type I error rates and power values for Monte Carlo studies. An example is provided using Monte Carlo studies of Bartlett’s (1937) test of equality of variances. The importance of relating meta-analytic results to exact statistical theory is emphasized.
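The kind of Monte Carlo study the abstract describes can be sketched in a few lines: draw all groups from a single distribution (so the null of equal variances holds by construction), apply Bartlett's test repeatedly, and record how often it rejects. The sample sizes, replication count, and comparison distribution below are illustrative choices, not values from the article; the expected pattern, that Bartlett's test holds its nominal level under normality but becomes liberal under skewed data, is well established in this literature.

```python
import numpy as np
from scipy import stats

def bartlett_type1_rate(dist_sampler, k=3, n=20, alpha=0.05,
                        reps=2000, seed=0):
    """Estimate the Type I error rate of Bartlett's test by Monte Carlo.

    All k groups of size n are drawn from the same distribution, so the
    variances are equal and every rejection at level alpha is a Type I error.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        groups = [dist_sampler(rng, n) for _ in range(k)]
        _, p = stats.bartlett(*groups)
        rejections += p < alpha
    return rejections / reps

# Under normality the empirical rate should sit near the nominal 0.05;
# under a skewed distribution (exponential) Bartlett's test is known to
# reject far too often, inflating the Type I error rate.
normal_rate = bartlett_type1_rate(lambda rng, n: rng.normal(size=n))
exp_rate = bartlett_type1_rate(lambda rng, n: rng.exponential(size=n))
```

A meta-analysis of the kind proposed would pool many such empirical rejection rates, estimated across studies that vary n, k, and the parent distribution, rather than rely on impressionistic reading of individual simulations.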

