Reliability of Validity Generalization Data Bases
This paper addresses the usefulness of reporting coder reliability in validity generalization studies. The Principles for the Validation and Use of Personnel Selection Instruments of the Society for Industrial and Organizational Psychology state that, given the results of meta-analytic studies, validities generalize far more than previously believed; however, users of validity generalization results are required to report the reliability of the data entering validity generalization analyses. In response to this concern, reliability coefficients were computed on the validities and sample sizes recorded by two studies (i.e., data bases) of the Wonderlic Personnel Test and the Otis Test of General Mental Ability. Validity and sample size were investigated because they are the crucial components in validity generalization analysis. Results indicated that the correlation between the validities of the two studies was .99 and the correlation between the sample sizes of the two studies was 1.00. To further illustrate the reliability of coding in validity generalization research, separate meta-analyses were conducted on the validity of these tests in each of the two data bases. When correcting only for sampling error, the separate meta-analyses yielded identical results, M = .24, SD = .09. These results show that concerns about the reliability of validity generalization data bases are unwarranted and that independent investigators coding the same data record the same values and obtain the same results.
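The meta-analysis described above, correcting only for sampling error, can be sketched as a bare-bones Hunter-Schmidt procedure: compute the sample-size-weighted mean validity, then subtract the expected sampling-error variance from the observed variance to estimate the residual SD. The function name and the validity/sample-size values below are hypothetical illustrations, not the Wonderlic or Otis data from the paper.

```python
def bare_bones_meta(rs, ns):
    """Bare-bones meta-analysis correcting only for sampling error.

    rs: observed validity coefficients from the coded studies.
    ns: corresponding sample sizes.
    Returns (weighted mean validity, residual SD).
    """
    total_n = sum(ns)
    # Sample-size-weighted mean observed validity.
    mean_r = sum(n * r for r, n in zip(rs, ns)) / total_n
    # Sample-size-weighted observed variance of validities.
    var_obs = sum(n * (r - mean_r) ** 2 for r, n in zip(rs, ns)) / total_n
    # Expected sampling-error variance (Hunter & Schmidt formula),
    # using the average sample size across studies.
    mean_n = total_n / len(ns)
    var_e = (1 - mean_r ** 2) ** 2 / (mean_n - 1)
    # Residual variance attributable to real variation in validity.
    var_res = max(var_obs - var_e, 0.0)
    return mean_r, var_res ** 0.5

# Hypothetical coded studies: observed validities and sample sizes.
rs = [0.05, 0.45, 0.20, 0.40, 0.10]
ns = [60, 120, 85, 200, 45]
mean_r, sd_res = bare_bones_meta(rs, ns)
# roughly mean_r = 0.31, sd_res = 0.12 for these illustrative data
```

Because two coders recording identical validities and sample sizes feed identical `rs` and `ns` into this computation, identical meta-analytic results (as the paper reports, M = .24, SD = .09) follow directly.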