Reliability of Validity Generalization Data Bases

1988 ◽  
Vol 63 (1) ◽  
pp. 131-134 ◽  
Author(s):  
Deborah L. Whetzel ◽  
Michael A. McDaniel

This paper addresses the usefulness of reporting coder reliability in validity generalization studies. The Principles for the Validation and Use of Personnel Selection Instruments of the Society for Industrial and Organizational Psychology state that, given the results of meta-analytic studies, validities generalize far more than previously believed; nevertheless, users of validity generalization results are required to report the reliability of the data entering validity generalization analyses. In response to this concern, reliability coefficients were computed for the validities and sample sizes recorded in two independent studies (i.e., data bases) covering the Wonderlic Personnel Test and the Otis Test of General Mental Ability. Validity and sample size were examined because they are the crucial inputs to a validity generalization analysis. The correlation between the validities coded in the two data bases was .99, and the correlation between the coded sample sizes was 1.00. To illustrate further the reliability of coding in validity generalization research, separate meta-analyses were conducted on the validity of these tests in each of the two data bases. When correcting only for sampling error, the separate meta-analyses yielded identical results, M = .24, SD = .09. These results suggest that concerns about the reliability of validity generalization data bases are unwarranted: independent investigators coding the same data record the same values and obtain the same results.
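The two computations described above, a coder-reliability correlation between independently coded values and a sampling-error-only ("bare-bones") meta-analysis, can be sketched as follows. This is a minimal illustration in the Hunter-Schmidt spirit; the validity and sample-size values used in the usage example are illustrative placeholders, not the actual coded Wonderlic or Otis data.

```python
def pearson(xs, ys):
    """Pearson correlation between two coders' recorded values
    (e.g., validities coded from the same primary studies)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


def bare_bones_meta(rs, ns):
    """Meta-analysis correcting only for sampling error: returns the
    sample-size-weighted mean validity and the residual SD after
    subtracting expected sampling-error variance."""
    total_n = sum(ns)
    mean_r = sum(r * n for r, n in zip(rs, ns)) / total_n
    # N-weighted observed variance of the validities
    var_obs = sum(n * (r - mean_r) ** 2 for r, n in zip(rs, ns)) / total_n
    # Expected variance due to sampling error alone
    avg_n = total_n / len(rs)
    var_err = (1 - mean_r ** 2) ** 2 / (avg_n - 1)
    var_res = max(var_obs - var_err, 0.0)
    return mean_r, var_res ** 0.5
```

With identical coded inputs, `bare_bones_meta` necessarily returns identical results for both data bases, which is the point the abstract makes empirically.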

2014 ◽  
Vol 13 (3) ◽  
pp. 123-133 ◽  
Author(s):  
Wiebke Goertz ◽  
Ute R. Hülsheger ◽  
Günter W. Maier

General mental ability (GMA) has long been considered one of the best predictors of training success and considerably better than specific cognitive abilities (SCAs). Recently, however, researchers have provided evidence that SCAs may be of similar importance for training success, a finding supporting personnel selection based on job-related requirements. The present meta-analysis therefore seeks to assess validities of SCAs for training success in various occupations in a sample of German primary studies. Our meta-analysis (k = 72) revealed operational validities between ρ = .18 and ρ = .26 for different SCAs. Furthermore, results varied by occupational category, supporting a job-specific benefit of SCAs.
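Operational validities (ρ) of the kind reported above are conventionally observed mean validities corrected for measurement error in the criterion only, leaving the predictor uncorrected. A minimal sketch of that correction; the reliability value in the test below is an assumed illustration, not a figure from the study:

```python
def operational_validity(mean_r, criterion_reliability):
    """Correct an observed mean validity for criterion unreliability only.
    Leaving the predictor uncorrected is what makes the resulting
    estimate 'operational' rather than a fully corrected true-score rho."""
    if not 0 < criterion_reliability <= 1:
        raise ValueError("criterion reliability must be in (0, 1]")
    return mean_r / criterion_reliability ** 0.5
```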


2017 ◽  
Vol 10 (3) ◽  
pp. 485-488
Author(s):  
Ernest H. O'Boyle

Tett, Hundley, and Christiansen (2017) make a compelling case against meta-analyses that focus on mean effect sizes (e.g., rxy and ρ) while largely disregarding the precision of the estimate and true score variance. This is a reasonable point, but meta-analyses that myopically focus on mean effects at the expense of variance are not examples of validity generalization (VG)—they are examples of bad meta-analyses. VG and situational specificity (SS) fall along a continuum, and claims about generalization are confined to the research question and the type of generalization one is seeking (e.g., directional generalization, magnitude generalization). What Tett et al. (2017) successfully debunk is an extreme position along the generalization continuum significantly beyond the tenets of VG that few, if any, in the research community hold. The position they argue against is essentially a fixed-effects assumption, which runs counter to VG. Describing VG in this way is akin to describing SS as a position that completely ignores sampling error and treats every between-sample difference in effect size as true score variance. Both are strawmen that were knocked down decades ago (Schmidt et al., 1985). There is great value in debating whether a researcher should or can argue for generalization, but this debate must start with (a) an accurate portrayal of VG, (b) a discussion of different forms of generalization, and (c) the costs of trying to establish universal thresholds for VG.


2016 ◽  
Vol 15 (2) ◽  
pp. 45-54 ◽  
Author(s):  
Jason G. Randall ◽  
Anton J. Villado ◽  
Christina U. Zimmer

The purpose of this study was to test for race and sex differences in general mental ability (GMA) retest performance and to identify the psychological mechanisms underlying these differences. An initial and retest administration of a GMA assessment separated by a six-week span was completed by 318 participants. Contrary to our predictions, we found that race, sex, and emotional stability failed to moderate GMA retest performance. However, GMA assessed via another ability test and conscientiousness both partially explained retest performance. Additionally, we found that retesting may reduce adverse impact ratios by lowering the hiring threshold. Ultimately, our findings reinforce the need for organizations to consider race, sex, ability, and personality when implementing retesting procedures.


2021 ◽  
Vol 9 (1) ◽  
pp. 8
Author(s):  
Christopher J. Schmank ◽  
Sara Anne Goring ◽  
Kristof Kovacs ◽  
Andrew R. A. Conway

In a recent publication in the Journal of Intelligence, Dennis McFarland mischaracterized previous research using latent variable and psychometric network modeling to investigate the structure of intelligence. Misconceptions presented by McFarland are identified and discussed. We reiterate and clarify the goal of our previous research on network models, which is to improve compatibility between psychological theories and statistical models of intelligence. WAIS-IV data provided by McFarland were reanalyzed using latent variable and psychometric network modeling. The results are consistent with our previous study and show that a latent variable model and a network model both provide an adequate fit to the WAIS-IV. We therefore argue that model preference should be determined by theory compatibility. Theories of intelligence that posit a general mental ability (general intelligence) are compatible with latent variable models. More recent approaches, such as mutualism and process overlap theory, reject the notion of general mental ability and are therefore more compatible with network models, which depict the structure of intelligence as an interconnected network of cognitive processes sampled by a battery of tests. We emphasize the importance of compatibility between theories and models in scientific research on intelligence.


1967 ◽  
Vol 20 (2) ◽  
pp. 488-490 ◽  
Author(s):  
Arden Grotelueschen ◽  
Thomas J. Lyons

Quick Word Test (QWT) total and part scores for 178 adults were correlated with WAIS IQ scores. Pearson rs of .77 and .74 were found between total QWT scores and WAIS verbal and total IQ scores, respectively. Data indicate that the QWT appears to be a valid measure of general mental ability.


2011 ◽  
Vol 34 (6) ◽  
pp. 900-918 ◽  
Author(s):  
Jannica Stålnacke ◽  
Ann-Charlotte Smedler

In Sweden, the special needs of high-ability individuals have received little attention. To address this, adult Swedes with superior general mental ability (GMA; N = 302), defined by an IQ score > 130 on tests of abstract reasoning, answered a questionnaire regarding their views of themselves and their giftedness. The participants also rated their self-theory of intelligence and completed the Sense of Coherence Scale (SOC-13). By and large, the participants experienced being different but felt little need to downplay their giftedness to gain social acceptance. Most participants endorsed an entity self-theory of intelligence, while also recognizing that it takes effort to develop one's ability. The group scored lower (p < .001) than Swedes in general on the SOC, which may reflect social difficulties associated with being gifted in an egalitarian society. However, it may also indicate that the SOC carries a different meaning for those with superior GMA.


2017 ◽  
Vol 156 (6) ◽  
pp. 1011-1017 ◽  
Author(s):  
Sarah N. Bowe ◽  
Adrienne M. Laury ◽  
Stacey T. Gray

Objective: This systematic review aims to evaluate which applicant characteristics available to an otolaryngology selection committee are associated with future performance in residency or practice.

Data Sources: PubMed, Scopus, ERIC, Health Business, Psychology and Behavioral Sciences Collection, and SocINDEX.

Review Methods: Study eligibility screening was performed by 2 independent investigators in accordance with the PRISMA protocol (Preferred Reporting Items for Systematic Reviews and Meta-analyses). Data obtained from each article included research questions, study design, predictors, outcomes, statistical analysis, and results/findings. Study bias was assessed with the Quality in Prognosis Studies tool.

Results: The initial search identified 439 abstracts. Six articles fulfilled all inclusion and exclusion criteria. All studies were retrospective cohort studies (level 4). Overall, the studies yielded relatively few criteria that correlated with residency success, with generally conflicting results. Most studies were found to have a high risk of bias.

Conclusion: Previous resident selection research has lacked a theoretical background, predisposing this work to inconsistent results and a high risk of bias. The included studies provide historical insight into the predictors and criteria (eg, outcomes) previously deemed pertinent by the otolaryngology field. Additional research is needed, possibly integrating aspects of personnel selection, to engage in an evidence-based approach to identify highly qualified candidates who will succeed as future otolaryngologists.


2005 ◽  
Vol 14 (3) ◽  
pp. 285-292 ◽  
Author(s):  
Robert Bodizs ◽  
Tamas Kis ◽  
Alpar Sandor Lazar ◽  
Linda Havran ◽  
Peter Rigo ◽  
...  
