validity generalization
Recently Published Documents


TOTAL DOCUMENTS: 148 (five years: 7)

H-INDEX: 25 (five years: 1)

2020, pp. 129-143
Author(s): Kenneth S. Shultz, David J. Whitney, Michael J. Zickar

2020, Vol 10 (2), pp. 70-81
Author(s): Khalid ALMamari, Anne Traynor

Abstract. The Air Force Officer Qualifying Test (AFOQT) has been the primary selection test battery for officer candidates in the US Air Force since 1953. Despite a wealth of literature on the validity of the AFOQT in predicting pilot performance, there is less evidence on its validity generalization. This study investigated the predictive validity of 16 AFOQT subtests and the Pilot composite via psychometric meta-analytic procedures. Based on 32 independent samples from 26 studies, results indicated that pilot performance is best predicted by subtests indicative of the perceptual speed, aviation-related aptitude and knowledge, and quantitative ability constructs, and least predicted by subtests indicative of the verbal ability construct. Evidence for validity generalization of AFOQT subtests is presented, and implications for practical use are discussed.
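The "psychometric meta-analytic procedures" the abstract refers to typically begin with a Hunter–Schmidt "bare-bones" step: compute a sample-size-weighted mean validity, then subtract the variance expected from sampling error alone. A minimal sketch, with invented correlations and sample sizes (not the study's data):

```python
# Hedged sketch of a bare-bones Hunter-Schmidt meta-analysis step.
# Input values are illustrative, not taken from the AFOQT study.

def bare_bones_meta(rs, ns):
    """Return (weighted mean r, observed variance, expected sampling-error
    variance, residual variance attributed to true validity differences)."""
    k = len(rs)
    N = sum(ns)
    # Sample-size-weighted mean validity
    r_bar = sum(r * n for r, n in zip(rs, ns)) / N
    # Sample-size-weighted observed variance of validities
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / N
    # Expected variance from sampling error alone, using the average n
    n_bar = N / k
    var_err = (1 - r_bar ** 2) ** 2 / (n_bar - 1)
    # Residual ("true") variance; floored at zero
    var_rho = max(var_obs - var_err, 0.0)
    return r_bar, var_obs, var_err, var_rho
```

If the residual variance is near zero, observed differences across samples are attributable to sampling error, which is the core of a validity-generalization argument. (Full psychometric meta-analysis additionally corrects for range restriction and criterion unreliability, omitted here.)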


2020, pp. 153450842092658
Author(s): Jeremy R. Sullivan, Victor Villarreal, Evette Flores, Alyssa Gomez, Blaire Warren

This article documents the results of a meta-analysis of available correlational validity evidence for the Social Skills Improvement System Performance Screening Guide (SSIS-PSG), a brief teacher-completed rating scale designed for use in universal screening procedures. Inclusion criteria required that each article (a) was published in English in a peer-reviewed journal, (b) involved administration of the PSG, and (c) provided validity evidence on the relationship between PSG scores and scores on related variables. Ten studies yielding 147 correlation coefficients met criteria for inclusion. Data were extracted following established procedures in validity generalization and meta-analytic research. Extracted coefficients were in the expected direction and of the expected magnitude with theoretically aligned constructs, thereby providing evidence of convergent validity (e.g., PSG Math and Reading items were most strongly correlated with academic performance and academic behavior variables, with effect sizes ranging from .708 to .740; PSG Prosocial Behavior and Motivation to Learn items were most strongly correlated with broadband externalizing/internalizing problems, with effect sizes ranging from −.706 to −.717), although Prosocial Behavior and Motivation to Learn were less effective at discriminating among divergent constructs. These results generally support the utility of the PSG as a correlate of academic and social/behavioral outcomes in schools.
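Pooling extracted correlation coefficients across studies, as this review does, is commonly done through Fisher's z transformation, which stabilizes the variance of r before averaging. A minimal sketch with invented values (the function name and data are illustrative, not from the review):

```python
import math

# Hedged sketch: pooling correlations via Fisher's z transformation,
# a standard step in correlational meta-analysis. Data are invented.

def pool_correlations(rs, ns):
    """(n - 3)-weighted mean of Fisher-z transformed correlations,
    back-transformed to the r metric."""
    zs = [math.atanh(r) for r in rs]       # Fisher z = atanh(r)
    ws = [n - 3 for n in ns]               # inverse-variance weights: var(z) = 1/(n-3)
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)                # back-transform to r
```

Larger samples get proportionally more weight, so a pooled estimate from studies with n = 50 and n = 200 sits much closer to the larger study's coefficient.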


2020, pp. 31-56
Author(s): Martín Sánchez-Jankowski, Corey M. Abramson

The specter of positivism looms large in both the discussion and the practice of sociological research. Ethnographic traditions such as grounded theory and the extended case method have long emphasized how their approaches provide a critical alternative to the typically quantitative approaches grounded in the conventional scientific tradition (CST) descended from positivism. In contrast, this chapter takes a different approach by showing how and why an approach to participant observation drawing on behavioralist principles serves a necessary and irreducible role in the realist variable-based approach that has succeeded positivism as the standard for mainline social science. However, addressing CST concerns about validity, generalization, and replication involves more than a symbolic gesture toward these issues or critiques of other methods. Participant observers must employ a rigorous approach to multisite sampling, leverage comparison, and employ reproducible observational techniques to systematically analyze continuity and variation in human behavior. While acknowledging that this can be difficult in the current intellectual environment, this chapter argues that the payoff is substantial: when done well, this form of ethnography provides unparalleled resources for observing causal mechanisms in situ, producing robust models that link micro-, meso-, and macro-level social processes, and reducing inferential error in explanations of behavioral patterns.


2019, Vol 45 (3), pp. 340-357
Author(s): Jill M Plevinsky, Ana M Gutierrez-Colina, Julia K Carmody, Kevin A Hommel, Lori E Crosby, ...

Abstract Objective Treatment adherence is approximately 50% across pediatric conditions. Patient-reported outcomes (PROs) are the most common method of measuring adherence and self-management across research and clinical contexts. The aim of this systematic review is to evaluate adherence and self-management PROs, including measures of adherence behaviors, adherence barriers, disease management skills, and treatment responsibility. Methods Following PRISMA guidelines for systematic reviews, literature searches were performed. Measures meeting inclusion/exclusion criteria were evaluated using Hunsley and Mash’s (2018) criteria for evidence-based assessment across several domains (e.g., internal consistency, interrater reliability, test–retest reliability, content validity, construct validity, validity generalization, treatment sensitivity, and clinical utility). Rating categories were adapted for the present study to include the original categories of adequate, good, and excellent, as well as an additional category of below adequate. Results After screening 172 articles, 50 PROs across a variety of pediatric conditions were reviewed and evaluated. Most measures demonstrated at least adequate content validity (n = 44), internal consistency (n = 34), and validity generalization (n = 45). Findings were mixed regarding interrater reliability, test–retest reliability, and treatment sensitivity. Less than half of the measures (n = 22) exhibited adequate, good, or excellent construct validity. Conclusions Although use of adherence and self-management PROs is widespread across several pediatric conditions, few PROs achieved good or excellent ratings based on rigorous psychometric standards. Validation and replication studies with larger, more diverse samples are needed. Future research should consider the use of emerging technologies to enhance the feasibility of broad implementation.


2019, Vol 51 (Supplement), pp. 370
Author(s): Zezhao Chen, Xiaofei Wang, Hai Yan, Xiong Qin, Jingyuan Zhu, ...

2017, Vol 10 (3), pp. 485-488
Author(s): Ernest H. O'Boyle

Tett, Hundley, and Christiansen (2017) make a compelling case against meta-analyses that focus on mean effect sizes (e.g., r_xy and ρ) while largely disregarding the precision of the estimate and true score variance. This is a reasonable point, but meta-analyses that myopically focus on mean effects at the expense of variance are not examples of validity generalization (VG); they are examples of bad meta-analyses. VG and situational specificity (SS) fall along a continuum, and claims about generalization are confined to the research question and the type of generalization one is seeking (e.g., directional generalization, magnitude generalization). What Tett et al. (2017) successfully debunk is an extreme position along the generalization continuum significantly beyond the tenets of VG that few, if any, in the research community hold. The position they argue against is essentially a fixed-effects assumption, which runs counter to VG. Describing VG in this way is akin to describing SS as a position that completely ignores sampling error and treats every between-sample difference in effect size as true score variance. Both are strawmen that were knocked down decades ago (Schmidt et al., 1985). There is great value in debating whether a researcher should or can argue for generalization, but this debate must start with (a) an accurate portrayal of VG, (b) a discussion of different forms of generalization, and (c) the costs of trying to establish universal thresholds for VG.
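The VG-versus-SS continuum the commentary describes is usually operationalized with two summary quantities: the proportion of observed variance attributable to sampling error (the basis of the informal "75% rule") and a credibility interval around the mean corrected validity. A minimal sketch, with illustrative inputs (the variance estimates would come from a prior meta-analytic step):

```python
import math

# Hedged sketch of the summary statistics used in VG vs. SS arguments.
# Inputs (mean r, observed variance, sampling-error variance) are invented.

def vg_summary(r_bar, var_obs, var_err):
    """Return (proportion of observed variance explained by sampling error,
    80% credibility interval for the underlying validity)."""
    pct_sampling = min(var_err / var_obs, 1.0) if var_obs > 0 else 1.0
    sd_rho = math.sqrt(max(var_obs - var_err, 0.0))
    # 80% credibility interval: mean +/- 1.28 residual SDs
    interval = (r_bar - 1.28 * sd_rho, r_bar + 1.28 * sd_rho)
    return pct_sampling, interval
```

When sampling error accounts for (nearly) all observed variance, the credibility interval collapses to the mean, the classic VG conclusion; a wide interval that excludes zero supports directional but not magnitude generalization, which is exactly the distinction the commentary urges researchers to keep in view.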

