Performance Validity Assessment
Recently Published Documents

TOTAL DOCUMENTS: 16 (five years: 9)
H-INDEX: 4 (five years: 2)

Author(s): Thomas Merten, Brechje Dandachi-FitzGerald, Vicki Hall, Thomas Bodner, Luciano Giromini, ...

2021, Vol. 36 (6), pp. 1162-1162
Author(s): Isabel Munoz, Daniel W. Lopez-Hernandez, Rachel A. Rugh-Fraser, Amy Bichlmeier, Abril J. Baez, ...

Abstract
Objective: Research shows that patients with traumatic brain injury (TBI) perform worse than healthy comparison (HC) participants on the Symbol Digit Modalities Test (SDMT). We evaluated cut-off scores for a newly developed recognition trial of the SDMT as a performance validity measure in monolingual and bilingual TBI survivors and HC adults.
Method: The sample consisted of 43 acute TBI (ATBI; 24 monolinguals, 19 bilinguals), 32 chronic TBI (CTBI; 13 monolinguals, 19 bilinguals), and 57 HC (24 monolinguals, 33 bilinguals) participants. All participants received standardized administration of the SDMT. None of the participants displayed motivation to feign cognitive deficits.
Results: The HC group outperformed both TBI groups on demographically adjusted SDMT scores, p < .001, ηp² = 0.24. An interaction emerged in SDMT scores: monolingual ATBI outperformed bilingual ATBI, and bilingual CTBI outperformed monolingual CTBI, p = 0.017, ηp² = 0.06. No group differences were found on the SDMT recognition trial. Bichlmeier's and Boone's suggested cut-off scores produced different failure rates in the ATBI (77% vs. 37%), CTBI (69% vs. 19%), and HC (56% vs. 26%) groups. Failure rates were 66% (Bichlmeier) and 36% (Boone) in the monolingual group, and 66% and 21%, respectively, in the bilingual group. Finally, chi-squared analysis revealed that monolingual ATBI participants had greater failure rates than bilingual ATBI participants.
Conclusion: Bichlmeier's proposed cut-off score resulted in greater failure rates among TBI survivors than Boone's suggested cut-off score. Furthermore, monolingual ATBI participants were affected more by Bichlmeier's cut-off score than the bilingual ATBI group, although the reason for this finding is unclear and requires additional study with a larger sample.
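The failure-rate comparison and chi-squared test reported in this abstract reduce to a few lines of code. The Python sketch below is illustrative only: the cut-off values and score arrays are hypothetical, since the abstract does not report the actual Bichlmeier or Boone thresholds; only the counting and chi-squared steps mirror the analysis named above.

```python
# Hypothetical sketch: comparing failure rates under two proposed
# SDMT recognition-trial cut-offs. All values are illustrative; the
# abstract does not report the published thresholds.
from scipy.stats import chi2_contingency

BICHLMEIER_CUTOFF = 10  # hypothetical, more liberal threshold
BOONE_CUTOFF = 7        # hypothetical, more conservative threshold

def failure_rate(scores, cutoff):
    """Proportion of examinees scoring at or below the cut-off."""
    return sum(s <= cutoff for s in scores) / len(scores)

# Illustrative recognition-trial scores for two language groups.
monolingual_atbi = [6, 8, 9, 11, 12, 7, 5, 10, 13, 9]
bilingual_atbi = [9, 11, 12, 13, 10, 12, 14, 11, 13, 12]

for name, cutoff in [("Bichlmeier", BICHLMEIER_CUTOFF),
                     ("Boone", BOONE_CUTOFF)]:
    print(f"{name}: mono {failure_rate(monolingual_atbi, cutoff):.0%}, "
          f"bili {failure_rate(bilingual_atbi, cutoff):.0%}")

# 2x2 chi-squared test of failure counts across groups, as in the
# abstract's final analysis.
def fail_pass(scores, cutoff):
    fails = sum(s <= cutoff for s in scores)
    return [fails, len(scores) - fails]

table = [fail_pass(monolingual_atbi, BICHLMEIER_CUTOFF),
         fail_pass(bilingual_atbi, BICHLMEIER_CUTOFF)]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```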


2021, pp. 1-35
Author(s): Laszlo A. Erdodi

OBJECTIVE: This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and to explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically.
METHOD: Archival data were collected from 167 patients (52.4% male; M age = 39.7) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs.
RESULTS: MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms.
CONCLUSIONS: Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to their individual components. Instrumentation artifacts are endemic to PVTs and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical, models. As the number and severity of embedded PVT failures accumulate, assessors must consider the possibility of a non-credible presentation and its clinical implications for neurorehabilitation.
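The Pass/Borderline/Fail logic this abstract describes amounts to counting embedded PVT failures against two thresholds. Below is a minimal sketch, assuming hypothetical indicator names and failure thresholds; the study's actual cutoffs are not reported in the abstract.

```python
# Hypothetical sketch of a multivariate performance validity test
# (MV-PVT) composite that aggregates embedded PVT failures into the
# three outcomes the abstract describes: Pass, Borderline, and Fail.
# Indicator names and thresholds are illustrative assumptions, not
# the study's published cutoffs.
from typing import Dict

def classify_validity(evi_failed: Dict[str, bool],
                      borderline_at: int = 2,
                      fail_at: int = 3) -> str:
    """Map the number of failed embedded PVTs onto a three-way outcome.

    Raising `fail_at` makes the multivariate cutoff more stringent,
    which is the lever the abstract identifies for controlling the
    false positive rate.
    """
    n_failed = sum(evi_failed.values())
    if n_failed >= fail_at:
        return "Fail"
    if n_failed >= borderline_at:
        return "Borderline"
    return "Pass"

# One examinee's results on five hypothetical embedded PVTs
# (True = failed that indicator).
profile = {
    "reliable_digit_span": False,
    "forced_choice_recognition": True,
    "coding_scaled_score": True,
    "symbol_search_scaled_score": False,
    "fluency_ratio": False,
}
print(classify_validity(profile))  # -> Borderline (2 of 5 failed)
```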


Author(s): Rachel A. Clegg, Julie K. Lynch, Maha N. Mian, Robert J. McCaffrey

2019, Vol. 35 (2), pp. 188-204
Author(s): Laszlo A. Erdodi, Christopher A. Abeare

Abstract
Objective: This study was designed to evaluate the classification accuracy of a multivariate model of performance validity assessment using embedded validity indicators (EVIs) within the Wechsler Adult Intelligence Scale—Fourth Edition (WAIS-IV).
Method: Archival data were collected from 100 adults with traumatic brain injury (TBI) consecutively referred for neuropsychological assessment in a clinical setting. The classification accuracy of previously published individual EVIs nested within the WAIS-IV, and of a composite measure based on six independent EVIs, was evaluated against psychometrically defined non-credible performance.
Results: Univariate validity cutoffs based on age-corrected scaled scores on Coding, Symbol Search, Digit Span, Letter-Number Sequencing, Vocabulary minus Digit Span, and Coding minus Symbol Search were strong predictors of psychometrically defined non-credible responding. Failing ≥3 of these six EVIs at the liberal cutoff improved specificity (.91–.95) over univariate cutoffs (.78–.93). Conversely, failing ≥2 EVIs at the more conservative cutoff increased and stabilized sensitivity (.43–.67) compared to univariate cutoffs (.11–.63) while maintaining consistently high specificity (.93–.95).
Conclusions: In addition to being a widely used test of cognitive functioning, the WAIS-IV can also function as a measure of performance validity. Consistent with previous research, combining information from multiple EVIs enhanced the classification accuracy of individual cutoffs and provided more stable parameter estimates. If the current findings are replicated in larger, diagnostically and demographically heterogeneous samples, the WAIS-IV has the potential to become a powerful multivariate model of performance validity assessment.
Brief Summary: Using a combination of multiple performance validity indicators embedded within the subtests of the Wechsler Adult Intelligence Scale, the credibility of the response set can be established with a high level of confidence. Multivariate models improve classification accuracy over individual tests. Relying on existing test data is a cost-effective approach to performance validity assessment.
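The multivariate decision rule reported here (failing ≥3 of six EVIs at a liberal cutoff, or ≥2 at a more conservative one) can be sketched directly. In the Python sketch below the scaled-score thresholds are placeholders, since the abstract names the six indicators but not their exact cutoffs.

```python
# Hypothetical sketch of the multivariate WAIS-IV rule described above:
# flag a profile as non-credible if it fails >= 3 of six EVIs at a
# liberal cutoff, or >= 2 at a more conservative cutoff. Threshold
# values are placeholders, and for simplicity every indicator is
# oriented so that scores at or below its cutoff count as failures;
# real difference-score EVIs may fail in the opposite direction.
LIBERAL = {
    "coding": 6, "symbol_search": 6, "digit_span": 6,
    "letter_number_sequencing": 6, "vocabulary_minus_digit_span": 5,
    "coding_minus_symbol_search": 4,
}
CONSERVATIVE = {name: cut - 1 for name, cut in LIBERAL.items()}

def n_failures(scores: dict, cutoffs: dict) -> int:
    """Count indicators at or below their cutoff."""
    return sum(scores[name] <= cut for name, cut in cutoffs.items())

def flag_non_credible(scores: dict) -> bool:
    return (n_failures(scores, LIBERAL) >= 3
            or n_failures(scores, CONSERVATIVE) >= 2)

examinee = {
    "coding": 5, "symbol_search": 7, "digit_span": 6,
    "letter_number_sequencing": 8, "vocabulary_minus_digit_span": 6,
    "coding_minus_symbol_search": 3,
}
print(flag_non_credible(examinee))  # -> True (3 liberal failures)
```

The sensitivity and specificity figures the abstract reports would follow from tallying such flags against a psychometrically defined criterion group, true positives among non-credible cases and true negatives among credible ones.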


2019, Vol. 12 (2), pp. 127-145
Author(s): Jonathan D. Lichtenstein, Matthew K. Greenacre, Laura Cutler, Kaitlyn Abeare, Shannon D. Baker, ...
