Future Directions in Performance Validity Assessment to Optimize Detection of Invalid Neuropsychological Test Performance: Special Issue Introduction

Author(s): Jason R. Soble, Thomas Merten, Brechje Dandachi-FitzGerald, Vicki Hall, Thomas Bodner, Luciano Giromini, ...

2019, Vol 12 (2), pp. 127-145
Author(s): Jonathan D. Lichtenstein, Matthew K. Greenacre, Laura Cutler, Kaitlyn Abeare, Shannon D. Baker, ...

Assessment, 2018, Vol 27 (7), pp. 1399-1415
Author(s): Troy A. Webber, Edan A. Critchfield, Jason R. Soble

To supplement memory-based performance validity tests (PVTs) in identifying noncredible performance, we examined the validity of the two most commonly used nonmemory-based PVTs, the Dot Counting Test (DCT) and the Wechsler Adult Intelligence Scale–Fourth Edition (WAIS-IV) Reliable Digit Span (RDS), as well as two alternative WAIS-IV Digit Span (DS) subtest PVTs. Examinees completed the DCT, WAIS-IV DS, and the following criterion PVTs: Test of Memory Malingering, Word Memory Test, and Word Choice Test. Validity groups were determined by passing all 3 (valid; n = 69) or failing ⩾2 (noncredible; n = 30) criterion PVTs. DCT, RDS, RDS-Revised (RDS-R), and WAIS-IV DS Age-Corrected Scaled Score (ACSS) were significantly correlated with one another (but uncorrelated with the memory-based PVTs). Combining RDS, RDS-R, and ACSS with the DCT improved classification accuracy for detecting noncredible performance (particularly for the DCT/ACSS combination) among valid-unimpaired examinees, but largely not among valid-impaired examinees. Combining the DCT with ACSS may uniquely assess and best supplement memory-based PVTs to identify noncredible neuropsychological test performance in cognitively unimpaired examinees.


2012, Vol 18 (4), pp. 625-630
Author(s): Glenn J. Larrabee

Abstract
Failure to evaluate the validity of an examinee's neuropsychological test performance can alter prediction of external criteria in research investigations and, in the individual case, result in inaccurate conclusions about the degree of impairment resulting from neurological disease or injury. The terms performance validity (assessed by performance validity tests; PVTs) and symptom validity (assessed by symptom validity tests; SVTs) are suggested as replacements for less descriptive terms such as effort or response bias. Research is reviewed demonstrating strong diagnostic discrimination for PVTs and SVTs, with a particular emphasis on minimizing false positive errors, facilitated by identifying performance patterns or levels of performance that are atypical for bona fide neurologic disorder. It is further shown that false positive errors decrease, with a corresponding increase in the positive probability of malingering, when multiple independent indicators are required for diagnosis. The rigor of PVT and SVT research design is related to a high degree of reproducibility of results and large effect sizes of d = 1.0 or greater, exceeding effect sizes reported for several psychological and medical diagnostic procedures. (JINS, 2012, 18, 1–7)
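Larrabee's point that false positive errors shrink when multiple independent indicators are required can be sketched numerically. Assuming, purely for illustration, that each indicator has a 10% false positive rate and that failures among credible examinees are independent (an assumption the abstract makes explicit), the chance of failing at least k of n indicators by chance is a binomial upper tail:

```python
from math import comb


def fp_rate_multiple_indicators(n: int, k: int, p_fp: float) -> float:
    """Probability that a credible examinee fails at least k of n
    independent validity indicators, each with false positive rate
    p_fp (binomial upper-tail sum)."""
    return sum(comb(n, j) * p_fp**j * (1 - p_fp)**(n - j)
               for j in range(k, n + 1))


# Hypothetical illustration: a single indicator vs. requiring
# failure on at least 2 of 3 indicators, each with a 10% FP rate.
single = fp_rate_multiple_indicators(1, 1, 0.10)        # 0.10
two_of_three = fp_rate_multiple_indicators(3, 2, 0.10)  # 0.028
```

Requiring two of three failures drops the chance false positive rate from 10% to under 3%, which is the mechanism behind the corresponding rise in the probability of malingering given multiple failures.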


2017, pp. 67-109
Author(s): Justin B. Miller, Bradley N. Axelrod, Christian Schutte, Jeremy J. Davis

2020, Vol 35 (6), pp. 1005-1005
Author(s): Rhoads T, Resch Z, Ovsiew G, Soble J

Abstract
Objective: Recent evidence suggested that the traditional "rounding" procedure used to calculate Dot Counting Test (DCT) E-score cut-offs provides little advantage and may inadvertently lower test sensitivity. This study examined whether DCT psychometric properties differ when E-score values are rounded.
Method: This cross-sectional study included 132 mixed neuropsychiatric patients who completed the DCT during outpatient evaluation. The sample was 55% female/45% male and 36% Caucasian/35% African American/20% Hispanic/7% Asian/2% other, with a mean age of 44.4 years (SD = 16.1) and mean education of 14.0 years (SD = 2.5). In total, 105 (80%) had valid neuropsychological test performance and 27 (20%) had invalid performance based on 4 independent criterion performance validity tests.
Results: In the overall sample, receiver operating characteristic (ROC) curve analyses yielded significant areas under the curve (AUCs; .802-.817) for both rounded and unrounded E-score values, with respective optimal cut-scores of ≥19 and ≥19.73, both producing 44% sensitivity/93% specificity. Among cognitively impaired patients, ROC curve analyses yielded significant AUCs (.756-.764) and suggested the same cut-scores and sensitivities, albeit with minimally reduced specificity (traditional: 91%; unrounded: 92%). In contrast, more liberal cut-scores of ≥13 (traditional) and ≥13.745 (unrounded) were indicated among cognitively unimpaired patients (AUCs: .880-.906), and sensitivity was notably improved (traditional: 74%; unrounded: 67%) with equivalent specificity (90%).
Conclusions: Findings from the overall sample suggested marginally better classification accuracy for the traditional E-score, though both methods demonstrated comparable psychometric properties. The optimal cut-score for cognitively unimpaired patients replicated findings from prior literature, but a higher cut-score was indicated for cognitively impaired patients.
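The sensitivity and specificity figures above come from dichotomizing E-scores at a cut-off ("fail" if the score meets or exceeds it) and comparing the result against criterion-defined validity groups. A minimal sketch of that evaluation, using made-up E-scores rather than the study's data:

```python
def classification_accuracy(scores, labels, cutoff):
    """Evaluate a 'fail if score >= cutoff' rule against criterion
    validity labels (True = invalid/noncredible performance).
    Returns (sensitivity, specificity)."""
    tp = sum(s >= cutoff and y for s, y in zip(scores, labels))
    fn = sum(s < cutoff and y for s, y in zip(scores, labels))
    tn = sum(s < cutoff and not y for s, y in zip(scores, labels))
    fp = sum(s >= cutoff and not y for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)


# Hypothetical E-scores: first four examinees invalid, last four valid.
scores = [22.1, 19.7, 18.2, 25.0, 9.5, 13.8, 19.1, 11.0]
labels = [True, True, True, True, False, False, False, False]
sens, spec = classification_accuracy(scores, labels, 19)
# sens = 0.75 (the 18.2 is missed); spec = 0.75 (the 19.1 is flagged)
```

ROC analysis simply sweeps the cutoff across the observed score range and reports the trade-off curve; the "optimal" cut-scores in the abstract are points on that curve chosen to keep specificity high (≥90%), since false positives are the costlier error in validity assessment.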

