Convergent, Discriminant, and Concurrent Validity of Nonmemory-Based Performance Validity Tests

Assessment ◽  
2018 ◽  
Vol 27 (7) ◽  
pp. 1399-1415 ◽  
Author(s):  
Troy A. Webber ◽  
Edan A. Critchfield ◽  
Jason R. Soble

To supplement memory-based performance validity tests (PVTs) in identifying noncredible performance, we examined the validity of the two most commonly used nonmemory-based PVTs—the Dot Counting Test (DCT) and Wechsler Adult Intelligence Scale–Fourth Edition (WAIS-IV) Reliable Digit Span (RDS)—as well as two alternative WAIS-IV Digit Span (DS) subtest PVTs. Examinees completed the DCT, WAIS-IV DS, and the following criterion PVTs: Test of Memory Malingering, Word Memory Test, and Word Choice Test. Validity groups were determined by passing all 3 (valid; n = 69) or failing ≥2 (noncredible; n = 30) criterion PVTs. The DCT, RDS, RDS–Revised (RDS-R), and WAIS-IV DS Age-Corrected Scaled Score (ACSS) were significantly intercorrelated but uncorrelated with memory-based PVTs. Combining RDS, RDS-R, or ACSS with the DCT improved classification accuracy for detecting noncredible performance (particularly for the DCT/ACSS pairing) among valid-unimpaired examinees, but largely not among valid-impaired examinees. Combining the DCT with ACSS may uniquely assess and best supplement memory-based PVTs to identify noncredible neuropsychological test performance in cognitively unimpaired examinees.
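
A minimal sketch of the multivariable failure rule implied above: flag an examinee as noncredible when either the DCT or the DS ACSS is failed. This is illustrative only; the cutoffs and score arrays below are hypothetical placeholders, not the cut-scores derived in the study.

```python
import numpy as np

def combined_pvt_flag(dct_escore, acss, dct_cut=17.0, acss_cut=6):
    """Flag performance as noncredible when either index is failed.

    Cutoffs are illustrative placeholders; the study derived its own
    optimal cut-scores, which the abstract does not report. Higher DCT
    E-scores and lower ACSS values are the suspect directions.
    """
    return (dct_escore > dct_cut) | (acss <= acss_cut)

# Hypothetical examinees: DCT E-scores and WAIS-IV DS ACSS values
dct = np.array([11.0, 19.5, 14.0, 22.0])
acss = np.array([9, 7, 5, 4])
print(combined_pvt_flag(dct, acss))  # [False  True  True  True]
```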

2020 ◽  
Vol 35 (6) ◽  
pp. 1000-1000
Author(s):  
Schroeder R ◽  
Soden D ◽  
Clark H ◽  
Martin P

Abstract Objective Outside of Reliable Digit Span (RDS), there has been minimal research examining the utility of Digit Span (DS) score combinations from the Wechsler Adult Intelligence Scale—4th Edition (WAIS-IV) as possible performance validity tests (PVTs). We sought to determine if other DS scores/score combinations might work more effectively than RDS as a PVT. Method Patients included 318 individuals who completed neuropsychological evaluations. Individuals were excluded if they were not administered DS; were not administered at least 4 criterion PVTs; had diagnoses of dementia, intellectual disability, or left hemisphere cerebrovascular accident; or had indeterminate validity results (i.e., failure of one PVT). Valid performers (n = 248) were those who passed all criterion PVTs, while invalid performers (n = 70) failed two or more criterion PVTs. Receiver operating characteristic (ROC) curve analyses were conducted for multiple DS indices. Results Area under the curve (AUC) was highest for the DS index that combined raw scores from all three trials (Digit Span Raw; AUC = .821). Likewise, when examining cutoffs that maintained 90% specificity for each DS index, a Digit Span Raw cutoff of < 20 produced the highest sensitivity rate (52%) of all indices. For comparison, RDS, RDS with sequencing, and DS scaled score had AUC values of .758, .802, and .811, respectively. When maintaining specificity at 90%, sensitivity rates were 28%, 43%, and 43%, respectively. Conclusions Results suggest that the most effective embedded DS index might be a new one, which we term Digit Span Raw. Cross-validation of these findings could provide support for using this index instead of the more commonly examined RDS.
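
For readers unfamiliar with the procedure, the analysis above (computing an AUC for a candidate Digit Span index and locating the cutoff that preserves at least 90% specificity) can be sketched as follows. The scores are simulated stand-ins, so the resulting cutoff will not reproduce the study's < 20 threshold; only the method is illustrated.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def cutoff_at_specificity(scores, invalid, min_spec=0.90):
    """Find the highest 'score < cutoff fails' threshold that keeps
    specificity (correct classification of valid performers) >= min_spec,
    thereby maximizing sensitivity, and report the sensitivity achieved.

    scores  : index values (lower = more suspect, as with Digit Span Raw)
    invalid : 1 for criterion-PVT-defined invalid performers, 0 for valid
    """
    valid, bad = scores[invalid == 0], scores[invalid == 1]
    best = None
    for cut in np.unique(scores):             # candidate cutoffs, ascending
        spec = np.mean(valid >= cut)          # valid cases not flagged
        sens = np.mean(bad < cut)             # invalid cases flagged
        if spec >= min_spec:
            best = (cut, spec, sens)          # keep the highest qualifying cut
    return best

# Simulated stand-ins for Digit Span Raw totals (n = 248 valid, 70 invalid)
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(26, 4, 248), rng.normal(18, 4, 70)]).round()
labels = np.array([0] * 248 + [1] * 70)
print("AUC:", roc_auc_score(labels, -scores))  # negate: low score = invalid
print("cutoff, spec, sens:", cutoff_at_specificity(scores, labels))
```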


2020 ◽  
Vol 35 (6) ◽  
pp. 1006-1006
Author(s):  
Chan C ◽  
Roth J ◽  
Roberts J ◽  
Getz G

Abstract Objective Correlations between the performance validity measures from the Advanced Clinical Solutions (ACS) package and select scales from the Personality Assessment Inventory (PAI) were examined to investigate relationships between performance effort and response styles involving the negative impression (NIM), inconsistency (ICN), infrequency (INF), and somatic complaints (SOM) scales. Theoretical differences between symptom validity based on self-report questionnaires and credibility of performance on performance validity tests (PVTs) were considered. Method Archival data from clinical neuropsychological evaluations between the years 2015–2018 were reviewed, and 120 consecutive adult cases involving 68 males and 52 females between the ages of 19–69 were collected. Examined measures included the ACS Word Choice Test (WCT), Reliable Digit Span, Visual Reproduction II Recognition, Logical Memory II Recognition, and the PAI. Spearman's rank-order correlation and point-biserial correlation analyses were conducted to examine the relationships between variables. Results WCT had a significant negative correlation with the NIM scale (rs(118) = −.203, p = .013). Modest correlations were found between SOM and WCT when examining PVT raw score (rs(118) = −.192, p = .018) and base rate performance (rs(118) = −.222, p = .007). Point-biserial correlation analysis found a weak negative correlation between performance credibility and the SOM scale, which was statistically significant (rpb = −.221, n = 120, p = .008). Conclusions Higher NIM T-scores appear to be correlated with lower performance on the WCT, suggesting that an exaggerated or distorted impression of the self is associated with a higher risk of poorer performance on this stand-alone PVT. Correlations between embedded PVTs and PAI scales were inconsistent.
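
A minimal sketch of the two correlation analyses named above, using SciPy's spearmanr and pointbiserialr. Every value below is a randomly generated placeholder for the study's ACS and PAI scores; only the analytic machinery is shown.

```python
import numpy as np
from scipy.stats import spearmanr, pointbiserialr

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the study's variables (n = 120)
wct_raw = rng.integers(35, 51, 120)        # ACS Word Choice Test raw score
nim_t = rng.normal(55, 10, 120)            # PAI Negative Impression T-score
som_t = rng.normal(58, 11, 120)            # PAI Somatic Complaints T-score
credible = (wct_raw >= 45).astype(int)     # illustrative pass/fail split

# Spearman rank-order correlation between a PVT score and a PAI scale
rho, p = spearmanr(wct_raw, nim_t)
print(f"WCT vs NIM: rs(118) = {rho:.3f}, p = {p:.3f}")

# Point-biserial correlation: dichotomous credibility vs continuous SOM
rpb, p = pointbiserialr(credible, som_t)
print(f"credibility vs SOM: rpb = {rpb:.3f}, p = {p:.3f}")
```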


2020 ◽  
Vol 35 (6) ◽  
pp. 1002-1002
Author(s):  
Sheikh K ◽  
Peck C

Abstract Objective Prior studies have examined indices within the Brief Visuospatial Memory Test—Revised (BVMT-R) as potential embedded performance validity tests (PVTs). Findings from these studies, however, are limited and mixed. Therefore, the purpose of the current study was to compare the classification accuracy of the Hartford Consistency Index (HCI) with published BVMT-R performance validity measures in an outpatient sample. Method A total of 115 archival files met study inclusion criteria: a) ≥18 years old; b) administered > 2 PVTs (Reliable Digit Span, Dot Counting Test, and Test of Memory Malingering); and c) no diagnoses of intellectual disability or dementia. Utilizing standard cutoffs, participants were classified as 'Valid' (n = 94) or 'Invalid' (n = 21). 'Valid' profiles passed all PVTs and were free of known external incentives, while 'Invalid' profiles failed ≥2 PVTs. Results An HCI cutoff of < 1 yielded 90% specificity and 48% sensitivity, and the area under the curve (AUC = .70) was adequate. Applying published cutoffs for Recognition Hits (≤4) and Percent Retention (≤58%) to our sample produced > 90% specificity, but sensitivity rates were < 40% and AUCs were consistently < .70. In contrast, the Recognition Discrimination (≤4) cutoff revealed inadequate specificity (84%) but acceptable sensitivity (63%) and AUC (.73). Conclusions Results from our study support the use of the HCI as an embedded PVT within the BVMT-R for non-demented outpatient samples. Furthermore, the HCI outperformed the other embedded PVTs examined. Limitations of our study and future directions are discussed.


2020 ◽  
pp. 108705472096457
Author(s):  
Daniel J. White ◽  
Gabriel P. Ovsiew ◽  
Tasha Rhoads ◽  
Zachary J. Resch ◽  
Mary Lee ◽  
...  

Objective: This study examined concordance between symptom and performance validity among clinically referred patients undergoing neuropsychological evaluation for Attention-Deficit/Hyperactivity Disorder (ADHD). Method: Data from 203 patients who completed the WAIS-IV Working Memory Index, the Clinical Assessment of Attention Deficit-Adult (CAT-A), and ≥4 criterion performance validity tests (PVTs) were analyzed. Results: Symptom and performance validity were concordant in 76% of cases, with the majority being valid performance. Of the remaining 24% of cases with divergent validity findings, patients were more likely to exhibit symptom invalidity (15%) than performance invalidity (9%). Patients demonstrating symptom invalidity endorsed significantly more ADHD symptoms than those with credible symptom reporting (ηp² = .06–.15), but comparable working memory test performance, whereas patients with performance invalidity had significantly worse working memory performance than those with valid PVT performance (ηp² = .18). Conclusion: Symptom and performance invalidity represent dissociable constructs in patients undergoing neuropsychological evaluation for ADHD and should be evaluated independently.
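
The concordance analysis above amounts to a 2×2 cross-tabulation of dichotomized symptom and performance validity. A minimal sketch, assuming 0/1 invalidity flags have already been derived from the CAT-A validity scales and the criterion PVTs:

```python
import numpy as np

def validity_concordance(symptom_invalid, performance_invalid):
    """Proportion of cases in each cell of the 2x2 symptom-by-performance
    validity table (0 = valid, 1 = invalid for both inputs)."""
    s = np.asarray(symptom_invalid, bool)
    p = np.asarray(performance_invalid, bool)
    return {
        "concordant_valid": float(np.mean(~s & ~p)),
        "concordant_invalid": float(np.mean(s & p)),
        "symptom_only": float(np.mean(s & ~p)),
        "performance_only": float(np.mean(~s & p)),
    }

# Hypothetical flags for ten examinees (not the study's data)
symptom = [0, 0, 0, 0, 1, 1, 0, 1, 0, 0]
performance = [0, 0, 0, 0, 1, 0, 1, 0, 0, 0]
print(validity_concordance(symptom, performance))
```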


Assessment ◽  
2020 ◽  
pp. 107319112092909
Author(s):  
Joshua I. Pliskin ◽  
Samantha DeDios Stern ◽  
Zachary J. Resch ◽  
Kevin F. Saladino ◽  
Gabriel P. Ovsiew ◽  
...  

This cross-sectional study evaluated eight embedded performance validity tests (PVTs) previously derived from the Rey Auditory Verbal Learning Test (RAVLT), Wechsler Memory Scale–IV Logical Memory (LM), and Brief Visuospatial Memory Test–Revised (BVMT-R) recognition trials among a single mixed clinical sample of 108 neuropsychiatric patients (83 valid/25 invalid) with (n = 54) and without (n = 29) mild neurocognitive disorder. Among the overall sample, all eight recognition PVTs significantly differentiated valid from invalid performance (areas under the curve [AUCs] = .64–.81) with 26% to 44% sensitivity (≥89% specificity) at optimal cut-scores, depending on the specific PVT. After subdividing the sample by cognitive impairment status, all eight PVTs continued to reliably identify invalid performance (AUCs = .68–.91) with markedly increased sensitivities of 56% to 80% (≥89% specificity) in the unimpaired group. In contrast, among those with mild neurocognitive disorder, RAVLT False Positives and LM became nonsignificant, whereas the other six PVTs remained significant (AUCs = .64–.77), albeit with reduced sensitivities of 32% to 44% (≥89% specificity) at optimal cut-scores. Taken together, results cross-validated BVMT-R and most RAVLT recognition indices as effective embedded PVTs for identifying invalid neuropsychological test performance in diverse populations, including examinees with and without suspected mild neurocognitive disorder, whereas LM had more limited utility as an embedded PVT, particularly when mild neurocognitive disorder was present.
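
A sketch of the stratified analysis described above: an AUC per embedded recognition index, computed separately for examinees with and without cognitive impairment. Index names and score distributions below are hypothetical; lower scores are treated as more suspect.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_by_subgroup(indices, invalid, impaired):
    """AUC for each embedded index within each impairment subgroup.

    indices  : dict mapping index name -> score array (lower = more suspect)
    invalid  : 0/1 array from the criterion PVTs
    impaired : 0/1 array for mild neurocognitive disorder status
    """
    out = {}
    for name, scores in indices.items():
        for grp, mask in (("unimpaired", impaired == 0), ("impaired", impaired == 1)):
            # roc_auc_score requires both classes present in the subgroup
            out[(name, grp)] = roc_auc_score(invalid[mask], -scores[mask])
    return out

# Simulated sample mirroring the 83 valid / 25 invalid split
rng = np.random.default_rng(3)
invalid = np.array([0] * 83 + [1] * 25)
impaired = rng.integers(0, 2, 108)
indices = {"RAVLT_recognition": rng.normal(12, 2, 108) - 4 * invalid,
           "BVMT_R_hits": rng.normal(5, 1, 108) - 2 * invalid}
print(auc_by_subgroup(indices, invalid, impaired))
```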


2020 ◽  
Vol 35 (6) ◽  
pp. 999-999
Author(s):  
Martinez K ◽  
Sayers C ◽  
Clark C ◽  
Schroeder R

Abstract Objective Studies have indicated that nonclinical participants in neuropsychological research do not always perform validly on testing (e.g., An, Zakzanis, & Joordens, 2012). As such, we cross-validated a brief yet well-researched performance validity test, the Dot Counting Test (DCT), in a validly performing nonclinical sample. Method Participants were 50 college students (mean age = 19.92; mean education = 14.10) who completed a neuropsychological test battery under the instruction to provide their best effort on all tests. Freestanding performance validity tests included the Test of Memory Malingering (TOMM) and the DCT. To ensure that only valid participants were included in the study, participants were required to pass all examined TOMM validity indices (i.e., Trial 1, Trial 2, Retention, Albany Consistency Index, and Invalid Forgetting Frequency Index); no participant failed any of these indices. Results The first DCT E-score cutoff at which 90% specificity was obtained was > 13. At a cutoff of > 17 (a previously established clinical group cutoff), 98% specificity was obtained. At a cutoff of > 21, 100% specificity was obtained. Conclusions Results cross-validate the DCT for use in a nonclinical sample. Multiple cutoffs are reported, along with corresponding specificity rates. Researchers can now choose the cutoff that corresponds to their desired specificity rate for use in nonclinical research studies, helping to ensure that invalidly performing participants are excluded from future research.
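
Because every participant in the sample above is known to be performing validly, specificity at a candidate DCT E-score cutoff reduces to the share of participants not flagged by the "E-score > cutoff" failure rule. A minimal sketch with simulated E-scores (the study's raw data are not reproduced in the abstract):

```python
import numpy as np

def specificity_by_cutoff(e_scores, cutoffs):
    """In an all-valid sample, specificity at each cutoff is simply the
    proportion of participants with E-score <= cutoff (i.e., not flagged)."""
    e = np.asarray(e_scores, float)
    return {c: float(np.mean(e <= c)) for c in cutoffs}

# Hypothetical E-scores for 50 validly performing students
rng = np.random.default_rng(2)
e_scores = rng.gamma(shape=9, scale=1.2, size=50)
print(specificity_by_cutoff(e_scores, cutoffs=[13, 17, 21]))
```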


Assessment ◽  
2019 ◽  
Vol 28 (1) ◽  
pp. 186-198 ◽  
Author(s):  
Julia C. Daugherty ◽  
Luis Querido ◽  
Nathalia Quiroz ◽  
Diana Wang ◽  
Natalia Hidalgo-Ruzzante ◽  
...  

Computerized, reliable performance validity tests are scarce. This study aims to address this issue by validating a free, computerized performance validity test: the Coin in Hand–Extended Version (CIH-EV). The CIH-EV was administered in four countries (Colombia, Spain, Portugal, and the United States), and performance was compared with other commonly used validated tests. Results showed that the CIH-EV has at least 95% specificity and 62% sensitivity, and performance was highly correlated with scores on the Test of Memory Malingering, Victoria Symptom Validity Test, and Digit Span of the Wechsler Adult Intelligence Scale. There were no significant differences in scores across countries, suggesting that the CIH-EV performs similarly across a variety of cultures. Our findings suggest that the CIH-EV has the potential to serve as a valid performance validity test, either alone or as a supplement to other commonly used validity measures.


2019 ◽  
Vol 34 (6) ◽  
pp. 1097-1097
Author(s):  
K McIntyre

Abstract Objective Effort testing with children has started to gain traction in the literature. The following case presents data from several number recall tasks similar to the Wechsler Digit Span and may expand the opportunities for embedded performance validity tests (PVTs). Method For the purpose of psychoeducational testing, a comprehensive battery of standardized neuropsychological tests was administered, including visual and auditory attention, language, visual-motor and fine motor skills, visual and auditory processing, executive functioning, and academics. Results The child showed impaired performance on Number Recall from the Kaufman Assessment Battery for Children, 2nd Edition (KABC-II) and Memory for Digits on the Comprehensive Test of Phonological Processing, 2nd Edition (CTOPP-2) that met the Wechsler Digit Span cutoff indicative of poor effort. Evidence of Ganser symptoms (e.g., nearly correct or approximate answers) was present in math calculation performance on the Woodcock-Johnson Tests of Achievement, 4th Edition (WJTA-IV). Further, apparently deliberate markings of "X" solely on his incorrect responses on a dichotomous (yes/no) reading task also suggested deliberate feigning of low reading skills. Conclusions This case highlights the importance of effort testing in children, especially as poor effort was not apparent to the examiner, nor did there appear to be any obvious gain. Comparing data on tasks similar to already established PVTs may help expand opportunities to test effort systematically and frequently throughout a neuropsychological evaluation, and has implications for other professionals (e.g., school psychologists, speech-language pathologists) who evaluate children.


2014 ◽  
Author(s):  
Douglas Mossman ◽  
William Miller ◽  
Elliot Lee ◽  
Roger Gervais ◽  
Kathleen Hart ◽  
...  
