Performance validity test failure in clinical populations—a systematic review

2020 ◽  
Vol 91 (9) ◽  
pp. 945-952 ◽  
Author(s):  
Laura McWhirter ◽  
Craig W Ritchie ◽  
Jon Stone ◽  
Alan Carson

Performance validity tests (PVTs) are widely used in attempts to quantify effort and/or detect negative response bias during neuropsychological testing. However, it can be challenging to interpret the meaning of poor PVT performance in a clinical context. Compensation-seeking populations predominate in the PVT literature. We aimed to establish base rates of PVT failure in clinical populations without known external motivation to underperform. We searched MEDLINE, EMBASE and PsycINFO for studies reporting PVT failure rates in adults with defined clinical diagnoses, excluding studies of active or veteran military personnel, forensic populations or studies of participants known to be litigating or seeking disability benefits. Results were summarised by diagnostic group and implications discussed. Our review identified 69 studies, and 45 different PVTs or indices, in clinical populations with intellectual disability, degenerative brain disease, brain injury, psychiatric disorders, functional disorders and epilepsy. Various pass/fail cut-off scores were described. PVT failure was common in all clinical groups described, with failure rates for some groups and tests exceeding 25%. PVT failure is common across a range of clinical conditions, even in the absence of obvious incentive to underperform. Failure rates are no higher in functional disorders than in other clinical conditions. As PVT failure indicates invalidity of other attempted neuropsychological tests, the finding of frequent and unexpected failure in a range of clinical conditions raises important questions about the degree of objectivity afforded to neuropsychological tests in clinical practice and research.

2019 ◽  
Vol 34 (6) ◽  
pp. 915-915
Author(s):  
B Huber ◽  
R Jones ◽  
S Capps ◽  
E Buchanan

Abstract The Memory Complaints Inventory (MCI) is a self-report questionnaire developed as a symptom validity test to assess exaggerated memory complaints and to complement performance validity tests in neuropsychological settings. Objective The current study utilized archival MCI scores in clinical (Alzheimer's Disease, Vascular Dementia, Mild Cognitive Impairment, and Pseudodementia) and insufficient effort populations to determine clinical cutoff scores for use of the MCI in neuropsychological settings. Method Data were gathered from the archives of an outpatient neuropsychology clinic based on diagnosis, resulting in a total of 244 participants for inclusion. Participants were subsequently separated into clinical (n = 195) or insufficient effort (n = 49) groups based on diagnosis. Data were analyzed using Receiver Operating Characteristic (ROC) curve analyses, consisting of area under the curve (AUC), specificity, sensitivity, diagnostic odds ratios (DOR), and Youden's indices. Results Data suggested a cutoff of 10.35 (42% endorsement) for the Overall score of the MCI to differentiate clinical populations from insufficient effort with acceptable sensitivity (55%) and specificity (90%). Further, cutoff values for each scale of the MCI were calculated, including the Plausible and Implausible embedded validity scales. Conclusions The findings provided further evidence for the use of the MCI as a symptom validity measure. The identified cutoff scores can differentiate between insufficient effort and clinical populations with acceptable specificity and sensitivity to aid clinician accuracy in detecting insufficient effort and non-credible symptom endorsement.
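The cutoff-selection procedure described above (maximizing Youden's index J = sensitivity + specificity − 1 over candidate cutoffs) can be sketched as follows. This is a minimal illustration: the scores and the `roc_cutoff` helper are hypothetical, not the archival MCI data or the authors' analysis code.

```python
# Hypothetical sketch of a Youden's-index cutoff search.
# Higher scores indicate more (possibly exaggerated) memory complaints;
# "positive" = insufficient effort, flagged when score >= cutoff.

def roc_cutoff(clinical, insufficient):
    """Return (cutoff, J, sensitivity, specificity) maximizing
    Youden's J = sensitivity + specificity - 1."""
    best = None
    for c in sorted(set(clinical) | set(insufficient)):
        sens = sum(s >= c for s in insufficient) / len(insufficient)
        spec = sum(s < c for s in clinical) / len(clinical)
        j = sens + spec - 1
        if best is None or j > best[1]:
            best = (c, j, sens, spec)
    return best

# Synthetic example scores (not study data):
clinical = [2, 4, 5, 6, 7, 8, 9, 9]          # genuine clinical complaints
insufficient = [9, 11, 12, 13, 14, 15, 16]   # exaggerated complaints
cutoff, j, sens, spec = roc_cutoff(clinical, insufficient)
```

In practice the study also reports AUC and diagnostic odds ratios alongside the cutoff; those follow from the same 2×2 classification counts at each candidate threshold.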


2017 ◽  
Vol 10 (1) ◽  
pp. 96-103 ◽  
Author(s):  
Laszlo A. Erdodi ◽  
Shayna Nussbaum ◽  
Sanya Sagar ◽  
Christopher A. Abeare ◽  
Eben S. Schwartz

2014 ◽  
Author(s):  
Douglas Mossman ◽  
William Miller ◽  
Elliot Lee ◽  
Roger Gervais ◽  
Kathleen Hart ◽  
...  

Author(s):  
Andrew DaCosta ◽  
Frank Webbe ◽  
Anthony LoGalbo

Abstract Objective The limitations of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT)’s embedded validity measures (EVMs) are well-documented, as estimates suggest up to 35% of invalid baseline performances go undetected. Few studies have examined standalone performance validity tests (PVT) as a supplement to ImPACT’s EVMs. Method College athletes (n = 1,213) were administered a preseason baseline assessment that included ImPACT and the Rey Dot Counting Test (DCT), a standalone PVT, among other measures. Results Sixty-nine athletes (5.69%) met criteria for suboptimal effort on either ImPACT or the DCT. The DCT detected more cases of suboptimal effort (n = 50) than ImPACT (n = 21). A χ2 test of independence found no significant association between the two measures, which showed little overlap: only two individuals produced suboptimal effort on both (χ2(2) = 1.568, p = .210). Despite this disagreement, there were significant differences between the suboptimal effort DCT group and the adequate effort DCT group across all four ImPACT neurocognitive domains (U = 19,225.000, p < .001; U = 17,859.000, p < .001; U = 13,854.000, p < .001; U = 17,850.500, p < .001). Conclusions The DCT appears to detect suboptimal effort otherwise undetected by ImPACT’s EVMs.
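The agreement analysis above rests on a 2×2 table that can be inferred from the reported marginal counts (1,213 athletes; 50 flagged by the DCT, 21 by ImPACT, 2 by both). A sketch of that table and the uncorrected Pearson χ² statistic, assuming no continuity correction was applied:

```python
# Reconstructed 2x2 agreement table between the DCT and ImPACT EVM flags.
# The table is inferred from the marginal counts in the abstract; the
# abstract itself does not print it.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

both = 2
dct_only = 50 - 2        # flagged by the DCT but not ImPACT
impact_only = 21 - 2     # flagged by ImPACT but not the DCT
neither = 1213 - 69      # flagged by neither measure
chi2 = chi_square_2x2(both, dct_only, impact_only, neither)
```

The resulting statistic is close to the reported 1.568 (small discrepancies plausibly reflect rounding in the published counts).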


2020 ◽  
Vol 35 (6) ◽  
pp. 1020-1020
Author(s):  
Dacosta A ◽  
Roccaforte A ◽  
Sohoni R ◽  
Crane A ◽  
Webbe F ◽  
...  



Author(s):  
Daniel L. Drane ◽  
Dona E. C. Locke

This chapter covers what is known about the possible mechanisms of neurocognitive dysfunction in patients with psychogenic nonepileptic seizures (PNES). It begins with a review of all research examining possible cognitive deficits in this population. Cognitive research in PNES is often obscured by noise created by a host of comorbid conditions (e.g., depression, post-traumatic stress disorder, chronic pain) and associated issues (e.g., effects of medications and psychological processes that can compromise attention or broader cognition). More recent studies employing performance validity tests raise the possibility that studies finding broad cognitive problems in PNES may be highlighting a more transient phenomenon secondary to these comorbid or secondary factors. Such dysfunction would likely improve with successful management of PNES symptomatology, yet the effects of even transient variability likely compromise daily function until these issues are resolved. Future research must combine the use of neuropsychological testing, performance validity measures, psychological theory, neuroimaging analysis, and a thorough understanding of brain–behavior relationships to address whether there is a focal neuropathological syndrome associated with PNES.


2020 ◽  
Vol 26 (10) ◽  
pp. 1028-1035 ◽  
Author(s):  
Rachel Galioto ◽  
Kaltra Dhima ◽  
Ophira Berenholz ◽  
Robyn Busch

Abstract Objective: Performance validity tests (PVTs) are designed to detect nonvalid responding on neuropsychological testing, but their associations with disease-specific and other factors are not well understood in multiple sclerosis (MS). We examined PVT performance among MS patients and associations with clinical characteristics, cognition, mood, and disability status. Method: Retrospective data analysis was conducted on a sample of patients with definite MS (n = 102) who were seen for a clinical neuropsychological evaluation. Comparison samples included patients with intractable epilepsy seen for presurgical workup (n = 102) and patients with nonacute mild traumatic brain injury (mTBI; n = 50). Patients completed the Victoria Symptom Validity Test (VSVT) and validity cutoffs were defined as <16/24 and <18/24 on the hard items. Results: In this MS cohort, 14.4% of patients scored <16 on the VSVT hard items and 21.2% scored <18. VSVT hard item scores were associated with disability status and depression, but not with neuropsychological scores, T2 lesion burden, atrophy, disease duration, or MS subtype. Patients applying for disability benefits were 6.75 times more likely to score <18 relative to those who were not seeking disability. Rates of nonvalid scores were similar to the mTBI group and greater than the epilepsy group. Conclusions: This study demonstrates that nonvalid VSVT scores are relatively common among MS patients seen for clinical neuropsychological evaluation. VSVT performance in this group relates primarily to disability status and psychological symptoms and does not reflect factors specific to MS (i.e., cognitive impairment, disease severity). Recommendations for future clinical and research practices are provided.


2019 ◽  
Vol 34 (6) ◽  
pp. 835-835
Author(s):  
D Olsen ◽  
R Schroeder ◽  
P Martin

Abstract Objective A p-value of < .05 has traditionally been utilized to determine below chance performance on forced-choice performance validity tests (PVT). Recently, Binder and colleagues (2014 & 2018) proposed that the p-value cutoff increase to < .20. To ensure this does not result in frequent false-positive errors in patients who are likely to have significant cognitive impairment, frequency of below chance scores at both p-values was examined within the context of possible dementia. Method Archival data of cognitively impaired inpatient (n = 55; mean RBANS Total Score = 64.67) and outpatient (n = 203; mean RBANS Total Score = 74.15) older adults without external incentives were examined to determine frequency of below chance performances on the Coin-in-the-Hand Test. To supplement these data and examine below chance performance on a second PVT, the authors reviewed empirical literature and extracted data on TOMM performance in individuals with dementia. Four studies (n = 269 patients) provided data that could be extracted. Results No patient produced a Coin-in-the-Hand Test score (0/258 individuals) reaching either p-value cutoff. Similarly, no patient produced a TOMM Trial 2 (0/121 individuals) or Retention score (0/84 individuals) reaching either p-value cutoff. For TOMM Trial 1, no patient (0/44) scored at p < .05 but two patients (2/64) scored at p < .20. Conclusions No individual in this study produced scores on either PVT reaching the p < .05 cutoff. At the p < .20 cutoff, there were only 2 out of 527 performances (0.4%) that reached this threshold, both of which were observed on TOMM Trial 1. These data support the recommendation that p < .20 be used when determining below chance performance.
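The below-chance criterion discussed above is an exact one-tailed binomial test: on a two-alternative forced-choice test, a guessing examinee answers each item correctly with probability 0.5, so a score is "below chance" when the probability of doing that badly or worse by guessing falls under the chosen p-value. A minimal sketch, using a 50-item trial length like a single TOMM trial (the `below_chance_p` helper is illustrative, not from the study):

```python
from math import comb

def below_chance_p(correct, total):
    """Exact one-tailed binomial p-value: the probability of getting
    'correct' or fewer items right by pure guessing (p = 0.5 per item)
    on a two-alternative forced-choice test."""
    return sum(comb(total, k) for k in range(correct + 1)) / 2 ** total

# On a 50-item trial, a score of 18 or fewer falls below chance at
# p < .05; the more liberal p < .20 threshold flags scores a few
# points higher, which is why it is the more sensitive criterion.
p_at_18 = below_chance_p(18, 50)
p_at_21 = below_chance_p(21, 50)
```

This makes the trade-off concrete: moving the cutoff from p < .05 to p < .20 admits only slightly higher scores as "below chance," which is consistent with the very low false-positive rate (2/527) observed here in cognitively impaired patients.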

