Limited English Proficiency Increases Failure Rates on Performance Validity Tests with High Verbal Mediation

2017 ◽  
Vol 10 (1) ◽  
pp. 96-103 ◽  
Author(s):  
Laszlo A. Erdodi ◽  
Shayna Nussbaum ◽  
Sanya Sagar ◽  
Christopher A. Abeare ◽  
Eben S. Schwartz


2020 ◽  
Vol 91 (9) ◽  
pp. 945-952 ◽  
Author(s):  
Laura McWhirter ◽  
Craig W Ritchie ◽  
Jon Stone ◽  
Alan Carson

Performance validity tests (PVTs) are widely used in attempts to quantify effort and/or detect negative response bias during neuropsychological testing. However, it can be challenging to interpret the meaning of poor PVT performance in a clinical context. Compensation-seeking populations predominate in the PVT literature. We aimed to establish base rates of PVT failure in clinical populations without known external motivation to underperform. We searched MEDLINE, EMBASE and PsycINFO for studies reporting PVT failure rates in adults with defined clinical diagnoses, excluding studies of active or veteran military personnel, forensic populations or studies of participants known to be litigating or seeking disability benefits. Results were summarised by diagnostic group and implications discussed. Our review identified 69 studies, and 45 different PVTs or indices, in clinical populations with intellectual disability, degenerative brain disease, brain injury, psychiatric disorders, functional disorders and epilepsy. Various pass/fail cut-off scores were described. PVT failure was common in all clinical groups described, with failure rates for some groups and tests exceeding 25%. PVT failure is common across a range of clinical conditions, even in the absence of obvious incentive to underperform. Failure rates are no higher in functional disorders than in other clinical conditions. As PVT failure indicates invalidity of other attempted neuropsychological tests, the finding of frequent and unexpected failure in a range of clinical conditions raises important questions about the degree of objectivity afforded to neuropsychological tests in clinical practice and research.


2014 ◽  
Author(s):  
Douglas Mossman ◽  
William Miller ◽  
Elliot Lee ◽  
Roger Gervais ◽  
Kathleen Hart ◽  
... 


Author(s):  
Andrew DaCosta ◽  
Frank Webbe ◽  
Anthony LoGalbo

Abstract Objective The limitations of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT)'s embedded validity measures (EVMs) are well documented, as estimates suggest up to 35% of invalid baseline performances go undetected. Few studies have examined standalone performance validity tests (PVTs) as a supplement to ImPACT's EVMs. Method College athletes (n = 1,213) were administered a preseason baseline assessment that included ImPACT and the Rey Dot Counting Test (DCT), a standalone PVT, among other measures. Results Sixty-nine athletes (5.69%) met criteria for suboptimal effort on either ImPACT or the DCT. The DCT detected more cases of suboptimal effort (n = 50) than ImPACT (n = 21). A χ2 test of independence indicated no significant association between the two measures' classifications, as only two individuals produced suboptimal effort on both (χ2(2) = 1.568, p = .210). Despite this lack of agreement, there were significant differences between the suboptimal-effort DCT group and the adequate-effort DCT group across all four ImPACT neurocognitive domains (U = 19,225.000, p < .001; U = 17,859.000, p < .001; U = 13,854.000, p < .001; U = 17,850.500, p < .001). Conclusions The DCT appears to detect suboptimal effort otherwise undetected by ImPACT's EVMs.
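The agreement analysis above can be reproduced, at least approximately, from the counts reported in the abstract (1,213 athletes, 50 flagged by the DCT, 21 by ImPACT, 2 flagged by both). A minimal sketch with SciPy is shown below; the resulting statistic will not necessarily match the reported χ2(2) = 1.568, since the original degrees of freedom and any continuity correction are not specified, so treat this as an illustration of the method rather than a re-analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 contingency table reconstructed from the reported counts:
# 1,213 athletes, 50 DCT flags, 21 ImPACT flags, 2 flagged by both.
both = 2
dct_only = 50 - both          # flagged by DCT but not ImPACT
impact_only = 21 - both       # flagged by ImPACT but not DCT
neither = 1213 - both - dct_only - impact_only

table = np.array([[both, dct_only],
                  [impact_only, neither]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3g}")
print("Expected counts under independence:")
print(expected.round(1))
```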


2020 ◽  
Vol 35 (6) ◽  
pp. 1020-1020
Author(s):  
Dacosta A ◽  
Roccaforte A ◽  
Sohoni R ◽  
Crane A ◽  
Webbe F ◽  
...  

Abstract Objective The limitations of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT)'s embedded validity measures (EVMs) are well documented, as estimates suggest up to 35% of invalid baseline performances go undetected (Gaudet & Weyandt, 2017). Few studies have examined standalone performance validity tests (PVTs) as a supplement to ImPACT's EVMs (Gaudet & Weyandt, 2017). Method College athletes (n = 1,213) were administered a preseason baseline assessment that included ImPACT and the Rey Dot Counting Test (DCT), a standalone PVT, among other measures. Results Sixty-nine athletes (5.69%) met criteria for suboptimal effort on either ImPACT or the DCT. The DCT detected more cases of suboptimal effort (n = 50) than ImPACT (n = 21). A χ2 test of independence indicated no significant association between the two measures' classifications, as only two individuals produced suboptimal effort on both (χ2(2) = 1.568, p = .210). Despite this lack of agreement, there were significant differences between the suboptimal-effort DCT group and the adequate-effort DCT group across all four ImPACT neurocognitive domains (U = 19,225.000, p < .001; U = 17,859.000, p < .001; U = 13,854.000, p < .001; U = 17,850.500, p < .001). Conclusions The DCT appears to detect suboptimal effort otherwise undetected by ImPACT's EVMs.


2019 ◽  
Vol 34 (6) ◽  
pp. 835-835
Author(s):  
D Olsen ◽  
R Schroeder ◽  
P Martin

Abstract Objective A p-value of < .05 has traditionally been utilized to determine below-chance performance on forced-choice performance validity tests (PVTs). Recently, Binder and colleagues (2014, 2018) proposed that the p-value cutoff be increased to < .20. To ensure this does not result in frequent false-positive errors in patients who are likely to have significant cognitive impairment, the frequency of below-chance scores at both p-values was examined within the context of possible dementia. Method Archival data of cognitively impaired inpatient (n = 55; mean RBANS Total Score = 64.67) and outpatient (n = 203; mean RBANS Total Score = 74.15) older adults without external incentives were examined to determine the frequency of below-chance performances on the Coin-in-the-Hand Test. To supplement these data and examine below-chance performance on a second PVT, the authors reviewed the empirical literature and extracted data on TOMM performance in individuals with dementia. Four studies (n = 269 patients) provided data that could be extracted. Results No patient produced a Coin-in-the-Hand Test score (0/258 individuals) reaching either p-value cutoff. Similarly, no patient produced a TOMM Trial 2 (0/121 individuals) or Retention score (0/84 individuals) reaching either p-value cutoff. For TOMM Trial 1, no patient (0/44) scored at p < .05 but two patients (2/64) scored at p < .20. Conclusions No individual in this study produced scores on either PVT reaching the p < .05 cutoff. At the p < .20 cutoff, only 2 of 527 performances (0.4%) reached the threshold, both on TOMM Trial 1. These data support the recommendation that p < .20 be used when determining below-chance performance.
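"Below chance" here refers to a one-tailed binomial test against guessing: on a two-alternative forced-choice PVT, a purely guessing examinee is correct on about half the items, so a score is called below chance when its cumulative binomial probability falls under the chosen p-value. The sketch below computes the corresponding raw-score cutoffs at p < .05 and p < .20 for an illustrative 50-item, two-alternative trial (the format of the TOMM trials); the Coin-in-the-Hand Test and other PVTs have different lengths, so the cutoffs shift accordingly.

```python
from scipy.stats import binom

def below_chance_cutoff(n_items, p_chance=0.5, alpha=0.05):
    """Highest raw score whose one-tailed probability under pure guessing
    (scoring that low or lower) is less than alpha; None if no score qualifies."""
    for k in range(n_items, -1, -1):
        if binom.cdf(k, n_items, p_chance) < alpha:
            return k
    return None

# Illustrative 50-item, two-alternative forced-choice trial
for alpha in (0.05, 0.20):
    cutoff = below_chance_cutoff(50, p_chance=0.5, alpha=alpha)
    print(f"p < {alpha}: scores of {cutoff}/50 or lower are below chance")
```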


2020 ◽  
Vol 35 (6) ◽  
pp. 1004-1004
Author(s):  
Smith A ◽  
Thomas J ◽  
Friedhoff C ◽  
Chin E

Abstract Objective In concussion populations, suboptimal effort on performance validity tests (PVTs) has been associated with poorer neuropsychological scores and greater post-concussive complaints. This study examined whether performance on TOMM Trial 1 was associated with increased cognitive deficits, post-concussive symptoms, and emotional concerns in a pediatric concussion population. Method This study utilized archival data from 93 patients (mean age = 14.56, SD = 2.01) with a history of concussion who were assessed at approximately 40 days post-injury. Individuals were divided into "Pass" and "Fail" groups based on their TOMM Trial 1 performance using the established cut-off. The testing battery included Auditory Consonant Trigrams, CPT-II and III, HVLT-R, WJ-III and IV, ImPACT, BASC-2, and BRIEF. Results The overall pass rate on Trial 1 was 70% (mean = 46.04, SD = 4.55). There were no significant correlations between Trial 1 and age, grade, gender, prior history of concussion, or mechanism of injury. The Fail group scored lower across domains of attention, memory, and processing speed when compared to the Pass group (p < .05), though their performances were largely average. On rating scales, more concerns were endorsed for the Fail group in attention and executive functioning relative to the Pass group (p < .05), though their scores were below clinical levels. The Fail group reported more post-concussive complaints (p < .05) but did not significantly differ from the Pass group in depressive symptoms, anxiety, or somatization. Conclusions This study highlights the importance of utilizing PVTs when evaluating concussion recovery in pediatric patients.
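The Pass/Fail group contrasts reported in this and the preceding abstracts typically rely on Mann-Whitney U tests, which compare ranks rather than means and so tolerate skewed neuropsychological score distributions. A minimal sketch of that comparison is shown below; the group sizes and score distributions are hypothetical and are not the study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical standardized memory scores, for illustration only
pass_group = rng.normal(100, 12, size=65)   # passed TOMM Trial 1
fail_group = rng.normal(90, 14, size=28)    # failed TOMM Trial 1

u_stat, p_value = mannwhitneyu(pass_group, fail_group, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```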


2021 ◽  
Author(s):  
J. Cobb Scott ◽  
Tyler M. Moore ◽  
David R Roalf ◽  
Theodore D. Satterthwaite ◽  
Daniel H. Wolf ◽  
...  

Objective: Data from neurocognitive assessments may not be accurate in the context of factors impacting validity, such as disengagement, unmotivated responding, or intentional underperformance. Performance validity tests (PVTs) were developed to address these phenomena and assess underperformance on neurocognitive tests. However, PVTs can be burdensome, rely on cutoff scores that reduce information, do not examine potential variations in task engagement across a battery, and are typically not well suited to acquisition of large cognitive datasets. Here we describe the development of novel performance validity measures that could address some of these limitations by applying psychometric modeling to data embedded within the Penn Computerized Neurocognitive Battery (PennCNB). Method: We first developed these validity measures using simulations of invalid response patterns with parameters drawn from real data. Next, we examined their application in two large, independent samples: 1) children and adolescents from the Philadelphia Neurodevelopmental Cohort (n = 9,498); and 2) adult servicemembers from the Marine Resiliency Study-II (n = 1,444). Results: Our performance validity metrics detected patterns of invalid responding in simulated data, even at subtle levels. Furthermore, a combination of these metrics significantly predicted previously established validity rules for these tests in both developmental and adult datasets. Moreover, most clinical diagnostic groups did not show reduced validity estimates. Conclusion: These results provide proof-of-concept evidence for multivariate, data-driven performance validity metrics. These metrics offer a novel method for determining performance validity on individual neurocognitive tests that is scalable, applicable across different tests, less burdensome, and dimensional. However, more research into their application is needed.
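The psychometric modeling embedded in the PennCNB is not reproduced here, but the general simulation logic the authors describe (injecting invalid response patterns and checking whether a metric separates them from engaged responding) can be sketched in a few lines. Everything below, including the toy "easy-item accuracy" index, the item difficulties, and the group sizes, is an assumption for illustration and is not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(42)
n_items, n_valid, n_invalid = 40, 200, 20

# Item "easiness": probability that an engaged examinee answers correctly
easiness = np.linspace(0.95, 0.55, n_items)

# Engaged responders: accuracy tracks item easiness plus person-level noise
ability = rng.normal(0.0, 0.05, size=(n_valid, 1))
valid = rng.random((n_valid, n_items)) < np.clip(easiness + ability, 0, 1)

# Invalid responders: guessing at chance on four-alternative items
invalid = rng.random((n_invalid, n_items)) < 0.25

def easy_item_accuracy(responses, easiness, top_k=10):
    """Toy validity index: accuracy on the top_k easiest items.
    Engaged examinees rarely miss easy items; random responders often do."""
    easiest = np.argsort(easiness)[-top_k:]
    return responses[:, easiest].mean(axis=1)

print("engaged responders, mean index:", easy_item_accuracy(valid, easiness).mean().round(2))
print("random responders,  mean index:", easy_item_accuracy(invalid, easiness).mean().round(2))
```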


2021 ◽  
Vol 11 (8) ◽  
pp. 1039
Author(s):  
Elad Omer ◽  
Yoram Braw

Performance validity tests (PVTs) are used for the detection of noncredible performance in neuropsychological assessments. The aim of the study was to assess the efficacy (i.e., discrimination capacity) of a novel PVT, the Multi-Level Pattern Memory Test (MPMT). It includes stages that allow profile analysis (i.e., detecting noncredible performance based on an analysis of participants' performance across stages) and minimizes the likelihood that it would be perceived as a PVT by examinees. In addition, it utilizes nonverbal stimuli and is therefore more likely to be cross-culturally valid. In Experiment 1, participants who were instructed to simulate cognitive impairment performed less accurately than honest controls on the MPMT (n = 67). Importantly, the MPMT showed adequate discrimination capacity, though somewhat lower than that of an established PVT (i.e., the Test of Memory Malingering; TOMM). Experiment 2 (n = 77) validated the findings of the first experiment while also indicating a dissociation between the simulators' objective performance and their perceived cognitive load while performing the MPMT. The MPMT and the profile analysis based on its outcome measures show initial promise in detecting noncredible performance. It may therefore expand the range of PVTs at clinicians' disposal, though further validation in clinical settings is warranted. The fact that it is open-source software will hopefully also encourage the development of research programs aimed at clarifying the cognitive processes involved in noncredible performance and the impact of PVT characteristics on clinical utility.
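"Discrimination capacity" is typically summarized with a receiver operating characteristic (ROC) analysis: how well a PVT score separates instructed simulators from honest controls across all possible cutoffs. The sketch below shows that computation on simulated score vectors; the group sizes, means, and spreads are invented for illustration and are not the MPMT or TOMM data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

# Hypothetical PVT accuracy scores (proportion correct), illustration only
honest = np.clip(rng.normal(0.95, 0.04, size=40), 0, 1)
simulators = np.clip(rng.normal(0.70, 0.15, size=35), 0, 1)

scores = np.concatenate([honest, simulators])
labels = np.concatenate([np.zeros(honest.size), np.ones(simulators.size)])

# Lower accuracy indicates noncredible performance, so negate scores
# so that higher values point toward the "simulator" class
auc = roc_auc_score(labels, -scores)
print(f"AUC = {auc:.2f}")
```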

