C-52 Ethnic Differences in the Detection of Feigned Cognitive Symptoms: A Comparison of Neuropsychological Assessment Performance Validity Tests between African-Americans and Caucasians

2015 ◽ Vol 30 (6) ◽ pp. 581.2-581
Author(s): E Hood ◽ D Boone ◽ D Miora


2021 ◽ Vol 13 (4) ◽ pp. 477-486
Author(s): John W. Lace ◽ Zachary C. Merz ◽ Rachel Galioto

In neuropsychological assessment, clinicians are responsible for ensuring the validity of obtained cognitive data. As such, increased attention is being paid to performance validity in patients with multiple sclerosis (pwMS). Experts have proposed batteries of neuropsychological tests for use in this population, though none contain recommendations for standalone performance validity tests (PVTs). The California Verbal Learning Test, Second Edition (CVLT-II) and the Brief Visuospatial Memory Test, Revised (BVMT-R), both of which are included in the aforementioned recommended batteries, contain previously validated embedded PVTs (which offer some advantages, including expedience and reduced cost), yet no prior work has explored their utility in pwMS. The purpose of the present study was to determine the potential clinical utility of embedded PVTs to detect the signal of noncredibility, operationally defined as below-criterion standalone PVT performance. One hundred thirty-three (133) patients with MS (M age = 48.28; 76.7% women; 85.0% White) were referred for neuropsychological assessment at a large, Midwestern academic medical center. Patients were placed into “credible” (n = 100) or “noncredible” (n = 33) groups based on a standalone PVT criterion. Classification statistics for the four CVLT-II and BVMT-R embedded PVTs of interest were poor in isolation (AUCs = 0.58–0.62). Several arithmetic and logistic regression-derived multivariate formulas were calculated, all of which similarly demonstrated poor discriminability (AUCs = 0.61–0.64). Although embedded PVTs may arguably maximize efficiency and minimize test burden in pwMS, common ones in the CVLT-II and BVMT-R may not be psychometrically appropriate, sufficiently sensitive, or substitutable for standalone PVTs in this population. Clinical neuropsychologists who evaluate such patients are encouraged to include standalone PVTs in their assessment batteries to ensure that the clinical conclusions drawn from neuropsychological data are valid.
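As an illustration of the multivariate approach described above, here is a minimal Python sketch, using simulated data rather than the study's, of combining several embedded PVT indicators with logistic regression and judging discriminability by AUC; the group sizes mirror the abstract, but all variable names and effect sizes are assumptions for demonstration.

```python
# Minimal sketch (simulated data, not the study's): combine four
# hypothetical embedded PVT indicators with logistic regression and
# evaluate discriminability with the area under the ROC curve (AUC).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_credible, n_noncredible = 100, 33  # group sizes mirror the abstract

# Noncredible cases score slightly lower on each indicator, producing
# weak separation comparable to the AUCs the study reports.
X = np.vstack([
    rng.normal(0.0, 1.0, size=(n_credible, 4)),
    rng.normal(-0.3, 1.0, size=(n_noncredible, 4)),
])
y = np.array([0] * n_credible + [1] * n_noncredible)  # 1 = noncredible

model = LogisticRegression().fit(X, y)
print(f"AUC = {roc_auc_score(y, model.predict_proba(X)[:, 1]):.2f}")
```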


2014
Author(s): Douglas Mossman ◽ William Miller ◽ Elliot Lee ◽ Roger Gervais ◽ Kathleen Hart ◽ ...

Author(s): Andrew DaCosta ◽ Frank Webbe ◽ Anthony LoGalbo

Objective: The limitations of the Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) battery's embedded validity measures (EVMs) are well documented; estimates suggest up to 35% of invalid baseline performances go undetected. Few studies have examined standalone performance validity tests (PVTs) as a supplement to ImPACT's EVMs. Method: College athletes (n = 1,213) were administered a preseason baseline assessment that included ImPACT and the Rey Dot Counting Test (DCT), a standalone PVT, among other measures. Results: Sixty-nine athletes (5.69%) met criteria for suboptimal effort on either ImPACT or the DCT. The DCT detected more cases of suboptimal effort (n = 50) than ImPACT (n = 21). A χ2 test of independence indicated no significant association between the two measures, as only two individuals produced suboptimal effort on both (χ2(2) = 1.568, p = .210). Despite this disagreement, the suboptimal-effort DCT group differed significantly from the adequate-effort DCT group across all four ImPACT neurocognitive domains (U = 19,225.000, p < .001; U = 17,859.000, p < .001; U = 13,854.000, p < .001; U = 17,850.500, p < .001). Conclusions: The DCT appears to detect suboptimal effort otherwise undetected by ImPACT's EVMs.
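For readers who want to reproduce the agreement analysis, the 2×2 table below is inferred from the reported counts (2 flagged by both, 21 by ImPACT in total, 50 by the DCT in total, 1,213 overall); the cell layout is an assumption, though the uncorrected χ2 comes out close to the value reported above.

```python
# Sketch: chi-square test of independence on the 2x2 table implied by
# the reported counts. Cell placement is inferred from the abstract:
# 2 flagged by both, 21 - 2 = 19 by ImPACT only, 50 - 2 = 48 by the
# DCT only, and 1213 - 69 = 1144 by neither.
from scipy.stats import chi2_contingency

table = [[2, 19],      # ImPACT flagged:     (DCT flagged, DCT not)
         [48, 1144]]   # ImPACT not flagged: (DCT flagged, DCT not)
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}")  # ~1.58, p ~ .21
```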


2019 ◽ Vol 34 (6) ◽ pp. 835-835
Author(s): D Olsen ◽ R Schroeder ◽ P Martin

Objective: A p-value of < .05 has traditionally been used to determine below-chance performance on forced-choice performance validity tests (PVTs). Recently, Binder and colleagues (2014, 2018) proposed raising the p-value cutoff to < .20. To ensure this does not result in frequent false-positive errors in patients who are likely to have significant cognitive impairment, the frequency of below-chance scores at both p-values was examined within the context of possible dementia. Method: Archival data from cognitively impaired inpatient (n = 55; mean RBANS Total Score = 64.67) and outpatient (n = 203; mean RBANS Total Score = 74.15) older adults without external incentives were examined to determine the frequency of below-chance performances on the Coin-in-the-Hand Test. To supplement these data and examine below-chance performance on a second PVT, the authors reviewed the empirical literature and extracted data on TOMM performance in individuals with dementia. Four studies (n = 269 patients) provided extractable data. Results: No patient produced a Coin-in-the-Hand Test score reaching either p-value cutoff (0/258 individuals). Similarly, no patient produced a TOMM Trial 2 (0/121 individuals) or Retention score (0/84 individuals) reaching either p-value cutoff. For TOMM Trial 1, no patient (0/44) scored at p < .05, but two patients (2/64) scored at p < .20. Conclusions: No individual in this study produced scores on either PVT reaching the p < .05 cutoff. At the p < .20 cutoff, only 2 of 527 performances (0.4%) reached the threshold, both on TOMM Trial 1. These data support the recommendation that p < .20 be used when determining below-chance performance.
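For context, a "below chance" score on a two-alternative forced-choice PVT is evaluated with a one-tailed binomial test against guessing. Below is a minimal Python sketch assuming a 50-item trial (as on the TOMM); the example scores are illustrative, not drawn from the study.

```python
# Minimal sketch: flag scores significantly below chance on a 50-item,
# two-alternative forced-choice trial, at either p-value cutoff.
from scipy.stats import binomtest

def below_chance(correct, n_items=50, p_chance=0.5, alpha=0.20):
    """One-tailed binomial test: is `correct` significantly below chance?"""
    return binomtest(correct, n=n_items, p=p_chance, alternative="less").pvalue < alpha

# 17/50 falls below chance even at p < .05; 19/50 and 21/50 are caught
# only by the more liberal p < .20 cutoff; 23/50 is flagged by neither.
for score in (17, 19, 21, 23):
    print(score, below_chance(score, alpha=0.05), below_chance(score, alpha=0.20))
```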


2020 ◽ Vol 35 (6) ◽ pp. 1004-1004
Author(s): Smith A ◽ Thomas J ◽ Friedhoff C ◽ Chin E

Objective: In concussion populations, suboptimal effort on performance validity tests (PVTs) has been associated with poorer neuropsychological scores and greater post-concussive complaints. This study examined whether performance on TOMM Trial 1 was associated with increased cognitive deficits, post-concussive symptoms, and emotional concerns in a pediatric concussion population. Method: This study utilized archival data from 93 patients (mean age = 14.56, SD = 2.01) with a history of concussion who were assessed approximately 40 days post-injury. Individuals were divided into “Pass” and “Fail” groups based on their TOMM Trial 1 performance using the established cut-off. The testing battery included Auditory Consonant Trigrams, CPT-II and III, HVLT-R, WJ-III and IV, ImPACT, BASC-2, and BRIEF. Results: The overall pass rate on Trial 1 was 70% (mean = 46.04, SD = 4.55). There were no significant associations between Trial 1 performance and age, grade, gender, prior history of concussion, or mechanism of injury. The Fail group scored lower than the Pass group across domains of attention, memory, and processing speed (p < .05), though their performances were largely average. On rating scales, more attention and executive functioning concerns were endorsed for the Fail group relative to the Pass group (p < .05), though scores were below clinical levels. The Fail group reported more post-concussive complaints (p < .05) but did not significantly differ from the Pass group in depressive symptoms, anxiety, or somatization. Conclusions: This study highlights the importance of utilizing PVTs when evaluating concussion recovery in pediatric patients.
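A minimal sketch of the Pass/Fail comparison described above, in Python with simulated scores; the Trial 1 cutoff used here is a placeholder for illustration, not the study's established value.

```python
# Sketch (simulated data; placeholder cutoff): split patients by TOMM
# Trial 1 and compare a cognitive score across groups with Mann-Whitney U.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
trial1 = rng.integers(38, 51, size=93)    # simulated Trial 1 raw scores
attention = rng.normal(100, 15, size=93)  # simulated attention index

CUTOFF = 45                               # placeholder, not the study's value
passed = trial1 >= CUTOFF

u, p = mannwhitneyu(attention[passed], attention[~passed])
print(f"Pass n = {passed.sum()}, Fail n = {(~passed).sum()}, "
      f"U = {u:.1f}, p = {p:.3f}")
```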


2021
Author(s): J. Cobb Scott ◽ Tyler M. Moore ◽ David R. Roalf ◽ Theodore D. Satterthwaite ◽ Daniel H. Wolf ◽ ...

Objective: Data from neurocognitive assessments may not be accurate in the context of factors impacting validity, such as disengagement, unmotivated responding, or intentional underperformance. Performance validity tests (PVTs) were developed to address these phenomena and assess underperformance on neurocognitive tests. However, PVTs can be burdensome, rely on cutoff scores that reduce information, do not examine potential variations in task engagement across a battery, and are typically not well-suited to acquisition of large cognitive datasets. Here we describe the development of novel performance validity measures that could address some of these limitations by leveraging psychometric modeling from data embedded within the Penn Computerized Neurocognitive Battery (PennCNB). Method: We first developed these validity measures using simulations of invalid response patterns with parameters drawn from real data. Next, we examined their application in two large, independent samples: 1) children and adolescents from the Philadelphia Neurodevelopmental Cohort (n=9,498); and 2) adult servicemembers from the Marine Resiliency Study-II (n=1,444). Results: Our performance validity metrics detected patterns of invalid responding in simulated data, even at subtle levels. Furthermore, a combination of these metrics significantly predicted previously established validity rules for these tests in both developmental and adult datasets. Moreover, most clinical diagnostic groups did not show reduced validity estimates. Conclusion: These results provide proof-of-concept evidence for multivariate, data-driven performance validity metrics. These metrics offer a novel method for determining the performance validity for individual neurocognitive tests that is scalable, applicable across different tests, less burdensome, and dimensional. However, more research is needed into their application.
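The authors' metrics are not reproduced here, but the general idea of psychometric, model-based validity detection can be sketched with a classic person-fit statistic; the following Python example uses a Rasch model with made-up item difficulties and a fixed ability value, purely as an illustration under stated assumptions, not the PennCNB method.

```python
# Illustration (not the authors' method): the l_z person-fit statistic
# under a Rasch model separates model-consistent responding from
# random, disengaged responding. Item difficulties are made up, and
# ability (theta) is treated as known for simplicity; in practice it
# would be estimated from the response vector.
import numpy as np

rng = np.random.default_rng(0)
b = rng.normal(0.0, 1.0, size=40)  # hypothetical item difficulties

def p_correct(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def lz(responses, theta, b):
    p = p_correct(theta, b)
    loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (loglik - expected) / np.sqrt(variance)

theta = 0.5
engaged = (rng.random(40) < p_correct(theta, b)).astype(int)  # model-consistent
random_resp = rng.integers(0, 2, size=40)                     # disengaged

print(f"engaged lz = {lz(engaged, theta, b):.2f}")      # typically near 0
print(f"random  lz = {lz(random_resp, theta, b):.2f}")  # typically negative
```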


2021 ◽ Vol 11 (8) ◽ pp. 1039
Author(s): Elad Omer ◽ Yoram Braw

Performance validity tests (PVTs) are used to detect noncredible performance in neuropsychological assessments. The aim of this study was to assess the efficacy (i.e., discrimination capacity) of a novel PVT, the Multi-Level Pattern Memory Test (MPMT). It includes stages that allow profile analysis (i.e., detecting noncredible performance based on an analysis of participants' performance across stages) and minimizes the likelihood that examinees will perceive it as a PVT. In addition, it uses nonverbal stimuli and is therefore more likely to be cross-culturally valid. In Experiment 1, participants who were instructed to simulate cognitive impairment performed less accurately on the MPMT than honest controls (n = 67). Importantly, the MPMT showed adequate discrimination capacity, though somewhat lower than an established PVT (the Test of Memory Malingering; TOMM). Experiment 2 (n = 77) replicated the findings of the first experiment while also indicating a dissociation between the simulators' objective performance and their perceived cognitive load while performing the MPMT. The MPMT, and the profile analysis based on its outcome measures, shows initial promise in detecting noncredible performance. It may therefore increase the range of PVTs at clinicians' disposal, though further validation in clinical settings is warranted. Because the MPMT is open-source software, it will hopefully also encourage research programs aimed at clarifying the cognitive processes involved in noncredible performance and the impact of PVT characteristics on clinical utility.

