Performance Validity Tests
Recently Published Documents


TOTAL DOCUMENTS: 61 (five years: 34)

H-INDEX: 12 (five years: 2)

2021 · Vol 13 (4) · pp. 477-486
Author(s): John W. Lace, Zachary C. Merz, Rachel Galioto

In neuropsychological assessment, clinicians are responsible for ensuring the validity of obtained cognitive data. As such, increased attention is being paid to performance validity in patients with multiple sclerosis (pwMS). Experts have proposed batteries of neuropsychological tests for use in this population, though none contain recommendations for standalone performance validity tests (PVTs). The California Verbal Learning Test, Second Edition (CVLT-II) and the Brief Visuospatial Memory Test, Revised (BVMT-R), both of which are included in the aforementioned recommended batteries, contain previously validated embedded PVTs, which offer some advantages (including expedience and reduced cost), but no prior work has explored their utility in pwMS. The purpose of the present study was to determine the potential clinical utility of embedded PVTs for detecting noncredible performance, operationally defined as below-criterion performance on a standalone PVT. One hundred thirty-three (133) patients with MS (M age = 48.28; 76.7% women; 85.0% White) were referred for neuropsychological assessment at a large Midwestern academic medical center. Patients were placed into "credible" (n = 100) or "noncredible" (n = 33) groups based on a standalone PVT criterion. Classification statistics for the four CVLT-II and BVMT-R embedded PVTs of interest in isolation were poor (AUCs = 0.58–0.62). Several arithmetic and logistic regression-derived multivariate formulas were calculated, all of which similarly demonstrated poor discriminability (AUCs = 0.61–0.64). Although embedded PVTs may arguably maximize efficiency and minimize test burden in pwMS, common ones in the CVLT-II and BVMT-R may not be psychometrically appropriate, sufficiently sensitive, or substitutable for standalone PVTs in this population. Clinical neuropsychologists who evaluate such patients are encouraged to include standalone PVTs in their assessment batteries to ensure that clinical conclusions drawn from neuropsychological data are valid.
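As a concrete illustration of the kind of classification analysis reported above, the sketch below computes single-indicator AUCs and a logistic regression-derived multivariate combination for embedded PVT scores against a standalone-PVT criterion. The indicator names, score distributions, and resulting values are hypothetical stand-ins; the study's actual variables and formulas are not reproduced here.

```python
# Minimal sketch, assuming hypothetical embedded-PVT scores and a binary
# standalone-PVT criterion (this is not the study's data or formula).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical criterion: 0 = credible (n = 100), 1 = noncredible (n = 33)
y = np.concatenate([np.zeros(100, dtype=int), np.ones(33, dtype=int)])

# Hypothetical embedded-PVT scores (lower = more suspect), weakly separated
embedded_a = np.concatenate([rng.normal(15.0, 1.5, 100), rng.normal(14.0, 2.0, 33)])
embedded_b = np.concatenate([rng.normal(5.5, 0.8, 100), rng.normal(5.0, 1.0, 33)])

# Single-indicator discriminability (negate scores so higher = more suspect)
for name, score in [("embedded A", embedded_a), ("embedded B", embedded_b)]:
    print(name, "AUC =", round(roc_auc_score(y, -score), 2))

# Logistic regression-derived multivariate combination of both indicators
X = np.column_stack([embedded_a, embedded_b])
model = LogisticRegression().fit(X, y)
print("combined AUC =", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 2))
```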


Author(s): Cynthia Aguilar, Cassandra Bailey, Kenny A. Karyadi, Dominique I. Kinney, Stephen R. Nitch

2021 · Vol 36 (6) · pp. 1236-1236
Author(s): Hyun Jin Kang, Michelle Kim, Karen Torres

Abstract
Objective: Factors specific to an epilepsy population (e.g., medications, psychiatric comorbidities, localization) may produce higher false positive rates on performance validity tests (PVTs), rendering the results more equivocal. This study examined whether specificity is reduced on the Warrington Recognition Memory Test - Words (WRMT-W) and the Test of Memory Malingering (TOMM) in epilepsy patients putting forth adequate effort.
Method: Fifty-three epilepsy patients referred for neuropsychological evaluation through the University of Washington Regional Epilepsy Center were examined. Patients were majority male (56.6%) and Caucasian (79.2%). Average age and education were 36.1 years (SD = 13.03) and 13.4 years (SD = 2.39), respectively. Patients with an intelligence quotient below 70 or a history of brain surgery, and those who seized during testing, were excluded, as were patients clinically observed to have reduced effort with two or more PVT failures (n = 3). Frequency tables of WRMT-W and TOMM performances were used to examine specificity based on previously identified cutoffs for these measures.
Results: The WRMT-W cutoff of ≤42 was associated with 88.7% specificity. TOMM Trial 2 and Retention cutoffs of <45 were associated with 98.1% and 100% specificity, respectively. In patients with language-dominant hemisphere seizure onset (n = 16), the WRMT-W cutoff was associated with 91.7% specificity, and none performed below the TOMM cutoffs. All patients with nondominant hemisphere onset (n = 8) performed above the WRMT-W and TOMM cutoffs.
Conclusions: Use of the WRMT-W and TOMM in an epilepsy population is associated with an acceptable false positive rate (specificity around 90%). However, future studies should examine the sensitivity of these measures in epilepsy patients.
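For clarity, specificity here is the proportion of presumed-valid patients who are not flagged by a cutoff. A minimal sketch follows, assuming hypothetical score distributions; only the cutoffs (≤42 for the WRMT-W, <45 for the TOMM) mirror the abstract.

```python
# Minimal sketch of a specificity calculation in a presumed-valid sample.
# The score arrays are hypothetical, not the study's data.
import numpy as np

def specificity(flagged):
    """Specificity = proportion of credible examinees NOT flagged as invalid."""
    return float(1.0 - np.mean(flagged))

rng = np.random.default_rng(1)
wrmt_w = rng.integers(40, 51, size=53)   # hypothetical WRMT-W scores (max 50)
tomm_t2 = rng.integers(43, 51, size=53)  # hypothetical TOMM Trial 2 scores (max 50)

print(f"WRMT-W (fail if <=42): {specificity(wrmt_w <= 42):.1%}")
print(f"TOMM Trial 2 (fail if <45): {specificity(tomm_t2 < 45):.1%}")
```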


2021 · Vol 11 (8) · pp. 1039
Author(s): Elad Omer, Yoram Braw

Performance validity tests (PVTs) are used to detect noncredible performance in neuropsychological assessments. The aim of the study was to assess the efficacy (i.e., discrimination capacity) of a novel PVT, the Multi-Level Pattern Memory Test (MPMT). It includes stages that allow profile analysis (i.e., detecting noncredible performance based on an analysis of participants' performance across stages) and minimizes the likelihood that examinees will perceive it as a PVT. In addition, it uses nonverbal stimuli and is therefore more likely to be cross-culturally valid. In Experiment 1 (n = 67), participants who were instructed to simulate cognitive impairment performed less accurately on the MPMT than honest controls. Importantly, the MPMT showed adequate discrimination capacity, though somewhat lower than an established PVT (the Test of Memory Malingering; TOMM). Experiment 2 (n = 77) replicated the findings of the first experiment while also indicating a dissociation between the simulators' objective performance and their perceived cognitive load while performing the MPMT. The MPMT, and the profile analysis based on its outcome measures, show initial promise in detecting noncredible performance. It may therefore increase the range of PVTs at the disposal of clinicians, though further validation in clinical settings is needed. The fact that it is open-source software will hopefully also encourage research programs aimed at clarifying the cognitive processes involved in noncredible performance and the impact of PVT characteristics on clinical utility.
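The abstract does not describe the MPMT's profile-analysis algorithm, but the general idea can be sketched: genuine impairment should produce accuracy that tracks stage difficulty, whereas simulators often fail easy stages disproportionately or show inverted difficulty profiles. The stage ordering, thresholds, and decision rule below are entirely hypothetical illustrations of that logic, not the MPMT's scoring.

```python
# Hypothetical sketch of stage-wise profile analysis. Thresholds and the
# decision rule are illustrative assumptions, not the MPMT's algorithm.
from typing import List

def flag_noncredible(stage_accuracies: List[float],
                     easy_floor: float = 0.85,
                     inversion_margin: float = 0.10) -> bool:
    """stage_accuracies is ordered from easiest to hardest stage (0..1)."""
    easiest, hardest = stage_accuracies[0], stage_accuracies[-1]
    # Rule 1: even impaired examinees rarely fail the easiest stage badly.
    if easiest < easy_floor:
        return True
    # Rule 2: clearly better performance on hard stages than on easy ones
    # is inconsistent with a genuine difficulty gradient.
    if hardest > easiest + inversion_margin:
        return True
    return False

print(flag_noncredible([0.95, 0.90, 0.70]))  # plausible genuine profile -> False
print(flag_noncredible([0.60, 0.65, 0.70]))  # easy-stage failure/inversion -> True
```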


2021
Author(s): J. Cobb Scott, Tyler M. Moore, David R. Roalf, Theodore D. Satterthwaite, Daniel H. Wolf, ...

Objective: Data from neurocognitive assessments may not be accurate in the context of factors that compromise validity, such as disengagement, unmotivated responding, or intentional underperformance. Performance validity tests (PVTs) were developed to address these phenomena and to assess underperformance on neurocognitive tests. However, PVTs can be burdensome, rely on cutoff scores that reduce information, do not examine potential variations in task engagement across a battery, and are typically not well suited to the acquisition of large cognitive datasets. Here we describe the development of novel performance validity measures that could address some of these limitations by applying psychometric modeling to data embedded within the Penn Computerized Neurocognitive Battery (PennCNB).
Method: We first developed these validity measures using simulations of invalid response patterns with parameters drawn from real data. Next, we examined their application in two large, independent samples: (1) children and adolescents from the Philadelphia Neurodevelopmental Cohort (n = 9,498) and (2) adult service members from the Marine Resiliency Study-II (n = 1,444).
Results: Our performance validity metrics detected patterns of invalid responding in simulated data, even at subtle levels. Furthermore, a combination of these metrics significantly predicted previously established validity rules for these tests in both the developmental and adult datasets. Moreover, most clinical diagnostic groups did not show reduced validity estimates.
Conclusion: These results provide proof-of-concept evidence for multivariate, data-driven performance validity metrics. These metrics offer a novel method for determining performance validity on individual neurocognitive tests that is scalable, applicable across different tests, less burdensome, and dimensional. However, more research is needed on their application.
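The abstract does not specify the psychometric model used, so the following is a generic illustration of one data-driven validity signal of this kind: a person-fit count of Guttman errors, in which an examinee fails items that are easier than items they pass. High counts indicate responding inconsistent with the items' difficulty ordering, one possible marker of invalid engagement. The item difficulties and response vectors are invented for the example.

```python
# Generic person-fit sketch (Guttman error count); not the paper's model.
import numpy as np

def guttman_errors(responses: np.ndarray, item_difficulty: np.ndarray) -> int:
    """responses: 0/1 correctness per item; item_difficulty: higher = harder."""
    order = np.argsort(item_difficulty)  # sort items easiest -> hardest
    r = responses[order]
    # Count pairs where an easier item is failed but a harder item is passed.
    errors = 0
    for i in range(len(r)):
        if r[i] == 0:
            errors += int(r[i + 1:].sum())
    return errors

difficulty = np.array([0.1, 0.2, 0.4, 0.6, 0.8])
print(guttman_errors(np.array([1, 1, 1, 0, 0]), difficulty))  # consistent -> 0
print(guttman_errors(np.array([0, 0, 1, 1, 1]), difficulty))  # inconsistent -> 6
```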


Author(s): Justin C. Koenitzer, Janice E. Herron, Jesse W. Whitlow, Catherine M. Barbuscak, Nitin R. Patel, ...

Abstract
Objective: Performance validity tests (PVTs) are an integral component of neuropsychological assessment, and there is a need for additional PVTs, especially those employing covert determinations. The aim of the present study was to provide initial validation of a new computerized PVT, the Perceptual Assessment of Memory (PASSOM).
Method: Participants were 58 undergraduate students randomly assigned to a simulator (SIM) or control (CON) group. All participants received written instructions for their role prior to testing and were administered the PASSOM as part of a brief battery of neurocognitive tests. Indices of interest included response accuracy on Trials 1 and 2 and total errors across trials, as well as response time (RT) on Trials 1 and 2 and total RT across trials.
Results: The SIM group produced significantly more errors than the CON group on Trials 1 and 2 and committed more total errors across trials. Significantly longer response latencies were found for the SIM group than for the CON group on all RT indices examined. Linear regression modeling indicated excellent group classification for all indices studied, with areas under the curve ranging from 0.92 to 0.95. Sensitivity and specificity were good for several cut scores across the accuracy and RT indices, and sensitivity improved greatly when RT cut scores were combined with the more traditional accuracy cut scores.
Conclusion: Findings demonstrate the ability of the PASSOM to distinguish individuals instructed to feign cognitive impairment from those told to perform to the best of their ability.
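The gain from combining cut scores can be illustrated with a simple disjunctive rule: flag an examinee who fails either the accuracy cutoff or the RT cutoff, trading a small specificity cost for higher sensitivity. The index distributions and cutoffs below are hypothetical; the PASSOM's actual cut scores are not given in the abstract.

```python
# Hypothetical sketch of combining accuracy and RT cut scores with an OR
# rule; the data and cutoffs are invented, not the PASSOM's.
import numpy as np

rng = np.random.default_rng(2)
n_sim, n_con = 29, 29

# Hypothetical indices: total errors (higher = worse) and total RT in ms
errors = np.concatenate([rng.poisson(12, n_sim), rng.poisson(4, n_con)])
rt_ms = np.concatenate([rng.normal(1400, 250, n_sim), rng.normal(900, 150, n_con)])
is_sim = np.concatenate([np.ones(n_sim, bool), np.zeros(n_con, bool)])

ERR_CUT, RT_CUT = 8, 1200                  # hypothetical cutoffs
flag_err = errors > ERR_CUT                # accuracy cut score alone
flag_combo = flag_err | (rt_ms > RT_CUT)   # OR rule: fail either index

for name, flag in [("errors only", flag_err), ("errors OR RT", flag_combo)]:
    sens = flag[is_sim].mean()             # flagged among simulators
    spec = (~flag[~is_sim]).mean()         # unflagged among controls
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```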

