Examining independent and combined accuracy of embedded performance validity tests in the California Verbal Learning Test-II and Brief Visuospatial Memory Test-Revised for detecting invalid performance

Author(s):  
Zachary J. Resch ◽  
Amber T. Pham ◽  
Dayna A. Abramson ◽  
Daniel J. White ◽  
Samantha DeDios-Stern ◽  
...  
2020 ◽  
Vol 35 (6) ◽  
pp. 1018-1018

Author(s):  
Arzuyan A ◽  
Mathew A ◽  
Rosenblatt A ◽  
Gracian E ◽  
Osmon D

Abstract Objective The Hopkins Verbal Learning Test-Revised (HVLT-R) and Brief Visuospatial Memory Test-Revised (BVMT-R) are memory tests with embedded measures of performance validity (Recognition Discrimination [RD] and Discrimination Index [DI], respectively). We evaluated whether cognitive ability and age influenced these embedded measures of effort. Methods Participants included 30 young adults (YA) and 29 older adults (dichotomized into unimpaired [OAu] and impaired [OAi]). Participants completed a medication management ability assessment (MMAA), a daily memory lapses survey (DM), digit span, and the Transverse Patterning (TP) and Reversal Learning (RL) computerized tests. Two repeated-measures MANOVAs were conducted to determine whether PVT pass/fail status and age/cognitive ability influenced performance. An ROC analysis was conducted for HVLT-RD and BVMT-DI to determine pass/fail status and false positives/negatives on the embedded measures. Results Those in the YA group who failed RDS (YA-fail) performed better than the OAi-fail and OAi-pass groups on RT Errors (p < .0001). On TP Errors, the YA group differed from all four OA groups (p < .0001). On the MMAA, a significant difference was observed between OAi-fail and all other groups (p < .001). On RD, the YA groups differed from both OAi groups (p = .0008). On DI, the YA groups differed from the OAi-fail group (p = .002). A logistic regression using the six predictors successfully classified 43/57 participants into the three cognitive groups (χ2 = 55.73, p < .0001, R2 = .468); RT Errors and TP were significant (likelihood χ2 = 7.25, p = .027). Conclusion HVLT-RD failed to detect invalid performance for OAi, as did BVMT-DI for YA and OAu. Instead, impairment effects appeared on HVLT-RD and BVMT-DI, with the YA groups differing from one or both of the OA groups.
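The ROC approach this abstract describes, scanning candidate cutoffs on an embedded validity index and tallying how many known-valid and known-invalid profiles each cutoff flags, can be illustrated with a minimal sketch. All scores and group sizes below are invented for illustration, not the study's data, and a score below the cutoff is treated as a "fail".

```python
# Sketch of an ROC-style cutoff scan for an embedded validity index.
# Group labels and scores are invented for illustration only.

def roc_points(valid_scores, invalid_scores):
    """For each candidate cutoff, return (cutoff, false_positive_rate,
    true_positive_rate), where a score below the cutoff means 'fail'."""
    cutoffs = sorted(set(valid_scores) | set(invalid_scores))
    points = []
    for c in cutoffs:
        tpr = sum(s < c for s in invalid_scores) / len(invalid_scores)  # sensitivity
        fpr = sum(s < c for s in valid_scores) / len(valid_scores)      # 1 - specificity
        points.append((c, fpr, tpr))
    return points

valid = [10, 11, 9, 12, 10, 8]   # credible performers (hypothetical)
invalid = [5, 7, 6, 9, 4, 6]     # noncredible performers (hypothetical)
for cutoff, fpr, tpr in roc_points(valid, invalid):
    print(cutoff, round(fpr, 2), round(tpr, 2))
```

Scanning the full list of observed scores as cutoffs is what lets an ROC analysis report the false-positive and false-negative trade-off at every possible pass/fail threshold.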


2020 ◽  
Vol 35 (6) ◽  
pp. 1002-1002
Author(s):  
Sheikh K ◽  
Peck C

Abstract Objective Prior studies have examined indices within the Brief Visuospatial Memory Test-Revised (BVMT-R) as potential embedded performance validity tests (PVTs). Findings from these studies, however, are limited and mixed. Therefore, the purpose of the current study was to compare the classification accuracy of the Hartford Consistency Index (HCI) with published BVMT-R performance validity measures in an outpatient sample. Method A total of 115 archival files met study inclusion criteria: a) ≥ 18 years old; b) administered > 2 PVTs (Reliable Digit Span, Dot Counting Test, and Test of Memory Malingering); and c) no diagnosis of intellectual disability or dementia. Utilizing standard cutoffs, participants were classified as ‘Valid’ (n = 94) or ‘Invalid’ (n = 21). ‘Valid’ profiles passed all PVTs and were free of known external incentives, while ‘Invalid’ profiles failed ≥ 2 PVTs. Results An HCI cutoff of < 1 yielded 90% specificity and 48% sensitivity, and the area under the curve (AUC = .70) was adequate. Applying published cutoffs for Recognition Hits (≤ 4) and Percent Retention (≤ 58%) to our sample produced > 90% specificity, but sensitivity rates were < 40% and AUCs were consistently < .70. Similarly, the Recognition Discrimination (≤ 4) cutoff revealed inadequate specificity (84%) but acceptable sensitivity (63%) and AUC (.73). Conclusions Results from our study support the use of the HCI as an embedded PVT within the BVMT-R for non-demented outpatient samples. Furthermore, the HCI outperformed the other embedded PVTs examined. Limitations of our study and future directions are discussed.
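The classification statistics reported above (specificity, sensitivity, and AUC at a fixed cutoff) follow directly from the pass/fail tallies in each criterion group. A minimal sketch, using simulated consistency-index scores rather than the study's data, and following the abstract's convention that scores below the cutoff are flagged as invalid:

```python
# Illustrative sketch: evaluating a cutoff-based embedded PVT.
# All scores below are simulated; they are NOT the study's data.

def classify(scores, cutoff):
    """Flag a profile as 'invalid' when its score falls below the cutoff."""
    return [s < cutoff for s in scores]

def sensitivity_specificity(valid_scores, invalid_scores, cutoff):
    flagged_invalid = classify(invalid_scores, cutoff)  # true positives among known-invalid
    flagged_valid = classify(valid_scores, cutoff)      # false positives among known-valid
    sensitivity = sum(flagged_invalid) / len(invalid_scores)
    specificity = 1 - sum(flagged_valid) / len(valid_scores)
    return sensitivity, specificity

def auc(valid_scores, invalid_scores):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen invalid profile scores below a randomly chosen valid one."""
    pairs = [(v, i) for v in valid_scores for i in invalid_scores]
    wins = sum(1.0 if i < v else 0.5 if i == v else 0.0 for v, i in pairs)
    return wins / len(pairs)

# Simulated consistency-index scores (hypothetical values).
valid = [3, 2, 4, 1, 3, 2, 5, 3]
invalid = [0, 1, 0, 2, 0, 1]

sens, spec = sensitivity_specificity(valid, invalid, cutoff=1)
print(sens, spec, auc(valid, invalid))
```

Computing AUC from pairwise comparisons rather than by integrating the ROC curve keeps the sketch short and makes its probabilistic interpretation explicit.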


2021 ◽  
Vol 11 (6) ◽  
pp. 800
Author(s):  
Harriet A. Ball ◽  
Marta Swirski ◽  
Margaret Newson ◽  
Elizabeth J. Coulthard ◽  
Catherine M. Pennington

Functional cognitive disorder (FCD) is a relatively common cause of cognitive symptoms, characterised by inconsistency between symptoms and observed or self-reported cognitive functioning. We aimed to improve the clinical characterisation of FCD, in particular its differentiation from early neurodegeneration. Two patient cohorts were recruited from a UK-based tertiary cognitive clinic, diagnosed following clinical assessment, investigation and expert multidisciplinary team review: FCD (n = 21) and neurodegenerative mild cognitive impairment (nMCI, n = 17). We separately recruited a healthy control group (n = 25). All participants completed an assessment battery including the Hopkins Verbal Learning Test-Revised (HVLT-R), Trail Making Test Part B (TMT-B), Depression Anxiety and Stress Scale (DASS), and Minnesota Multiphasic Personality Inventory (MMPI-2-RF). In comparison to healthy controls, the FCD and nMCI groups were equally impaired on trail making, immediate recall and recognition tasks; had equally elevated mood symptoms; showed similar aberration on a range of personality measures; and had similar difficulties on inbuilt performance validity tests. However, participants with FCD performed significantly better than those with nMCI on HVLT-R delayed free recall and retention (regression coefficient −10.34, p = 0.01). Mood, personality and certain cognitive abilities were similarly altered across the nMCI and FCD groups. However, those with FCD displayed spared delayed recall and retention, in contrast to impaired immediate recall and recognition. This pattern, which is distinct from that seen in prodromal neurodegeneration, is a marker of internal inconsistency. Differentiating FCD from nMCI is challenging, and the identification of positive neuropsychometric features of FCD is an important contribution to this emerging area of cognitive neurology.
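The internal-inconsistency marker described above rests on the retention score: delayed free recall expressed relative to earlier learning. A minimal sketch, assuming the common list-learning convention of dividing delayed recall by the best later learning trial, with invented scores rather than study data:

```python
# Percent retention on a list-learning test: delayed free recall expressed
# as a percentage of the best later learning trial. Scores are invented.

def percent_retention(delayed_recall, best_learning_trial):
    """Return delayed recall as a percentage of the best learning trial."""
    return 100.0 * delayed_recall / best_learning_trial

# An FCD-like profile: modest immediate recall, but nearly everything
# that was learned survives the delay.
print(percent_retention(delayed_recall=7, best_learning_trial=8))  # 87.5
```

Under this convention, a high retention percentage alongside weak immediate recall is exactly the dissociation the abstract identifies as distinguishing FCD from prodromal neurodegeneration, where delayed recall is typically disproportionately impaired.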


2021 ◽  
Vol 11 (8) ◽  
pp. 1039
Author(s):  
Elad Omer ◽  
Yoram Braw

Performance validity tests (PVTs) are used for the detection of noncredible performance in neuropsychological assessments. The aim of the study was to assess the efficacy (i.e., discrimination capacity) of a novel PVT, the Multi-Level Pattern Memory Test (MPMT). It includes stages that allow profile analysis (i.e., detecting noncredible performance based on an analysis of participants’ performance across stages) and minimizes the likelihood that it will be perceived as a PVT by examinees. In addition, it utilizes nonverbal stimuli and is therefore more likely to be cross-culturally valid. In Experiment 1, participants who were instructed to simulate cognitive impairment performed less accurately than honest controls on the MPMT (n = 67). Importantly, the MPMT showed adequate discrimination capacity, though somewhat lower than that of an established PVT (i.e., the Test of Memory Malingering; TOMM). Experiment 2 (n = 77) validated the findings of the first experiment while also indicating a dissociation between the simulators’ objective performance and their perceived cognitive load while performing the MPMT. The MPMT, and the profile analysis based on its outcome measures, show initial promise in detecting noncredible performance. It may therefore increase the range of PVTs at the disposal of clinicians, though further validation in clinical settings is warranted. The fact that it is open-source software will hopefully also encourage research programs aimed at clarifying the cognitive processes involved in noncredible performance and the impact of PVT characteristics on clinical utility.


2011 ◽  
Author(s):  
Jacques Arrieux ◽  
Robert L. Stegman ◽  
Wesley R. Cole ◽  
Leila Rodriguez ◽  
Mary A. Dale ◽  
...  
