Multidimensional Malingering Criteria for Neuropsychological Assessment: A 20-Year Update of the Malingered Neuropsychological Dysfunction Criteria

2020
Vol. 35(6)
pp. 735–764
Author(s):  
Elisabeth M S Sherman
Daniel J Slick
Grant L Iverson

Abstract

Objectives: Empirically informed neuropsychological opinion is critical for determining whether cognitive deficits and symptoms are legitimate, particularly in settings where there are significant external incentives for successful malingering. The Slick, Sherman, and Iverson (1999) criteria for malingered neurocognitive dysfunction (MND) are considered a major milestone in the field's operationalization of neurocognitive malingering and have strongly influenced the development of malingering detection methods, including serving as the criterion of malingering in the validation of several performance validity tests (PVTs) and symptom validity tests (SVTs) (Slick, D. J., Sherman, E. M. S., & Iverson, G. L. (1999). Diagnostic criteria for malingered neurocognitive dysfunction: Proposed standards for clinical practice and research. The Clinical Neuropsychologist, 13(4), 545–561). However, the MND criteria are long overdue for revision to address advances in malingering research and limitations identified by experts in the field.

Method: The MND criteria were critically reviewed, updated with reference to research on malingering, and expanded to address other forms of malingering pertinent to neuropsychological evaluation, such as exaggeration of self-reported somatic and psychiatric symptoms.

Results: The new proposed criteria simplify diagnostic categories; expand and clarify external incentives; more clearly define the role of compelling inconsistencies; address issues concerning PVTs and SVTs (i.e., number administered, false positives, and redundancy); better define the role of SVTs and of marked discrepancies indicative of malingering; and, most importantly, clearly define exclusionary criteria based on the last two decades of research on malingering in neuropsychology. Lastly, the new criteria provide specifiers to better describe clinical presentations for use in neuropsychological assessment.

Conclusions: The proposed multidimensional malingering criteria, which define cognitive, somatic, and psychiatric malingering for use in neuropsychological assessment, are presented.


Author(s):  
Daniel L. Drane
Dona E. C. Locke

This chapter covers what is known about the possible mechanisms of neurocognitive dysfunction in patients with psychogenic nonepileptic seizures (PNES). It begins with a review of all research examining possible cognitive deficits in this population. Cognitive research in PNES is often obscured by noise created by a host of comorbid conditions (e.g., depression, post-traumatic stress disorder, chronic pain) and associated issues (e.g., effects of medications and psychological processes that can compromise attention or broader cognition). More recent studies employing performance validity tests raise the possibility that studies finding broad cognitive problems in PNES may be highlighting a more transient phenomenon secondary to these comorbid or secondary factors. Such dysfunction would likely improve with successful management of PNES symptomatology, yet even transient variability likely compromises daily function until these issues are resolved. Future research must combine the use of neuropsychological testing, performance validity measures, psychological theory, neuroimaging analysis, and a thorough understanding of brain–behavior relationships to address whether there is a focal neuropathological syndrome associated with PNES.



2019
Vol. 34(6)
p. 835
Author(s):  
D Olsen
R Schroeder
P Martin

Abstract

Objective: A p-value of < .05 has traditionally been used to determine below-chance performance on forced-choice performance validity tests (PVTs). Recently, Binder and colleagues (2014, 2018) proposed increasing the p-value cutoff to < .20. To ensure this does not result in frequent false-positive errors in patients who are likely to have significant cognitive impairment, the frequency of below-chance scores at both p-values was examined within the context of possible dementia.

Method: Archival data of cognitively impaired inpatient (n = 55; mean RBANS Total Score = 64.67) and outpatient (n = 203; mean RBANS Total Score = 74.15) older adults without external incentives were examined to determine the frequency of below-chance performances on the Coin-in-the-Hand Test. To supplement these data and examine below-chance performance on a second PVT, the authors reviewed the empirical literature and extracted data on TOMM performance in individuals with dementia. Four studies (n = 269 patients) provided extractable data.

Results: No patient produced a Coin-in-the-Hand Test score (0/258 individuals) reaching either p-value cutoff. Similarly, no patient produced a TOMM Trial 2 (0/121 individuals) or Retention score (0/84 individuals) reaching either p-value cutoff. For TOMM Trial 1, no patient (0/44) scored at p < .05, but two patients (2/64) scored at p < .20.

Conclusions: No individual in this study produced scores on either PVT reaching the p < .05 cutoff. At the p < .20 cutoff, only 2 out of 527 performances (0.4%) reached this threshold, both of which were observed on TOMM Trial 1. These data support the recommendation that p < .20 be used when determining below-chance performance.
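
A minimal sketch of the arithmetic behind these cutoffs may help: on a two-alternative forced-choice PVT, the probability that a given raw score arises from guessing alone is a one-tailed binomial probability. The 50-item test length and the example scores below are assumptions for illustration, not data from this study.

```python
# Hedged illustration: one-tailed binomial probability of scoring at or
# below `correct` on a 2-alternative forced-choice PVT by guessing alone.
# The 50-item length is an assumption (many forced-choice PVTs use 50
# trials); it is not taken from the abstract above.
from scipy.stats import binom

def below_chance_p(correct: int, n_items: int, p_chance: float = 0.5) -> float:
    """P(X <= correct) under Binomial(n_items, p_chance)."""
    return binom.cdf(correct, n_items, p_chance)

if __name__ == "__main__":
    for score in (14, 18, 20):
        p = below_chance_p(score, 50)
        print(f"{score}/50 correct: p = {p:.3f} "
              f"(p < .05: {p < .05}, p < .20: {p < .20})")
```

With these assumed numbers, 20/50 correct meets the p < .20 criterion but not p < .05, which is precisely the region where the two proposed cutoffs disagree.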



2020
Vol. 35(6)
p. 1002
Author(s):  
Sheikh K
Peck C

Abstract

Objective: Prior studies have examined indices within the Brief Visuospatial Memory Test-Revised (BVMT-R) as potential embedded performance validity tests (PVTs). Findings from these studies, however, are limited and mixed. Therefore, the purpose of the current study was to compare the classification accuracy of the Hartford Consistency Index (HCI) with published BVMT-R performance validity measures in an outpatient sample.

Method: A total of 115 archival files met study inclusion criteria: a) ≥ 18 years old; b) administered > 2 PVTs (Reliable Digit Span, Dot Counting Test, and Test of Memory Malingering); and c) no diagnosis of intellectual disability or dementia. Using standard cutoffs, participants were classified as 'Valid' (n = 94) or 'Invalid' (n = 21). 'Valid' profiles passed all PVTs and were free of known external incentives, while 'Invalid' profiles failed ≥ 2 PVTs.

Results: An HCI cutoff of < 1 yielded 90% specificity and 48% sensitivity, and the area under the curve (AUC = .70) was adequate. Applying published cutoffs for Recognition Hits (≤ 4) and Percent Retention (≤ 58%) to our sample produced > 90% specificity, but sensitivity rates were < 40% and AUCs were consistently < .70. Similarly, the Recognition Discrimination (≤ 4) cutoff showed inadequate specificity (84%) but acceptable sensitivity (63%) and AUC (.73).

Conclusions: Results from our study support the use of the HCI as an embedded PVT within the BVMT-R for non-demented outpatient samples. Furthermore, the HCI outperformed the other embedded PVTs examined. Limitations of our study and future directions are discussed.
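
As a generic illustration of how such classification statistics are computed, the sketch below derives sensitivity, specificity, and AUC for a simulated embedded validity index scored against known group membership. Only the group sizes (94 valid, 21 invalid) and the "fail if index < 1" direction are borrowed from the abstract; the score distributions are invented.

```python
# Hedged sketch: sensitivity, specificity, and AUC for a hypothetical
# embedded validity index. Simulated scores only; this does not
# reproduce the HCI data reported above.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
valid = rng.normal(3.0, 1.2, 94)     # assumed index scores, 'Valid' group
invalid = rng.normal(0.8, 1.2, 21)   # assumed index scores, 'Invalid' group

scores = np.concatenate([valid, invalid])
labels = np.concatenate([np.zeros(94), np.ones(21)])  # 1 = invalid

fails = scores < 1.0                        # mirrors the HCI < 1 cutoff
sensitivity = fails[labels == 1].mean()     # invalid cases correctly flagged
specificity = (~fails[labels == 0]).mean()  # valid cases correctly passed
auc = roc_auc_score(labels, -scores)        # lower score = more suspect

print(f"sensitivity = {sensitivity:.2f}, "
      f"specificity = {specificity:.2f}, AUC = {auc:.2f}")
```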



2017
Vol. 28(2)
pp. 97–116
Author(s):  
Andrea M. Plohmann
Max Hurter

Abstract. To determine the prevalence of inauthentic cognitive test results, the data of 455 examinees who had completed at least two performance validity tests (PVTs) were analyzed retrospectively. The PVTs administered were the WMT, MSVT, NV-MSVT, ASTM, BSV, RMT, and RDS. Classification as "definite" or "probable" malingering was made according to the Slick criteria. Associations between malingering classification and sociodemographic variables and diagnoses were examined using binary logistic regression. Poor effort on at least two PVTs correlated significantly with education level, immigration, and origin. Irrespective of education level, the highest risk of definite malingering was found in first-generation migrants. Cervical spine dysfunction, normal cerebral imaging, PTSD, and somatoform and/or depressive disorders also correlated with negative response bias. The probability that psychiatric patients fulfilled the criteria for probable malingering was higher than in patients with isolated organic mental disorders.
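
For readers unfamiliar with the analysis named here, the following is a minimal sketch of a binary logistic regression predicting a dichotomous poor-effort outcome from sociodemographic variables. All data and effect sizes are simulated under stated assumptions; only the sample size of 455 echoes the study.

```python
# Hedged sketch of a binary logistic regression of poor effort on
# sociodemographic predictors. Data and coefficients are simulated;
# nothing here reproduces the study's dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 455                              # sample size borrowed from the abstract
education = rng.integers(7, 18, n)   # assumed years of education
migrant = rng.integers(0, 2, n)      # assumed 1 = first-generation migrant

# Assumed true effects: less education and migrant status raise the odds.
logit_p = -0.5 - 0.15 * (education - 12) + 0.9 * migrant
poor_effort = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([education, migrant]))
fit = sm.Logit(poor_effort, X).fit(disp=0)
print("log-odds:", np.round(fit.params, 2))
print("odds ratios (education, migrant):", np.round(np.exp(fit.params[1:]), 2))
```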



2021
Vol. 13(4)
pp. 477–486
Author(s):  
John W. Lace
Zachary C. Merz
Rachel Galioto

In neuropsychological assessment, clinicians are responsible for ensuring the validity of obtained cognitive data. As such, increased attention is being paid to performance validity in patients with multiple sclerosis (pwMS). Experts have proposed batteries of neuropsychological tests for use in this population, though none contain recommendations for standalone performance validity tests (PVTs). The California Verbal Learning Test, Second Edition (CVLT-II) and the Brief Visuospatial Memory Test, Revised (BVMT-R), both of which are part of the aforementioned recommended batteries, include previously validated embedded PVTs (which offer some advantages, including expedience and reduced cost), but no prior work has explored their utility in pwMS. The purpose of the present study was to determine the potential clinical utility of embedded PVTs to detect non-credibility, operationally defined as below-criterion standalone PVT performance. One hundred thirty-three (133) patients with MS (M age = 48.28; 76.7% women; 85.0% White) were referred for neuropsychological assessment at a large Midwestern academic medical center. Patients were placed into "credible" (n = 100) or "noncredible" (n = 33) groups based on a standalone PVT criterion. Classification statistics for four CVLT-II and BVMT-R PVTs of interest in isolation were poor (AUCs = 0.58–0.62). Several arithmetic and logistic regression-derived multivariate formulas were calculated, all of which similarly demonstrated poor discriminability (AUCs = 0.61–0.64). Although embedded PVTs may arguably maximize efficiency and minimize test burden in pwMS, common ones in the CVLT-II and BVMT-R may not be psychometrically appropriate, sufficiently sensitive, or substitutable for standalone PVTs in this population. Clinical neuropsychologists who evaluate such patients are encouraged to include standalone PVTs in their assessment batteries to ensure that clinical care conclusions drawn from neuropsychological data are valid.
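
The "logistic regression-derived multivariate formulas" mentioned here can be sketched generically: fit a logistic model on two embedded indices and score the resulting composite with AUC. The simulated index distributions below are assumptions chosen to produce modest separation; they are not the CVLT-II or BVMT-R formulas from the study.

```python
# Hedged sketch of a logistic-regression-derived multivariate composite
# built from two simulated embedded indices, scored with AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_cred, n_noncred = 100, 33                        # group sizes match the abstract
X = np.vstack([
    rng.normal([12.0, 5.0], 2.0, (n_cred, 2)),     # assumed credible index scores
    rng.normal([11.2, 4.6], 2.0, (n_noncred, 2)),  # assumed non-credible scores
])
y = np.r_[np.zeros(n_cred), np.ones(n_noncred)]

clf = LogisticRegression().fit(X, y)
composite = clf.predict_proba(X)[:, 1]             # multivariate validity score
print(f"AUC = {roc_auc_score(y, composite):.2f}")  # small group gap -> modest AUC
```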



2019
Vol. 35(5)
pp. 712–724
Author(s):  
Alfons van Impelen
Marko Jelicic
Henry Otgaar
Harald Merckelbach

Abstract. Schretlen's Malingering Scale Vocabulary and Abstraction test (MSVA) differs from the majority of performance validity tests in that it focuses on detecting feigned impairments in semantic knowledge and perceptual reasoning rather than feigned memory problems. We administered the MSVA to children (n = 41), forensic inpatients with intellectual disability (n = 25), forensic inpatients with psychiatric symptoms (n = 57), and three groups of undergraduate students (n = 30, n = 79, and n = 90), asking approximately half of each sample to feign impairment and the other half to respond genuinely. With cutpoints chosen to keep false-positive rates below 10%, detection rates of experimentally feigned cognitive impairment were high in children (90%) and inpatients with intellectual disability (100%), but low in adults without intellectual disability (46%). Rates of significantly below-chance performance were low (4%), except in children (47%) and inpatients with intellectual disability (50%). The reliability of the MSVA was excellent (Cronbach's α = .93–.97), and the MSVA proved robust against coaching (i.e., informed attempts to evade detection while feigning). We conclude that the MSVA is not yet ready for clinical use, but that it shows sufficient promise to warrant further validation efforts.
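
The reliability figures quoted here are Cronbach's α values. As a worked reference, a minimal implementation from a respondents-by-items score matrix follows; the small binary matrix is invented for illustration.

```python
# Hedged sketch: Cronbach's alpha from a respondents-by-items matrix.
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = test items."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

scores = np.array([[1, 1, 1, 0],   # invented 0/1 item scores, 6 respondents
                   [1, 0, 1, 1],
                   [0, 0, 0, 0],
                   [1, 1, 1, 1],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # ~0.67 for this toy matrix
```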



2021
pp. 1–35
Author(s):  
Laszlo A. Erdodi

Objective: This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and to explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically.

Method: Archival data were collected from 167 patients (52.4% male; M age = 39.7) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs.

Results: MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False-positive rates (FPRs) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms.

Conclusions: Concerns about elevated FPRs in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to their individual components. Instrumentation artifacts are endemic to PVTs and represent both a threat and an opportunity in the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical, models. As the number and severity of embedded PVT failures accumulate, assessors must consider the possibility of a non-credible presentation and its clinical implications for neurorehabilitation.
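
One way to read the Pass/Borderline/Fail argument is as a decision rule over the count of embedded PVT failures, with a stringent multivariate cutoff keeping false positives rare. The sketch below is a generic illustration under assumed thresholds (≥ 3 failures = Fail, exactly 2 = Borderline); it is not the composite measure used in the study.

```python
# Hedged sketch of a multivariate validity decision rule: aggregate
# pass/fail flags from several embedded PVTs into Pass/Borderline/Fail.
# The >= 3 and == 2 thresholds are illustrative assumptions.
from typing import Sequence

def multivariate_validity(pvt_failed: Sequence[bool]) -> str:
    """pvt_failed: one flag per embedded PVT (True = failed its cutoff)."""
    failures = sum(pvt_failed)
    if failures >= 3:
        return "Fail"        # stringent cutoff controls the false-positive rate
    if failures == 2:
        return "Borderline"  # ambiguous evidence; defer to clinical judgment
    return "Pass"

print(multivariate_validity([True, False, True, False, False]))  # Borderline
print(multivariate_validity([True, True, True, False, False]))   # Fail
```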




