The Multi-Level Pattern Memory Test (MPMT): Initial Validation of a Novel Performance Validity Test

2021 · Vol 11 (8) · pp. 1039
Author(s): Elad Omer, Yoram Braw

Performance validity tests (PVTs) are used to detect noncredible performance in neuropsychological assessments. The aim of the study was to assess the efficacy (i.e., discrimination capacity) of a novel PVT, the Multi-Level Pattern Memory Test (MPMT). It includes stages that allow profile analysis (i.e., detecting noncredible performance based on an analysis of participants' performance across stages) and minimizes the likelihood that examinees will perceive it as a PVT. In addition, it utilizes nonverbal stimuli and is therefore more likely to be cross-culturally valid. In Experiment 1 (n = 67), participants who were instructed to simulate cognitive impairment performed less accurately on the MPMT than honest controls. Importantly, the MPMT showed adequate discrimination capacity, though somewhat lower than that of an established PVT (i.e., the Test of Memory Malingering, TOMM). Experiment 2 (n = 77) replicated the findings of the first experiment while also indicating a dissociation between the simulators' objective performance and their perceived cognitive load while performing the MPMT. The MPMT, and the profile analysis based on its outcome measures, shows initial promise in detecting noncredible performance. It may therefore expand the range of PVTs at clinicians' disposal, though further validation in clinical settings is warranted. The fact that it is open-source software will hopefully also encourage research programs aimed at clarifying the cognitive processes involved in noncredible performance and the impact of PVT characteristics on clinical utility.
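As a rough illustration of what "discrimination capacity" means operationally, the sketch below computes an AUC separating simulators from honest controls. The scores are synthetic; the group means, spreads, and the use of a single accuracy score are assumptions for demonstration, not MPMT data.

```python
# Sketch: quantifying a PVT's discrimination capacity as an AUC.
# All score values are hypothetical, not data from the MPMT study.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical accuracy scores: honest controls cluster near ceiling,
# while simulators of cognitive impairment score lower and more variably.
controls = rng.normal(loc=0.95, scale=0.04, size=34).clip(0, 1)
simulators = rng.normal(loc=0.70, scale=0.15, size=33).clip(0, 1)

scores = np.concatenate([controls, simulators])
labels = np.concatenate([np.zeros(34), np.ones(33)])  # 1 = noncredible

# Lower accuracy should signal noncredible performance, so negate the score.
auc = roc_auc_score(labels, -scores)
print(f"AUC = {auc:.2f}")  # values near 1.0 indicate strong discrimination
```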

2020 · Vol 35 (6) · pp. 1002-1002
Author(s): Sheikh K, Peck C

Abstract Objective Prior studies have examined indices within the Brief Visuospatial Memory Test—Revised (BVMT-R) as potential embedded performance validity tests (PVTs). Findings from these studies, however, are limited and mixed. The purpose of the current study was therefore to compare the classification accuracy of the Hartford Consistency Index (HCI) with that of published BVMT-R performance validity measures in an outpatient sample. Method A total of 115 archival files met study inclusion criteria: (a) ≥ 18 years old; (b) administered > 2 PVTs (Reliable Digit Span, Dot Counting Test, and Test of Memory Malingering); and (c) no diagnosis of intellectual disability or dementia. Using standard cutoffs, participants were classified as 'Valid' (n = 94) or 'Invalid' (n = 21): 'Valid' profiles passed all PVTs and were free of known external incentives, while 'Invalid' profiles failed ≥ 2 PVTs. Results An HCI cutoff of < 1 yielded 90% specificity and 48% sensitivity, and the area under the curve (AUC = .70) was adequate. Applying published cutoffs for Recognition Hits (≤ 4) and Percent Retention (≤ 58%) to our sample produced > 90% specificity, but sensitivity rates were < 40% and AUCs were consistently < .70. In contrast, the Recognition Discrimination (≤ 4) cutoff revealed inadequate specificity (84%) but acceptable sensitivity (63%) and AUC (.73). Conclusions Results from our study support the use of the HCI as an embedded PVT within the BVMT-R for non-demented outpatient samples. Furthermore, the HCI outperformed the other embedded PVTs examined. Limitations of our study and future directions are discussed.
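To make the cutoff metrics concrete, here is a minimal sketch of how sensitivity, specificity, and AUC are computed for a below-cutoff flag such as HCI < 1. The index values are simulated; only the group sizes and the cutoff are taken from the abstract.

```python
# Sketch: classification accuracy of an embedded-PVT cutoff (e.g., HCI < 1).
# Group sizes mirror the abstract (94 valid, 21 invalid); the index values
# themselves are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
valid = rng.integers(0, 5, size=94)     # hypothetical HCI values, valid group
invalid = rng.integers(-2, 3, size=21)  # hypothetical HCI values, invalid group

cutoff = 1  # scores below the cutoff are flagged as noncredible

sensitivity = np.mean(invalid < cutoff)  # flagged invalid / all truly invalid
specificity = np.mean(valid >= cutoff)   # passed valid / all truly valid

# AUC across all possible cutoffs; lower HCI should mean higher risk.
labels = np.concatenate([np.zeros(94), np.ones(21)])  # 1 = invalid
scores = -np.concatenate([valid, invalid]).astype(float)
auc = roc_auc_score(labels, scores)

print(f"sensitivity = {sensitivity:.2f}, "
      f"specificity = {specificity:.2f}, AUC = {auc:.2f}")
```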


2019 · Vol 34 (6) · pp. 931-931
Author(s): L Greenberg, V Merritt, P Arnett

Abstract Objective In the context of sports-related concussion (SRC) evaluations, athletes have been shown to "sandbag" their baseline testing in order to improve their chances of return-to-play post-concussion. To circumvent this problem, performance validity tests are often administered. The ImPACT, a computerized program widely used in SRC evaluations, has five embedded validity indices (VIs); however, their utility as measures of effort has not been well established. With this in mind, we aimed to compare performance on the ImPACT VIs between athletes and non-athlete controls at baseline. Given the incentive for at least some players to "sandbag," it was hypothesized that athletes would perform worse than controls on all VIs. Method Participants included 1,254 college students (70% male; 77.3% Caucasian) divided into two groups: athletes (n = 929) and non-athlete controls (n = 325). All participants completed the ImPACT individually. Primary outcomes of interest were the five ImPACT VIs: Impulse Control Composite, X's and O's Total Incorrect, Word Memory Learning Percent Correct, Design Memory Learning Percent Correct, and Three Letters Total Letters Correct. Results Independent-samples t-tests revealed that athletes performed worse than controls on four of the five VIs (p < .001 to .028; d = 0.13 to 0.23). The only VI that did not differ significantly between groups was Three Letters (p > .05, d = 0.11). Conclusion Consistent with our hypothesis, findings generally showed that athletes performed worse on the ImPACT VIs than non-athlete controls. Although future research is needed to validate the utility of the VIs, our results suggest that these scores may be useful in detecting suboptimal baseline performance.
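The group comparison reported here reduces to independent-samples t-tests with Cohen's d effect sizes; a minimal sketch follows. The VI scores are simulated, and only the group sizes follow the abstract.

```python
# Sketch: athletes vs. non-athlete controls on one embedded validity index,
# via an independent-samples t-test plus Cohen's d. Scores are simulated;
# only the group sizes (929 vs. 325) follow the abstract.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
athletes = rng.normal(loc=10.2, scale=3.0, size=929)  # hypothetical VI scores
controls = rng.normal(loc=9.6, scale=3.0, size=325)

t, p = ttest_ind(athletes, controls)

# Cohen's d using the pooled standard deviation
n1, n2 = len(athletes), len(controls)
pooled_sd = np.sqrt(((n1 - 1) * athletes.var(ddof=1) +
                     (n2 - 1) * controls.var(ddof=1)) / (n1 + n2 - 2))
d = (athletes.mean() - controls.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```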


2014
Author(s): Douglas Mossman, William Miller, Elliot Lee, Roger Gervais, Kathleen Hart, ...


2020 · Vol 35 (6) · pp. 1020-1020
Author(s): DaCosta A, Roccaforte A, Sohoni R, Crane A, Webbe F, ...

Abstract Objective The limitations of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT)'s embedded validity measures (EVMs) are well documented, as estimates suggest up to 35% of invalid baseline performances go undetected (Gaudet & Weyandt, 2017). Few studies have examined standalone performance validity tests (PVTs) as a supplement to ImPACT's EVMs (Gaudet & Weyandt, 2017). Method College athletes (n = 1,213) were administered a preseason baseline assessment that included ImPACT and the Rey Dot Counting Test (DCT), a standalone PVT, among other measures. Results Sixty-nine athletes (5.69%) met criteria for suboptimal effort on either ImPACT or the DCT. The DCT detected more cases of suboptimal effort (n = 50) than ImPACT (n = 21). A χ2 test of independence found no significant association between the two measures, as only two individuals produced suboptimal effort on both (χ2(2) = 1.568, p = .210); that is, the two measures largely flagged different athletes. Despite this disagreement, there were significant differences between the suboptimal-effort and adequate-effort DCT groups across all four ImPACT neurocognitive domains (U = 19,225.000, p < .001; U = 17,859.000, p < .001; U = 13,854.000, p < .001; U = 17,850.500, p < .001). Conclusions The DCT appears to detect suboptimal effort otherwise undetected by ImPACT's EVMs.
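The agreement analysis can be reconstructed from the reported counts: a 2x2 pass/fail table for the two measures tested with χ2, plus Mann-Whitney U comparisons of cognitive scores across effort groups. In the sketch below the contingency table is derived from the abstract's counts, while the domain scores are simulated for illustration.

```python
# Sketch: agreement between two PVTs (chi-square test of independence) and a
# Mann-Whitney U comparison of a cognitive score across effort groups.
# The 2x2 counts are reconstructed from the abstract's totals (1,213 athletes;
# 50 failed the DCT, 21 failed ImPACT, 2 failed both); scores are simulated.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Rows: DCT (pass, fail); columns: ImPACT EVMs (pass, fail).
table = np.array([[1144, 19],
                  [  48,  2]])
# Note: scipy applies Yates' continuity correction to 2x2 tables by default.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}")

# Hypothetical ImPACT domain scores, adequate- vs. suboptimal-effort groups.
rng = np.random.default_rng(3)
adequate = rng.normal(loc=85, scale=10, size=1163)
suboptimal = rng.normal(loc=75, scale=12, size=50)
u, p_u = mannwhitneyu(adequate, suboptimal, alternative="two-sided")
print(f"U = {u:.1f}, p = {p_u:.3f}")
```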

