A-228 Are Those Who Fail Performance Validity Measures High Utilizers of Healthcare?

2020 ◽  
Vol 35 (6) ◽  
pp. 1023-1023
Author(s):  
Cooper C ◽  
Trahan E ◽  
Muncy C ◽  
Higa J ◽  
Link J ◽  
...  

Abstract Objective One study reported that suboptimal effort on performance validity tests (PVTs) is associated with higher healthcare utilization within a VA setting, defined as the number of Emergency Department visits and inpatient hospitalizations. The current study sought to expand on this by examining whether PVT failure is associated with a higher number of outpatient visits in a military sample with a history of mild traumatic brain injury (mTBI). Method The medical records of 43 participants, 13 of whom failed the PVT Green’s Word Memory Test (WMT), were reviewed for the number of encounters since the mTBI and the reason for each encounter. The two groups (passed vs. failed) did not differ significantly on demographic variables (39 males, mean age 39, 65% Caucasian). Results The overall number of medical encounters did not differ significantly between the two groups after controlling for years since the mTBI (F(1, 40) = 2.67, p = .11); however, once three participants with more than two years of missing records were excluded (final n = 40), the PVT failure group was seen significantly more often (F(1, 37) = 8.23, p = .01). The PVT failures had a higher number of encounters with physical therapy (t(38) = −2.79, p = .01) and orthopedics (t(38) = −2.10, p = .04). Conclusions Preliminary results suggest that suboptimal effort is not associated with higher overall healthcare utilization; however, when participants with more than two years of missing records were excluded, those who failed PVTs were seen more frequently by physical therapy and orthopedic specialties. Limitations and directions for future investigations are highlighted.
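The covariate-adjusted group comparison described above is an ANCOVA-style model. A minimal numpy-only sketch of how such an F test can be computed by comparing a full regression model against one without the group term (all data values below are invented for illustration, not the study's records):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
group = rng.integers(0, 2, n)        # 0 = passed PVT, 1 = failed (hypothetical)
years = rng.uniform(1, 10, n)        # covariate: years since mTBI (hypothetical)
encounters = 15 + 5 * group + 2 * years + rng.normal(0, 4, n)

def rss(X, y):
    """Residual sum of squares from a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

full = np.column_stack([np.ones(n), group, years])
reduced = np.column_stack([np.ones(n), years])   # drop the group term
rss_full, rss_red = rss(full, encounters), rss(reduced, encounters)
df_resid = n - full.shape[1]                     # 40 - 3 = 37
F = (rss_red - rss_full) / (rss_full / df_resid) # 1 numerator df
print(f"F(1, {df_resid}) = {F:.2f}")
```

The numerator compares the fit lost by removing the group effect, which is why the reported residual degrees of freedom drop to 37 in the n = 40 analysis.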



2020 ◽  
Vol 35 (6) ◽  
pp. 1020-1020
Author(s):  
DaCosta A ◽  
Roccaforte A ◽  
Sohoni R ◽  
Crane A ◽  
Webbe F ◽  
...  

Abstract Objective The limitations of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT)’s embedded validity measures (EVMs) are well-documented, as estimates suggest up to 35% of invalid baseline performances go undetected (Gaudet & Weyandt, 2017). Few studies have examined standalone performance validity tests (PVTs) as a supplement to ImPACT’s EVMs (Gaudet & Weyandt, 2017). Method College athletes (n = 1,213) were administered a preseason baseline assessment that included ImPACT and the Rey Dot Counting Test (DCT), a standalone PVT, among other measures. Results Sixty-nine athletes (5.69%) met criteria for suboptimal effort on either ImPACT or the DCT. The DCT detected more cases of suboptimal effort (n = 50) than ImPACT (n = 21). A χ2 test of independence indicated disagreement between the two measures, as only two individuals produced suboptimal effort on both (χ2(2) = 1.568, p = .210). Despite this disagreement, there were significant differences between the suboptimal effort DCT group and the adequate effort DCT group across all four ImPACT neurocognitive domains (U = 19,225.000, p < .001; U = 17,859.000, p < .001; U = 13,854.000, p < .001; U = 17,850.500, p < .001). Conclusions The DCT appears to detect suboptimal effort otherwise undetected by ImPACT’s EVMs.
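The agreement analysis above cross-tabulates the two flags. A minimal sketch of how such a test could be run from the counts reported in the abstract (50 flagged by the DCT, 21 by ImPACT, 2 by both, out of 1,213); note that a standard 2×2 table yields one degree of freedom, whereas the abstract reports two, so the authors' exact table may have differed:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Cell counts reconstructed from the abstract's marginal totals.
both, dct_only, impact_only = 2, 50 - 2, 21 - 2
neither = 1213 - (both + dct_only + impact_only)

table = np.array([[both, dct_only],
                  [impact_only, neither]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}")
```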


Author(s):  
Daniel L. Drane ◽  
Dona E. C. Locke

This chapter covers what is known about the possible mechanisms of neurocognitive dysfunction in patients with psychogenic nonepileptic seizures (PNES). It begins with a review of all research examining possible cognitive deficits in this population. Cognitive research in PNES is often obscured by noise created by a host of comorbid conditions (e.g., depression, post-traumatic stress disorder, chronic pain) and associated issues (e.g., effects of medications and psychological processes that can compromise attention or broader cognition). More recent studies employing performance validity tests raise the possibility that studies finding broad cognitive problems in PNES may be highlighting a more transient phenomenon secondary to these comorbid or secondary factors. Such dysfunction would likely improve with successful management of PNES symptomatology, yet the effects of even transient variability likely compromise daily function until these issues are resolved. Future research must combine the use of neuropsychological testing, performance validity measures, psychological theory, neuroimaging analysis, and a thorough understanding of brain–behavior relationships to address whether there is a focal neuropathological syndrome associated with PNES.


2021 ◽  
Author(s):  
J. Cobb Scott ◽  
Tyler M. Moore ◽  
David R. Roalf ◽  
Theodore D. Satterthwaite ◽  
Daniel H. Wolf ◽  
...  

Objective: Data from neurocognitive assessments may not be accurate in the context of factors impacting validity, such as disengagement, unmotivated responding, or intentional underperformance. Performance validity tests (PVTs) were developed to address these phenomena and assess underperformance on neurocognitive tests. However, PVTs can be burdensome, rely on cutoff scores that reduce information, do not examine potential variations in task engagement across a battery, and are typically not well-suited to acquisition of large cognitive datasets. Here we describe the development of novel performance validity measures that could address some of these limitations by leveraging psychometric modeling from data embedded within the Penn Computerized Neurocognitive Battery (PennCNB). Method: We first developed these validity measures using simulations of invalid response patterns with parameters drawn from real data. Next, we examined their application in two large, independent samples: 1) children and adolescents from the Philadelphia Neurodevelopmental Cohort (n=9,498); and 2) adult servicemembers from the Marine Resiliency Study-II (n=1,444). Results: Our performance validity metrics detected patterns of invalid responding in simulated data, even at subtle levels. Furthermore, a combination of these metrics significantly predicted previously established validity rules for these tests in both developmental and adult datasets. Moreover, most clinical diagnostic groups did not show reduced validity estimates. Conclusion: These results provide proof-of-concept evidence for multivariate, data-driven performance validity metrics. These metrics offer a novel method for determining the performance validity for individual neurocognitive tests that is scalable, applicable across different tests, less burdensome, and dimensional. However, more research is needed into their application.
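One core idea behind model-based validity metrics like those described above is that an engaged examinee's accuracy tracks item difficulty while a disengaged one's does not. A toy illustration (all parameters invented; this is not the PennCNB method itself) using a simple Rasch-style response model, where the item-level correlation between correctness and item easiness serves as a crude embedded validity signal:

```python
import numpy as np

rng = np.random.default_rng(4)
n_items = 60
easiness = np.linspace(-2, 2, n_items)          # easy items have high easiness

def simulate(ability, random_responder=False):
    """Simulate one examinee's right/wrong vector across the item set."""
    if random_responder:
        return rng.random(n_items) < 0.5         # invalid: ignores difficulty
    p = 1 / (1 + np.exp(-(ability + easiness)))  # Rasch-style success probability
    return rng.random(n_items) < p

def validity_metric(responses):
    # Valid responders succeed more on easy items, so correctness should
    # correlate with item easiness; random responding should not.
    return np.corrcoef(responses.astype(float), easiness)[0, 1]

valid_score = validity_metric(simulate(ability=0.0))
invalid_score = validity_metric(simulate(0.0, random_responder=True))
print(valid_score, invalid_score)
```

Because the metric is continuous rather than a pass/fail cutoff, it preserves gradations of engagement, which is the dimensionality advantage the abstract emphasizes.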


2020 ◽  
Vol 35 (6) ◽  
pp. 1002-1002
Author(s):  
Sheikh K ◽  
Peck C

Abstract Objective Prior studies have examined indices within the Brief Visuospatial Memory Test—Revised (BVMT-R) as potential embedded performance validity tests (PVTs). Findings from these studies, however, are limited and mixed. Therefore, the purpose of the current study was to compare the classification accuracy of the Hartford Consistency Index (HCI) with published BVMT-R performance validity measures in an outpatient sample. Method A total of 115 archival files met study inclusion criteria: a) ≥ 18 years old; b) administered > 2 PVTs (Reliable Digit Span, Dot Counting Test, and Test of Memory Malingering); and c) no diagnoses of intellectual disability or dementia. Utilizing standard cutoffs, participants were classified as ‘Valid’ (n = 94) or ‘Invalid’ (n = 21). ‘Valid’ profiles passed all PVTs and were free of known external incentives, while ‘Invalid’ profiles failed ≥ 2 PVTs. Results An HCI cutoff of < 1 yielded 90% specificity, 48% sensitivity, and an adequate area under the curve (AUC = .70). Applying published cutoffs for Recognition Hits (≤ 4) and Percent Retention (≤ 58%) to our sample produced > 90% specificity, but sensitivity rates were < 40% and AUCs were consistently < .70. Similarly, the Recognition Discrimination (≤ 4) cutoff revealed inadequate specificity (84%) but acceptable sensitivity (63%) and AUC (.73). Conclusions Results from our study support the use of the HCI as an embedded PVT within the BVMT-R for non-demented outpatient samples. Furthermore, the HCI outperformed the other embedded PVTs examined. Limitations of our study and future directions are discussed.
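The classification-accuracy statistics reported above (sensitivity, specificity, AUC at a cutoff) can be computed directly from scores and criterion-group labels. A minimal sketch with simulated scores (the group sizes 94/21 and the < 1 cutoff follow the abstract; all score values are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical HCI scores: 'Valid' profiles tend to score higher.
valid = rng.normal(2.0, 1.0, 94)
invalid = rng.normal(0.5, 1.0, 21)
scores = np.concatenate([valid, invalid])
is_invalid = np.concatenate([np.zeros(94, bool), np.ones(21, bool)])

cutoff = 1.0                                  # flag profiles with HCI < 1
flagged = scores < cutoff
sensitivity = np.mean(flagged[is_invalid])    # invalid profiles caught
specificity = np.mean(~flagged[~is_invalid])  # valid profiles cleared

# AUC = probability a random invalid case scores below a random valid case
auc = np.mean(invalid[:, None] < valid[None, :])
print(f"sens={sensitivity:.2f} spec={specificity:.2f} AUC={auc:.2f}")
```

This is why an embedded index can trade sensitivity for specificity simply by moving the cutoff: the AUC summarizes separability across all possible cutoffs, while the reported 90%/48% pair describes one operating point.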


2019 ◽  
Vol 34 (6) ◽  
pp. 938-938
Author(s):  
A McKinstry ◽  
A Salman ◽  
C Schieszler-Ockrassa ◽  
A Wisinger ◽  
D Korinek ◽  
...  

Abstract Objective To determine whether individuals referred for Attention-Deficit/Hyperactivity Disorder (ADHD) differential diagnosis who do and do not fail performance validity tests (PVTs) present themselves differently on self-report measures of executive functioning (Behavior Rating Inventory of Executive Function; BRIEF) and ADHD (Conners’ Adult ADHD Rating Scales; CAARS). Method A convenience sample of 83 adults referred to an outpatient neuropsychology private practice for neuropsychological assessment for ADHD was collected. A MANOVA compared individuals who passed PVTs (Word Memory Test or WAIS-IV Reliable Digit Span) to individuals who failed PVTs on the Behavioral Regulation Index and Metacognitive Index of the BRIEF and the Inattention/Memory Problems, Hyperactivity/Restlessness, Impulsivity/Emotional Lability, Problems with Self-Concept, DSM-IV Inattentive Symptoms, and DSM-IV Hyperactive-Impulsive Symptoms scales of the CAARS. Results All statistical comparisons were non-significant at p < .05. Conclusions Individuals who fail PVTs are indistinguishable from individuals who pass PVTs on the BRIEF and the CAARS. This is consistent with past research suggesting that the validity of self-report cannot be inferred from performance validity testing (Van Dyke, Millis, Axelrod, & Hanks, 2013; Bush et al., 2005). These data also highlight the importance of self-report measures containing their own validated measures of symptom validity.


2020 ◽  
Vol 35 (6) ◽  
pp. 949-949
Author(s):  
Myers M ◽  
Harrell M ◽  
Taylor S ◽  
Beach J ◽  
Aita S ◽  
...  

Abstract Objective The association between feigned Attention-Deficit/Hyperactivity Disorder (ADHD) symptoms and intellectual functioning was examined in a sample of undergraduate students instructed to simulate ADHD. Method Ninety undergraduate students completed the Wechsler Adult Intelligence Scale—Fourth Edition (WAIS-IV), b Test, and Green’s Word Memory Test (WMT) as part of a larger study [mean age 19.23 years (SD 1.67), range 17–26 years; mean 12.47 years of education (SD .86); 58.9% female; 58.9% Caucasian, 32.2% African American, 8.9% Other]. Intra-individual variability (IIV) was calculated as the standard deviation of the 10 core WAIS-IV subtest scores around the overall test battery mean. Results A moderate association was found between WAIS-IV IIV and b Test E-score (r = .397, p < .05). WAIS-IV IIV was also moderately associated with b Test errors (d errors r = .299, p < .05; commissions r = .284, p < .05; omissions r = .463, p < .01) and completion time (r = .332, p < .05). No significant relationships were found between WAIS-IV IIV and WMT performance. Conclusions Given that IIV within intellectual functioning was correlated with performance on the b Test but not the WMT, variability in objectively measured intelligence among simulators appears to be associated with feigned attentional symptoms but not feigned memory symptoms. These findings suggest that detection of malingered ADHD symptom presentation may be more sensitive in the attentional domain than in the memory domain. Therefore, performance validity tests assessing attentional abilities may be more applicable in diagnostic settings aimed at detecting ADHD.
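The IIV index above is a within-person dispersion statistic: each examinee's subtest scores are reduced to the spread around that examinee's own battery mean. A minimal sketch with simulated scaled scores (the 90 × 10 dimensions mirror the abstract; all score values and the correlated b Test variable are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical scaled scores: 90 examinees x 10 core WAIS-IV subtests.
subtests = rng.normal(10, 3, size=(90, 10))

# IIV: each person's standard deviation around their own battery mean.
iiv = subtests.std(axis=1, ddof=1)

# Hypothetical b Test E-scores built to covary with IIV for illustration.
b_test_escore = 60 + 8 * (iiv - iiv.mean()) + rng.normal(0, 10, 90)
r = np.corrcoef(iiv, b_test_escore)[0, 1]    # Pearson correlation
print(f"r = {r:.3f}")
```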


2018 ◽  
Vol 30 (3) ◽  
pp. 410-415 ◽  
Author(s):  
Roger O. Gervais ◽  
Anthony M. Tarescavage ◽  
Manfred F. Greiffenstein ◽  
Dustin B. Wygant ◽  
Cheryl Deslauriers ◽  
...  

2014 ◽  
Author(s):  
Douglas Mossman ◽  
William Miller ◽  
Elliot Lee ◽  
Roger Gervais ◽  
Kathleen Hart ◽  
...  
