Analysis of Five Empirically-Derived Methods of Utilizing the Test of Memory Malingering (TOMM) Relative to Three Other Performance Validity Measures

2018 ◽  
Vol 33 (6) ◽  
pp. 703-794
Author(s):  
S Farmer ◽  
J Lynch ◽  
R McCaffrey

2018 ◽  
Vol 30 (3) ◽  
pp. 410-415 ◽  
Author(s):  
Roger O. Gervais ◽  
Anthony M. Tarescavage ◽  
Manfred F. Greiffenstein ◽  
Dustin B. Wygant ◽  
Cheryl Deslauriers ◽  
...  

2020 ◽  
Vol 35 (5) ◽  
pp. 562-575
Author(s):  
Erin Sullivan-Baca ◽  
Kara Naylon ◽  
Andrea Zartman ◽  
Barry Ardolf ◽  
J Gregory Westhafer

Abstract

Objective: The number of women veterans seeking Veterans Health Administration services has substantially increased over the past decade. Neuropsychology remains an understudied area in the examination of gender differences. The present study sought to delineate similarities and differences between men and women veterans presenting for neuropsychological evaluation in terms of demographics, referral, medical conditions, effort, and outcome diagnosis.

Method: A database collected from an outpatient VA neuropsychology clinic from 2013 to 2019 was analyzed (n = 232 women, 2,642 men). Additional analyses examined younger (n = 836 men, 155 women) and older (n = 1,805 men, 77 women) age cohorts.

Results: Women veterans were younger and more educated than men, whereas men had a higher prevalence of vascular risk factors. Both groups were most often referred from mental health clinics, and memory was the most common referral question. Although men performed worse on performance validity measures, clinicians rated women as evidencing poorer effort on a cumulative rating based on formal and embedded performance validity measures, behavioral observations, and inconsistent test patterns. Older women reported more depressive symptoms than older men and were more commonly diagnosed with depression.

Conclusions: This exploratory study fills a gap in the understanding of gender differences among veterans presenting for neuropsychological evaluations. Findings emphasize consideration of the intersection of gender with demographics, medical factors, effort, and psychological symptoms by VA neuropsychologists. A better understanding of the relationships between gender and these factors may inform neuropsychologists' test selection, interpretation of behavioral observations, and diagnostic considerations to best treat women veterans.


2021 ◽  
Author(s):  
J. Cobb Scott ◽  
Tyler M. Moore ◽  
David R Roalf ◽  
Theodore D. Satterthwaite ◽  
Daniel H. Wolf ◽  
...  

Objective: Data from neurocognitive assessments may not be accurate in the context of factors impacting validity, such as disengagement, unmotivated responding, or intentional underperformance. Performance validity tests (PVTs) were developed to address these phenomena and assess underperformance on neurocognitive tests. However, PVTs can be burdensome, rely on cutoff scores that reduce information, do not examine potential variations in task engagement across a battery, and are typically not well suited to the acquisition of large cognitive datasets. Here we describe the development of novel performance validity measures that could address some of these limitations by leveraging psychometric modeling of data embedded within the Penn Computerized Neurocognitive Battery (PennCNB).

Method: We first developed these validity measures using simulations of invalid response patterns with parameters drawn from real data. Next, we examined their application in two large, independent samples: (1) children and adolescents from the Philadelphia Neurodevelopmental Cohort (n = 9,498) and (2) adult service members from the Marine Resiliency Study-II (n = 1,444).

Results: Our performance validity metrics detected patterns of invalid responding in simulated data, even at subtle levels. Furthermore, a combination of these metrics significantly predicted previously established validity rules for these tests in both the developmental and adult datasets. Moreover, most clinical diagnostic groups did not show reduced validity estimates.

Conclusion: These results provide proof-of-concept evidence for multivariate, data-driven performance validity metrics. These metrics offer a novel method for determining performance validity on individual neurocognitive tests that is scalable, applicable across different tests, less burdensome, and dimensional. However, more research is needed into their application.
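The cutoff-based logic that this abstract contrasts with its psychometric approach can be illustrated with a toy example. A minimal sketch, assuming a generic 50-item two-alternative forced-choice validity test (TOMM-style): random guessing yields about 50% correct, so scores significantly below chance suggest intentional underperformance. All function names, item counts, and thresholds here are illustrative assumptions, not the authors' actual measures.

```python
# Hypothetical sketch: flagging below-chance performance on a
# two-alternative forced-choice validity test via a binomial test.
# Names and thresholds are illustrative, not the authors' method.
import random
from math import comb

def below_chance_p(correct, n_items, p_chance=0.5):
    """One-sided binomial probability P(X <= correct) under pure guessing."""
    return sum(
        comb(n_items, k) * p_chance**k * (1 - p_chance)**(n_items - k)
        for k in range(correct + 1)
    )

def simulate_examinee(n_items=50, p_correct=0.5, seed=0):
    """Simulate item-level responding; p_correct=0.5 models random guessing."""
    rng = random.Random(seed)
    return sum(rng.random() < p_correct for _ in range(n_items))

# A guessing examinee hovers near 25/50; a score whose below-chance
# probability falls under .05 would be flagged as invalid.
score = simulate_examinee(p_correct=0.30, seed=1)
flagged = below_chance_p(score, 50) < 0.05
```

Note how a single cutoff discards gradation in the score, which is one limitation the abstract's dimensional, model-based metrics aim to address.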

