Performance validity in undergraduate research participants: a comparison of failure rates across tests and cutoffs

2016 ◽  
Vol 31 (1) ◽  
pp. 193-206 ◽  
Author(s):  
Kelly Y. An ◽  
Kristen Kaploun ◽  
Laszlo A. Erdodi ◽  
Christopher A. Abeare

2019 ◽  
Vol 33 (6) ◽  
pp. 1138-1155 ◽  
Author(s):  
Scott Roye ◽  
Matthew Calamia ◽  
John P. K. Bernstein ◽  
Alyssa N. De Vito ◽  
Benjamin D. Hill

2021 ◽  
Vol 36 (6) ◽  
pp. 1034-1034
Author(s):  
Jeremy Davis ◽  
Summer Rolin ◽  
Gabrielle Hromas

Abstract Objective Embedded performance validity tests (PVTs) may show increased false positive rates in racially diverse examinees. This study examined false positive rates by race in an older adult sample. Method The project involved secondary analysis of a deidentified dataset (N = 22,688) from the National Alzheimer’s Coordinating Center (NACC). Participants were included if their identified race was African American or white. Exclusion criteria included diagnosis of mild cognitive impairment (MCI; n = 5160) or dementia (n = 5550). The initial sample included 11,114 participants grouped as cognitively normal (89.9%) or impaired but not MCI, of whom 16.4% identified as African American. Propensity score matching was conducted by diagnostic group to match African American and white participants on age, education, gender, and MMSE score. The final sample included 3024 and 482 participants in normal and impaired groups, respectively, with 50% of participants identifying as African American in each group. Failure rates on five embedded PVTs in the NACC cognitive test battery were examined by race and by diagnosis. Results Age, education, gender, and MMSE score were not significantly different by race in either diagnostic group. In the normal group, 4.7% of African American and 1.9% of white participants failed two or more PVTs (p < 0.001). In the impaired group, 9.5% of African American and 5.8% of white participants failed two or more PVTs (n.s.). Conclusions PVT failure rates were significantly higher among African American participants in the normal group but not in the impaired group. Failure rates remained below a common false positive threshold of 10%.
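The group comparison in the normal sample can be illustrated with a Pearson chi-square test on a 2×2 table (PVT failure vs. race). The counts below are approximate reconstructions from the reported rates (4.7% vs. 1.9% of roughly 1512 participants per matched group), not the study's raw data:

```python
def chi_square_2x2(fail_a, n_a, fail_b, n_b):
    """Pearson chi-square test of independence for a 2x2 table
    (PVT failure vs. group); returns the test statistic (df = 1)."""
    pass_a, pass_b = n_a - fail_a, n_b - fail_b
    total = n_a + n_b
    total_fail = fail_a + fail_b
    total_pass = pass_a + pass_b
    # Observed counts paired with expected counts under independence
    cells = [
        (fail_a, n_a * total_fail / total),
        (pass_a, n_a * total_pass / total),
        (fail_b, n_b * total_fail / total),
        (pass_b, n_b * total_pass / total),
    ]
    return sum((obs - exp) ** 2 / exp for obs, exp in cells)

# Approximate counts: 4.7% of 1512 vs. 1.9% of 1512 participants
chi2 = chi_square_2x2(round(0.047 * 1512), 1512, round(0.019 * 1512), 1512)
# For df = 1, the critical value at p = 0.001 is 10.83
print(f"chi2 = {chi2:.2f}, significant at p < 0.001: {chi2 > 10.83}")
```

With these reconstructed counts the statistic comfortably exceeds the p = 0.001 critical value, consistent with the significance reported above.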


2021 ◽  
Vol 36 (6) ◽  
pp. 1161-1161
Author(s):  
Sarah Saravia ◽  
Daniel W Lopez-Hernandez ◽  
Abril J Baez ◽  
Isabel Muñoz ◽  
Winter Olmos ◽  
...  

Abstract Objective The Dot Counting Test (DCT) is a performance validity test. McCaul et al. (2018) recently revised the DCT cut-off score from ≥17 to ≥13.80; we evaluated the new cut-off in non-Latinx Caucasian and Caucasian Latinx traumatic brain injury (TBI) survivors and healthy comparison (HC) participants. Method The sample consisted of 37 acute TBI (ATBI; 11 Caucasian Latinx; 26 non-Latinx Caucasian), 27 chronic TBI (CTBI; 10 Caucasian Latinx; 17 non-Latinx Caucasian), and 55 HC (29 Caucasian Latinx; 26 non-Latinx Caucasian) participants. Results An ANCOVA, controlling for age, revealed no DCT E-score differences between groups. Both the conventional and the new cut-off scores had different failure rates in ATBI (conventional cut-off: 0%; new cut-off: 16%), CTBI (conventional cut-off: 7%; new cut-off: 15%), and HC (conventional cut-off: 10%; new cut-off: 11%) participants. Both the Caucasian Latinx group (conventional cut-off: 6%; new cut-off: 12%) and the non-Latinx Caucasian group (conventional cut-off: 6%; new cut-off: 14%) demonstrated different failure rates across cut-off scores. Group differences were found with both the McCaul et al. (2018) cut-off and the conventional cut-off. Also, chi-squared analysis revealed that non-Latinx Caucasian participants with ATBI had greater failure rates than Caucasian Latinx participants with ATBI. Conclusion The new DCT cut-off score resulted in greater failure rates in TBI survivors. This effect appears to be most pronounced in non-Latinx Caucasian persons with ATBI. Future work should investigate possible reasons for these differences so that more stringent DCT cut-offs can be utilized in a way that yields less biased results for brain injury survivors across racial and ethnic groups.
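The effect of lowering a cut-off can be sketched as a simple failure-rate calculation. On the DCT, higher E-scores indicate poorer performance, so a score at or above the cut-off counts as a failure; the E-scores below are hypothetical values for illustration, not study data:

```python
def dct_failure_rates(e_scores, cutoffs=(17.0, 13.80)):
    """Percentage of examinees failing at each E-score cutoff
    (higher E-scores indicate poorer performance; scores at or
    above the cutoff are failures)."""
    n = len(e_scores)
    return {cut: 100.0 * sum(s >= cut for s in e_scores) / n
            for cut in cutoffs}

# Hypothetical E-scores for illustration only
sample = [9.2, 11.5, 14.1, 16.8, 17.3, 12.9]
rates = dct_failure_rates(sample)
# Lowering the cutoff from 17 to 13.80 can only hold or raise
# the failure rate, mirroring the pattern reported above
```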


2011 ◽  
Author(s):  
Megan J. Freeman ◽  
Andrew J. Freeman ◽  
Anna Van Meter ◽  
Alex Holdaway ◽  
Eric A. Youngstrom

2021 ◽  
Vol 36 (6) ◽  
pp. 1239-1239
Author(s):  
Jeremy Davis ◽  
Gabrielle Hromas ◽  
Summer Rolin

Abstract Objective Classification accuracy of embedded performance validity tests (PVTs) is unknown in cases involving bilingual examinees evaluated in English. This study examined false positive rates in bilingual individuals in an older adult sample. Method The project involved secondary analysis of a deidentified dataset (N = 22,688) from the National Alzheimer’s Coordinating Center (NACC). Exclusion criteria were diagnosis of mild cognitive impairment (MCI; n = 5160) or dementia (n = 5550). The initial sample included 11,513 participants grouped as cognitively normal (89.6%) or impaired but not MCI. A subset of 275 participants was identified with a primary language other than English who were evaluated in English. Propensity score matching was conducted by diagnostic group to match bilingual to monolingual participants on age, education, gender, and MMSE score. The final sample included 450 and 100 participants in normal and impaired groups, respectively. Failure rates on five embedded PVTs in the NACC cognitive test battery were examined by language and by diagnosis. Results Age, education, gender, and MMSE score were not significantly different by language in either diagnostic group. In the normal group, 4.9% of bilingual and 2.2% of monolingual participants failed two or more PVTs (n.s.). In the impaired group, 12% of bilingual and 6% of monolingual participants failed two or more PVTs (n.s.). Conclusions PVT failure rates were not significantly different between bilingual participants evaluated in English and monolingual participants in either diagnostic group. Failure rates, however, increased slightly above a common false positive threshold of 10% in bilingual participants in the impaired group.
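The propensity score matching step used in the NACC analyses above can be sketched as greedy 1:1 nearest-neighbor matching without replacement. In practice the scores would come from a logistic model of group membership on age, education, gender, and MMSE; here the scores and the caliper are hypothetical:

```python
def greedy_match(scores_a, scores_b, caliper=0.1):
    """Greedy 1:1 nearest-neighbor matching on precomputed propensity
    scores, without replacement. Returns (i, j) index pairs matching
    group A to group B; candidate pairs whose score difference exceeds
    the caliper are discarded."""
    unmatched = set(range(len(scores_b)))
    pairs = []
    for i, sa in enumerate(scores_a):
        if not unmatched:
            break
        # Closest remaining comparison participant by propensity score
        j = min(unmatched, key=lambda k: abs(scores_b[k] - sa))
        if abs(scores_b[j] - sa) <= caliper:
            pairs.append((i, j))
            unmatched.remove(j)
    return pairs

# Hypothetical propensity scores for two groups
pairs = greedy_match([0.31, 0.52, 0.74], [0.30, 0.55, 0.90, 0.51])
# The third A participant has no comparison within the caliper
# and is left unmatched
```

After matching, covariate balance is typically re-checked (as the abstracts do for age, education, gender, and MMSE) before comparing PVT failure rates.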


2015 ◽  
Vol 31 (1) ◽  
pp. 97-104 ◽  
Author(s):  
Thomas P. Ross ◽  
Ashley M. Poston ◽  
Patricia A. Rein ◽  
Andrew N. Salvatore ◽  
Nathan L. Wills ◽  
...  

2017 ◽  
Vol 10 (1) ◽  
pp. 96-103 ◽  
Author(s):  
Laszlo A. Erdodi ◽  
Shayna Nussbaum ◽  
Sanya Sagar ◽  
Christopher A. Abeare ◽  
Eben S. Schwartz

2016 ◽  
Vol 22 (8) ◽  
pp. 851-858 ◽  
Author(s):  
Eben S. Schwartz ◽  
Laszlo Erdodi ◽  
Nicholas Rodriguez ◽  
Jyotsna J. Ghosh ◽  
Joshua R. Curtain ◽  
...  

Abstract Objectives: The Forced Choice Recognition (FCR) trial of the California Verbal Learning Test, 2nd edition, was designed as an embedded performance validity test (PVT). To our knowledge, this is the first systematic review of classification accuracy against reference PVTs. Methods: Results from peer-reviewed studies with FCR data published since 2002 encompassing a variety of clinical, research, and forensic samples were summarized, including 37 studies with FCR failure rates (N=7575) and 17 with concordance rates with established PVTs (N=4432). Results: All healthy controls scored >14 on FCR. On average, 16.9% of the entire sample scored ≤14, while 25.9% failed reference PVTs. Presence or absence of external incentives to appear impaired (as identified by researchers) resulted in different failure rates (13.6% vs. 3.5%), as did failing or passing reference PVTs (49.0% vs. 6.4%). FCR ≤14 produced an overall classification accuracy of 72%, demonstrating higher specificity (.93) than sensitivity (.50) to invalid performance. Failure rates increased with the severity of cognitive impairment. Conclusions: In the absence of serious neurocognitive disorder, FCR ≤14 is highly specific, but only moderately sensitive to invalid responding. Passing FCR does not rule out a non-credible presentation, but failing FCR rules it in with high accuracy. The heterogeneity in sample characteristics and reference PVTs, as well as the quality of the criterion measure across studies, is a major limitation of this review and the basic methodology of PVT research in general. (JINS, 2016, 22, 851–858)
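The classification statistics in this abstract follow directly from a 2×2 confusion matrix scored against the reference PVTs. The counts below are illustrative values chosen to reproduce the reported sensitivity (.50) and specificity (.93), not data from the review:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and overall accuracy for a validity
    test scored against a dichotomous reference standard."""
    return {
        "sensitivity": tp / (tp + fn),   # invalid cases correctly flagged
        "specificity": tn / (tn + fp),   # valid cases correctly passed
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts: 100 invalid cases (50 flagged), 100 valid (93 passed)
m = classification_metrics(tp=50, fp=7, tn=93, fn=50)
```

With these hypothetical counts the overall accuracy works out to roughly 72%, showing how a test can be far more specific than sensitive yet still land near the reported overall figure.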


2014 ◽  
Vol 29 (5) ◽  
pp. 415-421 ◽  
Author(s):  
G. M. Silk-Eglit ◽  
J. H. Stenclik ◽  
B. E. Gavett ◽  
J. W. Adams ◽  
J. K. Lynch ◽  
...  

2021 ◽  
Vol 36 (6) ◽  
pp. 1162-1162
Author(s):  
Isabel Munoz ◽  
Daniel W Lopez-Hernandez ◽  
Rachel A Rugh-Fraser ◽  
Amy Bichlmeier ◽  
Abril J Baez ◽  
...  

Abstract Objective Research shows that traumatic brain injury (TBI) patients perform worse than healthy comparisons (HC) on the Symbol Digit Modalities Test (SDMT). We evaluated cut-off scores for a newly developed recognition trial of the SDMT as a performance validity measure in monolingual and bilingual TBI survivors and HC adults. Method The sample consisted of 43 acute TBI (ATBI; 24 monolinguals; 19 bilinguals), 32 chronic TBI (CTBI; 13 monolinguals; 19 bilinguals), and 57 HC (24 monolinguals; 33 bilinguals) participants. All participants received standardized administration of the SDMT. None of the participants displayed motivation to feign cognitive deficits. Results The HC group outperformed both TBI groups on the demographically adjusted SDMT scores, p < 0.001, ηp2 = 0.24. An interaction emerged in SDMT scores, where monolingual ATBI outperformed bilingual ATBI and bilingual CTBI outperformed monolingual CTBI, p = 0.017, ηp2 = 0.06. No differences were found on the SDMT recognition trial. Both Bichlmeier's and Boone's suggested cut-off scores had different failure rates in ATBI (Bichlmeier: 77%; Boone: 37%), CTBI (Bichlmeier: 69%; Boone: 19%), and HC (Bichlmeier: 56%; Boone: 26%) participants. The monolingual group (Bichlmeier: 66%; Boone: 36%) and the bilingual group (Bichlmeier: 66%; Boone: 21%) likewise showed different failure rates across cut-off scores. Finally, chi-squared analysis revealed that monolingual ATBI had greater failure rates than bilingual ATBI. Conclusion Bichlmeier's proposed cut-off score resulted in greater failure rates in TBI survivors compared to Boone's suggested cut-off score. Furthermore, monolingual ATBI were influenced more by Bichlmeier's cut-off score than the bilingual ATBI group, although the reason for this finding is unclear and requires additional study with a larger sample size.

