Sensitivity and Specificity of Various Digit Span Scores in the Detection of Suspect Effort

2006 ◽  
Vol 20 (1) ◽  
pp. 145-159 ◽  
Author(s):  
Talin Babikian ◽  
Kyle Brauer Boone ◽  
Po Lu ◽  
Ginger Arnold

2005 ◽  
Vol 19 (1) ◽  
pp. 105-120 ◽  
Author(s):  
Ginger Arnold ◽  
Kyle Brauer Boone ◽  
Po Lu ◽  
Andy Dean ◽  
Johnny Wen ◽  
...  

2002 ◽  
Vol 17 (7) ◽  
pp. 625-642 ◽  
Author(s):  
K. B. Boone ◽  
P. Lu ◽  
C. Back ◽  
C. King ◽  
A. Lee ◽  
...  

2020 ◽  
Vol 35 (6) ◽  
pp. 1008-1008
Author(s):  
Livingstone J ◽  
Reese C

Abstract Objective The purpose of the present study was to compare Wechsler Adult Intelligence Scale-IV (WAIS-IV) Reliable Digit Span (RDS) and Digit Span Age-Corrected Scaled Score (DS-ACSS) sensitivity and specificity when the effort criterion was determined by between one and five performance validity test (PVT) cut scores. Method Data were collected from 82 adults (ages 18–49) referred for clinical questions of multiple sclerosis, mild traumatic brain injury, and attention deficit hyperactivity disorder. Patients were administered full neuropsychological batteries, with different combinations of PVTs (including Advanced Clinical Solutions Word Choice, Animals raw score, Trails A T-score, Wisconsin Card Sorting Test [WCST; Suhr & Boyer] equation, and California Verbal Learning Test-II Forced Choice). Chi-square and receiver operating characteristic (ROC) analyses were utilized. Results Using established RDS (≤7) and DS-ACSS (≤6) cut scores, specificity was highest (90% and 86%, respectively), with equivalent sensitivity (90%), when effort was determined by WCST (Suhr & Boyer) equation failure alone. The corresponding area under the curve was .90 (CI = .76–1.0) for RDS and .88 (CI = .74–1.0) for DS-ACSS. Conclusions In this clinical sample, the highest sensitivity and specificity were observed when the RDS cut score was utilized and effort was based on the WCST criterion. However, the DS-ACSS cut score resulted in strong sensitivity/specificity combinations across more effort classification groups.
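To make the cut-score statistics concrete, the sketch below computes sensitivity, specificity, and AUC for an RDS cut score of ≤7 against a binary effort criterion. This is a minimal illustration with hypothetical data and variable names (rds_scores, effort_fail), not the authors' analysis or dataset.

```python
# Minimal sketch (not the authors' code): sensitivity, specificity, and AUC
# for a Reliable Digit Span cut score against a binary effort criterion.
# The data below are hypothetical and for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical RDS scores and effort-criterion failure (1 = suspect effort)
rds_scores = np.array([4, 5, 6, 7, 7, 8, 9, 10, 11, 12])
effort_fail = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])

CUT = 7  # established RDS cut score: scores <= 7 flag suspect effort
flagged = rds_scores <= CUT

tp = np.sum(flagged & (effort_fail == 1))   # suspect effort, flagged
fn = np.sum(~flagged & (effort_fail == 1))  # suspect effort, missed
tn = np.sum(~flagged & (effort_fail == 0))  # valid effort, passed
fp = np.sum(flagged & (effort_fail == 0))   # valid effort, wrongly flagged

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# AUC treats lower RDS as more indicative of suspect effort, so negate the score
auc = roc_auc_score(effort_fail, -rds_scores)

print(f"Sensitivity = {sensitivity:.2f}, Specificity = {specificity:.2f}, AUC = {auc:.2f}")
```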


2014 ◽  
Vol 20 (4) ◽  
pp. 7 ◽  
Author(s):  
Suvira Ramlall ◽  
Jennifer Chipps ◽  
Ahmed I Bhigjee ◽  
Basil J Pillay

Background. Neuropsychological tests can successfully distinguish between healthy elderly persons and those with clinically significant cognitive impairment. Objectives. A battery of neuropsychological tests was evaluated for its discrimination validity for cognitive impairment in a group of elderly persons in Durban, South Africa. Method. A sample of 117 English-speaking participants of different race groups (9 with dementia, 30 with mild cognitive impairment (MCI) and 78 controls) from a group of residential homes for the elderly was administered a battery of 11 neuropsychological tests. Kruskal-Wallis independent-samples tests were used to compare test performance across the groups. Sensitivity and specificity of the tests for dementia and MCI were determined using receiver operating characteristic (ROC) analysis. Results. Most tests were able to discriminate between participants with dementia or MCI and controls (p<0.05). Area under the curve (AUC) values for dementia v. non-dementia participants ranged from 0.519 for the digit span (forward) to 0.828 for the digit symbol (90 s), with 14 of the 29 test scores achieving significance (p<0.05). AUC values for MCI participants ranged from 0.507 for the Rey complex figure test copy to 0.754 for the controlled oral word association test (COWAT) Animal, with 17 of the 29 scores achieving significance (p<0.05). Conclusions. Several measures from the neuropsychological battery had discrimination validity for the differential diagnosis of cognitive disturbances in the elderly. Further studies are needed to assess the effect of culture and language on the appropriateness of the tests for different populations.
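A minimal sketch of the analytic approach described above, assuming hypothetical scores on a single test (here labeled digit_symbol_90s): a Kruskal-Wallis comparison across the three diagnostic groups, followed by a ROC AUC for dementia versus non-dementia. Column names and score distributions are illustrative, not the study's data.

```python
# Minimal sketch (not the study's code): Kruskal-Wallis across diagnostic
# groups plus per-test AUC for dementia vs. non-dementia.
# Group sizes mirror the abstract (9 / 30 / 78); scores are simulated.
import numpy as np
import pandas as pd
from scipy.stats import kruskal
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["dementia"] * 9 + ["MCI"] * 30 + ["control"] * 78,
    "digit_symbol_90s": np.concatenate([
        rng.normal(25, 8, 9),    # hypothetical raw scores per group
        rng.normal(35, 8, 30),
        rng.normal(45, 8, 78),
    ]),
})

# Kruskal-Wallis: do the three groups differ on this test?
h, p = kruskal(*[g["digit_symbol_90s"].values for _, g in df.groupby("group")])

# AUC for dementia vs. non-dementia (lower scores indicate impairment, so negate)
is_dementia = (df["group"] == "dementia").astype(int)
auc = roc_auc_score(is_dementia, -df["digit_symbol_90s"])

print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}; dementia AUC = {auc:.3f}")
```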


Assessment ◽  
2005 ◽  
Vol 12 (2) ◽  
pp. 130-136 ◽  
Author(s):  
Joseph L. Etherton ◽  
Kevin J. Bianchini ◽  
Kevin W. Greve ◽  
Matthew T. Heinly

2020 ◽  
Vol 63 (6) ◽  
pp. 1916-1932 ◽  
Author(s):  
Haiying Yuan ◽  
Christine Dollaghan

Purpose No diagnostic tools exist for identifying social (pragmatic) communication disorder (SPCD), a new Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition category for individuals with social communication deficits but not the repetitive, restricted behaviors and interests (RRBIs) that would qualify them for a diagnosis of autism spectrum disorder (ASD). We explored the value of items from a widely used screening measure of ASD for distinguishing SPCD from typical controls (TC; Aim 1) and from ASD (Aim 2). Method We applied item response theory (IRT) modeling to Social Communication Questionnaire–Lifetime (Rutter, Bailey, & Lord, 2003) records available in the National Database for Autism Research. We defined records from putative SPCD (n = 54), ASD (n = 278), and TC (n = 274) groups retrospectively, based on National Database for Autism Research classifications and Autism Diagnostic Interview–Revised responses. After assessing model assumptions, estimating model parameters, and measuring model fit, we identified items in the social communication and RRBI domains that were maximally informative in differentiating the groups. Results IRT modeling identified a set of seven social communication items that distinguished SPCD from TC with sensitivity and specificity > 80%. A set of five RRBI items was less successful in distinguishing SPCD from ASD (sensitivity and specificity < 70%). Conclusion The IRT modeling approach and the Social Communication Questionnaire–Lifetime item sets it identified may be useful in efforts to construct screening and diagnostic measures for SPCD.
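For readers unfamiliar with IRT, the sketch below shows the two-parameter logistic (2PL) model and the item information function that underlie statements about "maximally informative" items. The parameter values are hypothetical and are not estimates from the SCQ-Lifetime records analyzed in the study.

```python
# Minimal sketch (not the authors' analysis): a 2PL IRT item response function
# and its Fisher information. Parameter values below are illustrative only.
import numpy as np

def p_endorse(theta, a, b):
    """2PL probability of endorsing an item given latent trait theta,
    discrimination a, and difficulty (location) b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P).
    An item is most informative for respondents near theta = b."""
    p = p_endorse(theta, a, b)
    return a**2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 7)   # latent social-communication severity
a, b = 1.8, 0.5                 # hypothetical discrimination and difficulty
for t, p, info in zip(theta, p_endorse(theta, a, b), item_information(theta, a, b)):
    print(f"theta={t:+.1f}  P(endorse)={p:.2f}  information={info:.2f}")
```

Item sets like the seven social communication items reported above are typically chosen because their combined information peaks in the region of the latent trait that separates the groups of interest.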


2001 ◽  
Vol 120 (5) ◽  
pp. A395-A395
Author(s):  
J WEST ◽  
A LLOYD ◽  
P HILL ◽  
G HOLMES
