Evidence-based diagnostic accuracy measurement in urine cytology using likelihood ratios

Author(s):  
Nickolas Myles ◽  
Manon Auger ◽  
Yonca Kanber ◽  
Derin Caglar ◽  
Wassim Kassouf ◽  
...  


Author(s):  
Ling-Yu Guo ◽  
Phyllis Schneider ◽  
William Harrison

Purpose This study provided reference data and examined psychometric properties for clausal density (CD; i.e., the number of clauses per utterance) in children between ages 4 and 9 years, using the database of the Edmonton Narrative Norms Instrument (ENNI). Method Participants in the ENNI database included 300 children with typical language (TL) and 77 children with language impairment (LI) between the ages of 4;0 (years;months) and 9;11. Narrative samples were collected using a story generation task, in which children were asked to tell stories based on six picture sequences. CD was computed from the narrative samples. Split-half reliability, concurrent criterion validity, and diagnostic accuracy were evaluated for CD by age. Results CD scores increased significantly between ages 4 and 9 years in children with TL and in those with LI. Children with TL produced higher CD scores than those with LI at each age level. In addition, the correlation coefficients for the split-half reliability and concurrent criterion validity of CD scores were all significant at each age level, with magnitudes ranging from small to large. The diagnostic accuracy of CD scores, as indicated by sensitivity, specificity, and likelihood ratios, was poor. Conclusions The finding on diagnostic accuracy did not support the use of CD for identifying children with LI between ages 4 and 9 years. However, given the established reliability and validity of CD, the CD reference data from the ENNI database can be used to evaluate children's difficulties with complex syntax and to monitor change over time. Supplemental Material https://doi.org/10.23641/asha.13172129
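Several of the abstracts collected on this page report positive and negative likelihood ratios alongside sensitivity and specificity. For reference, the standard textbook definitions (general formulas, not specific to any study listed here) are:

```latex
% Standard definitions of the diagnostic likelihood ratios
% (general formulas, not taken from any individual study on this page).
\[
LR^{+} \;=\; \frac{\text{sensitivity}}{1-\text{specificity}},
\qquad
LR^{-} \;=\; \frac{1-\text{sensitivity}}{\text{specificity}}
\]
% In odds form, Bayes' theorem links a likelihood ratio to post-test probability:
\[
\text{post-test odds} \;=\; \text{pre-test odds} \times LR
\]
```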


VASA ◽  
2016 ◽  
Vol 45 (2) ◽  
pp. 149-154 ◽  
Author(s):  
Jie Li ◽  
Lei Feng ◽  
Jiangbo Li ◽  
Jian Tang

Abstract. Background: The aim of this meta-analysis was to evaluate the diagnostic accuracy of magnetic resonance angiography (MRA) for acute pulmonary embolism (PE). Methods: A systematic literature search was conducted that included studies from January 2000 to August 2015 using the electronic databases PubMed, Embase, and SpringerLink. The summary receiver operating characteristic (SROC) curve, sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR), as well as the 95 % confidence intervals (CIs), were calculated to evaluate the diagnostic accuracy of MRA for acute PE. Meta-DiSc software (version 1.4) was used to analyze the data. Results: Five studies were included in this meta-analysis. The pooled sensitivity (86 %, 95 % CI: 81 % to 90 %) and specificity (99 %, 95 % CI: 98 % to 100 %) demonstrated that MRA had limited sensitivity and high specificity in the detection of acute PE. The pooled estimates of the PLR (41.64, 95 % CI: 17.97 to 96.48) and NLR (0.17, 95 % CI: 0.11 to 0.27) provided evidence for low misdiagnosis and missed-diagnosis rates of MRA for acute PE. The high diagnostic accuracy of MRA for acute PE was demonstrated by the overall DOR (456.51, 95 % CI: 178.38 - 1168.31) and the SROC curve (AUC = 0.9902 ± 0.0061). Conclusions: MRA can be used for the diagnosis of acute PE. However, due to its limited sensitivity, MRA cannot be used as a stand-alone test to exclude acute PE.
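For readers who want to see how summary measures of this kind relate to a 2×2 table, the sketch below computes sensitivity, specificity, PLR, NLR, and DOR from hypothetical counts (not data from the meta-analysis above); pooling across studies, as Meta-DiSc does, involves additional weighting that is omitted here.

```python
# Minimal sketch: diagnostic summary measures from a single 2x2 table.
# The counts below are hypothetical, not data from the meta-analysis above.

def diagnostic_measures(tp, fp, fn, tn):
    """Return sensitivity, specificity, PLR, NLR, and DOR for one 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    plr = sensitivity / (1 - specificity)   # positive likelihood ratio
    nlr = (1 - sensitivity) / specificity   # negative likelihood ratio
    dor = plr / nlr                         # diagnostic odds ratio = (tp*tn)/(fp*fn)
    return sensitivity, specificity, plr, nlr, dor

sens, spec, plr, nlr, dor = diagnostic_measures(tp=86, fp=1, fn=14, tn=99)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"PLR={plr:.1f} NLR={nlr:.2f} DOR={dor:.0f}")
```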


2020 ◽  
Vol 57 (3) ◽  
pp. 316-322
Author(s):  
Rejane MATTAR ◽  
Sergio Barbosa MARQUES ◽  
Maurício Kazuyoshi MINATA ◽  
Joyce Matie Kinoshita da SILVA-ETTO ◽  
Paulo SAKAI ◽  
...  

ABSTRACT BACKGROUND: Rectal bleeding is the most important symptom of intestinal neoplasia; thus, tests that detect occult blood in stools are widely used for screening for preneoplastic lesions and colorectal cancer (CRC). OBJECTIVE: To evaluate the accuracy of the OC-Sensor quantitative fecal immunochemical test (FIT; Eiken Chemical, Tokyo, Japan) at a cut-off of 10 µg Hb/g feces (50 ng/mL) in a cohort of subjects referred for diagnostic colonoscopy, and to determine whether more than one sample collected on consecutive days would improve the diagnostic accuracy of the test. METHODS: Patients (mean age 56.3±9.7 years) undergoing colonoscopy were prospectively randomized to receive one (1-sample FIT, FIT 1) or two (2-sample FIT, FIT 2) collection tubes. Stool samples were collected before the start of colonoscopy preparation and analyzed with the OC-Auto Micro 80 (Eiken Chemical, Tokyo, Japan). The performance of FIT 1 and FIT 2 was compared against the colonoscopy findings. RESULTS: Among 289 patients, CRC was diagnosed in 14 (4.8%), advanced adenoma in 37 (12.8%), early adenoma in 71 (24.6%), and no abnormalities in 141 (48.8%). For FIT 1, the sensitivity for CRC was 83.3% (95%CI 36.5-99.1%) and for advanced adenoma 24% (95%CI 10.1-45.5%), with a specificity of 86.9% (95%CI 77.3-92.9%). For FIT 2, the sensitivity for CRC was 75% (95%CI 35.6-95.5%) and for advanced adenoma 50% (95%CI 22.3-77.7%), with a specificity of 92.9% (95%CI 82.2-97.7%). The positive likelihood ratios were 1.8 (95%CI 0.7-4.4, FIT 1) and 7.1 (95%CI 2.4-21.4, FIT 2) for advanced adenoma, and 6.4 (95%CI 3.3-12.3, FIT 1) and 10.7 (95%CI 3.8-29.8, FIT 2) for CRC. The negative likelihood ratios were 0.9 (95%CI 0.7-1, FIT 1) and 0.5 (95%CI 0.3-0.9, FIT 2) for advanced adenoma, and 0.2 (95%CI 0.03-1.1, FIT 1) and 0.3 (95%CI 0.08-0.9, FIT 2) for CRC. The differences between the performances of FIT 1 and FIT 2 were not significant. However, comparison of fecal hemoglobin levels between FIT 1 and FIT 2 patients showed that the differences between the no-polyp group and the advanced adenoma and CRC groups were significant. CONCLUSION: The accuracy of the OC-Sensor at the 10 µg Hb/g feces cut-off was comparable to other reports, and two-sample collection improved the detection rate of advanced adenoma, a preneoplastic condition whose detection helps prevent CRC.
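As an illustration of how a likelihood ratio and its 95% CI of the kind reported above can be derived from a single 2×2 table, here is a minimal sketch using the standard log-scale approximation; the counts are hypothetical and do not reproduce the FIT data.

```python
import math

# Sketch: positive likelihood ratio with an approximate 95% CI on the log scale.
# The 2x2 counts below are hypothetical, not the FIT data reported above.
def plr_with_ci(tp, fp, fn, tn, z=1.96):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    plr = sens / (1 - spec)
    # Standard error of log(PLR): sqrt(1/tp - 1/(tp+fn) + 1/fp - 1/(fp+tn))
    se_log = math.sqrt(1 / tp - 1 / (tp + fn) + 1 / fp - 1 / (fp + tn))
    lower = math.exp(math.log(plr) - z * se_log)
    upper = math.exp(math.log(plr) + z * se_log)
    return plr, lower, upper

print(plr_with_ci(tp=9, fp=20, fn=3, tn=100))
```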


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Carlos Alfonso Romero-Gameros ◽  
Tania Colin-Martínez ◽  
Salomón Waizel-Haiat ◽  
Guadalupe Vargas-Ortega ◽  
Eduardo Ferat-Osorio ◽  
...  

Abstract Background The SARS-CoV-2 pandemic continues to be a priority health problem; according to World Health Organization data from October 13, 2020, 37,704,153 confirmed COVID-19 cases, including 1,079,029 deaths, had been reported since the start of the outbreak. The identification of potential symptoms has been reported to be a useful tool for clinical decision-making in emergency departments to avoid overload and improve the quality of care. The aim of this study was to evaluate the performance of symptoms as a diagnostic tool for SARS-CoV-2 infection. Methods An observational, cross-sectional, prospective, and analytical study was carried out from April 14 to July 21, 2020. Data (demographic variables, medical history, respiratory and non-respiratory symptoms) were collected by emergency physicians. The diagnosis of COVID-19 was made using SARS-CoV-2 RT-PCR. The diagnostic accuracy of these characteristics for COVID-19 was evaluated by calculating positive and negative likelihood ratios. Mantel-Haenszel and multivariate logistic regression analyses were performed to assess the association of symptoms with COVID-19. Results A prevalence of 53.72% of SARS-CoV-2 infection was observed. The symptom with the highest sensitivity was cough (71%), with a specificity of 52.68%. The symptom scale, constructed from six symptoms, achieved a sensitivity of 83.45% and a specificity of 32.86% with ≥2 symptoms as the cut-off point. The symptoms most strongly associated with SARS-CoV-2 were anosmia, odds ratio (OR) 3.2 (95% CI: 2.52–4.17); fever, OR 2.98 (95% CI: 2.47–3.58); dyspnea, OR 2.9 (95% CI: 2.39–3.51); and cough, OR 2.73 (95% CI: 2.27–3.28). Conclusion The combination of ≥2 symptoms/signs (fever, cough, anosmia, dyspnea, oxygen saturation < 93%, and headache) yields a highly sensitive model for quick and accurate diagnosis of COVID-19 and should be used in the absence of ancillary diagnostic studies. Symptoms, alone and in combination, may be an appropriate strategy to use in the emergency department to guide the response to the disease. Trial registration Institutional registration R-2020-3601-145, Federal Commission for the Protection against Sanitary Risks 17 CI-09-015-034, National Bioethics Commission: 09 CEI-023-2017082.
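A minimal sketch of the ≥2-of-6 symptom/sign rule described above: the symptom list follows the abstract, but the scoring function, the field names, and the example records are hypothetical illustrations, not the authors' implementation.

```python
# Sketch of the >=2-of-6 symptom/sign screening rule described above.
# Symptom names follow the abstract ("low_oxygen_saturation" stands in for
# oxygen saturation < 93%); the patient records are hypothetical.
SYMPTOMS = ["fever", "cough", "anosmia", "dyspnea", "low_oxygen_saturation", "headache"]

def screen_positive(patient: dict, cutoff: int = 2) -> bool:
    """Flag a patient as presumptive positive if >= cutoff symptoms/signs are present."""
    return sum(bool(patient.get(s, False)) for s in SYMPTOMS) >= cutoff

patients = [
    {"fever": True, "cough": True, "headache": False},  # 2 symptoms -> positive
    {"anosmia": True},                                   # 1 symptom  -> negative
]
print([screen_positive(p) for p in patients])   # [True, False]
```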


2016 ◽  
Vol 59 (2) ◽  
pp. 317-329 ◽  
Author(s):  
Ling-Yu Guo ◽  
Phyllis Schneider

Purpose To determine the diagnostic accuracy of the finite verb morphology composite (FVMC), number of errors per C-unit (Errors/CU), and percent grammatical C-units (PGCUs) in differentiating school-aged children with language impairment (LI) from those with typical language development (TL). Method Participants were 61 six-year-olds (50 TL, 11 LI) and 67 eight-year-olds (50 TL, 17 LI). Narrative samples were collected using a story-generation format, and FVMC, Errors/CU, and PGCUs were computed from the samples. Results All three measures showed acceptable to good diagnostic accuracy at age 6, but only PGCUs showed acceptable diagnostic accuracy at age 8 when sensitivity, specificity, and likelihood ratios were considered. Conclusion FVMC, Errors/CU, and PGCUs can all be used in combination with other tools to identify school-aged children with LI. However, FVMC and Errors/CU may be appropriate diagnostic tools up to age 6; PGCUs, in contrast, may be a sensitive tool for identifying children with LI at least up to age 8.
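When abstracts such as this one and the ENNI study above judge diagnostic accuracy as "acceptable" or "poor", the judgment typically rests on benchmark ranges for sensitivity, specificity, and likelihood ratios. The helper below encodes one commonly cited set of likelihood-ratio benchmarks purely for illustration; it is an assumption, not the criteria used by these authors.

```python
# Sketch: interpreting a positive likelihood ratio against one commonly cited
# set of benchmarks. These thresholds are assumed for illustration and are not
# necessarily the criteria applied in the studies above.
def interpret_plr(plr: float) -> str:
    if plr >= 10:
        return "large, often conclusive shift in post-test probability"
    if plr >= 5:
        return "moderate shift"
    if plr >= 2:
        return "small shift"
    return "minimal or no shift"

print(interpret_plr(4.3))   # -> "small shift"
```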


1997 ◽  
Vol 17 (6) ◽  
pp. 436-439 ◽  
Author(s):  
Elisa Righi ◽  
Giulio Rossi ◽  
Giovanni Ferrari ◽  
Alberto Dotti ◽  
Carmela De Gaetani ◽  
...  

2016 ◽  
Vol 51 (6) ◽  
pp. 498-499 ◽  
Author(s):  
Chelsey M. Toney ◽  
Kenneth E. Games ◽  
Zachary K. Winkelmann ◽  
Lindsey E. Eberman

Reference/Citation: Mugunthan K, Doust J, Kurz B, Glasziou P. Is there sufficient evidence for tuning fork tests in diagnosing fractures? A systematic review. BMJ Open. 2014;4(8):e005238. Clinical Question: Does evidence support the use of tuning-fork tests in the diagnosis of fractures in clinical practice? Data Sources: The authors performed a comprehensive literature search of AMED, CAB Abstracts, CINAHL, EMBASE, MEDLINE, SPORTDiscus, and Web of Science from each database's start to November 2012. In addition, they manually searched reference lists from the initial search result to identify relevant studies. The following key words were used independently or in combination: auscultation, barford test, exp fractures, fracture, tf test, tuning fork. Study Selection: Studies were eligible based on the following criteria: (1) they were primary studies that assessed the diagnostic accuracy of tuning forks; (2) the tuning-fork test was measured against a recognized reference standard such as magnetic resonance imaging, radiography, or bone scan; and (3) the outcome was reported using pain or reduction of sound. Studies included patients of all ages in all clinical settings, with no exclusion for language of publication. Studies were not eligible if they were case series, case-control studies, or narrative review papers. Data Extraction: Potentially eligible studies were independently assessed by 2 researchers. All relevant articles were assessed against the inclusion criteria and appraised using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool, and relevant data were extracted. The QUADAS-2 is an updated version of the original QUADAS and focuses on both the risk of bias and the applicability of a study through a series of questions. A third researcher was consulted if the 2 initial reviewers did not reach consensus. Data for the primary outcome measure (accuracy of the test) were presented in a 2 × 2 contingency table to show sensitivity and specificity (using the Wilson score method) and positive and negative likelihood ratios with 95% confidence intervals. Main Results: A total of 62 citations were initially identified. Six primary studies (329 patients) were included in the review. The 6 studies assessed the accuracy of 2 tuning-fork test methods (pain induction and reduction of sound transmission). The patients ranged in age from 7 to 84 years. The prevalence of fracture in these patients ranged from 10% to 80% using a reference standard such as magnetic resonance imaging, radiography, or bone scan. The sensitivity of the tuning-fork tests was high, ranging from 75% to 92%. The specificity of the tuning-fork tests had a wide range of 18% to 94%. The positive likelihood ratios ranged from 1.1 to 16.5; the negative likelihood ratios ranged from 0.09 to 0.49. Conclusions: The studies included in this review demonstrated that tuning-fork tests have some value in ruling out fractures. However, strong evidence is lacking to support the use of current tuning-fork tests to rule in a fracture in clinical practice. Similarly, the tuning-fork tests were not sufficiently accurate in the diagnosis of fractures to support widespread clinical use. Despite the lack of strong evidence for diagnosing all fractures, tuning-fork tests may be appropriate in rural and remote settings in which access to the gold standards for diagnosis of fractures is limited.
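The review above reports sensitivity and specificity with confidence intervals computed by the Wilson score method; the sketch below shows that interval for a single proportion, using hypothetical counts rather than data from the included studies.

```python
import math

# Sketch: Wilson score 95% CI for a proportion (e.g., sensitivity = successes/trials).
# The counts are hypothetical, not taken from the review above.
def wilson_ci(successes: int, trials: int, z: float = 1.96):
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return centre - half_width, centre + half_width

print(wilson_ci(23, 25))   # approx (0.75, 0.98) for an observed sensitivity of 92%
```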


2021 ◽  
Author(s):  
Nicholas Kevin Erdman ◽  
Patricia M. Kelshaw ◽  
Samantha L. Hacherl ◽  
Shane V. Caswell

Abstract Background: The Child Sport Concussion Assessment Tool 5th Edition (Child SCAT5) was developed to evaluate children between 5-12 years of age for a suspected concussion. However, limited empirical evidence exists demonstrating the value of the Child SCAT5 for acute concussion assessment. Therefore, the purpose of our study was to examine differences and assess the diagnostic properties of Child SCAT5 scores among concussed and non-concussed middle school children on the same day as a suspected concussion. Methods: Our participants included 34 concussed (21 boys, 13 girls; age=12.8±0.86 years) and 44 non-concussed (31 boys, 13 girls; age=12.4±0.76 years) middle school children who were administered the Child SCAT5 upon suspicion of a concussion. Child SCAT5 scores were calculated from the symptom evaluation (total symptoms, total severity), child version of the Standardized Assessment of Concussion (SAC-C), and modified Balance Error Scoring System (mBESS). The Child SCAT5 scores were compared between the concussed and non-concussed groups. Non-parametric effect sizes (r=z/√n) were calculated to assess the magnitude of difference for each comparison. The diagnostic properties (sensitivity, specificity, diagnostic accuracy, predictive values, likelihood ratios, and diagnostic odds ratio) of each Child SCAT5 score were also calculated. Results: Concussed children endorsed more symptoms (p<0.001, r=0.45), higher symptom severity (p<0.001, r=0.44), and had higher double leg (p=0.046, r=0.23), single leg (p=0.035, r=0.24), and total scores (p=0.022, r=0.26) for the mBESS than non-concussed children. No significant differences were observed for the SAC-C scores (ps≥0.542). The quantity and severity of endorsed symptoms had the best diagnostic accuracy (AUC=0.76–0.77), negative predictive values (NPV=0.84–0.88), and negative likelihood ratios (-LR=0.22–0.31) of the Child SCAT5 scores. Conclusions: The symptom evaluation was the most effective component of the Child SCAT5 for differentiating between concussed and non-concussed middle school children on the same day as a suspected concussion.
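Predictive values such as the NPVs reported above depend on the prevalence of the condition in the sample as well as on sensitivity and specificity. The sketch below shows the standard Bayes-rule computation with hypothetical inputs; it is not the authors' analysis.

```python
# Sketch: predictive values from sensitivity, specificity, and prevalence
# (standard Bayes computation; the input values are hypothetical).
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        (1 - sensitivity) * prevalence + specificity * (1 - prevalence))
    return ppv, npv

ppv, npv = predictive_values(sensitivity=0.80, specificity=0.70, prevalence=0.44)
print(f"PPV={ppv:.2f} NPV={npv:.2f}")
```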

