Independent, external validation of clinical prediction rules for the identification of extended-spectrum β-lactamase-producing Enterobacterales, University Hospital Basel, Switzerland, January 2010 to December 2016

2020
Vol 25 (26)
Author(s):
Isabelle Vock
Lisandra Aguilar-Bultet
Adrian Egli
Pranita D Tamma
Sarah Tschudin-Sutter

Background Algorithms for predicting infection with extended-spectrum β-lactamase-producing Enterobacterales (ESBL-PE) on hospital admission or in patients with bacteraemia have been proposed, aiming to optimise empiric treatment decisions. Aim We sought to confirm the external validity and transferability of two published prediction models, as well as of their integral components. Methods We performed a retrospective case–control study at University Hospital Basel, Switzerland. Consecutive patients with ESBL-producing Escherichia coli or Klebsiella pneumoniae isolated from blood samples between 1 January 2010 and 31 December 2016 were included. For each case, three non-ESBL-producing controls matched for date of detection and bacterial species were identified. The main outcome measure was the ability to accurately predict infection with ESBL-PE, assessed by measures of discrimination and calibration. Results Overall, 376 patients (94 cases, 282 controls) were analysed. Both prediction models showed adequate calibration but poor discrimination for ESBL-PE infection (area under the receiver operating characteristic curve: 0.627 and 0.651). A history of ESBL-PE colonisation or infection was the single most predictive independent risk factor for ESBL-PE infection, with high specificity (97%), low sensitivity (34%) and balanced positive and negative predictive values (80% and 82%). Conclusions Applying published prediction models to institutions other than those they were derived from may result in substantial misclassification of patients considered at risk, potentially leading to incorrect allocation of antibiotic treatment and, in the long term, negatively affecting patient outcomes and overall resistance rates. Future prediction models need to address differences in local epidemiology by allowing for customisation to different settings.
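
As an illustration of the arithmetic behind the reported predictive values (a minimal sketch, not the authors' code), the PPV and NPV follow from sensitivity, specificity and the 1:3 case-to-control ratio via Bayes' rule; the `predictive_values` helper below is ours:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    tp = sensitivity * prevalence              # true-positive probability mass
    fp = (1 - specificity) * (1 - prevalence)  # false-positive probability mass
    tn = specificity * (1 - prevalence)        # true-negative probability mass
    fn = (1 - sensitivity) * prevalence        # false-negative probability mass
    return tp / (tp + fp), tn / (tn + fn)

# Figures reported above: sensitivity 34%, specificity 97%, and a 1:3
# case-to-control ratio (94/376, i.e. ~25% of the analysed sample).
ppv, npv = predictive_values(0.34, 0.97, 94 / 376)
print(f"PPV ≈ {ppv:.0%}, NPV ≈ {npv:.0%}")  # ≈ 79% and 82%, matching the ~80%/82% above
```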

Author(s):  
Luma Cordeiro Rodrigues
Silvia Ferrite
Ana Paula Corona

Abstract Purpose This article investigates the validity of smartphone-based audiometry as a hearing screen to identify hearing loss in workers exposed to noise. Research Design This is a validation study comparing hearing screening with the hearTest to conventional audiometry. The study population included all workers who attended the Brazilian Social Service of Industry to undergo periodic examinations. Sensitivity, specificity, the Youden index, and positive (PPV) and negative (NPV) predictive values for hearing screening with the hearTest were estimated according to three definitions of hearing loss: any threshold greater than 25 dB hearing level (HL); mean auditory thresholds for 0.5, 1, 2, and 4 kHz greater than 25 dB HL; and mean thresholds for 3, 4, and 6 kHz greater than 25 dB HL. The 95% confidence intervals were calculated for all measurements. Results A total of 232 workers participated in the study. Hearing screening with the hearTest presented good sensitivity (93.8%), specificity (83.9%), and Youden index (77.7%) values, a high NPV (97.2%), and a low PPV (69.0%) for the identification of hearing loss defined as any auditory threshold greater than 25 dB HL. For the other definitions of hearing loss, we observed high specificity, PPV and NPV, as well as low sensitivity and Youden index. Conclusion The hearTest is an accurate hearing screening tool for identifying hearing loss in workers exposed to noise, including those with noise-induced hearing loss, although it does not replace conventional audiometry.
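
For readers wanting to reproduce such screening metrics, here is a minimal Python sketch using statsmodels; the 2×2 counts are hypothetical (the abstract reports only percentages) and were chosen to roughly match the figures quoted above:

```python
from statsmodels.stats.proportion import proportion_confint

# Hypothetical 2x2 counts chosen to roughly match the reported percentages.
tp, fn = 45, 3    # hearing loss: screen-positive / screen-negative
tn, fp = 154, 30  # normal hearing: screen-negative / screen-positive

sens = tp / (tp + fn)     # ~93.8%
spec = tn / (tn + fp)     # ~83.7%
youden = sens + spec - 1  # Youden index J = sensitivity + specificity - 1

sens_lo, sens_hi = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_lo, spec_hi = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")
print(f"sensitivity {sens:.1%} (95% CI {sens_lo:.1%}-{sens_hi:.1%})")
print(f"specificity {spec:.1%} (95% CI {spec_lo:.1%}-{spec_hi:.1%})")
print(f"Youden index {youden:.1%}")
```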


2020
Vol 2020
pp. 1-7
Author(s):
Yuwei Cheng
Elijah Paintsil
Musie Ghebremichael

The syndromic diagnosis of sexually transmitted infections (STIs) is widely recognized as the most practical, feasible, and cost-effective diagnostic tool in resource-limited settings. This study assessed the diagnostic accuracy of syndromic versus laboratory testing for STIs among 794 men randomly selected from the Moshi district of Tanzania. Participants were interviewed with a questionnaire that included questions on history of STI symptoms. Blood and urine samples were taken from the participants for laboratory testing. Only 7.9% of the men reported any STI symptoms; however, 46% of them tested positive for at least one STI. There was little agreement between syndromic and laboratory-confirmed diagnoses, with low sensitivity (0.4%–7.4%) and high specificity (96%–100%) observed for each individual symptom. The area under the receiver operating characteristic curve was 0.528 (95% CI: 0.505–0.550), indicating that the syndromic approach had only a 52.8% probability of ranking a randomly chosen infected participant above an uninfected one, barely better than chance. In conclusion, whenever possible, laboratory diagnosis of STIs should be favored over syndromic diagnosis.
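
The AUC quoted here has a direct probabilistic reading: it is the chance that a randomly chosen infected man is ranked above an uninfected one. A minimal sketch with simulated data (the association strengths are our assumptions, not the study's data) shows why a weakly informative binary symptom yields an AUC near 0.5:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical reconstruction: 794 men, ~46% infected, few reporting symptoms,
# with symptom reporting only weakly related to true infection status.
infected = rng.random(794) < 0.46
symptomatic = rng.random(794) < np.where(infected, 0.10, 0.06)

# For a binary predictor, AUC = (sensitivity + specificity) / 2, which equals
# the probability that a random infected man outranks a random uninfected one.
print(f"AUC = {roc_auc_score(infected, symptomatic):.3f}")  # close to 0.5
```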


Author(s):  
Jenny Klimpel
Lorenz Weidhase
Michael Bernhard
André Gries
Sirak Petros

Abstract Background Sepsis is defined as a life-threatening organ dysfunction due to a dysregulated inflammatory response to infection. However, the impact of this definition on patient care is not fully clear. This study investigated the impact of the current definition on ICU admission of patients with infection. Methods We performed a prospective observational study over twelve months on consecutive patients presenting to our emergency department and admitted for infection. We analyzed the predictive values of the quick sequential organ failure assessment (qSOFA) score, the SOFA score and blood lactate regarding ICU admission. Results We included 916 patients with a diagnosis of infection. Median age was 74 years (IQR 62–82 years), and 56.3% were male. There were 219 direct ICU admissions and 697 general ward admissions. A qSOFA score of ≥2 points had 52.9% sensitivity and 98.3% specificity regarding sepsis diagnosis, and 87.2% specificity but only 39.9% sensitivity for predicting ICU admission. A SOFA score of ≥2 points had 97.4% sensitivity but only 17.1% specificity for predicting ICU admission, while a SOFA score of ≥4 points predicted ICU admission with 82.6% sensitivity and 71.7% specificity. The area under the receiver operating characteristic curve regarding ICU admission was 0.81 (95% CI, 0.77–0.86) for the SOFA score, 0.55 (95% CI, 0.48–0.61) for blood lactate, and only 0.34 (95% CI, 0.28–0.40) for qSOFA on emergency department presentation. Conclusions While a positive qSOFA score had high specificity regarding ICU admission, the low sensitivity of the score among septic patients as well as among ICU admissions considerably limits its value in routine patient management. The SOFA score was the better predictor of ICU admission, while the predictive value of blood lactate was equivocal.
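
A minimal sketch of how such threshold-dependent operating characteristics and the overall AUC are computed from a severity score; the simulated SOFA distributions are our assumptions, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical SOFA scores: 219 ICU admissions (higher scores on average)
# and 697 ward admissions, mirroring the cohort sizes above.
icu = np.r_[np.ones(219, dtype=bool), np.zeros(697, dtype=bool)]
sofa = np.r_[rng.poisson(6, 219), rng.poisson(2, 697)]

# Operating characteristics at the two cut-offs discussed in the abstract.
for cutoff in (2, 4):
    pred = sofa >= cutoff
    sens = pred[icu].mean()      # fraction of ICU admissions flagged
    spec = (~pred[~icu]).mean()  # fraction of ward admissions not flagged
    print(f"SOFA >= {cutoff}: sensitivity {sens:.1%}, specificity {spec:.1%}")

print(f"AUC = {roc_auc_score(icu, sofa):.2f}")
```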


2019
Vol 54 (3)
pp. 1900224
Author(s):
Sanja Stanojevic
Jenna Sykes
Anne L. Stephenson
Shawn D. Aaron
George A. Whitmore

Introduction We aimed to develop a clinical tool for predicting the 1- and 2-year risk of death for patients with cystic fibrosis (CF). The model considers patients' overall health status as well as the risk of intermittent shock events in calculating the risk of death. Methods Canadian CF Registry data from 1982 to 2015 were used to develop a predictive risk model using threshold regression. The 2-year risk of death was estimated from the conditional probability of surviving the second year given survival through the first year. UK CF Registry data from 2007 to 2013 were used to externally validate the model. Results The combined effect of CF chronic health status and CF intermittent shock risk provided a simple clinical scoring tool for assessing the 1-year and 2-year risk of death for an individual CF patient. At a threshold risk of death of ≥20%, the 1-year model had a sensitivity of 74% and specificity of 96%. The area under the receiver operating characteristic curve (AUC) for the 2-year mortality model was significantly greater than the AUC for a model that predicted survival based on a forced expiratory volume in 1 s <30% predicted (AUC 0.95 versus 0.68, respectively; p<0.001). The Canadian-derived model validated well with the UK data, correctly identifying 79% of deaths and 95% of survivors in a single year in the UK. Conclusions The prediction models provide an accurate risk of death over 1- and 2-year time horizons. The models performed equally well when validated in an independent UK CF population.
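
The conditional construction of the 2-year risk reduces to a simple probability identity: the chance of surviving both years is the product of the 1-year survival and the conditional second-year survival. A minimal sketch with hypothetical risks (not values from the paper):

```python
def two_year_risk(risk_year1, risk_year2_given_survival):
    """Overall 2-year risk of death from the 1-year risk and the conditional
    second-year risk: 1 - P(survive year 1) * P(survive year 2 | survived year 1)."""
    return 1 - (1 - risk_year1) * (1 - risk_year2_given_survival)

# Hypothetical patient: 15% 1-year risk, 12% second-year risk given survival.
print(f"2-year risk of death = {two_year_risk(0.15, 0.12):.1%}")  # 25.2%
```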


2019
Vol 14 (4)
pp. 506-514
Author(s):
Pavan Kumar Bhatraju
Leila R. Zelnick
Ronit Katz
Carmen Mikacenic
Susanna Kosamo
...  

Background and objectives Critically ill patients with worsening AKI are at high risk for poor outcomes. Predicting which patients will experience progression of AKI remains elusive. We sought to develop and validate a risk model for predicting severe AKI within 72 hours after intensive care unit admission. Design, setting, participants, & measurements We applied least absolute shrinkage and selection operator (LASSO) regression methodology to two prospectively enrolled, critically ill cohorts of patients who met criteria for the systemic inflammatory response syndrome, enrolled within 24–48 hours after hospital admission. The risk models were derived and internally validated in 1075 patients and externally validated in 262 patients. Demographics and laboratory and plasma biomarkers of inflammation or endothelial dysfunction were used in the prediction models. Severe AKI was defined as Kidney Disease Improving Global Outcomes (KDIGO) stage 2 or 3. Results Severe AKI developed in 62 (8%) patients in the derivation, 26 (8%) patients in the internal validation, and 15 (6%) patients in the external validation cohorts. In the derivation cohort, a three-variable model (age, cirrhosis, and soluble TNF receptor-1 concentrations [ACT]) had a c-statistic of 0.95 (95% confidence interval [95% CI], 0.91 to 0.97). The ACT model performed well in the internal (c-statistic, 0.90; 95% CI, 0.82 to 0.96) and external (c-statistic, 0.93; 95% CI, 0.89 to 0.97) validation cohorts. The ACT model had moderate positive predictive values (0.50–0.95) and high negative predictive values (0.94–0.95) for severe AKI in all three cohorts. Conclusions ACT is a simple, robust model that could be applied to improve risk prognostication and better target clinical trial enrollment in critically ill patients with AKI.
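
A minimal sketch of the LASSO idea used here, i.e. L1-penalised logistic regression that shrinks uninformative coefficients to exactly zero and so yields a sparse model like ACT; the data and penalty strength are illustrative assumptions, not the authors' analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Hypothetical derivation cohort: 1075 patients, 3 informative predictors
# (stand-ins for age, cirrhosis, sTNFR-1) plus 5 noise variables.
X = rng.normal(size=(1075, 8))
logit = -3.0 + 1.2 * X[:, 0] + 0.9 * X[:, 1] + 1.5 * X[:, 2]
y = rng.random(1075) < 1 / (1 + np.exp(-logit))

# The L1 penalty drives coefficients of uninformative variables to exactly
# zero, which is how a sparse three-variable model can emerge.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X, y)
print(model.named_steps["logisticregression"].coef_.round(2))
```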


2020
Vol 20 (1)
Author(s):
Morenike Oluwatoyin Folayan
Peter Alimi
Micheal O. Alade
Maha El Tantawi
Abiola A. Adeniyi
...  

Abstract Background To determine the validity of maternal reports of the presence of early childhood caries (ECC), and to identify maternal variables that increase the accuracy of the reports. Methods This secondary data analysis included 1155 mother–child dyads, recruited through a multi-stage sampling household approach in Ile-Ife, Nigeria. Survey data included maternal characteristics (age, monthly income, decision-making ability) and maternal perception of whether or not her child (age 6 months to 5 years) had ECC. Presence of ECC was clinically determined using the dmft index. Maternally reported and clinically determined ECC presence were compared using a chi-squared test. McNemar's test was used to assess the agreement between maternal and clinical reports of ECC. Sensitivity, specificity, positive and negative predictive values, absolute bias, relative bias and inflation factor were calculated. Statistical significance was set at p < 0.05. Results The clinically determined ECC prevalence was 4.6% (95% confidence interval [CI]: 3.5–5.0) while the maternally reported ECC prevalence was 3.4% (CI 2.4–4.6). Maternal reports underestimated the prevalence of ECC by 26.1% in comparison with the clinical evaluation. The results indicate low sensitivity (9.43%; CI 3.13–20.70) but high specificity (96.9%; CI 95.7–97.9). The positive predictive value was 12.8% (CI 4.3–27.4) while the negative predictive value was 95.7% (CI 94.3–96.8). The inflation factor for maternally reported ECC was 1.4. Sensitivity (50.0%; CI 6.8–93.2) and positive predictive value (33.3%; CI 4.3–77.7) were highest when the child had a history of visiting the dental clinic. Conclusions Mothers under-reported the presence of ECC in their children in this study population. The low sensitivity and positive predictive value of maternal reports of ECC indicate that maternal reporting cannot serve as a valid tool for measuring ECC in public health surveys. The high specificity and negative predictive values indicate that maternal report is a good measure of the absence of ECC in the study population. A child's history of dental service utilization may be a proxy measure of the presence of ECC.
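
For illustration, here is the paired 2×2 table implied by the percentages above (counts reconstructed approximately: 53 clinical cases, 39 maternal reports) and McNemar's test in Python; this is a sketch, not the authors' analysis code:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired table (rows: maternal report yes/no; columns: clinical exam yes/no),
# counts reconstructed approximately from the reported percentages.
table = np.array([[5, 34],      # mother yes: 5 true positives, 34 false positives
                  [48, 1068]])  # mother no: 48 missed cases, 1068 true negatives

result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(f"McNemar p = {result.pvalue:.2e}")

sens = table[0, 0] / table[:, 0].sum()  # 5/53 ≈ 9.4%
spec = table[1, 1] / table[:, 1].sum()  # 1068/1102 ≈ 96.9%
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```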


2009
Vol 66 (12)
pp. 992-997
Author(s):
Zorica Lepsanovic
Dejana Savic
Branka Tomanovic

Background/Aim. Traditional methods for the detection of mycobacteria, such as microscopic examination for acid-fast bacilli and isolation of the organism by culture, either have low sensitivity and/or specificity or take weeks before a definite result is available. Molecular methods, especially those based on nucleic acid amplification, are rapid diagnostic methods that combine high sensitivity and high specificity. The aim of this study was to determine the usefulness of the Cobas Amplicor Mycobacterium tuberculosis polymerase chain reaction (CA-PCR) assay in detecting the causative agent of tuberculosis in respiratory and nonrespiratory specimens, compared with culture. Methods. Specimens were decontaminated by the N-acetyl-L-cysteine–NaOH method. A 500 µL aliquot of the processed specimen was used to inoculate Löwenstein-Jensen (L-J) slants, a drop for acid-fast staining, and 100 µL for PCR. The Cobas Amplicor PCR was performed according to the manufacturer's instructions. Results. A total of 110 respiratory and 355 nonrespiratory specimens were investigated. After resolving discrepancies by reviewing medical history, the overall sensitivity, specificity, and positive and negative predictive values of the CA-PCR assay compared with culture were 83%, 100%, 100%, and 96.8%, respectively, for respiratory specimens. For nonrespiratory specimens they were 50%, 99.7%, 87.5%, and 98%, respectively. The inhibition rate was 2.8% for respiratory and 7.6% for nonrespiratory specimens. Conclusion. CA-PCR is a reliable assay that enables specialists to start treatment promptly on a positive test result. The lower specificity in the nonrespiratory group is a consequence of the extremely small number of mycobacteria in some of those specimens (such paucibacillary specimens can be PCR-positive but culture-negative, counting as false positives against the culture reference).


Author(s):  
Christine D. Butkiewicz
Cody J. Alcott
Janelle Renschler
Lawrence J. Wheat
Lisa F. Shubitz

Abstract OBJECTIVE To evaluate the utility of enzyme immunoassays (EIAs) for the detection of Coccidioides antigen and antibody in CSF for the diagnosis of CNS coccidioidomycosis in dogs. ANIMALS 51 dogs evaluated for CNS disease at a single specialty center in Tucson in 2016. PROCEDURES Excess CSF remaining after routine analysis was banked following collection from dogs presented to the neurology service. Samples were tested by EIA for the presence of Coccidioides antigen and antibody. Clinical data were collected retrospectively from medical records. RESULTS 22 dogs were diagnosed with CNS coccidioidomycosis (CCM); the remainder had another neurologic disease (non-CCM). The two groups overlapped in presenting complaints, MRI results, and routine CSF analysis results. Four dogs, all with CCM, had positive antigen EIA results. With the clinical diagnosis used as the reference standard, CSF antigen testing had low sensitivity (20%) but high specificity (100%) for the diagnosis of CCM. Ten dogs with CCM and 4 dogs with other diagnoses had antibody detected in CSF by EIA. The sensitivity of CSF antibody testing was 46%, specificity was 86%, and positive and negative predictive values for the study population were 71% and 68%, respectively. CLINICAL RELEVANCE Diagnosis of CNS coccidioidomycosis in dogs in an endemic region was hampered by the overlap of clinical signs with other neurologic disorders and the low sensitivity of confirmatory diagnostics. The evaluated Coccidioides-specific EIAs performed on CSF can aid in the diagnosis. A prospective study is warranted to corroborate and refine these preliminary findings.


1999
Vol 175 (6)
pp. 537-543
Author(s):
Shazad Amin
Swaran P. Singh
John Brewin
Peter B. Jones
Ian Medley
...  

Background The temporal stability of a diagnosis is one measure of its predictive validity. Aims To measure diagnostic stability in first-episode psychosis using ICD–10 and DSM–III–R. Method Between 1992 and 1994 we ascertained a cohort of persons with first-episode psychosis (n=168), assigning to each a consensus diagnosis. At three-year follow-up, longitudinal consensus diagnoses, blind to onset diagnoses, were made. Stability was measured by the positive predictive values (PPVs) of onset diagnoses. For onset schizophrenia, we also calculated sensitivity, specificity and concordance (κ). Results First-episode ICD–10 and DSM–III–R schizophrenia had a PPV of over 80% at three years. Over one-third of cases with ICD–10 F20 schizophrenia at three years had non-schizophrenia diagnoses at onset. Manic psychoses showed the highest PPV (91%). For onset schizophrenia, both systems had high specificity (ICD–10: 89%; DSM–III–R: 93%) but low sensitivity (ICD–10: 64%; DSM–III–R: 51%) and moderate concordance (ICD–10: κ=0.54; DSM–III–R: κ=0.46). Conclusions Bipolar disorders and schizophrenia showed the highest stability. DSM–III–R schizophrenia did not have greater stability than ICD–10 schizophrenia.
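
A minimal sketch of the two stability measures used here, the PPV of the onset diagnosis and chance-corrected concordance (Cohen's κ); the diagnosis labels are simulated, not the cohort's data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)

# Simulated onset vs 3-year diagnoses (1 = schizophrenia, 0 = other) for a
# cohort of 168, with the follow-up diagnosis agreeing ~80% of the time.
onset = rng.integers(0, 2, 168)
follow_up = np.where(rng.random(168) < 0.8, onset, 1 - onset)

kappa = cohen_kappa_score(onset, follow_up)  # chance-corrected concordance
ppv = follow_up[onset == 1].mean()           # stability: P(same diagnosis at 3 years | onset diagnosis)
print(f"kappa = {kappa:.2f}, PPV = {ppv:.0%}")
```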


2021
Author(s):
Andrew J. Lawrence
Daniel Stahl
Suqian Duan
Diede Fennema
Tanja Jaeckle
...  

Abstract Background Overgeneralised self-blaming emotions, such as self-disgust, are core symptoms of major depressive disorder (MDD) and prompt specific actions (i.e. "action tendencies"), which are more functionally relevant than the emotions themselves. We have recently shown, using a novel cognitive task, that when feeling self-blaming emotions, maladaptive action tendencies (feeling like "hiding" and like "creating a distance from oneself") and an overgeneralised perception of control are characteristic of MDD, even after remission of symptoms. Here, we probed the potential of this cognitive signature, and its combination with previously employed fMRI measures, to predict individual recurrence risk. For this purpose, we developed a user-friendly hybrid machine-/statistical-learning tool, which we make freely available. Methods 52 medication-free remitted MDD patients, who had completed the Action Tendencies Task and our self-blame fMRI task at baseline, were followed up clinically over 14 months to determine recurrence. Prospective prediction models included baseline maladaptive self-blame-related action tendencies and anterior temporal fMRI connectivity patterns across a set of fronto-limbic a priori regions of interest, as well as established clinical and standard psychological predictors. Prediction models used elastic-net regularised logistic regression with nested 10-fold cross-validation. Results Cross-validated discrimination was highly promising (AUC ≥ 0.86), and positive predictive values over 80% were achieved when fMRI was included in multi-modal models, but only up to 71% (AUC ≤ 0.74) when relying solely on cognitive and clinical measures. Conclusions This shows the high potential of multi-modal signatures of self-blaming biases to predict recurrence risk at an individual level, and calls for external validation in an independent sample.
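
A minimal sketch of elastic-net regularised logistic regression with nested cross-validation as described, using scikit-learn; the data, parameter grid and fold counts are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in data: 52 patients with a handful of baseline features.
X, y = make_classification(n_samples=52, n_features=10, n_informative=4,
                           random_state=0)

# The inner CV tunes the elastic-net penalty; the outer CV scores the tuned
# model on held-out folds, so hyperparameter selection never sees the test
# fold (the "nested" part).
inner = GridSearchCV(
    make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="elasticnet", solver="saga", max_iter=5000),
    ),
    param_grid={"logisticregression__C": [0.01, 0.1, 1.0],
                "logisticregression__l1_ratio": [0.2, 0.5, 0.8]},
    cv=StratifiedKFold(5), scoring="roc_auc",
)
outer_auc = cross_val_score(inner, X, y, cv=StratifiedKFold(10), scoring="roc_auc")
print(f"nested-CV AUC = {outer_auc.mean():.2f} ± {outer_auc.std():.2f}")
```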

