The Febrile Infant

PEDIATRICS ◽  
1994 ◽  
Vol 94 (3) ◽  
pp. 397-399 ◽  
Author(s):  
Paul L. McCarthy

There have now been three large prospective studies of febrile infants published within the past 3 years.1-3 Each reports on over 500 patients. Two of the reports, that of Baskin et al1 and that of Jaskiewicz et al,3 in the current issue of Pediatrics, focus on infants meeting low-risk criteria for serious bacterial illness. These two studies ask the question: "If the febrile infant meets these selected low-risk criteria, then with what degree of diagnostic certainty can the examining physician rule out a serious illness?" Statistically, this index of diagnostic certainty is termed negative predictive value. Jaskiewicz et al studied 511 low-risk febrile infants and used 437 of these patients to calculate negative predictive value, which was 98.9%.
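For readers less familiar with the statistic, negative predictive value is the proportion of patients with a negative (here, low-risk) evaluation who truly do not have the disease. A worked form using the figures above (the individual counts are back-calculated from the reported 98.9% and 437 patients as an illustration; they are not quoted from the study):

```latex
\mathrm{NPV} = \frac{\text{true negatives}}{\text{true negatives} + \text{false negatives}}
             \approx \frac{432}{432 + 5} \approx 0.989
```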

Hypertension ◽  
2017 ◽  
Vol 70 (suppl_1) ◽  
Author(s):  
Michael G Buhnerkempe ◽  
Albert Botchway ◽  
Carlos Nolasco-Morales ◽  
Vivek Prakash ◽  
Lowell Hedquist ◽  
...  

Background: Apparent treatment resistant hypertension (aTRH) is associated with increased prevalence of secondary hypertension and adverse pressure-related clinical outcomes. We previously showed that cross-sectional prevalence estimates of aTRH are lower than its true prevalence, as patients with uncontrolled hypertension undergoing intensification/optimization of therapy will, over time, increasingly satisfy diagnostic criteria for aTRH. Methods: aTRH (SBP and/or DBP at or above a clinically defined goal BP [140/90, 130/85, 130/80, or 125/75 mmHg] over two consecutive office visits when on ≥ 3 antihypertensive drug classes, including a diuretic; or SBP and DBP below goal when on ≥ 4 drug classes, including a diuretic) was assessed in an urban referral hypertension clinic in 924 patients ≥ 30 years old (mean age 57.7 ± 12.6 years) with at least two follow-up visits over 240 days. Patients were mostly African-American (86%; 795/924) and female (65%; 601/924). A minority (28.7%; 265/924) were taking diuretics at their index visit, and analyses were stratified according to this use. Risk for aTRH was estimated using logistic regression with patient characteristics at the index visit as predictors. Performance of this risk score at discriminating aTRH status over follow-up was assessed using AUC and was internally validated using bootstrapping. Results: Amongst those on diuretics, 80/265 (30.2%) developed aTRH; the risk score discriminated well (AUC = 0.79, bootstrapped 95% CI [0.73, 0.84]). In patients not on a diuretic, 151/659 (22.9%) developed aTRH, and the risk score showed moderate, but significantly lower, discriminative ability (AUC = 0.71 [0.66, 0.74]; p < 0.001). In the diuretic and non-diuretic cohorts, 43/265 (16.2%) and 101/265 (38.1%) of patients, respectively, had estimated risks for development of aTRH < 10%. Of these low-risk patients, 42/43 (97.7%) and 97/101 (96.0%) did not develop aTRH (negative predictive value, diuretics – 0.95 [0.93, 1.00], no diuretics – 0.96 [0.91, 1.00]). Conclusions: We created a novel clinical score that discriminates well between those who will and will not develop aTRH, especially amongst those taking diuretics initially. Irrespective of diuretic treatment status, a low risk score had very high negative predictive value.
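As a rough illustration of the analytic approach described (a logistic-regression risk score evaluated with AUC and a bootstrapped confidence interval), a minimal sketch with synthetic data and hypothetical variable names follows; it is not the study's model or covariate list:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical index-visit predictors; the study's actual covariates are not listed in the abstract.
df = pd.DataFrame({
    "age": rng.normal(58, 13, n),
    "index_sbp": rng.normal(145, 18, n),
    "n_drug_classes": rng.integers(1, 4, n),
    "developed_atrh": rng.binomial(1, 0.25, n),
})

X, y = df[["age", "index_sbp", "n_drug_classes"]], df["developed_atrh"]
model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]
print("Apparent AUC:", roc_auc_score(y, risk))

# Simple bootstrap CI for the AUC (a full internal validation would refit the
# model in every resample to correct for optimism; this is only a sketch).
boot_aucs = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    if y.iloc[idx].nunique() < 2:   # resample must contain both outcomes
        continue
    boot_aucs.append(roc_auc_score(y.iloc[idx], risk[idx]))
print("Bootstrap 95% CI:", np.percentile(boot_aucs, [2.5, 97.5]))
```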


2018 ◽  
Vol 8 (5) ◽  
pp. 395-403 ◽  
Author(s):  
Neil Beri ◽  
Lori B Daniels ◽  
Allan Jaffe ◽  
Christian Mueller ◽  
Inder Anand ◽  
...  

Background: Copeptin in combination with troponin has been shown to have incremental value for the early rule-out of myocardial infarction, but its performance in Black patients specifically has never been examined. In light of a potential for wider use, data on copeptin in different relevant cohorts are needed. This is the first study to determine whether copeptin is equally effective at ruling out myocardial infarction in Black and Caucasian races. Methods: This analysis of the CHOPIN trial included 792 Black and 1075 Caucasian patients who presented to the emergency department with chest pain and had troponin-I and copeptin levels drawn. Results: One hundred and forty-nine patients were diagnosed with myocardial infarction (54 Black and 95 Caucasian). The negative predictive value of copeptin at a cut-off of 14 pmol/l (as in the CHOPIN study) for myocardial infarction was higher in Blacks (98.0%, 95% confidence interval (CI) 96.2–99.1%) than Caucasians (94.1%, 95% CI 92.1–95.7%). The sensitivity at 14 pmol/l was higher in Blacks (83.3%, 95% CI 70.7–92.1%) than Caucasians (53.7%, 95% CI 43.2–64.0%). After controlling for age, hypertension, heart failure, chronic kidney disease and body mass index in a logistic regression model, the interaction term had a P value of 0.03. A cut-off of 6 pmol/l showed similar sensitivity in Caucasians as 14 pmol/l in Blacks. Conclusions: This is the first study to identify a difference in the performance of copeptin to rule out myocardial infarction between Blacks and Caucasians, with increased negative predictive value and sensitivity in the Black population at a cut-off of 14 pmol/l. This also holds true for non-ST-segment elevation myocardial infarction and, although numbers were small, similar trends exist in the normal troponin population. This may have significant implications for early rule-out strategies using copeptin.
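A sketch of how cutoff-based rule-out performance (sensitivity and negative predictive value at 14 pmol/L) can be compared across subgroups; the data frame, column names and values below are hypothetical placeholders, not CHOPIN data:

```python
import pandas as pd

def rule_out_metrics(df: pd.DataFrame, cutoff: float = 14.0) -> dict:
    """Sensitivity and NPV of copeptin >= cutoff as a test for MI (low copeptin rules out)."""
    test_pos = df["copeptin_pmol_l"] >= cutoff
    tp = (test_pos & df["mi"]).sum()
    fn = (~test_pos & df["mi"]).sum()
    tn = (~test_pos & ~df["mi"]).sum()
    return {"sensitivity": tp / (tp + fn), "npv": tn / (tn + fn)}

# Hypothetical illustration only.
patients = pd.DataFrame({
    "race": ["Black"] * 4 + ["Caucasian"] * 4,
    "copeptin_pmol_l": [20, 5, 30, 6, 18, 4, 9, 25],
    "mi": [True, False, True, False, True, False, True, False],
})
for race, grp in patients.groupby("race"):
    print(race, rule_out_metrics(grp))
```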


2021 ◽  
pp. emermed-2020-210973
Author(s):  
Carmine Cristiano Di Gioia ◽  
Nicola Artusi ◽  
Giovanni Xotta ◽  
Marco Bonsano ◽  
Ugo Giulio Sisto ◽  
...  

Purpose: Early diagnosis of COVID-19 has a crucial role in confining the spread among the population. Lung ultrasound (LUS) was included in the diagnostic pathway for its high sensitivity, low costs, non-invasiveness and safety. We aimed to test the sensitivity of LUS to rule out COVID-19 pneumonia (COVIDp) in a population of patients with suggestive symptoms. Methods: Multicentre prospective observational study in three EDs in Northeastern Italy during the first COVID-19 outbreak. A convenience sample of 235 patients admitted to the ED for symptoms suggestive of COVIDp (fever, cough or shortness of breath) from 17 March 2020 to 26 April 2020 was enrolled. All patients underwent a sequential assessment involving: clinical examination, LUS, CXR and arterial blood gas. The index test under investigation was a standardised protocol of LUS compared with a pragmatic composite reference standard constituted by: clinical gestalt, real-time PCR test, radiological and blood gas results. Of the 235 enrolled patients, 90 were diagnosed with COVIDp according to the reference standard. Results: Among the patients with suspected COVIDp, the prevalence of SARS-CoV-2 was 38.3%. The sensitivity of LUS for diagnosing COVIDp was 85.6% (95% CI 76.6% to 92.1%); the specificity was 91.7% (95% CI 86.0% to 95.7%). The positive predictive value and the negative predictive value were 86.5% (95% CI 78.8% to 91.7%) and 91.1% (95% CI 86.1% to 94.4%), respectively. The diagnostic accuracy of LUS for COVIDp was 89.4% (95% CI 84.7% to 93.0%). The positive likelihood ratio was 10.3 (95% CI 6.0 to 17.9), and the negative likelihood ratio was 0.16 (95% CI 0.1 to 0.3). Conclusion: In a population with high SARS-CoV-2 prevalence, LUS has sensitivity (and negative predictive value) high enough to rule out COVIDp in patients with suggestive symptoms. The role of LUS in diagnosing patients with COVIDp is perhaps even more promising. Nevertheless, further research with adequately powered studies is needed. Trial registration number: NCT04370275.
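As a check on internal consistency, the reported likelihood ratios follow directly from the sensitivity and specificity quoted above:

```latex
LR^{+} = \frac{\text{sensitivity}}{1-\text{specificity}} = \frac{0.856}{1-0.917} \approx 10.3,
\qquad
LR^{-} = \frac{1-\text{sensitivity}}{\text{specificity}} = \frac{1-0.856}{0.917} \approx 0.16
```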


2020 ◽  
Vol 3 (Supplement_1) ◽  
pp. 59-60
Author(s):  
B D Cox ◽  
R Trasolini ◽  
C Galts ◽  
E M Yoshida ◽  
V Marquez

Abstract Background: With the rates of non-alcoholic fatty liver disease (NAFLD) on the rise, the necessity of identifying patients at risk of cirrhosis and its complications is becoming ever more important. Liver biopsy remains the gold standard for assessing fibrosis, although its costs, risks, and limited availability prohibit its widespread use for at-risk patients. Fibroscan has proven to be a non-invasive and accurate way of assessing fibrosis, although the availability of this modality is often limited in the primary care setting. The Fibrosis-4 (FIB-4) and Non-Alcoholic Fatty Liver Disease Fibrosis Score (NFS) are scoring systems which incorporate commonly measured lab parameters and BMI to predict fibrosis. In this study, we compared FIB-4 and NFS values to fibroscan scores to assess the accuracy of these inexpensive and readily available scoring systems for detecting fibrosis. Aims: The aim of this study was to determine whether non-invasive and inexpensive scoring systems (FIB-4 and NFS) can be used to rule out fibrosis in non-alcoholic fatty liver disease with comparable efficacy to fibroscan. Ultimately, we aim to demonstrate that these scoring systems can be used as an alternative to fibroscan in some patients. Methods: Data were collected from 317 patient charts from the Vancouver General Hepatology Clinic. Ninety-three patients were removed from the study due to insufficient data (missing fibroscan score or lab work necessary for FIB-4/NFS). For the remaining 224 patients, FIB-4 and NFS were calculated and compared to fibroscan scores both independently and in combination. Results: Using an NFS cut-off of -1.455 and a fibroscan score cut-off of ≥8.7 kPa, the NFS had a sensitivity of 71.9%, a specificity of 75%, and a negative predictive value of 94.1%. For a fibroscan score cut-off of ≥8.0 kPa, the NFS had a sensitivity of 66.7%, a specificity of 75.7%, and a negative predictive value of 91.5%. Using a fibroscan score cut-off of ≥8.7 kPa, the FIB-4 score had a sensitivity of 53.1%, a specificity of 84.9%, and a negative predictive value of 91.6%. For a cut-off of ≥8.0 kPa, it had a sensitivity of 51.3%, a specificity of 85.9%, and a negative predictive value of 89.3%. Conclusions: The NFS and FIB-4 are non-invasive scoring systems that have high sensitivity and negative predictive value for fibrosis when compared to fibroscan scores. These findings suggest that the NFS and FIB-4 can provide adequate reassurance to rule out fibrosis in select patients, and have promising use in the primary care setting where fibroscan access is often limited. Funding Agencies: None
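For context, both scores are simple formulas over routine laboratory values. A sketch of the commonly published formulas follows; the coefficients are taken from the original FIB-4 and NFS publications rather than from this abstract, so treat them as quoted from the wider literature, not from this study:

```python
import math

def fib4(age_years: float, ast: float, alt: float, platelets_10e9_l: float) -> float:
    """FIB-4 index: (age x AST) / (platelet count x sqrt(ALT))."""
    return (age_years * ast) / (platelets_10e9_l * math.sqrt(alt))

def nfs(age_years: float, bmi: float, ifg_or_diabetes: bool, ast: float, alt: float,
        platelets_10e9_l: float, albumin_g_dl: float) -> float:
    """NAFLD fibrosis score; values below about -1.455 argue against advanced fibrosis."""
    return (-1.675
            + 0.037 * age_years
            + 0.094 * bmi
            + 1.13 * int(ifg_or_diabetes)
            + 0.99 * (ast / alt)
            - 0.013 * platelets_10e9_l
            - 0.66 * albumin_g_dl)

# Computed indices for a hypothetical patient (illustration only).
print(fib4(age_years=55, ast=40, alt=35, platelets_10e9_l=210))
print(nfs(age_years=55, bmi=31, ifg_or_diabetes=True, ast=40, alt=35,
          platelets_10e9_l=210, albumin_g_dl=4.2))
```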


Blood ◽  
2009 ◽  
Vol 114 (22) ◽  
pp. 1328-1328
Author(s):  
Prapti A. Patel ◽  
Catherine Burke ◽  
Karen Matevosyan ◽  
Eugene P. Frenkel ◽  
Ravindra Sarode ◽  
...  

Abstract 1328, Poster Board I-350. Background: Heparin-induced thrombocytopenia (HIT) is a clinicopathologic diagnosis based on pretest clinical assessment, aided by the 4T score, and confirmed by laboratory testing for the presence of anti-heparin-platelet factor 4 antibody (HIT Ab). Prompt and accurate diagnosis of HIT is paramount due to an extraordinarily high risk of thrombosis, and to the inherent bleeding risk and high cost of direct thrombin inhibitors (DTI). The polyspecific enzyme-linked immunosorbent assay (poly-ELISA) for the HIT Ab is the most commonly available test; it detects IgG, IgM and IgA HIT Ab. The IgG-specific ELISA detects only IgG HIT Ab, the antibody class known to cause HIT. The use of a second-step ELISA with high-dose heparin in the reagent improves specificity by demonstrating heparin-dependence of the antibody detected. The 4T score was developed to predict the probability of HIT. This score takes into account the severity of thrombocytopenia, the timing of the platelet fall in relation to heparin use, the presence of new thrombosis, and other causes of thrombocytopenia. The high negative predictive value of the 4T score has been validated in multiple studies (Bryant et al, BJH 2008). However, the polyspecific ELISA was used in most of these studies, increasing the possibility of false-positive tests. Study: We have collected a database of patients tested for HIT at our institution, where the IgG-specific ELISA with high-dose heparin inhibition is used to detect the HIT Ab. We performed a retrospective review of the last 165 ELISAs performed and the clinical circumstances of the testing. We hypothesized that the high negative predictive value of the 4T score, combined with the more specific IgG-specific ELISA, could be used to rule out HIT and avoid the cost of testing and empiric use of DTI. Results: 4T scores of 165 patients were analyzed and compared to the results of the HIT Ab testing. The distribution of optical density (OD) units of the ELISA according to 4T score is shown in Figure 1. Of the 165 patients, 107 (64%) had a 4T score of 0-3. Of those 107 patients, 2 had an OD >0.4; both showed no significant inhibition with the addition of high-dose heparin (Table 1). Thus none of the 107 patients had a positive IgG-specific ELISA for HIT Ab. The 4T score therefore had a sensitivity of 100% and a specificity of 71% for a positive IgG-specific ELISA, translating to a positive predictive value of 26% and a negative predictive value of 100% (Table 2). Conclusion: Based on our data, we conclude that patients with low 4T scores (0-3) are highly unlikely to have HIT. Therefore, we propose that patients with a low 4T score do not need laboratory workup or empiric treatment for HIT. Since the majority of patients suspected to have HIT have low 4T scores, reserving testing and empiric therapy for patients with intermediate and high 4T scores can lead to significant cost savings and avoidance of potentially devastating bleeding complications of DTI therapy. Disclosures: No relevant conflicts of interest to declare.
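A worked form of the headline figure, using only the counts stated above (107 low-score patients, none with a positive IgG-specific ELISA). The total number of ELISA-positive patients, roughly 15 as implied by the 71% specificity and 26% positive predictive value, is a back-calculation rather than a reported count:

```latex
\mathrm{NPV} = \frac{TN}{TN + FN} = \frac{107}{107 + 0} = 100\%
```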


2001 ◽  
Vol 22 (08) ◽  
pp. 481-484 ◽  
Author(s):  
M. Sigfrido Rangel-Frausto ◽  
Samuel Ponce-de-León-Rosales ◽  
Claudia Martinez-Abaroa ◽  
Kaare Hasløv

Abstract Objective: To compare the performance of three purified protein derivative (PPD) formulations: Tubersol (Connaught); RT23, Statens Serum Institut (SSI); and RT23, Mexico, tested in Mexican populations at low and high risk for tuberculosis (TB). Design: A double-blinded clinical trial. Setting: A university hospital in Mexico City. Participants: The low-risk population was first- or second-year medical students with no patient contact; the high-risk population was healthcare workers at a university hospital. Methods: Each of the study subjects received the three different PPD preparations. Risk factors for TB, including age, gender, occupation, bacille Calmette-Guérin (BCG) status, and TB exposure, were recorded. A 0.1-mL aliquot of each preparation was injected in the left and right forearms of volunteers using the Mantoux technique. Blind readings were done 48 to 72 hours later. Sensitivity and specificity were calculated at 10 mm of induration using Tubersol as the reference standard. The SSI tested the potency of the different PPD preparations in previously sensitized guinea pigs. Results: The low-risk population had a prevalence of positive PPD of 26%. In the low-risk population, RT23 prepared in Mexico, compared to the 5 TU of Tubersol, had a sensitivity of 51%, a specificity of 100%, a positive predictive value of 100%, and a negative predictive value of 86%. The RT23 prepared at the SSI had a sensitivity of 69%, a specificity of 99%, a positive predictive value of 95%, and a negative predictive value of 90%. In the high-risk population, the prevalence of positive PPD was 57%. The RT23 prepared in Mexico had a sensitivity of 33%, a specificity of 100%, and a negative predictive value of 53%; the RT23 prepared at the SSI had a sensitivity of 91%, a specificity of 98%, a positive predictive value of 98%, and a negative predictive value of 89%. RT23 used in Mexico had a potency of only 23% of that of the control. There was no statistical association between previous BCG vaccination and a positive PPD (relative risk, 0.97; 95% confidence interval, 0.76-1.3; P=.78). Conclusions: Healthcare workers had twice the prevalence of positive PPD compared to medical students. RT23 prepared in Mexico had a low sensitivity in both populations compared to 5 TU of Tubersol and RT23 prepared at the SSI. Previous BCG vaccination did not correlate with a positive PPD. Low potency of the RT23 preparation in Mexico was confirmed in guinea pigs. Best intentions in a TB program are not enough if they are not followed by high-quality control.


2021 ◽  
Vol 5 (1) ◽  
pp. 030-037
Author(s):  
Cottel Nathalie ◽  
Dieme Aïcha ◽  
Orcel Véronique ◽  
Chantran Yannick ◽  
Bourgoin-Heck Mélisande ◽  
...  

Background: In France, from 30% to 35% of children suffer from multiple food allergies (MFA). The gold standard to diagnose a food allergy is the oral food challenge (OFC), which is conducted in a hospital setting due to the risk of anaphylaxis. The aim of this study was to evaluate an algorithm to predict OFCs at low risk of anaphylaxis that could safely be performed in an office-based setting. Methods: Children with MFA and at least one open OFC, reactive or non-reactive to other allergens, were included. The algorithm was based on multiple clinical and biological parameters related to food allergens, and designed mainly to predict "low-risk" OFCs, i.e., those practicable in an office-based setting. The algorithm was secondarily tested in a validation cohort. Results: Ninety-one children (median age 9 years) were included; 94% had at least one allergic comorbidity, with an average of three OFCs per child. Of the 261 OFCs analyzed, most (192/261, 74%) were non-reactive. The algorithm failed to correctly predict 32 OFCs, with potentially detrimental consequences, but among these only three children had severe symptoms. One hundred eighty-four of the 212 "low-risk" OFCs (88%) were correctly predicted, with a high positive predictive value (87%) and a low negative predictive value (44%). These results were confirmed in a validation cohort, giving a specificity of 98% and a negative predictive value of 100%. Conclusion: This study suggests that the algorithm we present here can predict "low-risk" OFCs in children with MFA, which could be safely conducted in an office-based setting. Our results must be confirmed with an algorithm-based machine-learning approach.


Blood ◽  
2018 ◽  
Vol 132 (Supplement 1) ◽  
pp. 981-981
Author(s):  
Bhavya S Doshi ◽  
Rachel S Rogers ◽  
Hilary B Whitworth ◽  
Emily Stabnick ◽  
Jessica Britton ◽  
...  

Abstract Von Willebrand disease (VWD) is the most common inherited bleeding disorder and is diagnosed via 3 cardinal features: 1) personal history of bleeding, 2) laboratory assays and 3) family history of VWD. The diagnosis of VWD in pediatric patients is complicated by von Willebrand factor (VWF) inter- and intra-assay variability, phlebotomy-induced physiologic stress increasing VWF levels from baseline, and a lack of prior hemostatic challenges. Given these challenges, NHLBI guidelines recommend repeated testing in patients with mildly low or normal levels and a high suspicion of VWD. However, no studies to date have evaluated the utility of repeat VWF testing in pediatric patients. Currently, our center's standard of care is to complete 3 separate sets of VWF testing to rule out VWD. The primary objective of this study was to determine the clinical variables associated with requiring more than 1 test to diagnose VWD and to establish a cutoff value for the first set of VWF assays above which further testing would not be informative. This single-center retrospective cohort study included patients ≤ 18 years of age who either had a diagnosis of or evaluation for VWD between January 2012 and July 2017. Patients were excluded if the VWD laboratory evaluation was completed at another institution or due to the presence of another bleeding disorder. Medical charts were abstracted for demographic information, medications, reason for testing, family history of VWD, results of VWF assays, and other illness at the time of evaluation. All patients had a retrospective ISTH BAT score completed. Data were analyzed using SAS and are reported as median (IQR). Statistical analysis was done with non-parametric tests (Mann-Whitney or Wilcoxon sign-rank) for two-group comparisons. Odds ratios were calculated using Fisher's Exact test for clinical factors associated with a VWD diagnosis. Univariate logistic regression was performed, modeling the odds of requiring more than 1 diagnostic test to diagnose VWD. One thousand unique patients were evaluated and 189 excluded, yielding a final cohort of 811 patients. Of these, 631 (77.8%) did not have VWD and 180 (22.2%) were diagnosed with VWD. Patients diagnosed with VWD were younger than those without (median age 5.8 vs 8.5 years, p=0.0019) and were more likely to have a family history of VWD (38% vs 22%, p < 0.0001), but there was no difference in race or sex between cohorts. As expected, patients in the VWD cohort had lower VWF activity (34 vs 78 IU/dL, p < 0.0001), VWF antigen (45 vs 89 IU/dL, p < 0.0001) and FVIII (57 vs 100 IU/dL, p < 0.0001) than those without VWD. ISTH BAT scores were higher in the VWD cohort (2.47 vs. 2.07, p = 0.027). As shown in Table 1 and Figure 1, increased odds of VWD diagnosis were noted in those tested due to a family history of VWD (OR 1.75, 95% CI 1.21-2.51) or abnormal coagulation studies (OR 1.61, 95% CI 1.07-2.24). Subjects with VWD required fewer tests than those without VWD (median 1 vs 2, p < 0.001). Univariate analysis failed to identify any significant associations with needing > 1 test for the diagnosis of VWD (Table 1), so a multivariable model was not performed. In 69.4% (125/180) of subjects with VWD, the first test was diagnostic. In receiver operating characteristic (ROC) curve analysis, the first VWF antigen and activity had high power for the diagnosis of VWD, with AUCs of 0.88 and 0.92, respectively (Fig 1).
A cutoff of 100 IU/dL for VWF antigen or activity on the first test yielded a sensitivity of 95%, a specificity of 38% and a negative predictive value of 96.6% for VWF antigen, compared to 98%, 38% and 98.6% for VWF activity, respectively. Here we demonstrate that the majority of pediatric subjects had diagnostic VWF values on the first set of testing. Unfortunately, no clinical variables were identified for patients with VWD who required > 1 test for diagnosis. However, increased odds of VWD diagnosis were noted in those with a family history of VWD and abnormal coagulation studies. A cutoff of 100 IU/dL for VWF activity or antigen on the first test resulted in a > 95% negative predictive value to rule out the diagnosis. Pediatric patients without a family history of VWD and with VWF levels > 100 IU/dL on the initial test may not need further testing to rule out the diagnosis of VWD. Disclosures: Doshi: Bayer Hemophilia Awards Program: Research Funding. Butler: Pfizer: Consultancy; Genentech: Consultancy; HemaBiologics: Consultancy.
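A sketch of the kind of ROC/threshold analysis described, using scikit-learn with synthetic values; the variable names and numbers below are illustrative only, not data from this cohort:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 800

# Synthetic first-test VWF activity (IU/dL), lower on average in simulated VWD cases.
vwd = rng.binomial(1, 0.22, n)
vwf_activity = np.where(vwd == 1, rng.normal(45, 20, n), rng.normal(95, 35, n)).clip(min=5)

# ROC analysis: use the negated activity as the score, since lower activity suggests VWD.
print("AUC:", roc_auc_score(vwd, -vwf_activity))

# Operating characteristics of a fixed 100 IU/dL rule-out threshold on the first test.
test_pos = vwf_activity < 100            # "possible VWD" -> further testing
tp = int(np.sum(test_pos & (vwd == 1)))
fp = int(np.sum(test_pos & (vwd == 0)))
fn = int(np.sum(~test_pos & (vwd == 1)))
tn = int(np.sum(~test_pos & (vwd == 0)))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("NPV:", tn / (tn + fn))
```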


2020 ◽  
Vol 33 (Supplement_1) ◽  
Author(s):  
V Hubaud ◽  
B Bottet ◽  
J Chenesseau ◽  
L Gust ◽  
I Bouabdallah ◽  
...  

Abstract: Anastomotic leakage is one of the most severe complications after esophagectomy. There is no consensus on the best method of identifying such complications. Serum C-reactive protein (CRP) measurement on postoperative day (POD) 5 has been reported to be reliable for ruling out leakage. Methods: We prospectively assessed the medical records of consecutive post-esophagectomy patients from January 2019 to January 2020. We analyzed serum CRP and complete blood cell counts from the day before surgery to POD5. A CRP level ≤ 150 mg/l at POD5 was considered sufficient to start oral feeding. In contrast, a CRP level > 150 mg/l at POD5 led to a computed tomography (CT) with oral contrast to rule out the presence of an anastomotic leakage. Anastomotic leakage was classified according to the ECCG classification. Sensitivity, specificity, and positive and negative predictive values of CRP were calculated. Results: Over a 12-month period, 52 patients were included (Figure 1). CRP on POD5 was ≤150 mg/l in 34 (64%) patients (32 without fistula and 2 with fistula diagnosed after POD5) and >150 mg/l in 18 (36%) patients (8 without fistula and 10 with fistula). Twelve (23%) patients developed an anastomotic fistula. The cutoff value of CRP ≤150 mg/l on POD5 was associated with a sensitivity of 83%, a specificity of 80%, a positive predictive value of 56% and a negative predictive value of 94%. The CRP protocol allowed 30/52 (57%) unnecessary postoperative CT scans to be avoided. Conclusion: On the basis of its high negative predictive value, a CRP level ≤ 150 mg/l at POD5 can effectively exclude an anastomotic leakage and allow oral feeding to start without any further exams. This information is useful in the context of ERAS protocols to shorten time to hospital discharge and decrease hospital costs.
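The reported operating characteristics can be reproduced from the counts given above; a small sketch of that arithmetic, treating CRP > 150 mg/l on POD5 as the positive test:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 diagnostic test metrics."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts taken from the abstract:
# TP = 10 (CRP > 150 with fistula), FP = 8 (CRP > 150 without fistula),
# FN = 2 (CRP <= 150, fistula diagnosed after POD5), TN = 32 (CRP <= 150 without fistula).
print(diagnostic_metrics(tp=10, fp=8, fn=2, tn=32))
# -> sensitivity ~83%, specificity 80%, PPV ~56%, NPV ~94%, matching the reported values.
```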


2020 ◽  
pp. archdischild-2020-320468
Author(s):  
Roberto Velasco ◽  
Ainara Lejarzegi ◽  
Borja Gomez ◽  
Mercedes de la Torre ◽  
Isabel Duran ◽  
...  

Objectives: To develop and validate a prediction rule to identify well-appearing febrile infants aged ≤90 days with an abnormal urine dipstick at low risk of invasive bacterial infections (IBIs, bacteraemia or bacterial meningitis). Design: Ambispective, multicentre study. Setting: The derivation set was recruited in a single paediatric emergency department (ED) between 2003 and 2017; the validation set in 21 European EDs between December 2017 and November 2019. Patients: Two sets of well-appearing febrile infants aged ≤90 days with an abnormal urine dipstick (either leucocyte esterase and/or nitrite positive test). Main outcome: Prevalence of IBI in low-risk infants according to the RISeuP score. Results: We included 662 infants in the derivation set (IBI rate: 5.2%). After logistic regression, we developed a score (RISeuP score) including age (≤15 days old), serum procalcitonin (≥0.6 ng/mL) and C reactive protein (≥20 mg/L) as risk factors. The absence of any risk factor had a sensitivity of 96.0% (95% CI 80.5% to 99.3%), a negative predictive value of 99.4% (95% CI 96.4% to 99.9%) and a specificity of 32.9% (95% CI 28.8% to 37.3%) for ruling out an IBI. Applying it in the 449 infants of the validation set (IBI rate 4.9%), sensitivity, negative predictive value and specificity were 100% (95% CI 87.1% to 100%), 100% (95% CI 97.3% to 100%) and 29.7% (95% CI 25.8% to 33.8%), respectively. Conclusion: This prediction rule accurately identified well-appearing febrile infants aged ≤90 days with an abnormal urine dipstick at low risk of IBI. This score can be used to guide initial clinical decision-making in these patients, selecting infants suitable for outpatient management.
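A sketch of the decision rule exactly as summarised above (a risk factor is age ≤15 days, procalcitonin ≥0.6 ng/mL, or CRP ≥20 mg/L; an infant with none of them is classified as low risk). The function and argument names are our own, and the snippet is an illustration of the published criteria rather than a clinical tool:

```python
def riseup_low_risk(age_days: int, procalcitonin_ng_ml: float, crp_mg_l: float) -> bool:
    """True if a well-appearing febrile infant (<=90 days) with an abnormal
    urine dipstick has none of the RISeuP risk factors, i.e. is low risk for IBI."""
    risk_factors = [
        age_days <= 15,
        procalcitonin_ng_ml >= 0.6,
        crp_mg_l >= 20.0,
    ]
    return not any(risk_factors)

# Example: a 40-day-old with procalcitonin 0.2 ng/mL and CRP 12 mg/L -> low risk (True).
print(riseup_low_risk(40, 0.2, 12.0))
```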

