Prediction Rule
Recently Published Documents

Total documents: 800 (156 in the last five years)
H-index: 62 (6 in the last five years)

2022
Author(s): Blanca Ayuso, Antonio Lalueza, Estibaliz Arrieta, Eva Maria Romay, Álvaro Marchán-López, et al.

Abstract
Background: Influenza viruses cause seasonal epidemics worldwide with a significant burden of morbidity and mortality. The clinical spectrum of influenza is wide, and respiratory failure (RF) is one of its most severe complications. This study aims to develop a clinical prediction rule for RF in hospitalized influenza patients.
Methods: A prospective cohort study was conducted during two consecutive influenza seasons (December 2016 - March 2017 and December 2017 - April 2018), including hospitalized adults with confirmed influenza A or B infection. A prediction rule was derived using logistic regression and recursive partitioning, followed by internal cross-validation. External validation was performed on a retrospective cohort in a different hospital between December 2018 and May 2019.
Results: Overall, 707 patients were included in the derivation cohort and 285 in the validation cohort. The RF rate was 6.8% and 11.6%, respectively. Chronic obstructive pulmonary disease, immunosuppression, radiological abnormalities, respiratory rate, lymphopenia, lactate dehydrogenase and C-reactive protein at admission were associated with RF. A seven-point score grouped into four categories was derived, including radiological abnormalities, lymphopenia, respiratory rate and lactate dehydrogenase. The final model's area under the curve was 0.796 (0.714-0.877) in the derivation cohort and 0.773 (0.687-0.859) in the validation cohort (p < 0.001 in both cases). The predicted model showed an adequate fit with the observed results (Fisher's test p > 0.43).
Conclusion: We present a simple, discriminating, well-calibrated rule for early prediction of the development of RF in hospitalized influenza patients, with proper performance in an external validation cohort. This tool can be helpful for patient stratification during seasonal influenza epidemics.
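The abstract states that a seven-point score over four predictors was derived from a logistic regression model, but does not report the point weights themselves. As a hedged illustration of the usual derivation step (scaling regression coefficients to small integer points), with entirely hypothetical coefficients:

```python
# Illustrative sketch only: the abstract does not publish the point weights of
# the seven-point score, so this shows the generic "coefficients to integer
# points" technique. The coefficient values below are hypothetical.

def coefficients_to_points(coefs, base=None):
    """Convert logistic-regression coefficients to integer score points by
    scaling against a reference coefficient (default: smallest in magnitude)."""
    if base is None:
        base = min(abs(c) for c in coefs.values())
    return {name: round(c / base) for name, c in coefs.items()}

# Hypothetical coefficients for the four predictors retained in the final rule.
coefs = {
    "radiological_abnormalities": 1.4,
    "lymphopenia": 0.7,
    "respiratory_rate": 0.8,
    "lactate_dehydrogenase": 2.1,
}

points = coefficients_to_points(coefs)
total_possible = sum(points.values())  # a seven-point maximum, as in the abstract
```

With these made-up coefficients the rounding yields a maximum of seven points; the real weights would come from the fitted model.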


2022
Author(s): Mark Ebell, Roya Hamadani, Autumn Kieber-Emmons

Importance: Outpatient physicians need guidance to support their clinical decisions regarding management of patients with COVID-19, in particular whether to hospitalize a patient and, if managed as an outpatient, how closely to follow them.
Objective: To develop and prospectively validate a clinical prediction rule to predict the likelihood of hospitalization for outpatients with COVID-19 that does not require laboratory testing or imaging.
Design: Derivation and temporal validation of a clinical prediction rule, and prospective validation of two externally derived clinical prediction rules.
Setting: Primary and express care clinics in a Pennsylvania health system.
Participants: Patients 12 years and older presenting to outpatient clinics who had a positive polymerase chain reaction test for COVID-19.
Main outcomes and measures: Classification accuracy (percentage in each risk group hospitalized) and area under the receiver operating characteristic curve (AUC).
Results: Overall, 7.4% of outpatients in the early derivation cohort (5843 patients presenting before 3/1/21) and 5.5% in the late validation cohort (3806 patients presenting 3/1/21 or later) were ultimately hospitalized. We developed and temporally validated three risk scores that all included age, dyspnea, and the presence of comorbidities, adding respiratory rate for the second score and oxygen saturation for the third. All had very good overall accuracy (AUC 0.77 to 0.78) and classified over half of patients in the validation cohort as very low risk, with a 1.7% or lower likelihood of hospitalization. Two externally derived risk scores identified more low-risk patients, but with a higher overall risk of hospitalization (2.8%).
Conclusions and relevance: Simple risk scores applicable to outpatient and telehealth settings can identify patients with very low (1.6% to 1.7%), low (5.2% to 5.9%), moderate (14.7% to 15.6%), and high (32.0% to 34.2%) risk of hospitalization. The Lehigh Outpatient COVID Hospitalization (LOCH) risk score is available online as a free app: https://ebell-projects.shinyapps.io/LehighRiskScore/.
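The temporal-validation design described above (derive on encounters before 3/1/21, validate on later ones) can be sketched in a few lines; the dates, toy scores, and AUC-by-rank helper below are illustrative and are not the LOCH score or its data.

```python
# Sketch of temporal validation: split encounters at a calendar cutoff and
# evaluate discrimination (AUC) on each side. All data here are toy values.
from datetime import date

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy encounters: (visit date, risk score, hospitalized?)
encounters = [
    (date(2021, 1, 10), 0.9, 1), (date(2021, 1, 20), 0.2, 0),
    (date(2021, 2, 5), 0.7, 1), (date(2021, 2, 25), 0.3, 0),
    (date(2021, 3, 2), 0.8, 1), (date(2021, 4, 1), 0.1, 0),
]
cutoff = date(2021, 3, 1)  # derivation before 3/1/21, validation on/after
derivation = [e for e in encounters if e[0] < cutoff]
validation = [e for e in encounters if e[0] >= cutoff]
```

Splitting by time rather than at random is what lets the study claim the score holds up on a later epidemic phase.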


2022
Vol 22 (1)
Author(s): Toshihiro Sakakibara, Yuichiro Shindo, Daisuke Kobayashi, Masahiro Sano, Junya Okumura, et al.

Abstract
Background: Prediction of inpatients with community-acquired pneumonia (CAP) at high risk for severe adverse events (SAEs) requiring higher-intensity treatment is critical. However, evidence regarding prediction rules applicable to all patients with CAP, including those with healthcare-associated pneumonia (HCAP), is limited. The objective of this study is to develop and validate a new prediction system for SAEs in inpatients with CAP.
Methods: Logistic regression analysis was performed in 1334 inpatients of a prospective multicenter study to develop a multivariate model predicting SAEs (death, requirement of mechanical ventilation, and vasopressor support within 30 days after diagnosis). The developed ALL-COP SCORE rule based on the multivariate model was validated in 643 inpatients in another prospective multicenter study.
Results: The ALL-COP SCORE rule included albumin (< 2 g/dL, 2 points; 2–3 g/dL, 1 point), white blood cell count (< 4000 cells/μL, 3 points), chronic lung disease (1 point), confusion (2 points), PaO2/FIO2 ratio (< 200 mmHg, 3 points; 200–300 mmHg, 1 point), potassium (≥ 5.0 mEq/L, 2 points), arterial pH (< 7.35, 2 points), systolic blood pressure (< 90 mmHg, 2 points), PaCO2 (> 45 mmHg, 2 points), HCO3− (< 20 mmol/L, 1 point), respiratory rate (≥ 30 breaths/min, 1 point), pleural effusion (1 point), and extent of chest radiographical infiltration in the unilateral lung (> 2/3, 2 points; 1/2–2/3, 1 point). The probability of SAEs was 17%, 35%, and 52% in patients with 4–5, 6–7, and ≥ 8 points, respectively, whereas it was 3% in patients with ≤ 3 points. The ALL-COP SCORE rule exhibited a higher area under the receiver operating characteristic curve (0.85) compared with the other predictive models, and a threshold of ≥ 4 points exhibited 92% sensitivity and 60% specificity.
Conclusions: The ALL-COP SCORE rule can be useful to predict SAEs and aid decision-making on treatment intensity for all inpatients with CAP, including those with HCAP. Higher-intensity treatment should be considered in patients with CAP and an ALL-COP SCORE of ≥ 4 points.
Trial registration: This study was registered with the University Medical Information Network in Japan, registration numbers UMIN000003306 and UMIN000009837.
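The point assignments above are explicit enough to transcribe directly. A sketch in Python; the field names, and the handling of values that fall exactly on a range boundary, are our own reading of the abstract:

```python
def all_cop_score(p):
    """Sum the ALL-COP SCORE points as listed in the abstract."""
    s = 0
    s += 2 if p["albumin"] < 2 else (1 if p["albumin"] <= 3 else 0)        # g/dL
    s += 3 if p["wbc"] < 4000 else 0                                       # cells/uL
    s += 1 if p["chronic_lung_disease"] else 0
    s += 2 if p["confusion"] else 0
    s += 3 if p["pf_ratio"] < 200 else (1 if p["pf_ratio"] <= 300 else 0)  # mmHg
    s += 2 if p["potassium"] >= 5.0 else 0                                 # mEq/L
    s += 2 if p["arterial_ph"] < 7.35 else 0
    s += 2 if p["systolic_bp"] < 90 else 0                                 # mmHg
    s += 2 if p["paco2"] > 45 else 0                                       # mmHg
    s += 1 if p["hco3"] < 20 else 0                                        # mmol/L
    s += 1 if p["resp_rate"] >= 30 else 0                                  # breaths/min
    s += 1 if p["pleural_effusion"] else 0
    # extent of unilateral chest infiltration, as a fraction of the lung
    s += 2 if p["infiltrate_extent"] > 2 / 3 else (1 if p["infiltrate_extent"] >= 1 / 2 else 0)
    return s

def sae_probability(score):
    """Map a total score to the SAE risk levels reported in the abstract."""
    if score <= 3:
        return 0.03
    if score <= 5:
        return 0.17
    if score <= 7:
        return 0.35
    return 0.52

# Hypothetical patient for illustration.
example = {"albumin": 2.5, "wbc": 3000, "chronic_lung_disease": True,
           "confusion": False, "pf_ratio": 250, "potassium": 4.0,
           "arterial_ph": 7.40, "systolic_bp": 110, "paco2": 40, "hco3": 22,
           "resp_rate": 32, "pleural_effusion": False, "infiltrate_extent": 0.3}
```

Per the abstract, any total of ≥ 4 points (92% sensitivity, 60% specificity) would argue for considering higher-intensity treatment.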


2021
Author(s): Melissa A. Pender, Timothy Smith, Ben J. Brintz, Prativa Pandey, Sanjaya Shrestha, et al.

Background: Clinicians and travelers often have limited tools to differentiate bacterial from non-bacterial causes of travelers' diarrhea (TD). Development of a clinical prediction rule assessing the etiology of TD may help identify episodes of bacterial diarrhea and limit inappropriate antibiotic use. We aimed to identify predictors of bacterial diarrhea among clinical, demographic, and weather variables, as well as to develop and cross-validate a parsimonious predictive model.
Methods: We collected de-identified clinical data from 457 international travelers with acute diarrhea presenting to two healthcare centers in Nepal and Thailand. We used conventional microbiologic and multiplex molecular methods to identify diarrheal etiology from stool samples. We used random forest and logistic regression to determine predictors of bacterial diarrhea.
Results: We identified 195 cases of bacterial etiology, 63 viral, 125 mixed pathogens, 6 protozoal/parasitic, and 68 cases without a detected pathogen. Random forest regression indicated that the strongest predictors of bacterial over viral or non-detected etiologies were average location-specific environmental temperature and RBC on stool microscopy. In 5-fold cross-validation, the parsimonious model with the highest discriminative performance had an AUC of 0.73 using 3 variables, with calibration intercept -0.01 (SD 0.31) and slope 0.95 (SD 0.36).
Conclusions: We identified environmental temperature, a location-specific parameter, as an important predictor of bacterial TD, alongside traditional patient-specific parameters predictive of etiology. Future work includes further validation and the development of a clinical decision-support tool to inform appropriate use of antibiotics in TD.
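The calibration intercept and slope quoted above are conventionally estimated by refitting a logistic model of the observed outcome on the logit of the predicted probability; perfect calibration gives intercept 0 and slope 1. A minimal sketch, with plain gradient descent standing in for a statistics package and toy data in place of the study's:

```python
# Estimate calibration-in-the-large (intercept) and calibration slope by
# regressing observed outcomes on the logit of predicted probabilities.
import math

def logit(p):
    return math.log(p / (1 - p))

def calibration_fit(pred_probs, outcomes, lr=0.1, steps=20000):
    """Fit y ~ logistic(a + b * logit(p)) by gradient descent; return (a, b)."""
    a, b = 0.0, 0.0
    x = [logit(p) for p in pred_probs]
    n = len(x)
    for _ in range(steps):
        ga = gb = 0.0
        for xi, yi in zip(x, outcomes):
            pi = 1 / (1 + math.exp(-(a + b * xi)))
            ga += (pi - yi) / n
            gb += (pi - yi) * xi / n
        a -= lr * ga
        b -= lr * gb
    return a, b

# Toy data whose observed event rates match the predictions exactly, so the
# fitted intercept and slope should land near (0, 1).
preds = [0.25] * 4 + [0.75] * 4
obs = [1, 0, 0, 0, 1, 1, 1, 0]
intercept, slope = calibration_fit(preds, obs)
```

A slope below 1 (as in the study's 0.95) indicates mild overfitting, with predictions slightly too extreme.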


2021
Vol 5 (1)
Author(s): K. Hemming, M. Taljaard

Abstract
Clinical prediction models are developed with the ultimate aim of improving patient outcomes, and are often turned into prediction rules (e.g. classifying people as low/high risk using cut-points of predicted risk) at some point during the development stage. Prediction rules often have reasonable ability to either rule-in or rule-out disease (or another event), but rarely both. When a prediction model is intended to be used as a prediction rule, conveying its performance using the C-statistic, the most commonly reported model performance measure, does not provide information on the magnitude of the trade-offs. Yet it is important that these trade-offs are clear, for example, to health professionals who might implement the prediction rule. This can be viewed as a form of knowledge translation. When communicating information on trade-offs to patients and the public, there is a large body of evidence indicating that natural frequencies are most easily understood, and one particularly well-received way of depicting natural frequency information is to use population diagrams. There is also evidence that health professionals benefit from information presented in this way.

Here we illustrate how the implications of the trade-offs associated with prediction rules can be more readily appreciated when using natural frequencies. We recommend that the reporting of the performance of prediction rules should (1) present information using natural frequencies across a range of cut-points to inform the choice of plausible cut-points and (2), when the prediction rule is recommended for clinical use at a particular cut-point, communicate the implications of the trade-offs using population diagrams. Using two existing prediction rules, we illustrate how these methods offer a means of effectively and transparently communicating essential information about the trade-offs associated with prediction rules.
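The natural-frequency presentation the authors recommend is simple arithmetic once a rule's sensitivity and specificity at a cut-point and the prevalence are known; the numbers below are illustrative and are not taken from the paper:

```python
# Express a prediction rule's trade-offs as counts per 1000 people, the form
# used in population diagrams. Example inputs are hypothetical.

def natural_frequencies(prevalence, sensitivity, specificity, per=1000):
    diseased = prevalence * per
    healthy = per - diseased
    return {
        "true_positive": round(diseased * sensitivity),    # correctly ruled in
        "false_negative": round(diseased * (1 - sensitivity)),  # missed cases
        "false_positive": round(healthy * (1 - specificity)),   # false alarms
        "true_negative": round(healthy * specificity),     # correctly ruled out
    }

freq = natural_frequencies(prevalence=0.10, sensitivity=0.90, specificity=0.70)
```

For these inputs the diagram would show 90 cases detected, 10 missed, and 270 false alarms per 1000 people, a trade-off the C-statistic alone never reveals.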


2021
pp. 1-10
Author(s): I. Krug, J. Linardon, C. Greenwood, G. Youssef, J. Treasure, et al.

Abstract
Background: Despite a wide range of proposed risk factors and theoretical models, prediction of eating disorder (ED) onset remains poor. This study undertook the first comparison of two machine learning (ML) approaches [penalised logistic regression (LASSO) and prediction rule ensembles (PREs)] to conventional logistic regression (LR) models to enhance prediction of ED onset and differential ED diagnoses from a range of putative risk factors.
Method: Data were part of a European project and comprised 1402 participants: 642 ED patients [52% with anorexia nervosa (AN) and 40% with bulimia nervosa (BN)] and 760 controls. The Cross-Cultural Risk Factor Questionnaire, which retrospectively assesses a range of sociocultural and psychological ED risk factors occurring before the age of 12 years (46 predictors in total), was used.
Results: All three statistical approaches had satisfactory model accuracy, with an average area under the curve (AUC) of 86% for predicting ED onset and 70% for predicting AN v. BN. Predictive performance was greatest for the two regression methods (LR and LASSO), although the PRE technique relied on fewer predictors with comparable accuracy. The individual risk factors differed depending on the outcome classification (EDs v. non-EDs and AN v. BN).
Conclusions: Even though conventional LR performed comparably to the ML approaches in terms of predictive accuracy, the ML methods produced more parsimonious predictive models. ML approaches offer a viable way to modify screening practices for ED risk that balance accuracy against participant burden.
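The LASSO-versus-LR contrast the authors draw (comparable accuracy from fewer predictors) comes from the L1 penalty shrinking weak coefficients exactly to zero. A self-contained sketch on synthetic data with one informative and one noise feature; this is an illustration of the technique, not the study's data or model:

```python
# Proximal gradient descent for logistic regression with an optional L1
# penalty: the soft-threshold step zeroes out coefficients whose signal is
# weaker than the penalty, which is how LASSO prunes predictors.
import math
import random

def fit_logistic(X, y, l1=0.0, lr=0.05, steps=3000):
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j, xj in enumerate(xi):
                grad[j] += (p - yi) * xj / n
        for j in range(len(w)):
            w[j] -= lr * grad[j]
            # soft-threshold: the proximal step for the L1 penalty
            w[j] = math.copysign(max(abs(w[j]) - lr * l1, 0.0), w[j])
    return w

random.seed(0)
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
y = [1 if x[0] + random.gauss(0, 0.5) > 0 else 0 for x in X]  # only x[0] matters

w_plain = fit_logistic(X, y)          # keeps a (small) weight on the noise feature
w_lasso = fit_logistic(X, y, l1=0.2)  # prunes the noise feature to exactly zero
```

The pruning is what makes a penalised model attractive for screening: fewer questions to ask for roughly the same discrimination.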


Blood
2021
Vol 138 (Supplement 1)
pp. 4635-4635
Author(s): Benjamin Chin-Yee, Pratibha Bhai, Ian Cheong, Maxim Matyashin, Cyrus C. Hsia, et al.

Abstract
Background: The widespread availability of molecular testing for JAK2 mutations has facilitated the diagnosis of polycythemia vera (PV) but also raises the concern of test overutilization in patients referred for elevated hemoglobin. At our institution, we have observed increased molecular testing in these patients with declining rates of JAK2 mutation positivity, suggesting that a prediction rule could be useful to guide such testing. In this study, we report the derivation and validation of a simple rule using complete blood count (CBC) parameters to predict the likelihood of having a JAK2 mutation in patients referred for elevated hemoglobin.
Methods: We examined all patients with elevated hemoglobin (≥160 g/L for women, or ≥165 g/L for men) who underwent JAK2 mutation testing using the Next-Generation Sequencing (NGS)-based Oncomine Myeloid Research Assay (ThermoFisher Scientific, MA, USA) between 2018 and 2021 at the London Health Sciences Centre in Ontario, Canada. We extracted data including age and sex as well as CBC parameters at the time of testing, including hemoglobin, hematocrit, erythrocytes, leukocytes, neutrophils, platelets and mean corpuscular volume. All CBCs were performed on a Sysmex XN Analyzer (Sysmex Corporation, Japan). In the derivation cohort, JAK2-positive and -negative groups were compared using Student's t-tests or χ² tests, as appropriate. We dichotomized potentially significant continuous variables at an optimal cut-off point using receiver operating characteristic curves. Potentially significant predictors were evaluated using multiple variable stepwise logistic regression analysis with JAK2 positivity as the dependent variable. The model was evaluated using Hosmer-Lemeshow tests and pseudo-R² measures. A dichotomous score was derived based on the presence or absence of significant variables and subsequently evaluated and internally validated using logistic regression and χ² tests with non-parametric bootstrapping with 1000 samples. The model was then validated in the second cohort.
Results: The derivation cohort included 308 patients tested between January 9, 2018 and December 19, 2019, and the validation cohort included 223 patients tested between January 7, 2020 and May 12, 2021. The characteristics of both cohorts are shown in Table 1. The final model included platelets above the upper quintile (308 × 10⁹/L) and erythrocytes above the upper quartile (6.17 × 10¹²/L), and a score of one was assigned to patients with either of these characteristics. The odds ratio for JAK2 positivity in patients with a score of 1 was 14.6 (95% CI 5.5-38.8) compared to those with a score of 0. The model had a sensitivity of 87.8% and a negative predictive value of 97.4% in the derivation cohort, and of 100% for both in the validation cohort. The percentage of JAK2-positive patients among patients with a score of 1 was 28%. The percentage of false negatives was 2.6% (95% CI 1.1-6.0) and 0 (95% CI 0-2.8) in the derivation and validation cohorts, respectively. The use of this rule to guide molecular testing would have resulted in approximately 60% fewer tests.
Conclusion: We developed and validated a simple rule, based on CBC parameters and with a high negative predictive value, to predict the likelihood of JAK2 mutation positivity in patients with a hemoglobin of 160 g/L or higher (Figure 1). If implemented, this prediction rule could result in a significant reduction in molecular testing, avoiding 60% or approximately 100 tests per year at our institution. This approach would be particularly beneficial for broader health system management of hematological malignancies, facilitating the reallocation of resources to emerging higher-yield molecular diagnostic investigations (Kawata et al., BJH 2021).
Disclosures: No relevant conflicts of interest to declare.
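The dichotomous rule itself is simple enough to state in code; the thresholds are taken from the abstract, while the function and argument names are ours:

```python
# Score 1 if platelets exceed 308 x 10^9/L or erythrocytes exceed
# 6.17 x 10^12/L (the upper quintile and quartile cut-offs from the abstract),
# otherwise score 0.

def jak2_screen_score(platelets_e9_per_l, erythrocytes_e12_per_l):
    if platelets_e9_per_l > 308 or erythrocytes_e12_per_l > 6.17:
        return 1
    return 0
```

Given the reported 97.4-100% negative predictive value, a score of 0 would argue against reflex JAK2 molecular testing, which is where the projected 60% test reduction comes from.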


2021
pp. 204589402110597
Author(s): Cijun Luo, Hong-Ling Qiu, Chang-wei Wu, Jing He, Ping Yuan, et al.

Background: Cardiopulmonary exercise testing (CPET) and the pulmonary function test (PFT) are important methods for assessing human cardiopulmonary function. Whether they can screen for vasoresponsiveness in idiopathic pulmonary arterial hypertension (IPAH) patients remains undefined.
Methods: One hundred thirty-two IPAH patients with complete data were retrospectively enrolled. Patients were classified into a vasodilator-responsive (VR) group and a vasodilator-nonresponsive (VNR) group on the basis of the acute vasodilator test. PFT and CPET were assessed subsequently, and all patients were confirmed by right heart catheterization. We analyzed CPET and PFT data and derived a prediction rule to screen for vasodilator-responsive patients in IPAH.
Results: Nineteen VR-IPAH and 113 VNR-IPAH patients were retrospectively enrolled. Compared with VNR-IPAH patients, VR-IPAH patients had less severe hemodynamic impairment (lower RAP, mPAP, PAWP and PVR). VR-IPAH patients also had a higher anaerobic threshold (AT), peak partial pressure of end-tidal carbon dioxide (peak PETCO2), oxygen uptake efficiency plateau (OUEP) and FEV1/FVC (all P < 0.05), and a lower peak partial pressure of end-tidal oxygen (peak PETO2) and minute ventilation/carbon dioxide output (VE/VCO2) slope (all P < 0.05). FEV1/FVC (odds ratio [OR]: 1.14, 95% confidence interval [CI]: 1.02-1.26, P = 0.02) and peak PETCO2 (OR: 1.13, 95% CI: 1.01-1.26, P = 0.04) were independent predictors of VR, adjusted for age, sex and body mass index. A novel formula (score = -16.17 + 0.123 × peak PETCO2 + 0.127 × FEV1/FVC) reached a high area under the curve value of 0.8 (P = 0.003). With these parameters combined, the optimal cutoff value of this model for detection of VR was -1.06, with a specificity of 91% and a sensitivity of 67%.
Conclusions: Compared with VNR-IPAH patients, VR-IPAH patients had less severe hemodynamic impairment. Higher FEV1/FVC and higher peak PETCO2 were associated with increased odds of vasoresponsiveness. A novel score combining peak PETCO2 and FEV1/FVC provides high specificity for predicting VR among IPAH patients.
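The published formula and cut-off can be applied directly. One assumption on our part: since both predictors enter with positive coefficients and VR patients had the higher values, we classify scores at or above the cut-off as predicting vasoresponsiveness.

```python
# Screening score from the abstract: peak PETCO2 in mmHg, FEV1/FVC in percent.

def vasoresponsive_score(peak_petco2, fev1_fvc_pct):
    return -16.17 + 0.123 * peak_petco2 + 0.127 * fev1_fvc_pct

def predicts_vasoresponsive(peak_petco2, fev1_fvc_pct, cutoff=-1.06):
    # Assumption: values at or above the cut-off predict VR (91% specificity,
    # 67% sensitivity per the abstract).
    return vasoresponsive_score(peak_petco2, fev1_fvc_pct) >= cutoff
```

For example, a patient with peak PETCO2 of 40 mmHg and FEV1/FVC of 85% scores about -0.46, above the -1.06 cut-off.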


2021
Vol 65
pp. 216-220
Author(s): Ahmad Shafie Jameran, Saw Kian Cheah, Mohd Nizam Tzar, Qurratu Aini Musthafa, Hsueh Jing Low, et al.

2021
Vol 26 (Supplement_1)
pp. e74-e75
Author(s): Zachary Dionisopoulos, Erin Strumpf, Gregory Anderson, Andre Guigui, Brett Burstein

Abstract
Primary subject area: Emergency Medicine - Paediatric
Background: Fever in the first months of life is among the most common clinical problems in pediatric healthcare. Nearly 2% of all infants will be evaluated for fever in an Emergency Department (ED), and approximately 10% harbor life-threatening serious bacterial infections (SBIs). The Rochester criteria are the most widely used criteria for risk-stratification and predate modern biomarkers, including procalcitonin (PCT). Recently, a high-performing prediction rule incorporating PCT was derived by the Pediatric Emergency Care Applied Research Network (PECARN). At present, PCT is not available in all clinical settings, limited largely by test cost.
Objectives: To compare the medical costs associated with PECARN and Rochester risk-stratification strategies using contemporary price, epidemiologic and test characteristic data.
Design/Methods: We assessed hospital-level costs associated with the door-to-discharge care of all well-appearing febrile infants aged ≤ 60 days evaluated at an urban tertiary pediatric hospital between April 2016 and March 2019. Direct and indirect ED and inpatient costs were obtained from provincial Ministry of Health data. Real-world costs were then incorporated into a probabilistic model for a cohort of equal size using either Rochester or PECARN risk-stratification, accounting for the added incremental cost of PCT ($24.86 CAD). Models used an 8.4% pooled SBI risk, and sensitivity/specificity for Rochester and PECARN of 94%/49% and 98%/63%, respectively. Modeling considered four scenarios: true positive with hospitalization, false negative with return visit and hospitalization, false positive with hospitalization, and true negative with ED discharge. All costs were calculated in Canadian dollars.
Results: During the 3-year study period, 1168 index infant encounters met inclusion criteria and were analyzed for hospital trajectory costs. Median costs per infant were $323 (IQR $286-$393) for infants discharged from the ED with no SBI, $2356 (IQR $1858-$3120) for infants hospitalized with no SBI, $3150 (IQR $2352-$4201) for hospitalized infants treated for an SBI, and $3763 (IQR $2146-$5180) for infants discharged from the ED ultimately requiring hospitalization with a missed SBI. For a cohort of 1168 infants, the cost per infant using PECARN risk-stratification was $1332 (IQR $1062-$1739), compared to $1515 (IQR $1198-$1992) using Rochester. The PECARN criteria would be expected to produce an overall savings of 12.1% for the modeled cohort ($1,556,432 vs $1,769,339). Under pessimistic and optimistic model assumptions, total savings were 4.9% and 18.3%, respectively. Costs borne by families were not considered, nor were the indirect benefits of reduced unnecessary invasive testing, hospitalizations and broad-spectrum antibiotic use.
Conclusion: Risk-stratification of febrile infants using the PECARN prediction rule would produce important cost savings, owing to superior test characteristics offsetting upfront PCT-associated costs. Such a strategy would also likely yield unmodeled non-monetary family-centered and healthcare system benefits. Real-world cost-effectiveness studies are needed.
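The cost logic above can be sketched as a probability-weighted sum over the four scenarios. This is our rough reconstruction using only the median costs and test characteristics quoted in the abstract, not the authors' full probabilistic model, so it approximates rather than reproduces their published figures:

```python
# Expected door-to-discharge cost per infant under a risk-stratification
# strategy: weight each scenario's median cost (CAD, from the abstract) by its
# probability, then add any per-infant biomarker cost.

COSTS = {
    "true_negative": 323,    # discharged from ED, no SBI
    "false_positive": 2356,  # hospitalized, no SBI
    "true_positive": 3150,   # hospitalized and treated for an SBI
    "false_negative": 3763,  # missed SBI: return visit and hospitalization
}

def expected_cost(prevalence, sensitivity, specificity, extra_per_infant=0.0):
    return (prevalence * sensitivity * COSTS["true_positive"]
            + prevalence * (1 - sensitivity) * COSTS["false_negative"]
            + (1 - prevalence) * (1 - specificity) * COSTS["false_positive"]
            + (1 - prevalence) * specificity * COSTS["true_negative"]
            + extra_per_infant)

rochester = expected_cost(0.084, 0.94, 0.49)
pecarn = expected_cost(0.084, 0.98, 0.63, extra_per_infant=24.86)  # adds PCT cost
```

Even this toy version lands close to the reported Rochester figure of about $1515 per infant and shows PECARN coming out cheaper despite the added PCT cost, driven mostly by fewer false-positive hospitalizations.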

