Risk-Prediction Model for Development of Venous Thromboembolism in Hospitalized Children

Blood ◽  
2011 ◽  
Vol 118 (21) ◽  
pp. 2281-2281
Author(s):  
Arash Mahajerin ◽  
Robert Fallon ◽  
George Eckert ◽  
Mark Heiny ◽  
Terry Vik ◽  
...  

Abstract 2281 Background: The prevalence of venous thromboembolism (VTE) is rising steeply in hospitalized children. Considering the immediate and long-term complications of VTE and its impact on health-care utilization, strategies to prevent the occurrence of VTE are urgently needed. Identifying children with a predisposition for VTE and using VTE prophylaxis for this subset of patients may help to reduce the prevalence of VTE. Objective: To develop a clinical risk-prediction tool to identify a subset of the hospitalized population with a predisposition for the development of VTE. Design/Method: A retrospective, single-institution, case-control (1:2) study was conducted at Riley Children's Hospital (study period: 2005–2010). Children with VTE were identified using ICD-9 codes. Age-, sex- and disease-matched controls were randomly selected from the hospital database. Extensive information about patient demographics, underlying disease, and the characteristics and known risk factors of VTE was collected from patients' medical records. Univariate analyses were performed to explore the association between risk factors and VTE. Conditional logistic regression analyses were performed to develop a risk-prediction model. The risk score algorithm was created based on the beta coefficients from the logistic regression model. ROC curves were calculated to evaluate model performance. Results: A total of 173 cases and 346 controls were included in the study. The prevalence of VTE was 71 cases per 10,000 hospitalized children per year. Individually, several of the risk factors were much stronger than others. Involvement of 3 or more systems, previous hospitalization, BMI and hormone therapy were not individually significant, while length of stay of at least 7 days, direct admission to the ICU/NICU, central venous line, positive blood stream infection, and prolonged immobilization were significant. Table I shows the multivariate analyses, which included only statistically significant risk factors. Because of the varied significance of the individual factors, an analysis was performed to create a weighted score to evaluate risk of VTE. Based on beta coefficients from a multiple-variable logistic regression model, the risk score was calculated using 2 points each for length of stay of at least 7 days, prolonged immobilization, and hormone therapy, and 1 point each for direct admission to the ICU, presence of a central venous line, and positive bacterial culture. Weighted risk scores and the corresponding odds of developing VTE are shown in Table II, and ROC curves are shown in Figure 1. Conclusion: Our VTE-prediction tool can help to identify children who are at increased risk for development of VTE. Given the low prevalence of VTE, a prospective study with a large sample size is needed to clarify the clinical utility of this tool for predicting VTE in hospitalized children. Risk Score = 2*(LOS) + 1*(Admit_ICU) + 1*(CVL) + 1*(Bact_Pos) + 2*(Immo_YN) + 2*(Hormone_BCP) (There is no intercept in the model.) Disclosures: No relevant conflicts of interest to declare.
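The scoring rule quoted at the end of this abstract maps directly to code. Below is a minimal sketch assuming each input is a 0/1 indicator taken from the patient record; the function and argument names are illustrative, not part of the original model.

```python
# Minimal sketch of the weighted VTE risk score quoted in the abstract.
# Each argument is an assumed 0/1 indicator derived from the patient record.

def vte_risk_score(los_ge_7d, admit_icu, cvl, bact_pos, immobilized, hormone_therapy):
    """Return the weighted risk score (no intercept, per the abstract)."""
    return (2 * los_ge_7d          # length of stay >= 7 days
            + 1 * admit_icu        # direct admission to the ICU/NICU
            + 1 * cvl              # central venous line
            + 1 * bact_pos         # positive blood stream infection
            + 2 * immobilized      # prolonged immobilization
            + 2 * hormone_therapy) # hormone therapy

# Example: ICU admission with a central line and a 10-day stay -> score of 4
print(vte_risk_score(1, 1, 1, 0, 0, 0))
```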

Author(s):  
Masaru Samura ◽  
Naoki Hirose ◽  
Takenori Kurata ◽  
Keisuke Takada ◽  
Fumio Nagumo ◽  
...  

Abstract Background In this study, we investigated the risk factors for daptomycin-associated creatine phosphokinase (CPK) elevation and established a risk score for CPK elevation. Methods Patients who received daptomycin at our hospital were classified into the normal or elevated CPK group based on their peak CPK levels during daptomycin therapy. Univariable and multivariable analyses were performed, and a risk score and prediction model for the incidence probability of CPK elevation were calculated based on logistic regression analysis. Results The normal and elevated CPK groups included 181 and 17 patients, respectively. Logistic regression analysis revealed that concomitant statin use (odds ratio [OR] 4.45, 95% confidence interval [CI] 1.40–14.47, risk score 4), concomitant antihistamine use (OR 5.66, 95% CI 1.58–20.75, risk score 4), and trough concentration (Cmin) of 20 to <30 µg/mL (OR 14.48, 95% CI 2.90–87.13, risk score 5) or ≥30.0 µg/mL (OR 24.64, 95% CI 3.21–204.53, risk score 5) were risk factors for daptomycin-associated CPK elevation. The predicted incidence probabilities of CPK elevation were <10% (low risk), 10%–<25% (moderate risk), and ≥25% (high risk) for total risk scores of ≤4, 5–6, and ≥8, respectively. The risk prediction model exhibited a good fit (area under the receiver operating characteristic curve 0.85, 95% CI 0.74–0.95). Conclusions These results suggested that concomitant use of statins or antihistamines and a Cmin ≥20 µg/mL were risk factors for daptomycin-associated CPK elevation. Our prediction model might aid in reducing the incidence of daptomycin-associated CPK elevation.
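To make the published point values and risk bands concrete, here is a small illustrative sketch; only the point values and cut-offs come from the abstract, while the function names and input flags are assumptions.

```python
# Illustrative sketch of the daptomycin CPK-elevation risk score described above.

def cpk_risk_score(on_statin, on_antihistamine, cmin_ug_ml):
    """Sum the published point values for the three identified risk factors."""
    score = 0
    if on_statin:
        score += 4          # concomitant statin use
    if on_antihistamine:
        score += 4          # concomitant antihistamine use
    if 20 <= cmin_ug_ml < 30:
        score += 5          # trough concentration 20 to <30 ug/mL
    elif cmin_ug_ml >= 30:
        score += 5          # trough concentration >= 30 ug/mL
    return score

def cpk_risk_category(score):
    """Map a total score to the risk bands reported in the abstract."""
    if score <= 4:
        return "low (<10% predicted incidence)"
    if score <= 6:
        return "moderate (10% to <25%)"
    return "high (>=25%)"

# Statin use plus a trough of 22 ug/mL gives 4 + 5 = 9 points -> high risk
print(cpk_risk_category(cpk_risk_score(on_statin=True, on_antihistamine=False, cmin_ug_ml=22)))
```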


2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S89-S90
Author(s):  
Ruihong Luo ◽  
Paul Janoian

Abstract Background Fever and leukocytosis are very common in patients with burn injury. Many patients undergo frequent blood cultures during hospitalization because of concern for bacteremia. We aimed to use patients' clinical characteristics to evaluate the risk of bacteremia and avoid unnecessary blood cultures. Methods Adult patients (≥18 years) with burn injury were selected from the Nationwide Inpatient Sample database (2005–2014). Using ICD-9 codes, we further identified bacteremia, total body surface area (TBSA) of burn, inhalation injury, pneumonia, urinary tract infection, wound infection, escharotomy, placement of central venous line, indwelling urinary catheter, gastrostomy tube (G-tube), intubation, and total parenteral nutrition (TPN). Risk factors for bacteremia were evaluated by logistic regression. A risk-adjusted model to predict the occurrence of bacteremia was developed by discriminant analysis. Results In total, 241,323 hospitalized patients with burn injury were identified. The incidence of bacteremia was 1.1% (n = 2,634). Compared with patients without bacteremia, those with bacteremia were older (51.1 vs. 46.7 years old, P < 0.001), had more severe burn injury (50.7% vs. 12% with burn TBSA over 20%, P < 0.001) and more comorbidities (22.7% vs. 14.9% with Charlson index ≥2, P < 0.001), higher in-hospital mortality (5.6% vs. 3.7%, P < 0.001), longer hospital stay (26 vs. 5 days, P < 0.001) and higher hospital charges ($206,028 vs. $30,339, P < 0.001). After adjusting for age, sex, race, and Charlson index by logistic regression, inhalation injury (OR = 1.25, 95% CI 1.03–1.51), intubation (OR = 1.62, 95% CI 1.44–1.82), TPN (OR = 1.56, 95% CI 1.16–2.11), placement of central venous line (OR = 1.86, 95% CI 1.57–2.01), and G-tube (OR = 2.04, 95% CI 1.60–2.60) were associated with increased risk of bacteremia. A risk-adjusted model composed of the patient's age, Charlson index, burn TBSA, inhalation injury, intubation, TPN, placement of central venous line, and G-tube predicted the occurrence of bacteremia with an accuracy of 85.4% (Table 1). Conclusion The risk factors and risk-adjusted model for bacteremia may help decide whether a blood culture is needed in hospitalized burn patients. Disclosures All authors: No reported disclosures.
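A hedged sketch of the two-stage analysis described above (logistic regression for adjusted odds ratios, then discriminant analysis for the risk-adjusted prediction model) might look like the following. The DataFrame layout and column names are assumptions, not fields from the Nationwide Inpatient Sample.

```python
# Hedged sketch: adjusted odds ratios via logistic regression, then a
# discriminant-analysis classifier for prediction. `df` is assumed to hold one
# row per hospitalization with 0/1 indicators and the outcome `bacteremia`.

import numpy as np
import statsmodels.api as sm
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

predictors = ["age", "charlson_index", "burn_tbsa", "inhalation_injury",
              "intubation", "tpn", "central_line", "g_tube"]

def adjusted_odds_ratios(df):
    """Fit a logistic regression; exponentiated coefficients are odds ratios."""
    X = sm.add_constant(df[predictors])
    fit = sm.Logit(df["bacteremia"], X).fit(disp=0)
    return np.exp(fit.params)

def bacteremia_classifier(df):
    """Discriminant analysis used for the risk-adjusted prediction model."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(df[predictors], df["bacteremia"])
    return lda
```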


2021 ◽  
pp. 0310057X2110242
Author(s):  
Adrian D Haimovich ◽  
Ruoyi Jiang ◽  
Richard A Taylor ◽  
Justin B Belsky

Vasopressors are ubiquitous in intensive care units. While central venous catheters are the preferred route of infusion, recent evidence suggests peripheral administration may be safe for short, single-agent courses. Here, we identify risk factors and develop a predictive model for patient central venous catheter requirement using the Medical Information Mart for Intensive Care, a single-centre dataset of patients admitted to an intensive care unit between 2008 and 2019. Using prior literature, a composite endpoint of prolonged single-agent courses (>24 hours) or multi-agent courses of any duration was used to identify likely central venous catheter requirement. From a cohort of 69,619 intensive care unit stays, there were 17,053 vasopressor courses involving one or more vasopressors that met study inclusion criteria. In total, 3807 (22.3%) vasopressor courses involved a single vasopressor for less than six hours, 7952 (46.6%) courses for less than 24 hours and 5757 (33.8%) involved multiple vasopressors of any duration. Of these, 3047 (80.0%) of the less-than-six-hour and 6423 (80.8%) of the less-than-24-hour single-vasopressor courses used a central venous catheter. Logistic regression models identified associations between the composite endpoint and intubation (odds ratio (OR) 2.36, 95% confidence intervals (CI) 2.16 to 2.58), cardiac diagnosis (OR 0.72, CI 0.65 to 0.80), renal impairment (OR 1.61, CI 1.50 to 1.74), older age (OR 1.002, CI 1.000 to 1.005) and vital signs in the hour before initiation (heart rate, OR 1.006, CI 1.003 to 1.009; oxygen saturation, OR 0.996, CI 0.993 to 0.999). A logistic regression model predicting the composite endpoint had an area under the receiver operating characteristic curve (standard deviation) of 0.747 (0.013) and an accuracy of 0.691 (0.012). This retrospective study reveals a high prevalence of short vasopressor courses in intensive care unit settings, a majority of which were administered using central venous catheters. We identify several important risk factors that may help guide clinicians deciding between peripheral and central venous catheter administration, and present a predictive model that may inform future prospective trials.
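The composite endpoint used to flag likely central venous catheter requirement translates into a simple rule, sketched below with illustrative argument names.

```python
# Minimal sketch of the composite endpoint described above: a vasopressor
# course is flagged as likely requiring a central venous catheter if it is a
# single-agent course lasting more than 24 hours or involves multiple agents.

def likely_cvc_requirement(n_agents: int, duration_hours: float) -> bool:
    """Composite endpoint: multi-agent course of any duration, or a
    single-agent course longer than 24 hours."""
    return n_agents > 1 or duration_hours > 24

print(likely_cvc_requirement(1, 6))   # short single-agent course -> False
print(likely_cvc_requirement(2, 3))   # multi-agent course -> True
```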


2021 ◽  
Vol 9 ◽  
Author(s):  
Huanhuan Zhao ◽  
Xiaoyu Zhang ◽  
Yang Xu ◽  
Lisheng Gao ◽  
Zuchang Ma ◽  
...  

Hypertension is a widespread chronic disease. Risk prediction of hypertension is an intervention that contributes to the early prevention and management of hypertension. The implementation of such an intervention requires an effective and easy-to-implement hypertension risk prediction model. This study evaluated and compared the performance of four machine learning algorithms in predicting the risk of hypertension based on easy-to-collect risk factors. A dataset of 29,700 samples collected through a physical examination was used for model training and testing. First, we identified easy-to-collect risk factors for hypertension through univariate logistic regression analysis. Then, based on the selected features, 10-fold cross-validation was used to optimize four models, namely random forest (RF), CatBoost, a multilayer perceptron (MLP) neural network, and logistic regression (LR), and to find the best hyperparameters on the training set. Finally, the performance of the models was evaluated by AUC, accuracy, sensitivity and specificity on the test set. The experimental results showed that the RF model outperformed the other three models, achieving an AUC of 0.92, an accuracy of 0.82, a sensitivity of 0.83 and a specificity of 0.81. In addition, body mass index (BMI), age, family history and waist circumference (WC) were the four primary risk factors for hypertension. These findings reveal that it is feasible to use machine learning algorithms, especially RF, to predict hypertension risk without clinical or genetic data. The technique can provide a non-invasive and economical way for the prevention and management of hypertension in a large population.
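The model-comparison step described here (10-fold cross-validation over several classifiers trained on easy-to-collect features) can be sketched as follows. This is a hedged illustration rather than the authors' code: the feature and outcome column names are assumptions, and CatBoost is omitted because it is a separate dependency, although it would plug into the same loop.

```python
# Hedged sketch: mean 10-fold cross-validated AUC for several classifiers.
# `df` is assumed to have one row per subject with the listed feature columns
# and a 0/1 `hypertension` outcome.

from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

features = ["bmi", "age", "family_history", "waist_circumference"]

def compare_models(df):
    """Return mean 10-fold AUC for each candidate model."""
    X, y = df[features], df["hypertension"]
    models = {
        "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
        "mlp": MLPClassifier(max_iter=1000, random_state=0),
        "logistic_regression": LogisticRegression(max_iter=1000),
    }
    return {name: cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
            for name, model in models.items()}
```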


2020 ◽  
Vol 48 (5) ◽  
pp. 030006052091922
Author(s):  
Qiao Yang ◽  
Xian Zhong Jiang ◽  
Yong Fen Zhu ◽  
Fang Fang Lv

Objective We aimed to analyze the risk factors and to establish a predictive tool for the occurrence of bloodstream infections (BSI) in patients with cirrhosis. Methods A total of 2888 patients with cirrhosis were retrospectively included. Risk factors for BSI were assessed using multivariate logistic regression, which was validated using five-fold cross-validation. Results Variables that were independently associated with the incidence of BSI were white blood cell count (odds ratio [OR] = 1.094, 95% confidence interval [CI] 1.063–1.127), C-reactive protein (OR = 1.005, 95% CI 1.002–1.008), total bilirubin (OR = 1.003, 95% CI 1.002–1.004), and previous antimicrobial exposure (OR = 4.556, 95% CI 3.369–6.160); albumin (OR = 0.904, 95% CI 0.883–0.926), platelet count (OR = 0.996, 95% CI 0.994–0.998), and serum creatinine (OR = 0.989, 95% CI 0.985–0.994) were associated with lower odds of BSI. The area under the receiver operating characteristic (ROC) curve of the risk assessment scale was 0.850, and its sensitivity and specificity were 0.762 and 0.801, respectively. There was no significant difference between the ROC curves of the cross-validation and the risk assessment scale. Conclusions We developed a predictive tool for BSI in patients with cirrhosis, which could help with early identification of such episodes at admission, to improve outcomes in these patients.
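As a rough illustration of the validation procedure described above (multivariate logistic regression with five-fold cross-validation and ROC-based assessment), a sketch follows. The predictor column names and the probability cut-off are assumptions, not taken from the study.

```python
# Hedged sketch: five-fold cross-validated AUC for a logistic regression on
# the predictors listed in the abstract, plus sensitivity and specificity at
# an assumed probability cut-off. `df` holds one row per patient.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score, confusion_matrix

predictors = ["wbc", "crp", "total_bilirubin", "prior_antimicrobials",
              "albumin", "platelets", "creatinine"]

def validate_bsi_model(df, threshold=0.5):
    X, y = df[predictors], df["bsi"]
    model = LogisticRegression(max_iter=1000)
    # Out-of-fold predicted probabilities from 5-fold cross-validation
    probs = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
    auc = roc_auc_score(y, probs)
    tn, fp, fn, tp = confusion_matrix(y, probs >= threshold).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return auc, sensitivity, specificity
```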


Blood ◽  
2013 ◽  
Vol 122 (21) ◽  
pp. 2957-2957
Author(s):  
Ruchika Goel ◽  
Jessy Dhillon ◽  
Craig Malli ◽  
Kishen Sahota ◽  
Prabhjot Seehra ◽  
...  

Abstract Introduction Venous thromboembolism (VTE) is increasing in children, especially in the tertiary care setting. Hospital-associated VTE (HA-VTE) is a potentially preventable cause of major morbidity and mortality. However, the incidence of HA-VTE is low in children. Risk stratification tools may aid in identification of hospitalized high-risk pediatric patients who may benefit from VTE prophylaxis. Methods We conducted a case-control study of pediatric patients with HA-VTE (21 years or younger at the time of diagnosis) admitted to the Johns Hopkins Hospital from 2008-2010. Cases were identified using ICD-9 codes for DVT and PE and verified by reviewing hospital records and radiologic imaging reports. HA-VTE was defined as: 1) VTE diagnosed ≥48 hours after hospital admission without signs/symptoms of VTE on admission, or 2) VTE diagnosed within 90 days of hospital discharge. Two contemporaneous controls matched for age, sex and admission unit were selected for each case. Records of cases and controls were reviewed for the presence of a priori identified putative VTE risk factors at admission. Univariate and conditional multivariable logistic regression analyses with backward elimination were used to develop risk-prediction models. Based on the results of univariate analysis, we sought to evaluate two multivariable models, one without length of stay (LOS) with relevance to assessment at admission, and one in which LOS was included with relevance to re-assessment after several days of hospitalization. All variables selected for the multivariable model were tested for interaction with a significance threshold of p<0.2. Except for this, all hypothesis testing was two-tailed and a p value of <0.05 was considered significant. Receiver operating characteristic (ROC) curves were constructed using risk factors retained on multivariable analysis. Results Table 1 lists the putative risk factors by univariate analysis with (a) significantly higher odds of VTE and (b) higher odds of VTE that did not reach statistical significance. In multivariable logistic regression analysis, central venous catheter (CVC), VTE predisposition and immobility or LOS >5 days were independently associated with HA-VTE. The combination of CVC and VTE predisposition with either immobility or LOS was predictive of HA-VTE (area under the curve for ROC of 76.6% and 80.6%, Table 2). Conclusion We found independently associated risk factors that may potentially be used in a predictive model of HA-VTE in children. Further prospective validation studies of these and other risk factors may serve as the basis of future risk-stratified randomized control trials of primary prevention of pediatric HA-VTE. Disclosures: Streiff: Bristol Myers Squibb: Research Funding; Sanofi: Consultancy, Honoraria; Eisai, Daiichi-Sankyo, Boehringer-Ingelheim, Janssen HealthCare: Consultancy. Strouse: NIH: Research Funding; Doris Duke Charitable Foundation: Research Funding; Masimo Corporation: Membership on an entity's Board of Directors or advisory committees, Research Funding. Takemoto: Novonordisk: Research Funding.
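The matched case-control design described above calls for conditional logistic regression, which statsmodels exposes as ConditionalLogit. The sketch below is illustrative only; the column names (ha_vte, match_id, and the three predictors) are assumptions rather than fields from the study database.

```python
# Hedged sketch: conditional logistic regression for matched case-control
# data, grouping each case with its matched controls via `match_id`.

from statsmodels.discrete.conditional_models import ConditionalLogit

def fit_conditional_model(df):
    """Conditional logistic regression for HA-VTE on matched sets."""
    predictors = ["cvc", "vte_predisposition", "immobility"]
    model = ConditionalLogit(df["ha_vte"], df[predictors], groups=df["match_id"])
    return model.fit()
```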


PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0246538
Author(s):  
Youngjune Bhak ◽  
Yeonsu Jeon ◽  
Sungwon Jeon ◽  
Changhan Yoon ◽  
Min Kim ◽  
...  

Background The polygenic risk score (PRS) developed for coronary artery disease (CAD) is known to be effective for classifying patients with CAD and predicting subsequent events. However, the PRS was developed mainly based on the analysis of Caucasian genomes and has not been validated for East Asians. We aimed to evaluate the PRS in the genomes of Korean patients with early-onset acute myocardial infarction (AMI) (n = 265, age ≤50 years) following percutaneous coronary intervention (PCI) and controls (n = 636), to examine whether the PRS improves risk prediction beyond conventional risk factors. Results The odds ratio of the PRS was 1.83 (95% confidence interval [CI]: 1.69–1.99) for early-onset AMI patients compared with the controls. For the classification of patients, the area under the curve (AUC) for the combined model with the six conventional risk factors (diabetes mellitus, family history of CAD, hypertension, body mass index, hypercholesterolemia, and current smoking) and PRS was 0.92 (95% CI: 0.90–0.94), while that for the six conventional risk factors alone was 0.91 (95% CI: 0.85–0.93). Although the AUC for the PRS alone was 0.65 (95% CI: 0.61–0.69), adding the PRS to the six conventional risk factors significantly improved the accuracy of the prediction model (P = 0.015). Patients in the upper 50% of the PRS showed a higher frequency of repeat revascularization (hazard ratio = 2.19, 95% CI: 1.47–3.26) than the others. Conclusions The PRS using 265 early-onset AMI genomes showed improvement in the identification of patients in the Korean population and showed potential for genomic screening in early life to complement conventional risk prediction.
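A polygenic risk score of the kind evaluated here is conventionally computed as a weighted sum of risk-allele dosages across the scored variants. The sketch below shows that computation in generic form; the dosage matrix and weights are illustrative placeholders, not the study's CAD weights.

```python
# Minimal sketch of a polygenic risk score as a weighted sum of allele dosages.

import numpy as np

def polygenic_risk_score(dosages, weights):
    """dosages: (n_individuals, n_snps) risk-allele counts in {0, 1, 2};
    weights: (n_snps,) per-SNP effect sizes (e.g. log odds ratios from GWAS)."""
    return np.asarray(dosages, dtype=float) @ np.asarray(weights, dtype=float)

# Example with 3 individuals and 4 SNPs (illustrative numbers only)
dosages = [[0, 1, 2, 1], [2, 2, 0, 1], [1, 0, 1, 0]]
weights = [0.12, -0.05, 0.30, 0.08]
print(polygenic_risk_score(dosages, weights))
```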


2021 ◽  
Author(s):  
Yixuan He ◽  
Chirag M Lakhani ◽  
Danielle Rasooly ◽  
Arjun K Manrai ◽  
Ioanna Tzoulaki ◽  
...  

OBJECTIVE: To establish a polyexposure score for type 2 diabetes (T2D) incorporating 12 non-genetic exposures and to examine whether a polyexposure and/or a polygenic risk score improves diabetes prediction beyond traditional clinical risk factors. RESEARCH DESIGN AND METHODS: We identified 356,621 unrelated individuals from the UK Biobank of white British ancestry with no prior diagnosis of T2D and normal HbA1c levels. Using self-reported and hospital admission information, we deployed a machine learning procedure to select the most predictive and robust factors out of 111 non-genetically ascertained exposure and lifestyle variables for the polyexposure risk score (PXS) for incident T2D. We computed the clinical risk score (CRS) and polygenic risk score (PGS) by taking a weighted sum of eight established clinical risk factors and over six million SNPs, respectively. RESULTS: In the study population, 7,513 had incident T2D. The C-statistics for the PGS, PXS, and CRS models were 0.709, 0.762, and 0.839, respectively. Hazard ratios (HR) associated with risk score values in the top 10% versus the remaining population were 2.00, 5.90, and 9.97 for PGS, PXS, and CRS, respectively. Addition of PGS and PXS to CRS improved T2D classification accuracy with a continuous net reclassification index of 15.2% and 30.1% for cases, respectively, and 7.3% and 16.9% for controls, respectively. CONCLUSIONS: For T2D, the PXS provides modest incremental predictive value over established clinical risk factors. The concept of PXS merits further consideration in T2D risk stratification and is likely to be useful in other chronic disease risk prediction models.
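The continuous (category-free) net reclassification index reported above can be computed as in the sketch below, following the standard Pencina-style definition: predicted risk should move up for cases and down for controls when the new score is added. The predicted-probability inputs are illustrative placeholders, not study data.

```python
# Hedged sketch of the continuous net reclassification index (NRI) used to
# compare a baseline model (e.g. CRS) with an augmented model (e.g. CRS + PXS).

import numpy as np

def continuous_nri(p_old, p_new, is_case):
    """Return (event NRI, non-event NRI) as proportions."""
    p_old, p_new, is_case = map(np.asarray, (p_old, p_new, is_case))
    up = p_new > p_old        # predicted risk increased
    down = p_new < p_old      # predicted risk decreased
    cases, controls = is_case == 1, is_case == 0
    nri_events = up[cases].mean() - down[cases].mean()
    nri_nonevents = down[controls].mean() - up[controls].mean()
    return nri_events, nri_nonevents
```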


2019 ◽  
Vol 41 (7) ◽  
pp. e432-e437
Author(s):  
Aditi Dhir ◽  
Samantha DeMarsh ◽  
Archana Ramgopal ◽  
Sarah Worley ◽  
Moises Auron ◽  
...  
