Employing electronic health record data to predict risk of emergency department visits for new patients.

2018 ◽  
Vol 36 (30_suppl) ◽  
pp. 314-314 ◽  
Author(s):  
Robert Michael Daly ◽  
Dmitriy Gorenshteyn ◽  
Lior Gazit ◽  
Stefania Sokolowski ◽  
Kevin Nicholas ◽  
...  

314 Background: Acute care accounts for half of cancer expenditures and is a measure of poor-quality care. Identifying patients at high risk for emergency department (ED) visits enables institutions to target symptom management resources to those most likely to benefit. Risk stratification models developed to date have not been meaningfully employed in oncology, and there is a need for clinically relevant models to improve patient care. Methods: We established a predictive analytics framework for clinical use with attention to the modeling technique, clinician feedback, and application metrics. The model employs EHR data from the initial visit to the first antineoplastic administration for new patients at our institution from January 2014 to June 2017. The binary dependent variable is the occurrence of an ED visit within the first 6 months of treatment. From over 1,400 data features, the model was refined to include 400 clinically relevant ones spanning demographics, pathology, clinician notes, labs, medications, and psychosocial information. Clinician review was performed to confirm the validity of EHR data inputs. The final regularized multivariate logistic regression model was chosen based on clinical and statistical significance. Parameter selection and model evaluation used the positive predictive value for the top 25% of observations ranked by model-determined risk. The final model was evaluated using a test set containing 20% of the data, randomly held out, and was calibrated with a 5-fold cross-validation scheme over the training set. Results: There were 5,752 antineoplastic starts in our training set and 1,457 in our test set. The positive predictive value of this model for the top 25% riskiest new-start antineoplastic patients is 0.53. The 400 clinically relevant features draw from multiple areas in the EHR.
For example, features found to increase risk include combination chemotherapy, low albumin, social work needs, and opioid use, whereas features found to decrease risk include stage 1 disease, never-smoker status, and oral antineoplastic therapy. Conclusions: We have constructed a framework to build a clinically relevant model. We are now piloting it to identify those likely to benefit from a home-based, digital symptom management intervention.
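The evaluation metric above, the positive predictive value among the top 25% of observations ranked by model-determined risk, can be sketched as follows. This is a minimal illustration with a hypothetical function name and toy data, not the study's code:

```python
import numpy as np

def ppv_at_top_fraction(y_true, risk_scores, fraction=0.25):
    """PPV among the top `fraction` of observations ranked by risk."""
    y_true = np.asarray(y_true)
    scores = np.asarray(risk_scores)
    n_top = int(np.ceil(fraction * len(scores)))
    top_idx = np.argsort(scores)[::-1][:n_top]  # highest-risk patients first
    return float(y_true[top_idx].mean())

# Toy data (not from the study): 8 patients, outcome = ED visit within 6 months.
y = [1, 0, 1, 0, 1, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
print(ppv_at_top_fraction(y, scores))  # PPV among the top 2 of 8 patients
```

Optimizing for this metric rather than overall accuracy fits the clinical use case: only the highest-risk quarter of patients would be offered the intervention.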


2019 ◽  
Vol 37 (15_suppl) ◽  
pp. 6554-6554
Author(s):  
Robert Michael Daly ◽  
Dmitriy Gorenshteyn ◽  
Lior Gazit ◽  
Stefania Sokolowski ◽  
Kevin Nicholas ◽  
...  

6554 Background: Acute care accounts for half of cancer expenditures and is a measure of poor-quality care. Identifying patients at high risk for emergency department (ED) visits enables institutions to target resources to those most likely to benefit. Risk stratification models developed to date have not been meaningfully employed in oncology, and there is a need for clinically relevant models to improve patient care. Methods: We established and applied a predictive framework for clinical use with attention to modeling technique, clinician feedback, and application metrics. The model employs electronic health record data from the initial visit to the first antineoplastic administration for patients at our institution from January 2014 to June 2017. The binary dependent variable is the occurrence of an ED visit within the first 6 months of treatment. The final regularized multivariable logistic regression model was chosen based on clinical and statistical significance. To accommodate the needs of the program, parameter selection and model calibration were optimized for the positive predictive value of the top 25% of observations as ranked by model-determined risk. Results: There were 5,752 antineoplastic administration starts in our training set and 1,457 in our test set. The positive predictive value of this model for the top 25% riskiest new-start antineoplastic patients is 0.53. From over 1,400 data features, the model was refined to include 400 clinically relevant ones spanning demographics, pathology, clinician notes, labs, medications, and psychosocial information. At the patient level, the specific features determining risk are surfaced in a web application, RiskExplorer, to enable clinician review of individual patient risk. This physician-facing application provides the individual risk score for the patient as well as their quartile of risk relative to the population of new-start antineoplastic patients.
For the top quartile of patients, the risk of an ED visit within the first 6 months of treatment is greater than or equal to 49%. Conclusions: We have constructed a framework to build a clinically relevant risk model. We are now piloting it to identify those likely to benefit from a home-based, digital symptom management intervention.
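The quartile-of-risk display that RiskExplorer is described as providing can be approximated by ranking each patient's score against the scored population. A minimal sketch with hypothetical scores; the application's actual logic is not published here:

```python
import numpy as np

def risk_quartiles(risk_scores):
    """Quartile of risk (1 = lowest, 4 = highest) for each patient,
    relative to the scored population."""
    scores = np.asarray(risk_scores, dtype=float)
    cuts = np.percentile(scores, [25, 50, 75])  # quartile boundaries
    return 1 + np.searchsorted(cuts, scores, side="right")

# Hypothetical model outputs for 8 new-start patients.
scores = [0.05, 0.10, 0.20, 0.30, 0.45, 0.55, 0.70, 0.90]
print(risk_quartiles(scores))  # [1 1 2 2 3 3 4 4]
```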


Circulation ◽  
2019 ◽  
Vol 140 (11) ◽  
pp. 899-909 ◽  
Author(s):  
Martin P. Than ◽  
John W. Pickering ◽  
Yader Sandoval ◽  
Anoop S.V. Shah ◽  
Athanasios Tsanas ◽  
...  

Background: Variations in cardiac troponin concentrations by age, sex, and time between samples in patients with suspected myocardial infarction are not currently accounted for in diagnostic approaches. We aimed to combine these variables through machine learning to improve the assessment of risk for individual patients. Methods: A machine learning algorithm (myocardial-ischemic-injury-index [MI3]) incorporating age, sex, and paired high-sensitivity cardiac troponin I concentrations was trained on 3013 patients and tested on 7998 patients with suspected myocardial infarction. MI3 uses gradient boosting to compute a value (0–100) reflecting an individual's likelihood of a diagnosis of type 1 myocardial infarction and estimates the sensitivity, negative predictive value, specificity, and positive predictive value for that individual. Assessment was by calibration and area under the receiver operating characteristic curve. Secondary analysis evaluated example MI3 thresholds from the training set that identified patients as low risk (99% sensitivity) and high risk (75% positive predictive value), and performance at these thresholds was compared in the test set to the 99th percentile and European Society of Cardiology rule-out pathways. Results: Myocardial infarction occurred in 404 (13.4%) patients in the training set and 849 (10.6%) patients in the test set. MI3 was well calibrated, with a very high area under the receiver operating characteristic curve of 0.963 [0.956–0.971] in the test set and similar performance in early and late presenters. Example MI3 thresholds identifying low- and high-risk patients in the training set were 1.6 and 49.7, respectively. In the test set, MI3 values were <1.6 in 69.5% with a negative predictive value of 99.7% (99.5–99.8%) and sensitivity of 97.8% (96.7–98.7%), and were ≥49.7 in 10.6% with a positive predictive value of 71.8% (68.9–75.0%) and specificity of 96.7% (96.3–97.1%).
Using these thresholds, MI3 performed better than the European Society of Cardiology 0/3-hour pathway (sensitivity, 82.5% [74.5–88.8%]; specificity, 92.2% [90.7–93.5%]) and the 99th percentile at any time point (sensitivity, 89.6% [87.4–91.6%]; specificity, 89.3% [88.6–90.0%]). Conclusions: Using machine learning, MI3 provides an individualized and objective assessment of the likelihood of myocardial infarction, which can be used to identify low- and high-risk patients who may benefit from earlier clinical decisions. Clinical Trial Registration: URL: https://www.anzctr.org.au. Unique identifier: ACTRN12616001441404.
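The dual-threshold evaluation described above (sensitivity and NPV below a rule-out threshold, specificity and PPV at or above a rule-in threshold) can be sketched as follows. The patient data are toy values; only the 1.6 and 49.7 thresholds come from the abstract:

```python
import numpy as np

def rule_out_in_metrics(y_true, scores, low_thr, high_thr):
    """Sensitivity/NPV for a rule-out threshold and specificity/PPV
    for a rule-in threshold, as in dual-threshold triage."""
    y = np.asarray(y_true, dtype=bool)
    s = np.asarray(scores, dtype=float)
    low = s < low_thr     # classified low risk (rule-out)
    high = s >= high_thr  # classified high risk (rule-in)
    sensitivity = (y & ~low).sum() / y.sum()       # cases not ruled out
    npv = (~y & low).sum() / low.sum()             # ruled-out who are disease-free
    specificity = (~y & ~high).sum() / (~y).sum()  # non-cases not ruled in
    ppv = (y & high).sum() / high.sum()            # ruled-in who are cases
    return sensitivity, npv, specificity, ppv

# Toy scores for 6 patients (last 2 are true myocardial infarctions).
y = [0, 0, 0, 0, 1, 1]
s = [0.5, 1.0, 2.0, 60.0, 55.0, 70.0]
sens, npv, spec, ppv = rule_out_in_metrics(y, s, 1.6, 49.7)
print(f"sens={sens:.2f} npv={npv:.2f} spec={spec:.2f} ppv={ppv:.2f}")
```

Patients between the two thresholds fall into an observe zone and contribute to sensitivity and specificity but to neither predictive value.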


2020 ◽  
Vol 41 (S1) ◽  
pp. s39-s39
Author(s):  
Pontus Naucler ◽  
Suzanne D. van der Werff ◽  
John Valik ◽  
Logan Ward ◽  
Anders Ternhag ◽  
...  

Background: Healthcare-associated infection (HAI) surveillance is essential for most infection prevention programs, and continuous epidemiological data can be used to inform healthcare personnel, allocate resources, and evaluate interventions to prevent HAIs. Many HAI surveillance systems today are based on time-consuming and resource-intensive manual reviews of patient records. The objective of HAI-proactive, a Swedish triple-helix innovation project, is to develop and implement a fully automated HAI surveillance system based on electronic health record data. Furthermore, the project aims to develop machine-learning–based screening algorithms for early prediction of HAI at the individual patient level. Methods: The project is performed with support from Sweden's Innovation Agency in collaboration among academic, health, and industry partners. Development of rule-based and machine-learning algorithms is performed within a research database, which consists of all electronic health record data from patients admitted to the Karolinska University Hospital. Natural language processing is used for processing free-text medical notes. To validate algorithm performance, manual annotation was performed based on international HAI definitions from the European Centre for Disease Prevention and Control, the Centers for Disease Control and Prevention, and Sepsis-3 criteria. Currently, the project is building a platform for real-time data access to implement the algorithms within Region Stockholm. Results: The project has developed a rule-based surveillance algorithm for sepsis that continuously monitors patients admitted to the hospital, with a sensitivity of 0.89 (95% CI, 0.85–0.93), a specificity of 0.99 (0.98–0.99), a positive predictive value of 0.88 (0.83–0.93), and a negative predictive value of 0.99 (0.98–0.99).
The healthcare-associated urinary tract infection surveillance algorithm, which is based on free-text analysis and negations to define symptoms, had a sensitivity of 0.73 (0.66–0.80) and a positive predictive value of 0.68 (0.61–0.75). The sensitivity and positive predictive value of an algorithm based on significant bacterial growth in urine culture only were 0.99 (0.97–1.00) and 0.39 (0.34–0.44), respectively. The surveillance system detected differences in incidence between hospital wards and over time. Development of surveillance algorithms for pneumonia, catheter-related infections, and Clostridioides difficile infections, as well as machine-learning–based models for early prediction, is ongoing. We intend to present results from all algorithms. Conclusions: With access to electronic health record data, we have shown that it is feasible to develop a fully automated HAI surveillance system based on algorithms using both structured data and free text for the main healthcare-associated infections. Funding: Sweden's Innovation Agency and Stockholm County Council. Disclosures: None
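Confidence intervals like those quoted for sensitivity and PPV are commonly computed with a score-based interval for a binomial proportion. A sketch using the Wilson interval; the counts below are hypothetical, since the abstract reports proportions but not raw denominators:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a proportion such as sensitivity or PPV."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical counts: 178 of 200 true sepsis episodes flagged by the algorithm.
lo, hi = wilson_ci(178, 200)
print(f"sensitivity {178 / 200:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The Wilson interval behaves better than the naive Wald interval when the proportion is near 0 or 1, which is typical for the specificity and NPV values reported here.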


2011 ◽  
Vol 26 (S2) ◽  
pp. 1649-1649
Author(s):  
J. Stefansson ◽  
P. Nordström ◽  
J. Jokinen

Objective: To assess the predictive value of the Suicide Intent Scale in patients at high risk of suicide. The secondary aim was to assess whether use of the factors of the Suicide Intent Scale would offer better predictive value in case detection. Finally, a short version of the scale was created after an item analysis. Method: Eighty-one suicide attempters were assessed with Beck's Suicide Intent Scale (SIS). All patients were followed up for cause of death. Receiver operating characteristic (ROC) curves and tables were created to establish the optimal cut-off values for the SIS and SIS factors to predict suicide. Results: Seven patients committed suicide during a mean follow-up of 9.5 years. The major finding was that mean SIS distinguished between suicides and survivors. The positive predictive value was 16.7% and the AUC was 0.74. Only the planning subscale reached statistical significance. Four items were used to test a short version of the SIS for suicide prediction. The positive predictive value was 19% and the AUC was 0.82. Conclusions: The Suicide Intent Scale is a valuable tool in clinical suicide risk assessment; a short version of the scale may offer better predictive value.
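Optimal ROC cut-offs of the kind described for the SIS are commonly chosen by maximizing the Youden index. A minimal sketch with toy scores and outcomes; the study's actual cut-off procedure may differ:

```python
import numpy as np

def youden_cutoff(y_true, scores):
    """Cut-off maximizing the Youden index J = sensitivity + specificity - 1,
    scanning every observed score as a candidate threshold."""
    y = np.asarray(y_true, dtype=bool)
    s = np.asarray(scores, dtype=float)
    best_thr, best_j = None, -np.inf
    for thr in np.unique(s):
        pred = s >= thr  # scores at or above the threshold predict suicide
        sens = (pred & y).sum() / y.sum()
        spec = (~pred & ~y).sum() / (~y).sum()
        if sens + spec - 1 > best_j:
            best_thr, best_j = thr, sens + spec - 1
    return float(best_thr), float(best_j)

# Hypothetical SIS totals; 1 = died by suicide during follow-up.
outcome = [0, 0, 0, 1, 0, 1, 1]
sis = [5, 8, 10, 14, 12, 20, 25]
print(youden_cutoff(outcome, sis))  # best threshold and its Youden J
```

With only 7 events among 81 attempters, any cut-off chosen this way should be interpreted cautiously, which is consistent with the modest positive predictive values reported.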


Blood ◽  
2007 ◽  
Vol 110 (11) ◽  
pp. 3304-3304
Author(s):  
Aaron D. Viny ◽  
Hemant Ishwaran ◽  
Andrew Dunbar ◽  
Bartlomiej Przychodzen ◽  
Thomas Loughran ◽  
...  

Abstract Large granular lymphocyte leukemia (LGL) is a disease of semiautonomous proliferation of cytotoxic T-cells (CTL) often accompanied by immune cytopenias, particularly neutropenia. LGL-related cytopenias have been attributed to LGL cellular cytotoxicity or proapoptotic cytokines rather than intrinsic properties of the neutrophils. The association of LGL with autoimmunity suggests that genetic predisposition may contribute to disease pathogenesis. We studied 69 patients with LGL leukemia using a case-control approach; control populations included ethnically matched healthy individuals (N=82) and disease controls of aplastic anemia (N=48) and kidney transplant recipients (N=48). Initially, we applied the Illumina 12K non-synonymous SNP array to a subcohort of 36 LGL patients and 54 healthy controls (training set). Results were subjected to independent hypothesis-generating biostatistical algorithms. First, Exemplar automated analysis determined disease prediction based on independent χ2 analysis for each SNP. As expected, no SNP in this underpowered study reached Bonferroni-corrected statistical significance, but our analysis allowed for ranking based on p-value. Second, Random Forests, a nonparametric tree method, was applied, whereby all SNP information was calculated multivariately to predict disease. In a non-Mendelian inherited disease, this method more closely reflects the biology of complex polygenic traits; remarkably, the SNPs identified by Random Forests were among the highest-ranking SNPs by Exemplar. Our initial hypothesis-generating set identified 1 SNP in the uncharacterized gene C8orf31 and 4 SNPs within the MHC class I chain-related A (MICA) gene. We focused on MICA, a non-peptide-presenting, tightly regulated stress-response HLA molecule that could play a role in the pathogenesis of neutropenia in LGL.
To further substantiate our finding, the initial training set results were subjected to technical validation; fidelity was rechecked by PCR genotyping with 93% concordance. Biological validation was determined by confirmation in an independent test set consisting of 33 LGL patients and an additional 28 controls. As only a limited number of SNPs were tested, there was no need for α-error adjustment. MICA SNP rs1063635 was found to have the most predictive value in both the training set (PPV=56%, NPV=89%) and test set (PPV=64%, NPV=86%). Overall, the control frequency of this SNP in homozygous form was 12% vs 60% in LGL (p<.01, OR=9.1). MICA alleles have been implicated in autoimmune diseases and malignancies. Although this SNP may not define a particular MICA genotype, it is possible that it is in linkage disequilibrium with genotype-defining polymorphisms. To study the functional consequences of our findings, flow cytometric analysis using anti-MICA antibodies was performed, which identified higher expression of MICA in neutrophils from patients as compared to controls (p=.04). MICA overexpression decreased after immunosuppressive therapy (p<.01). While the mechanism of MICA induction is unknown, we postulate that the presence of MICA alleles leads to a persistent stimulatory signal in LGL predisposing to clonal outgrowth. In sum, our findings suggest that MICA polymorphisms may represent a predisposition factor in LGL and/or LGL-associated neutropenia.
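Case-control odds ratios like the one reported for rs1063635 are computed from a 2x2 table of genotype counts. A sketch with Woolf's confidence interval; the counts below are hypothetical illustrations, not the study's data:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table with a Woolf 95% CI.
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts: SNP homozygous in 20/33 cases vs 10/82 controls.
or_, (lo, hi) = odds_ratio(20, 13, 10, 72)
print(f"OR={or_:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
```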


Author(s):  
Lusha W. Liang ◽  
Michael A. Fifer ◽  
Kohei Hasegawa ◽  
Mathew S. Maurer ◽  
Muredach P. Reilly ◽  
...  

Background - Genetic testing can determine family screening strategies and has prognostic and diagnostic value in hypertrophic cardiomyopathy (HCM). However, it can also pose a significant psychosocial burden. Conventional scoring systems offer modest ability to predict genotype positivity. The aim of our study was to develop a novel prediction model for genotype positivity in patients with HCM by applying machine learning (ML) algorithms. Methods - We constructed three ML models using readily available clinical and cardiac imaging data of 102 patients from Columbia University with HCM who had undergone genetic testing (the training set). We validated model performance on 76 patients with HCM from Massachusetts General Hospital (the test set). Within the test set, we compared the area under the receiver operating characteristic curves (AUCs) for the ML models against the AUCs generated by the Toronto HCM Genotype Score ("the Toronto score") and Mayo HCM Genotype Predictor ("the Mayo score") using the Delong test and net reclassification improvement (NRI). Results - Overall, 63 of the 178 patients (35%) were genotype positive. The random forest ML model developed in the training set demonstrated an AUC of 0.92 (95% CI 0.85-0.99) in predicting genotype positivity in the test set, significantly outperforming the Toronto score (AUC 0.77, 95% CI 0.65-0.90, p=0.004, NRI: p<0.001) and the Mayo score (AUC 0.79, 95% CI 0.67-0.92, p=0.01, NRI: p=0.001). The gradient boosted decision tree ML model also achieved significant NRI over the Toronto score (p<0.001) and the Mayo score (p=0.03), with an AUC of 0.87 (95% CI 0.75-0.99). Compared to the Toronto and Mayo scores, all three ML models had higher sensitivity, positive predictive value, and negative predictive value. Conclusions - Our ML models demonstrated a superior ability to predict genotype positivity in patients with HCM compared to conventional scoring systems in an external validation test set.


EP Europace ◽  
2021 ◽  
Vol 23 (Supplement_3) ◽  
Author(s):  
A Badiul ◽  
C Iorgulescu ◽  
S Bogdan ◽  
A Radu ◽  
S Paja ◽  
...  

Abstract Funding Acknowledgements Type of funding sources: None. Introduction: Catheter ablation of accessory pathways (AP) located in the posterior pyramidal space is often challenging due to the region's anatomical complexity. Scarce data are available on the ECG features that might indicate when an epicardial approach is required in the ablation of posteroseptal AP. Objective: The purpose of this retrospective study was to describe the electrocardiographic features of posteroseptal AP that were successfully ablated with an epicardial approach and to identify electrocardiographic predictors of epicardial AP location. Methods: The 12-lead ECGs of 75 patients with posteroseptal accessory pathways who were successfully ablated were retrospectively analyzed. ECG features for epicardial location already described in published studies were considered (negative delta wave in DII, positive delta wave in aVR, high-amplitude S wave in V6). Additionally, the characteristics of the initial 40 ms of the delta wave in lead V1 (measured from the earliest QRS deflection in the 12 leads) during full pre-excitation were investigated. Results: Of the 75 patients with posteroseptal AP who underwent catheter ablation, 40 (53.3%) had successful epicardial ablation. An initial isoelectric or biphasic delta wave in lead V1 showed the highest sensitivity (82.5%), positive predictive value (97%), and specificity (97%) for an epicardial location of the AP. A deep S wave in V6 showed lower sensitivity (37.5%) and positive predictive value (68%) but higher specificity (80%) for an epicardial location of the AP. The specificity and sensitivity for epicardial location of a negative delta wave in DII were lower and failed to reach statistical significance.
Conclusion: This study shows that an initially isoelectric or biphasic delta wave in lead V1 has higher specificity, sensitivity, and positive predictive value than previously described ECG markers for epicardial location of posteroseptal accessory pathways.


2021 ◽  
Vol 8 (Supplement_1) ◽  
pp. S264-S264
Author(s):  
Yesha Malik ◽  
Amy Dupper ◽  
Jaclyn Cusumano ◽  
Dhruv Patel ◽  
Kathryn Twyman ◽  
...  

Abstract Background: Candidemia is a rare but serious complication of SARS-CoV-2 hospitalization. Combining non-culture- and culture-based diagnostics allows earlier identification of candidemia. Given the higher reported incidence during COVID-19 surges, we investigated the use of the (1-3)-β-D-glucan (BDG) assay at our institution in those who did and did not develop candidemia. Methods: Retrospective study of adults admitted to The Mount Sinai Hospital between March 15 and June 30, 2020 for SARS-CoV-2 infection, with either ≥1 BDG assay or a positive fungal blood culture. Data were collected from the electronic medical record and Vigilanz. A BDG value ≥80 was used as the positivity cutoff. Differences in mortality were assessed by univariate logistic regression using R (version 4.0.0). Statistical significance was defined as a P value < .05. Results: There were 75 patients with ≥1 BDG assay resulted and 28 patients with candidemia, with an overlap of 9 between the cohorts. Among the 75 who had a BDG assay, 23 resulted positive and 52 negative. Nine of the 75 patients developed candidemia. Of the 23 with a positive assay, 5 developed candidemia and 18 did not. Seventeen of the 18 had blood cultures drawn within +/- 7 days of the BDG assay. Four patients with candidemia had persistently negative BDG; 2 had cultures collected within +/- 7 days of the BDG assay. With a cutoff of >80, the negative predictive value (NPV) was 0.92. When the cutoff was increased to >200, the NPV was 0.97 and the positive predictive value (PPV) was 0.42. The average number of antifungal days in patients with a negative BDG was 2.6 vs. 4.2 in those with a positive result. Mortality was 74% in those with ≥1 positive BDG vs. 50% in those with persistently negative BDGs. There was a trend toward higher odds of death in those with a positive BDG (OR = 2.83, 95% CI: 1.00-8.90, p < 0.06). Conclusion: There was substantial use of BDG to diagnose candidemia at the peak of the COVID-19 pandemic.
Blood cultures were often drawn at the time of suspected candidemia but not routinely. When cultures and BDG were drawn together, BDG had a high NPV but low PPV. The high NPV of BDG likely contributed to discontinuation of empiric antifungals. The candidemic COVID-19 patients had high mortality, so further investigation of algorithms for the timely diagnosis of candidemia is needed to optimize the use of antifungals while improving mortality rates. Disclosures: All Authors: No reported disclosures


2020 ◽  
Vol 4 (Supplement_1) ◽  
Author(s):  
David Tyler Broome ◽  
Robert Naples ◽  
Richard Bailey ◽  
James F Bena ◽  
Joseph Scharpf ◽  
...  

Abstract Primary hyperparathyroidism is characterized by excessive, dysregulated production of parathyroid hormone (PTH) by 1 or more abnormal parathyroid glands. Preoperative localization is important for surgical planning in primary hyperparathyroidism. It has previously been published that ultrasound (sensitivity of 76.1%, positive predictive value of 93.2%) and nuclear scintigraphy (Sestamibi-SPECT) (sensitivity of 78.9%, positive predictive value of 90.7%) are first-line imaging modalities [1]. Currently, the imaging modality of choice varies according to region and institutional protocol. The aim of this study was to evaluate the imaging modality associated with an improved remission rate based on concordance with operative findings. A secondary aim was to determine the effect of additive imaging on remission rates. This was an IRB-approved retrospective review of 2657 patients with primary hyperparathyroidism undergoing surgery at a tertiary referral center from 2004–2017. Analyses were performed with SAS software using a 95% confidence interval (p<0.05) for statistical significance. After excluding re-operative and familial cases, 2079 patients met study criteria. There were 422 (20.3%) male and 1657 (79.7%) female patients with a mean age of 66 (±12.2) years, of whom 1723 (82.9%) were white and 294 (14.1%) were black. Ultrasound (US) was performed in 1891 (91.9%), sestamibi with SPECT (sestamibi/SPECT) in 1945 (93.6%), and CT in 98 (4.7%) patients. Of these, 1721 (82.8%) had combined US and sestamibi/SPECT. US was surgeon-performed in 94.2% of cases, and 89.9% of the patients underwent a four-gland exploration. Overall, US concordance was 52.4%, sestamibi/SPECT was 45.5%, and CT was 45.9%. US and sestamibi/SPECT both had an improved remission rate if concordant with operative findings, while CT had no effect (US p=0.04; sestamibi/SPECT p=0.01; CT p=0.50).
The overall remission rate was 94% (CI=0.93–0.95); however, increasing the number of imaging modalities performed did not increase the remission rate (p=0.76) or concordance with operative findings (p=0.05). Despite having low concordance rates, US and sestamibi/SPECT that agreed with operative findings were associated with higher remission rates. Therefore, when imaging is to be used for localization, our data support the use of US and sestamibi/SPECT as the initial imaging modalities of choice for preoperative localization. [1] Kuzminski SJ, Sosa JA, Hoang JK. Update in Parathyroid Imaging. Magn Reson Imaging Clin N Am. 2018;26(1):151–166.

