Machine learning-based approach to the risk assessment of potentially preventable outpatient cancer treatment-related emergency care and hospitalizations.

2021 ◽  
Vol 39 (28_suppl) ◽  
pp. 333-333
Author(s):  
Kevin Miao ◽  
Justice Dahle ◽  
Sasha Yousefi ◽  
Bilwa Buchake ◽  
Parambir Kaur ◽  
...  

333 Background: Patients undergoing outpatient infusion chemotherapy for cancer are at risk for potentially preventable, unplanned acute care in the form of emergency department (ED) visits and hospital admissions. This can impact outcomes, patient decisions, and costs to the patient and healthcare system. To address this need, the Centers for Medicare & Medicaid Services developed the Chemotherapy Measure (OP-35). Recent randomized controlled data indicate that electronic health record (EHR)-based machine learning (ML) approaches can accurately direct supportive care to reduce acute care during radiotherapy. As this may extend to systemic therapy, this study aimed to develop and evaluate ML approaches for predicting the risk of OP-35-qualifying, potentially preventable acute care within 30 days of infusional systemic therapy. Methods: This study included data from UCSF cancer patients receiving infusional chemotherapy from July 1, 2017 to February 11, 2021 (7,068 patients; 84,174 treatments). The ML models drew on 430 EHR-derived variables, including cancer diagnosis, therapeutic agents, laboratory values, vital signs, medications, and encounter history. Three ML approaches were trained to predict the risk of OP-35-qualifying acute care following a systemic therapy infusion: least absolute shrinkage and selection operator (LASSO) logistic regression, random forest, and gradient boosted trees (GBT; XGBoost). The models were trained on a subset of the dataset (75% of patients; before October 12, 2019) and validated on a mutually exclusive subset (25% of patients; after October 12, 2019) using receiver operating characteristic (ROC) curves and calibration plots. Results: There were 1,651 total acute care visits (244 ED visits and 1,407 ED visits converted into hospitalization); 1,310 infusions included a qualifying acute care visit (200 with ED visits only, 0 direct hospital admissions, and 1,110 with both an ED visit and hospitalization).
Each ML approach demonstrated good performance in the internal validation cohort, with GBT (AUC 0.805) outpacing the random forest (0.750) and LASSO logistic regression (0.755) approaches. Visualization of calibration plots verified concordance between predicted and observed rates of acute care. All three models shared patient age and days elapsed since last treatment as important contributors. Conclusions: EHR-based ML approaches demonstrate high predictive ability for OP-35 qualifying acute care rates on a per-infusion basis, identifying 30-day potentially preventable acute care risk for patients undergoing chemotherapy. Prospective validation of these models is ongoing. Early prediction can facilitate interventional strategies which may reduce acute care, improve health outcomes, and reduce costs.
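The three-model comparison described above can be sketched as follows. This is a minimal illustration on synthetic data, not the study's pipeline: the feature matrix, hyperparameters, and the use of sklearn's GradientBoostingClassifier as a stand-in for XGBoost are all assumptions for demonstration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 430 EHR-derived variables; outcome is rare,
# mirroring the low per-infusion acute-care rate
X, y = make_classification(n_samples=4000, n_features=50, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
# 75%/25% split (the study split temporally by treatment date)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "LASSO": LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "GBT": GradientBoostingClassifier(random_state=0),  # stand-in for XGBoost
}
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    print(f"{name}: AUC = {aucs[name]:.3f}")
```

In practice a calibration plot (e.g. sklearn's `calibration_curve`) would accompany the AUCs, since the study verified concordance between predicted and observed acute-care rates.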

2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 1511-1511
Author(s):  
Dylan J. Peterson ◽  
Nicolai P. Ostberg ◽  
Douglas W. Blayney ◽  
James D. Brooks ◽  
Tina Hernandez-Boussard

1511 Background: Acute care use is one of the largest drivers of cancer care costs. OP-35: Admissions and Emergency Department Visits for Patients Receiving Outpatient Chemotherapy is a CMS quality measure that will affect reimbursement based on unplanned inpatient admissions (IP) and emergency department (ED) visits. Targeted measures can reduce preventable acute care use, but identifying which patients might benefit remains challenging. Prior predictive models have used only a limited subset of the data available in the Electronic Health Record (EHR). We hypothesized that dense, structured EHR data could be used to train machine learning algorithms to predict the risk of preventable ED and IP visits. Methods: Patients treated at Stanford Health Care and affiliated community care sites between 2013 and 2015 who met inclusion criteria for OP-35 were selected from our EHR. Preventable ED or IP visits were identified using OP-35 criteria. Demographic, diagnosis, procedure, medication, laboratory, vital sign, and healthcare utilization data generated prior to chemotherapy treatment were obtained. A random split of 80% of the cohort was used to train a logistic regression model with least absolute shrinkage and selection operator (LASSO) regularization to predict the risk of acute care events within the first 180 days of chemotherapy. The remaining 20% were used to measure model performance by the area under the receiver operating characteristic curve (AUROC). Results: 8,439 patients were included, of whom 35% had one or more preventable events within 180 days of starting chemotherapy. Our LASSO model classified patients at risk for preventable ED or IP visits with an AUROC of 0.783 (95% CI: 0.761-0.806). Model performance was better for identifying risk for IP visits than for ED visits. LASSO selected 125 of 760 possible features to use when classifying patients. These included prior acute care visits, cancer stage, race, laboratory values, and a diagnosis of depression.
Key features for the model are shown in the table. Conclusions: Machine learning models trained on a large number of routinely collected clinical variables can identify patients at risk for acute care events with promising accuracy. These models have the potential to improve cancer care outcomes, patient experience, and costs by allowing for targeted preventative interventions. Future work will include prospective and external validation in other healthcare systems.[Table: see text]
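The sparse-selection behavior that let LASSO keep 125 of 760 features can be illustrated with a short sketch. Synthetic data and the regularization strength `C` are assumptions; the point is that L1 regularization zeroes out most coefficients, yielding an interpretable feature subset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the structured EHR feature matrix
X, y = make_classification(n_samples=5000, n_features=200, n_informative=15,
                           random_state=1)
# 80/20 train/test split, as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# L1 (LASSO) regularization drives most coefficients to exactly zero,
# effectively selecting a sparse subset of predictors
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
lasso.fit(X_tr, y_tr)

n_selected = int(np.sum(lasso.coef_ != 0))
auroc = roc_auc_score(y_te, lasso.predict_proba(X_te)[:, 1])
print(f"features selected: {n_selected} of {X.shape[1]}, AUROC = {auroc:.3f}")
```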


2021 ◽  
Author(s):  
Yiqi Jack Gao ◽  
Yu Sun

The start of 2020 marked the beginning of the deadly COVID-19 pandemic caused by the novel SARS-CoV-2 virus originating in Wuhan, China. As of the time of writing, the virus had infected over 150 million people worldwide and caused more than 3.5 million deaths. Accurate predictions made with machine learning algorithms can guide hospitals and policy makers in making adequate preparations and enacting effective policies to combat the pandemic. This paper takes a two-pronged approach to analyzing COVID-19. First, it uses the feature importances of a random forest regressor to select the eight most significant predictors of daily increases in COVID-19 cases (date, new tests, weekly hospital admissions, population density, total tests, total deaths, location, and total cases), highlighting potential target areas for efficient pandemic responses. Second, it applies machine learning algorithms such as linear regression, polynomial regression, and random forest regression to this diverse set of predictors, producing daily COVID-19 case predictions with reasonable accuracy.
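The selection step described above can be sketched briefly. This uses synthetic regression data rather than the paper's COVID-19 dataset; the feature count and model settings are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the COVID-19 predictor table
X, y = make_regression(n_samples=500, n_features=20, n_informative=8,
                       random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X, y)

# Rank predictors by impurity-based importance and keep the top eight,
# mirroring the paper's selection of its eight most significant predictors
top8 = np.argsort(rf.feature_importances_)[::-1][:8]
X_reduced = X[:, top8]  # reduced design matrix for the downstream regressors
print("top-8 feature indices:", top8)
```

The reduced matrix would then be fed to the linear, polynomial, and random forest regressors for the actual case-count predictions.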


2021 ◽  
Vol 38 (9) ◽  
pp. A5.3-A6
Author(s):  
Thilo Reich ◽  
Adam Bancroft ◽  
Marcin Budka

Background: Recording practices for ambulance crews' electronic patient records are continuously developing. In 2019, South Central Ambulance Service (SCAS) adapted the common AVPU scale (Alert, Voice, Pain, Unresponsive) to include an option for 'New Confusion'. Moving to this new AVCPU scale made comparisons with older data impossible. We demonstrate a method to retrospectively classify patients into the alertness levels most influenced by this update. Methods: SCAS provided ~1.6 million electronic patient records, including vital signs, demographics, and presenting-complaint free text. These were split into training, validation, and testing datasets (80%, 10%, and 10% respectively) and undersampled to the minority class. The data were used to train and validate predictions of the classes most affected by the modification of the scale (Alert, New Confusion, Voice). A transfer-learning natural language processing (NLP) classifier, based on the language model described by Smerity et al. (2017), was used to classify the presenting-complaint free text. A second approach used vital signs, demographics, conveyance, and assessments (30 metrics) for classification; categorical data were binary encoded and continuous variables were normalised. Twenty machine learning algorithms were empirically tested, and the best three vital-sign-based algorithms (Random Forest, Extra Tree Classifier, Decision Tree) were combined with the NLP classifier into a voting ensemble using a Random Forest output layer. Results: The ensemble method achieved a weighted F1 of 0.78 on the test set. The sensitivities/specificities for the classes were 84%/90% (Alert), 73%/89% (Newly Confused), and 68%/93% (Voice). Conclusions: The ensemble combining free text and vital signs reclassified the alertness levels of prehospital patients with high sensitivity and specificity.
This study demonstrates the capabilities of machine learning classifiers to recover missing data, allowing the comparison of data collected with different recording standards.
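An ensemble with a learned "output layer" over base classifiers, as described above, corresponds to stacking. The sketch below is a simplified, synthetic-data analogue: it stacks the three tree-based models with a Random Forest meta-learner, but omits the NLP branch, and all sizes and hyperparameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 30 vital-sign/demographic metrics; three
# classes mirror the Alert / New Confusion / Voice targets
X, y = make_classification(n_samples=3000, n_features=30, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

base = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("extra", ExtraTreesClassifier(n_estimators=100, random_state=0)),
    ("tree", DecisionTreeClassifier(random_state=0)),
]
# Random Forest "output layer": a forest learns to combine the base
# models' cross-validated predictions
ensemble = StackingClassifier(
    estimators=base,
    final_estimator=RandomForestClassifier(random_state=0))
ensemble.fit(X_tr, y_tr)

f1 = f1_score(y_te, ensemble.predict(X_te), average="weighted")
print(f"weighted F1: {f1:.3f}")
```

In the study, the NLP classifier's outputs would enter the stack as a fourth base estimator.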


2021 ◽  
Vol 11 (5) ◽  
pp. 343
Author(s):  
Fabiana Tezza ◽  
Giulia Lorenzoni ◽  
Danila Azzolina ◽  
Sofia Barbar ◽  
Lucia Anna Carmela Leone ◽  
...  

The present work aims to identify predictors of COVID-19 in-hospital mortality by testing a set of machine learning techniques (MLTs) and comparing their ability to predict the outcome of interest. The model with the best performance is then used to identify in-hospital mortality predictors and to build an in-hospital mortality prediction tool. The study involved patients with PCR-confirmed COVID-19 admitted to the “Ospedali Riuniti Padova Sud” COVID-19 referral center in the Veneto region, Italy. The algorithms considered were the recursive partition tree (RPART), the support vector machine (SVM), the gradient boosting machine (GBM), and random forest. Resampled performance was reported for each MLT, considering sensitivity, specificity, and receiver operating characteristic (ROC) curve measures. The study enrolled 341 patients. The median age was 74 years, and most patients were male. The random forest algorithm outperformed the other MLTs in predicting in-hospital mortality, with an area under the ROC curve of 0.84 (95% C.I. 0.78–0.9). Age, together with vital signs (oxygen saturation and the quick SOFA score) and laboratory parameters (creatinine, AST, lymphocytes, platelets, and hemoglobin), were found to be the strongest predictors of in-hospital mortality. The present work provides insights for predicting in-hospital mortality of COVID-19 patients using a machine learning algorithm.
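A head-to-head comparison of candidate MLTs via resampling, as performed above, can be sketched like this. The data are synthetic (matching only the cohort size), the models are sklearn analogues of the R implementations the study likely used, and the fold count is an assumption.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 341-patient cohort with an imbalanced outcome
X, y = make_classification(n_samples=341, n_features=12, n_informative=6,
                           weights=[0.7, 0.3], random_state=0)

candidates = {
    "RPART": DecisionTreeClassifier(random_state=0),  # CART analogue of rpart
    "SVM": SVC(),
    "GBM": GradientBoostingClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}
# Cross-validated AUC as the resampled performance measure
scores = {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
print({k: round(v, 3) for k, v in scores.items()}, "best:", best)
```

The best-performing model would then be refit on the full cohort to extract variable importances, as the study did with random forest.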


Author(s):  
Colin Weaver ◽  
Kerry McBrien ◽  
Tolu Sajobi ◽  
Paul E Ronksley ◽  
Brendan Lethebe ◽  
...  

Introduction: Risk prediction models can be used to inform decision-making in clinical settings. With large and detailed electronic medical record data, machine learning may improve predictions. The objective of this work is to determine the feasibility and accuracy of machine learning versus logistic regression for predicting unplanned hospital admissions. Objectives and Approach: Data from primary care electronic medical records for community-dwelling adults in Alberta, Canada, available from the Canadian Primary Care Sentinel Surveillance Network, will be linked to acute care administrative health data held by Alberta Health Services. Two regression methods (forward stepwise logistic, LASSO logistic) will be compared with three machine learning methods (classification tree, random forest, gradient boosted trees). Prior primary and acute care use will be used to predict three outcomes: ≥1 unplanned admission within 1 year, ≥1 unplanned admission within 90 days, and ≥1 unplanned admission within 1 year due to an ambulatory care sensitive condition. Results: The results of this work in progress will be presented at the conference. 41,142 patients will have their primary and acute care data linked. We anticipate that the machine learning methods will improve predictive performance but will be more challenging for clinicians and patients to understand, including why a given patient is predicted to be at higher risk. The primary comparison of machine learning and regression methods will be based on positive predictive values at the top 5% predicted-risk threshold, estimated via 10-fold cross-validation. Conclusion/Implications: This project aims to help researchers decide which statistical methods to use for risk prediction models. When considering machine learning methods, the best approach may be to try multiple methods, compare their predictive accuracy and interpretability, and then choose a final method.
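The planned primary metric, positive predictive value among the top 5% of predicted risk under 10-fold cross-validation, can be computed as in the following sketch. The data and model are synthetic stand-ins for the linked primary/acute-care records.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for linked primary-care and acute-care features
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           weights=[0.85, 0.15], random_state=0)

# Out-of-fold predicted risks from 10-fold cross-validation, so each
# patient's risk comes from a model that never saw them in training
risk = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                         cv=10, method="predict_proba")[:, 1]

# PPV at the top-5% risk threshold: of the patients flagged as
# highest-risk, what fraction actually had an unplanned admission?
cutoff = np.quantile(risk, 0.95)
flagged = risk >= cutoff
ppv = y[flagged].mean()
print(f"flagged {flagged.sum()} patients, PPV = {ppv:.2f}")
```

This threshold-based PPV reflects the clinical use case: a program with capacity to intervene on only the riskiest 5% of patients.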


Blood ◽  
2020 ◽  
Vol 136 (Supplement 1) ◽  
pp. 6-7
Author(s):  
Jonathan B Hurst ◽  
Richard Gentry Wilkerson

Background: Sickle cell disease (SCD) consists of a group of hemoglobinopathies inherited in an autosomal recessive pattern whereby a single point mutation results in the formation of a hemoglobin protein with altered structure. Many of the complications of SCD and their end organ manifestations are the result of a vaso-occlusive process. These include acute chest syndrome, dactylitis, myocardial infarction, stroke, venous thromboemboli, avascular necrosis, and acute vaso-occlusive episodes (VOEs). VOEs are the most common reason for a patient with SCD to seek medical attention. This care is often provided at an emergency department (ED). It has been well documented that the management of VOEs is often delayed and fails to follow published guidelines. Numerous efforts have been undertaken to ensure appropriate and timely analgesic administration to patients with SCD who are experiencing a VOE. One such intervention is the creation of an infusion center (IC) that has the capability to administer parenteral opioids while avoiding the delays associated with an ED visit. Objectives: This study aims to evaluate the impact of a dedicated IC that was established for the treatment of SCD VOEs. The goal of the IC is to provide timely and appropriate pain management in an effort to reduce ED visits and hospital admissions related to treatment of VOEs in patients with SCD. The IC was available to adult patients with SCD who regularly sought care at our hospital and who did not have a care plan that excluded the administration of parenteral opioids. Methods: This is an observational, retrospective study comparing the rates of hospital utilization before and after the opening of a dedicated IC for patients treated for SCD VOE at a single, urban medical center that regularly provides care for approximately 150 adult patients with SCD. We compared the rates of ED visits, hospital admissions, and length of stay for six months prior to and four months following the opening of the IC.
Hospital utilization was standardized before and after the intervention using 30-day rates. Additionally, opioid usage, measured in morphine equivalent dose (MED), was compared between the ED and the IC. Results: A total of 12 patients (Table 1) utilized the IC during the 4 months after its opening (6/20/19 - 10/16/19). During this time there were 92 total visits to the IC. Four patients were noted to be high utilizers, accounting for 77 (83.7%) of the 92 visits (median = 20 visits, range 12 - 25). The other 8 patients were low utilizers and accounted for 15 (16.3%) of the visits (median = 2, range 1 - 4). Following implementation of the IC, there was a statistically significant decrease in ED visits (pre- = 3.97/30d vs post- = 2.40/30d; p = 0.04) (Table 2, Fig. 1). No significant difference was found in hospital admissions (pre- = 1.47/30d vs post- = 1.17/30d; p = 0.18) or inpatient days (pre- = 6.47/30d vs post- = 5.47/30d; p = 0.23). The total number of acute care visits (sum of ED and IC visits) increased after the opening of the IC, although the change was not statistically significant (pre- = 3.97/30d vs post- = 5.47/30d; p = 0.07). The change in acute care visits was largely driven by an increase in visits from the high utilizers (pre- = 2.00/30d vs post- = 3.53/30d; p = 0.05). In terms of parenteral opioid administration, there was a statistically significant decrease in the amount of opioids given in the IC compared to the ED (ED = 251.64 MED vs IC = 177.17 MED; p = 0.04), although this was only seen in the low utilizer group (Table 3, Fig. 2). There was no significant difference in opioid doses received for the high utilizers (ED = 256.31 MED vs IC = 272.12 MED; p = 0.24) or for the group as a whole (ED = 253.34 MED vs IC = 208.84 MED; p = 0.10).
Conclusion: The introduction of an IC for the management of SCD VOE led to a significant decrease in ED visits but also to an increase in overall acute care visits, although the latter was not statistically significant. This increase was largely driven by a subset of this population considered high utilizers. Additionally, the use of the IC was not associated with a decrease in the total amount of parenteral opioids administered; however, for the low utilizer group there was a decrease in parenteral opioid administration. The IC did not reduce admissions or duration of hospitalization in this population. Overall, the IC had variable success, and further refinement of how it is used should be undertaken to ensure quality care for patients with SCD. Disclosures No relevant conflicts of interest to declare.
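The 30-day standardization used to compare unequal observation windows is simple arithmetic, sketched below. The counts and day totals here are hypothetical illustrations, not the study's raw data.

```python
# Standardize utilization counts to 30-day rates so unequal observation
# windows (six months pre vs. four months post) are directly comparable
def rate_per_30d(count, days_observed):
    return count / days_observed * 30

# Hypothetical counts for illustration (not the study's raw data)
pre_ed, pre_days = 24, 181     # ED visits over ~6 months pre-IC
post_ed, post_days = 10, 122   # ED visits over ~4 months post-IC

print(round(rate_per_30d(pre_ed, pre_days), 2),   # pre-IC rate /30d
      round(rate_per_30d(post_ed, post_days), 2))  # post-IC rate /30d
```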


CJEM ◽  
2016 ◽  
Vol 18 (S1) ◽  
pp. S46-S46
Author(s):  
G. Innes ◽  
J. Andruchow ◽  
A. McRae ◽  
T. Junghans ◽  
E. Lang

Introduction: Most patients with acute renal colic are discharged from the ED after initial diagnosis and symptom control, but 20-30% require repeat ED visits for ongoing pain, and 15-25% require rescue intervention (ureteroscopic intervention or lithotripsy). If patients destined for failure of outpatient management could be identified based on information available during their ED visits, they could be prioritized early for intervention to reduce short term pain and disability. Our objective was to identify predictors of outpatient treatment failure, defined as the need for hospitalization or rescue intervention within 60 days of ED discharge. Methods: We collated prospectively gathered administrative data from all Calgary region patients with an ED diagnosis of renal colic over a one-year period. Demographics, arrival mode, triage category, vital signs, pain scores, analgesic use and ED disposition were recorded. Research assistants reviewed imaging reports and documented stone characteristics. These data were linked with regional hospital databases to identify ED revisits, hospital admissions, and surgical procedures. The primary outcome was hospitalization or rescue intervention within 60 days of ED discharge. Results: Of 3104 patients with first ED visit for acute renal colic, 1296 had CT or US imaging and were discharged without intervention. Median age was 50 years and 69% were male. 325 patients (25.1%) required an ED re-visit and 11.8% required admission or rescue intervention. Patients with small (<5mm), medium (5-7mm) and large (>7mm) stones failed in 9.0%, 14.4% and 9.9% of cases respectively. The only factor predictive of treatment failure in multivariable models was stone position in the proximal or mid-ureter. Age, sex, vital signs, pain score, WBC, creatinine, history of prior stone or intervention, stone side, stone size, presence of stranding and degree of hydronephrosis were not associated with outpatient failure. 
Conclusion: Apart from proximal or mid-ureteric stone position, outpatient treatment failure could not be predicted from any of the factors studied.


2021 ◽  
Author(s):  
Yuliya Pinevich ◽  
Adam Amos-Binks ◽  
Christie S Burris ◽  
Gregory Rule ◽  
Marija Bogojevic ◽  
...  

ABSTRACT Objectives: The objectives of this study were to test in real time a Trauma Triage, Treatment, and Training Decision Support (4TDS) machine learning (ML) model of shock detection in a prospective silent trial, and to evaluate specificity, sensitivity, and other estimates of diagnostic performance compared to the gold standard of electronic medical record (EMR) review. Design: We performed a single-center diagnostic performance study. Patients and setting: A prospective cohort consisted of consecutive patients aged 18 years and older who were admitted from May 1 through September 30, 2020 to six Mayo Clinic intensive care units (ICUs) and five progressive care units. Measurements and main results: During the study period, 5,384 of 6,630 hospital admissions were eligible. During the same period, the 4TDS shock model sent 825 alerts, of which 632 were eligible. Among the 632 hospital admissions with alerts, 287 screened positive and 345 screened negative. Among the 4,752 hospital admissions without alerts, 78 screened positive and 4,674 screened negative. The area under the receiver operating characteristic curve for the 4TDS shock model was 0.86 (95% CI 0.85-0.87). The 4TDS shock model demonstrated a sensitivity of 78.6% (95% CI 74.1-82.7%) and a specificity of 93.1% (95% CI 92.4-93.8%). The model showed a positive predictive value of 45.4% (95% CI 42.6-48.3%) and a negative predictive value of 98.4% (95% CI 98-98.6%). Conclusions: We successfully validated an ML model to detect circulatory shock in a prospective observational study. The model used only vital signs and showed moderate performance compared to the gold standard of clinician EMR review when applied to an ICU patient cohort.
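The reported diagnostic metrics follow directly from the screen counts in the abstract, as this short calculation shows (alerts: 287 true positives, 345 false positives; no alerts: 78 false negatives, 4,674 true negatives).

```python
# Confusion-matrix counts taken from the abstract's screening results
tp, fp, fn, tn = 287, 345, 78, 4674

sensitivity = tp / (tp + fn)   # 287/365  -> 78.6%
specificity = tn / (tn + fp)   # 4674/5019 -> 93.1%
ppv = tp / (tp + fp)           # 287/632  -> 45.4%
npv = tn / (tn + fn)           # 4674/4752 -> 98.4%

print(f"sens={sensitivity:.1%} spec={specificity:.1%} "
      f"PPV={ppv:.1%} NPV={npv:.1%}")
```

The high NPV with moderate PPV is typical of alerting systems tuned so that a silent alert stream rarely misses true shock, at the cost of many false alarms.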


2020 ◽  
Author(s):  
Chunfang Kong ◽  
Kai Xu ◽  
Junzuo Wang ◽  
Chonglong Wu ◽  
Gang Liu

Abstract. Given the increasingly frequent occurrence of serious landslide disasters in eastern Guangxi, the current study adopted support vector machine (SVM), particle swarm optimization support vector machine (PSO-SVM), random forest (RF), and particle swarm optimization random forest (PSO-RF) methods to assess landslide susceptibility in Zhaoping County. To this end, 10 landslide-related causal variables, including digital elevation model (DEM)-derived, meteorology-derived, Landsat8-derived, geology-derived, and human-activity factors, were selected as inputs to the four machine-learning (ML) methods, and landslide susceptibility evaluation maps were produced. Then, receiver operating characteristic (ROC) curve analysis and field investigation were performed to verify the efficiency of these models. Analysis and comparison of the results indicated that all four ML models performed well for landslide susceptibility evaluation, with areas under the ROC curves ranging from 0.863 to 0.934. Moreover, the results indicated that the PSO algorithm has a beneficial effect on the SVM and RF models. In addition, the results revealed that the PSO-RF and PSO-SVM models are robust and stable, and these two models are promising methods that could be transferred to other regions for landslide susceptibility evaluation.
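PSO-tuned SVM, as used above, wraps a particle swarm around the classifier's hyperparameters. The sketch below is a deliberately minimal PSO (few particles, few iterations) searching log-scaled C and gamma on synthetic data; swarm sizes, coefficients, and bounds are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the 10 landslide causal factors
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           random_state=0)

def fitness(position):
    # position = (log10 C, log10 gamma); score by cross-validated AUC
    c, g = 10.0 ** position
    return cross_val_score(SVC(C=c, gamma=g), X, y,
                           cv=3, scoring="roc_auc").mean()

# Minimal particle swarm over the 2-D hyperparameter space
rng = np.random.default_rng(0)
n_particles, n_iters = 6, 5
pos = rng.uniform(-2, 2, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    # inertia + cognitive pull toward personal best + social pull to global best
    vel = 0.6 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (log10 C, log10 gamma):", gbest.round(2),
      "AUC:", pbest_val.max().round(3))
```

PSO-RF would follow the same pattern with the particle position encoding, e.g., tree count and depth.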


Blood ◽  
2019 ◽  
Vol 134 (Supplement_1) ◽  
pp. 999-999
Author(s):  
Romy Carmen Lawrence ◽  
Sarah L Khan ◽  
Vishal Gupta ◽  
Brittany Scarpato ◽  
Rachel Strykowski ◽  
...  

Introduction Patients with sickle cell disease (SCD) are at increased risk for venous thromboembolism (VTE). By age 40, 11-12% of SCD patients have experienced a VTE. VTE confers nearly a three-fold increase in mortality risk for individuals with SCD. We hypothesized that VTE increases subsequent SCD severity, which may increase acute care utilization. We investigated the association between VTE and rates of vaso-occlusive events (VOE) and acute care utilization for individuals with SCD. Methods We performed a retrospective longitudinal chart review of 239 adults with SCD who received care at our institution between 2003 and 2018. VTE was defined as deep venous thrombosis (DVT) diagnosed by Duplex ultrasound or pulmonary embolism (PE) diagnosed by either ventilation-perfusion scanning or computed tomography angiography. Medical histories, laboratories, and medication use for all subjects were obtained. For VTE patients, clinical data for 1 and 5 years post-VTE were obtained and compared to 1 year prior to the VTE. For non-VTE patients, data were obtained at baseline and compared to five years later. We evaluated all acute care visits for the presence of an SCD-related problem, specifically assessing whether a VOE or acute chest syndrome (ACS) occurred. We calculated rates of VOE, ACS, Emergency Department (ED) visits, and hospitalizations prior to and subsequent to a VTE and compared these to occurrence rates among those without VTE. Data were analyzed using Stata 14.2. Results In our cohort of 239 individuals with SCD, 153 (64%) were HbSS/HbSβ0 and 127 (53%) were female. Fifty-six individuals (23%) had a history of VTE; 20 had a DVT (36%), 33 had a PE (59%), and 3 had both (5%). Patients with VTE had a higher frequency of prior history of ACS (p<0.001), stroke (p=0.013), splenectomy (p=0.033), and avascular necrosis (p<0.001) than those without a VTE.
Prior to their VTE, these patients had higher white blood cell counts (11.8 × 10³ [9-15 × 10³] vs 9.7 × 10³ [7-12 × 10³], p=0.047) and platelet counts (378 × 10³ [272-485 × 10³] vs 322 × 10³ [244-400 × 10³], p=0.007) than those without a VTE. During five years of follow-up after a VTE, these patients had 6.32 (SD 14.97) ED visits per year compared to 2.84 (SD 5.93, p<0.03) ED visits per year in those without a VTE. Ninety-two percent of these ED visits were SCD-related; 73% were for VOE and 4% for ACS. Additionally, SCD patients with a VTE had an increase in all-cause hospital admissions (2.84 [SD 3.26] vs 1.43 [SD 2.86], p=0.003) and SCD-related hospital admissions (2.61 [SD 3.13] vs 1.23 [SD 2.74], p=0.001) per year compared to those without VTE. Conclusion VTE is a frequent complication in patients with SCD. Our study suggests that patients who experience a VTE have greater SCD severity, as evidenced by increased VOE, ED, and hospital utilization. These data suggest that VTE is not merely an isolated event in SCD patients and that it may either serve as an indicator of disease severity or contribute to overall disease pathophysiology. Disclosures Sloan: Abbvie: Other: Endpoint Review Committee; Stemline: Consultancy; Merck: Other: endpoint review committee.

