A framework for building a clinically relevant risk model.

2019 ◽  
Vol 37 (15_suppl) ◽  
pp. 6554-6554
Author(s):  
Robert Michael Daly ◽  
Dmitriy Gorenshteyn ◽  
Lior Gazit ◽  
Stefania Sokolowski ◽  
Kevin Nicholas ◽  
...  

6554 Background: Acute care accounts for half of cancer expenditures and is a measure of poor-quality care. Identifying patients at high risk for emergency department (ED) visits enables institutions to target resources to those most likely to benefit. Risk stratification models developed to date have not been meaningfully employed in oncology, and there is a need for clinically relevant models to improve patient care. Methods: We established and applied a predictive framework for clinical use with attention to modeling technique, clinician feedback, and application metrics. The model employs electronic health record data from initial visit to first antineoplastic administration for patients at our institution from January 2014 to June 2017. The binary dependent variable is occurrence of an ED visit within the first 6 months of treatment. The final regularized multivariable logistic regression model was chosen based on clinical and statistical significance. To accommodate the needs of the program, parameter selection and model calibration were optimized for the positive predictive value of the top 25% of observations as ranked by model-determined risk. Results: There are 5,752 antineoplastic administration starts in our training set and 1,457 in our test set. The positive predictive value of this model for the top 25% riskiest new start antineoplastic patients is 0.53. From over 1,400 data features, the model was refined to include 400 clinically relevant ones spanning demographics, pathology, clinician notes, labs, medications, and psychosocial information. At the patient level, specific features determining risk are surfaced in a web application, RiskExplorer, to enable clinician review of individual patient risk. This physician-facing application provides the individual risk score for the patient as well as their quartile of risk when compared to the population of new start antineoplastic patients. For the top quartile of patients, the risk for an ED visit within the first 6 months of treatment is greater than or equal to 49%. Conclusions: We have constructed a framework to build a clinically relevant risk model. We are now piloting it to identify those likely to benefit from a home-based, digital symptom management intervention.
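
The application metric described here, positive predictive value among the top 25% of risk-ranked patients, can be sketched as follows. This is a minimal illustration with scikit-learn and synthetic data, not the authors' code; the class weights, regularization strength, and sample sizes are assumptions.

```python
# Minimal sketch: PPV (precision) among the top 25% of observations ranked by
# model-predicted risk, for a regularized logistic regression. Synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def top_quartile_ppv(y_true, risk_scores, fraction=0.25):
    """PPV among the top `fraction` of observations ranked by risk score."""
    n_top = max(1, int(np.ceil(fraction * len(risk_scores))))
    top_idx = np.argsort(risk_scores)[::-1][:n_top]
    return y_true[top_idx].mean()

# Synthetic stand-in for EHR-derived features and a binary ED-visit label.
X, y = make_classification(n_samples=7000, n_features=400, n_informative=40,
                           weights=[0.75, 0.25], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Regularized (L2) logistic regression, as in the abstract's final model.
model = LogisticRegression(penalty="l2", C=1.0, max_iter=2000)
model.fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]
print(f"PPV in top 25% riskiest patients: {top_quartile_ppv(y_test, risk):.2f}")
```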

2018 ◽  
Vol 36 (30_suppl) ◽  
pp. 314-314 ◽  
Author(s):  
Robert Michael Daly ◽  
Dmitriy Gorenshteyn ◽  
Lior Gazit ◽  
Stefania Sokolowski ◽  
Kevin Nicholas ◽  
...  

314 Background: Acute care accounts for half of cancer expenditures and is a measure of poor-quality care. Identifying patients at high risk for ED visits enables institutions to target symptom management resources to those most likely to benefit. Risk stratification models developed to date have not been meaningfully employed in oncology, and there is a need for clinically relevant models to improve patient care. Methods: We established a predictive analytics framework for clinical use with attention to the modeling technique, clinician feedback, and application metrics. The model employs EHR data from initial visit to first antineoplastic administration for new patients at our institution from January 2014 to June 2017. The binary dependent variable is occurrence of an ED visit within the first 6 months of treatment. From over 1,400 data features, the model was refined to include 400 clinically relevant ones spanning demographics, pathology, clinician notes, labs, medications, and psychosocial information. Clinician review was performed to confirm EHR data input validity. The final regularized multivariable logistic regression model was chosen based on clinical and statistical significance. Parameter selection and model evaluation utilized the positive predictive value for the top 25% of observations ranked by model-determined risk. The final model was evaluated using a test set containing 20% of randomly held-out data. The model was calibrated based on a 5-fold cross-validation scheme over the training set. Results: There are 5,752 antineoplastic starts in our training set and 1,457 in our test set. The positive predictive value of this model for the top 25% riskiest new start antineoplastic patients is 0.53. The 400 clinically relevant features draw from multiple areas in the EHR. For example, features found to increase risk include combination chemotherapy, low albumin, social work needs, and opioid use, whereas those found to decrease risk include stage 1 disease, never-smoker status, and oral antineoplastic therapy. Conclusions: We have constructed a framework to build a clinically relevant model. We are now piloting it to identify those likely to benefit from a home-based, digital symptom management intervention.
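
The evaluation scheme described here, a 20% held-out test set plus 5-fold cross-validation over the training set, can be sketched as below. This is a hedged illustration, not the authors' pipeline: the candidate regularization strengths, synthetic data, and the choice to select the penalty by top-quartile PPV are assumptions made for the example.

```python
# Hedged sketch: hold out 20% for testing, use 5-fold CV on the training set
# to pick the regularization strength that maximizes PPV in the top 25% of
# risk-ranked observations, then evaluate once on the test set. Synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, train_test_split

def top_quartile_ppv(y_true, risk_scores, fraction=0.25):
    n_top = max(1, int(np.ceil(fraction * len(risk_scores))))
    top_idx = np.argsort(risk_scores)[::-1][:n_top]
    return y_true[top_idx].mean()

X, y = make_classification(n_samples=7000, n_features=400, n_informative=40,
                           weights=[0.75, 0.25], random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
best_C, best_score = None, -np.inf
for C in [0.01, 0.1, 1.0, 10.0]:          # hypothetical candidate penalties
    fold_scores = []
    for train_idx, val_idx in cv.split(X_train, y_train):
        clf = LogisticRegression(penalty="l2", C=C, max_iter=2000)
        clf.fit(X_train[train_idx], y_train[train_idx])
        risk = clf.predict_proba(X_train[val_idx])[:, 1]
        fold_scores.append(top_quartile_ppv(y_train[val_idx], risk))
    if np.mean(fold_scores) > best_score:
        best_C, best_score = C, np.mean(fold_scores)

final = LogisticRegression(penalty="l2", C=best_C, max_iter=2000)
final.fit(X_train, y_train)
test_ppv = top_quartile_ppv(y_test, final.predict_proba(X_test)[:, 1])
print(f"chosen C={best_C}, CV PPV={best_score:.2f}, test PPV={test_ppv:.2f}")
```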


2018 ◽  
Vol 36 (34_suppl) ◽  
pp. 144-144
Author(s):  
Robert Michael Daly ◽  
Dmitriy Gorenshteyn ◽  
Lior Gazit ◽  
Stefania Sokolowski ◽  
Kevin Nicholas ◽  
...  

144 Background: Acute care accounts for half of cancer expenditures and is a measure of poor-quality care. Identifying patients at high risk for ED visits enables institutions to target symptom management resources to those most likely to benefit. Risk stratification models developed to date have not been meaningfully employed in oncology, and there is a need for clinically relevant models to improve patient care. Methods: We established a predictive analytics framework for clinical use with attention to the modeling technique, clinician feedback, and application metrics. The model employs EHR data from initial visit to first antineoplastic administration for new patients at our institution from January 2014 to June 2017. The binary dependent variable is occurrence of an ED visit within the first 6 months of treatment. From over 1,400 data features, the model was refined to include 400 clinically relevant ones spanning demographics, pathology, clinician notes, labs, medications, and psychosocial information. Clinician review was performed to confirm EHR data input validity. The final regularized multivariable logistic regression model was chosen based on clinical and statistical significance. Parameter selection and model evaluation utilized the positive predictive value for the top 25% of observations ranked by model-determined risk. The final model was evaluated using a test set containing 20% of randomly held-out data. The model was calibrated based on a 5-fold cross-validation scheme over the training set. Results: There are 5,752 antineoplastic starts in our training set and 1,457 in our test set. The positive predictive value of this model for the top 25% riskiest new start antineoplastic patients is 0.53. The 400 clinically relevant features draw from multiple areas in the EHR. For example, features found to increase risk include combination chemotherapy, low albumin, social work needs, and opioid use, whereas those found to decrease risk include stage 1 disease, never-smoker status, and oral antineoplastic therapy. Conclusions: We have constructed a framework to build a clinically relevant model. We are now piloting it to identify those likely to benefit from a home-based, digital symptom management intervention.
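
Feature effects of the kind listed above (e.g., combination chemotherapy increasing risk, stage 1 disease decreasing it) can be read off a regularized logistic regression through the sign and magnitude of its coefficients. The sketch below is purely illustrative: the feature names, simulated outcome, and coefficients are invented, not the study's data.

```python
# Hypothetical sketch: inspect which features raise or lower predicted risk
# by the sign of the fitted logistic regression coefficients. Invented data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["combination_chemo", "albumin_low", "social_work_needs",
            "opioid_use", "stage_1_disease", "never_smoker", "oral_antineoplastic"]
X = rng.integers(0, 2, size=(5000, len(features))).astype(float)

# Simulate an ED-visit outcome whose log-odds rise with the first four
# features and fall with the last three (illustrative coefficients only).
true_beta = np.array([0.8, 0.6, 0.5, 0.4, -0.7, -0.4, -0.5])
logits = -1.0 + X @ true_beta
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = LogisticRegression(penalty="l2", C=1.0, max_iter=2000).fit(X, y)
effects = pd.Series(model.coef_[0], index=features).sort_values(ascending=False)
print(effects)   # positive coefficients raise predicted risk, negative ones lower it
```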


2020 ◽  
Vol 41 (S1) ◽  
pp. s39-s39
Author(s):  
Pontus Naucler ◽  
Suzanne D. van der Werff ◽  
John Valik ◽  
Logan Ward ◽  
Anders Ternhag ◽  
...  

Background: Healthcare-associated infection (HAI) surveillance is essential for most infection prevention programs, and continuous epidemiological data can be used to inform healthcare personnel, allocate resources, and evaluate interventions to prevent HAIs. Many HAI surveillance systems today are based on time-consuming and resource-intensive manual reviews of patient records. The objective of HAI-proactive, a Swedish triple-helix innovation project, is to develop and implement a fully automated HAI surveillance system based on electronic health record data. Furthermore, the project aims to develop machine-learning–based screening algorithms for early prediction of HAI at the individual patient level. Methods: The project is performed with support from Sweden’s Innovation Agency in collaboration among academic, health, and industry partners. Development of rule-based and machine-learning algorithms is performed within a research database, which consists of all electronic health record data from patients admitted to the Karolinska University Hospital. Natural language processing is used for processing free-text medical notes. To validate algorithm performance, manual annotation was performed based on international HAI definitions from the European Centre for Disease Prevention and Control, the Centers for Disease Control and Prevention, and the Sepsis-3 criteria. Currently, the project is building a platform for real-time data access to implement the algorithms within Region Stockholm. Results: The project has developed a rule-based surveillance algorithm for sepsis that continuously monitors patients admitted to the hospital, with a sensitivity of 0.89 (95% CI, 0.85–0.93), a specificity of 0.99 (0.98–0.99), a positive predictive value of 0.88 (0.83–0.93), and a negative predictive value of 0.99 (0.98–0.99). The healthcare-associated urinary tract infection surveillance algorithm, which is based on free-text analysis and negations to define symptoms, had a sensitivity of 0.73 (0.66–0.80) and a positive predictive value of 0.68 (0.61–0.75). The sensitivity and positive predictive value of an algorithm based on significant bacterial growth in urine culture only were 0.99 (0.97–1.00) and 0.39 (0.34–0.44), respectively. The surveillance system detected differences in incidence between hospital wards and over time. Development of surveillance algorithms for pneumonia, catheter-related infections, and Clostridioides difficile infections, as well as machine-learning–based models for early prediction, is ongoing. We intend to present results from all algorithms. Conclusions: With access to electronic health record data, we have shown that it is feasible to develop a fully automated HAI surveillance system based on algorithms using both structured data and free text for the main healthcare-associated infections. Funding: Sweden’s Innovation Agency and Stockholm County Council. Disclosures: None
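
The validation metrics quoted above (sensitivity, specificity, PPV, NPV, each with a 95% CI) can be computed from algorithm flags versus manually annotated reference labels roughly as follows. This is an illustrative sketch with invented counts, not the project's code; the Wilson interval is one common choice and is an assumption here.

```python
# Illustrative sketch: confusion-matrix metrics with 95% Wilson confidence
# intervals, comparing algorithm-flagged HAI against manual annotation.
import numpy as np
from statsmodels.stats.proportion import proportion_confint

algorithm = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1] * 100)   # algorithm flag (invented)
reference = np.array([1, 1, 0, 0, 0, 0, 1, 0, 1, 1] * 100)   # manual annotation (invented)

tp = int(np.sum((algorithm == 1) & (reference == 1)))
fp = int(np.sum((algorithm == 1) & (reference == 0)))
fn = int(np.sum((algorithm == 0) & (reference == 1)))
tn = int(np.sum((algorithm == 0) & (reference == 0)))

def metric_with_ci(successes, total):
    lo, hi = proportion_confint(successes, total, alpha=0.05, method="wilson")
    return f"{successes / total:.2f} (95% CI {lo:.2f}-{hi:.2f})"

print("sensitivity:", metric_with_ci(tp, tp + fn))
print("specificity:", metric_with_ci(tn, tn + fp))
print("PPV:        ", metric_with_ci(tp, tp + fp))
print("NPV:        ", metric_with_ci(tn, tn + fn))
```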


2011 ◽  
Vol 26 (S2) ◽  
pp. 1649-1649
Author(s):  
J. Stefansson ◽  
P. Nordström ◽  
J. Jokinen

Objective: To assess the predictive value of the Suicide Intent Scale in patients with a high suicide risk. The secondary aim was to assess whether use of the factors of the Suicide Intent Scale would offer better predictive value in case detection. Finally, a short version of the scale was created after an item analysis. Method: Eighty-one suicide attempters were assessed with Beck's Suicide Intent Scale (SIS). All patients were followed up for cause of death. Receiver operating characteristic (ROC) curves and tables were created to establish the optimal cut-off values for the SIS and the SIS factors to predict suicide. Results: Seven patients committed suicide during a mean follow-up of 9.5 years. The major finding was that mean SIS distinguished between suicides and survivors. The positive predictive value was 16.7% and the AUC was 0.74. Only the planning subscale reached statistical significance. Four items were used to test a short version of the SIS in suicide prediction. The positive predictive value was 19% and the AUC was 0.82. Conclusions: The Suicide Intent Scale is a valuable tool in clinical suicide risk assessment; a short version of the scale may offer better predictive value.
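
The ROC-based cut-off analysis described above can be sketched as below: compute the AUC for the total score, pick the cut-off maximizing Youden's J, and report the PPV at that cut-off. The scores and outcomes are simulated and the Youden criterion is an assumption for illustration; the study's actual cut-off selection may differ.

```python
# Hedged sketch: ROC curve, AUC, and an optimal cut-off for a total SIS score
# predicting eventual suicide. Simulated scores, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
n = 81
suicide = np.zeros(n, dtype=int)
suicide[:7] = 1                                   # 7 suicides, mirroring the cohort size
sis_total = np.where(suicide == 1,
                     rng.normal(20, 4, n),        # assume higher intent among suicides
                     rng.normal(14, 5, n))

auc = roc_auc_score(suicide, sis_total)
fpr, tpr, thresholds = roc_curve(suicide, sis_total)
best = np.argmax(tpr - fpr)                       # Youden's J = sensitivity + specificity - 1
cutoff = thresholds[best]
predicted_high_risk = sis_total >= cutoff
ppv = suicide[predicted_high_risk].mean()
print(f"AUC={auc:.2f}, optimal cut-off={cutoff:.1f}, PPV at cut-off={ppv:.2f}")
```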


2020 ◽  
Vol 7 (Supplement_1) ◽  
pp. S251-S251
Author(s):  
Joanna S Cavalier ◽  
Benjamin Goldstein ◽  
Cara L O’Brien ◽  
Armando Bedoya

Abstract Background: The novel coronavirus disease (COVID-19) results in severe illness in a significant proportion of patients, necessitating a way to discern which patients will become critically ill and which will not. In one large case series, 5.0% of patients required an intensive care unit (ICU) and 1.4% died. Several models have been developed to assess decompensating patients. However, research examining their applicability to COVID-19 patients is limited. An accurate predictive model for patients at risk of decompensation is critical for health systems to optimally triage emergencies, care for patients, and allocate resources. Methods: An early warning score (EWS) algorithm created within a large academic medical center, with methodology previously described, was applied to COVID-19 patients admitted to this institution. 122 COVID-19 patients were included. A decompensation event was defined as inpatient mortality or an unanticipated transfer to an ICU from an intermediate medical ward. The EWS was calculated at 12-hour and 24-hour intervals. Results: Of 122 patients admitted with COVID-19, 28 had a decompensation event, yielding an event rate of 23.0%. Eight patients died, 13 were transferred to the ICU, and 6 both transferred to the ICU and died. Decompensation within 12 and 24 hours was predicted with areas under the curve (AUC) of 0.850 and 0.817, respectively. Using a three-tiered risk model, the customized EWS for patients identified as high risk of decompensation had a positive predictive value of 44.4% and 11.1% and a specificity of 99.3% and 99.6% at the 12- and 24-hour intervals, respectively. Among medium-risk patients, the score had a specificity of 85.0% and 85.4%, respectively. Conclusion: This EWS allows for prediction of decompensation, defined as transfer to an ICU or death, in COVID-19 patients with excellent specificity and a high positive predictive value. Clinically, implementation of this score can help to identify patients before they decompensate in order to triage at time of presentation and allocate step-down beds, ICU beds, and treatments such as remdesivir. Disclosures: All Authors: No reported disclosures
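
A three-tiered risk banding of the kind described can be sketched as follows: continuous EWS values are bucketed into low/medium/high tiers and the high-risk flag is evaluated for PPV and specificity against decompensation events. The thresholds, score distribution, and event model below are invented for illustration; they are not the institution's EWS definition.

```python
# Hypothetical sketch: bucket a continuous early warning score into three risk
# tiers and evaluate the high-risk flag. All thresholds and data are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 122
ews = rng.gamma(shape=2.0, scale=10.0, size=n)            # simulated EWS values
decompensated = rng.binomial(1, np.clip(ews / 100, 0, 1)) # event risk rises with EWS

tiers = pd.cut(ews, bins=[-np.inf, 20, 50, np.inf],       # assumed tier boundaries
               labels=["low", "medium", "high"])
df = pd.DataFrame({"ews": ews, "tier": tiers, "event": decompensated})
print(df.groupby("tier", observed=True)["event"].agg(["count", "mean"]))  # event rate per tier

high = df["tier"] == "high"
ppv = df.loc[high, "event"].mean()
specificity = ((~high) & (df["event"] == 0)).sum() / (df["event"] == 0).sum()
print(f"high-tier PPV={ppv:.2f}, specificity of high-risk flag={specificity:.2f}")
```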


EP Europace ◽  
2021 ◽  
Vol 23 (Supplement_3) ◽  
Author(s):  
A Badiul ◽  
C Iorgulescu ◽  
S Bogdan ◽  
A Radu ◽  
S Paja ◽  
...  

Abstract Funding Acknowledgements: Type of funding sources: None. Introduction: Catheter ablation of accessory pathways (APs) located in the posterior pyramidal space is often challenging due to the anatomical complexity of this region. Scarce data are available on the ECG features that might indicate when an epicardial approach is required for ablation of posteroseptal APs. Objective: The purpose of this retrospective study was to describe the electrocardiographic features of posteroseptal APs that were successfully ablated with an epicardial approach and to identify electrocardiographic predictors of epicardial AP location. Methods: The 12-lead ECGs of 75 patients with posteroseptal accessory pathways who were successfully ablated were retrospectively analyzed. ECG features for epicardial location already described in published studies were considered (negative delta wave in DII, positive delta wave in aVR, high-amplitude S wave in V6). Additionally, the characteristics of the initial 40 ms of the delta wave in lead V1 (measured from the earliest QRS deflection across the 12 leads) during full pre-excitation were investigated. Results: Of the 75 patients with posteroseptal APs who underwent catheter ablation, 40 (53.3%) had successful epicardial ablation. An initial isoelectric or biphasic delta wave in lead V1 showed the highest sensitivity (82.5%), positive predictive value (97%), and specificity (97%) for an epicardial location of the AP. A deep S wave in V6 showed lower sensitivity (37.5%) and positive predictive value (68%) but higher specificity (80%) for an epicardial location of the AP. The specificity and sensitivity of a negative delta wave in DII for an epicardial location of the AP were lower and did not reach statistical significance. Conclusion: This study shows that an initially isoelectric or biphasic delta wave in lead V1 has higher specificity, sensitivity, and positive predictive value than previously described ECG markers for an epicardial location of posteroseptal accessory pathways.


2021 ◽  
Vol 9 ◽  
Author(s):  
Elham Hatef ◽  
Gurmehar Singh Deol ◽  
Masoud Rouhizadeh ◽  
Ashley Li ◽  
Katyusha Eibensteiner ◽  
...  

Introduction: Despite the growing efforts to standardize coding for social determinants of health (SDOH), they are infrequently captured in electronic health records (EHRs). Most SDOH variables are still captured in the unstructured fields (i.e., free text) of EHRs. In this study we evaluate a practical text mining approach (i.e., advanced pattern-matching techniques) for identifying phrases referring to housing issues, an important SDOH domain affecting value-based healthcare providers, using the EHR of a large multispecialty medical group in the New England region, United States. To show how this approach could help health systems address the SDOH challenges of their patients, we assess the demographic and clinical characteristics of patients with and without housing issues and briefly examine patterns of healthcare utilization in the study population overall and for those with and without housing challenges. Methods: We identified five categories of housing issues [i.e., homelessness current (HC), homelessness history (HH), homelessness addressed (HA), housing instability (HI), and building quality (BQ)] and developed several phrases addressing each one through collaboration with SDOH experts, consulting the literature, and reviewing existing coding standards. We developed pattern-matching algorithms (i.e., advanced regular expressions) and then applied them to the selected EHR. We assessed the text mining approach for recall (sensitivity) and precision (positive predictive value) by comparing the identified phrases with manually annotated free text for the different housing issues. Results: The study dataset included EHR structured data for a total of 20,342 patients and 2,564,344 free-text clinical notes. The mean (SD) age in the study population was 75.96 (7.51) years, and 58.78% of the cohort were female. BQ and HI were the most frequent housing issues documented in EHR free-text notes, and HH was the least frequent one. The regular expression methodology, when compared to manual annotation, had a high level of precision (positive predictive value) at the phrase, note, and patient levels (96.36, 95.00, and 94.44%, respectively) across the different categories of housing issues, but the recall (sensitivity) rate was relatively low (30.11, 32.20, and 41.46%, respectively). Conclusion: Results of this study can be used to advance research in this domain, to assess the potential value of EHR free text in identifying patients at high risk of housing issues, to improve patient care and outcomes, and to eventually mitigate socioeconomic disparities across individuals and communities.
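
A simplified sketch of this kind of pattern-matching pipeline is shown below: regular expressions flag notes mentioning housing issues, and the output is scored for precision (PPV) and recall (sensitivity) against manual annotation. The patterns, example notes, and annotations are invented for illustration and are not the study's actual expressions.

```python
# Hypothetical sketch: regex flags for housing-issue phrases in clinical free
# text, scored against manual note-level annotations. Invented patterns/notes.
import re

# One illustrative pattern per housing-issue category (HC, HI, BQ shown).
PATTERNS = {
    "homelessness_current": re.compile(r"\b(currently|is)\s+homeless\b|\blives?\s+in\s+(a\s+)?shelter\b", re.I),
    "housing_instability":  re.compile(r"\b(behind|late)\s+on\s+rent\b|\bfacing\s+eviction\b|\bcouch\s*surfing\b", re.I),
    "building_quality":     re.compile(r"\b(mold|no\s+heat|pest|roach|lead\s+paint)\b.*\b(apartment|home|housing)\b", re.I),
}

notes = [
    "Patient is currently homeless and sleeping in his car.",
    "Reports mold and no heat in her apartment this winter.",
    "Lives with daughter, no housing concerns reported.",
    "Facing eviction next month, behind on rent.",
]
annotated = [True, True, False, True]   # manual annotation: note mentions a housing issue

flagged = [any(p.search(note) for p in PATTERNS.values()) for note in notes]
tp = sum(f and a for f, a in zip(flagged, annotated))
precision = tp / sum(flagged) if sum(flagged) else 0.0
recall = tp / sum(annotated) if sum(annotated) else 0.0
print(f"note-level precision={precision:.2f}, recall={recall:.2f}")
```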


2021 ◽  
Vol 8 (Supplement_1) ◽  
pp. S264-S264
Author(s):  
Yesha Malik ◽  
Amy Dupper ◽  
Jaclyn Cusumano ◽  
Dhruv Patel ◽  
Kathryn Twyman ◽  
...  

Abstract Background: Candidemia is a rare but serious complication of SARS-CoV-2 hospitalization. Combining non-culture and culture-based diagnostics allows earlier identification of candidemia. Given the higher reported incidence during COVID-19 surges, we investigated the use of the (1-3)-β-D-glucan (BDG) assay at our institution in those who did and did not develop candidemia. Methods: Retrospective study of adults admitted to The Mount Sinai Hospital between March 15 and June 30, 2020 for SARS-CoV-2 infection, with either ≥1 BDG assay or a positive fungal blood culture. Data were collected from the electronic medical record and Vigilanz. A BDG value ≥80 was used as the positivity cutoff. Differences in mortality were assessed by univariate logistic regression using R (version 4.0.0). Statistical significance was defined as a P value < .05. Results: There were 75 patients with ≥1 BDG assay resulted and 28 patients with candidemia, with an overlap of 9 between the cohorts. Among the 75 who had a BDG assay, 23 resulted positive and 52 negative. Nine of the 75 patients developed candidemia. Of the 23 with a positive assay, 5 developed candidemia and 18 did not. Seventeen of the 18 had blood cultures drawn within ±7 days of the BDG assay. Four patients with candidemia had persistently negative BDG; 2 had cultures collected within ±7 days of the BDG assay. With a cut-off of ≥80, the negative predictive value (NPV) was 0.92. When the cut-off increased to >200, the NPV was 0.97 and the positive predictive value (PPV) was 0.42. The average number of antifungal days in patients with negative BDG was 2.6 vs. 4.2 in those with a positive result. Mortality was 74% in those with ≥1 positive BDG vs. 50% in those with persistently negative BDGs. There was a trend toward higher odds of death in those with positive BDG (OR = 2.83, 95% CI: 1.00-8.90, p < 0.06). Conclusion: There was substantial use of BDG to diagnose candidemia at the peak of the COVID-19 pandemic. Blood cultures were often drawn at the time of suspected candidemia but not routinely. When cultures and BDG were drawn together, BDG had a high NPV but low PPV. The high NPV of BDG likely contributed to discontinuation of empiric antifungals. The candidemic COVID-19 patients had high mortality, so further investigation of algorithms for the timely diagnosis of candidemia is needed to optimize the use of antifungals while improving mortality rates. Disclosures: All Authors: No reported disclosures
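
Comparing predictive values across candidate assay cut-offs, as done here for 80 and 200, can be sketched as follows. This is a hedged illustration with invented BDG values and culture results (the abstract's analysis was performed in R); it is not the study's code or data.

```python
# Hedged sketch: PPV and NPV of "BDG above threshold" against culture-proven
# candidemia, evaluated at two candidate cut-offs. Paired values are invented.
import numpy as np

bdg = np.array([31, 45, 520, 80, 210, 60, 95, 400, 150, 35, 700, 55,
                120, 65, 250, 40, 90, 300, 70, 500])
candidemia = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0,
                       0, 0, 0, 0, 0, 1, 0, 0], dtype=bool)

for cutoff in (80, 200):
    positive = bdg >= cutoff
    ppv = candidemia[positive].mean() if positive.any() else float("nan")
    npv = (~candidemia[~positive]).mean() if (~positive).any() else float("nan")
    print(f"cut-off {cutoff}: PPV={ppv:.2f}, NPV={npv:.2f}")
```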


2020 ◽  
Vol 4 (Supplement_1) ◽  
Author(s):  
David Tyler Broome ◽  
Robert Naples ◽  
Richard Bailey ◽  
James F Bena ◽  
Joseph Scharpf ◽  
...  

Abstract: Primary hyperparathyroidism is characterized by excessive, dysregulated production of parathyroid hormone (PTH) by 1 or more abnormal parathyroid glands. Preoperative localization is important for surgical planning in primary hyperparathyroidism. It has previously been published that ultrasound (sensitivity of 76.1%, positive predictive value of 93.2%) and nuclear scintigraphy (sestamibi-SPECT) (sensitivity of 78.9%, positive predictive value of 90.7%) are first-line imaging modalities.1 Currently, the imaging modality of choice varies by region and institutional protocol. The aim of this study was to evaluate the imaging modality associated with an improved remission rate based on concordance with operative findings. A secondary aim was to determine the effect of additive imaging on remission rates. This was an IRB-approved retrospective review of 2657 patients with primary hyperparathyroidism undergoing surgery at a tertiary referral center from 2004–2017. Analyses were performed with SAS software using a 95% confidence interval (p<0.05) for statistical significance. After excluding re-operative and familial cases, 2079 patients met study criteria. There were 422 (20.3%) male and 1657 (79.7%) female patients with a mean age of 66 (±12.2) years, of whom 1723 (82.9%) were white and 294 (14.1%) were black. Ultrasound (US) was performed in 1891 (91.9%), sestamibi with SPECT (sestamibi/SPECT) in 1945 (93.6%), and CT in 98 (4.7%) patients. Of these, 1721 (82.8%) had combined US and sestamibi/SPECT. US was surgeon-performed in 94.2% of cases, and 89.9% of the patients underwent a four-gland exploration. Overall, US concordance was 52.4%, sestamibi/SPECT was 45.5%, and CT was 45.9%. US and sestamibi/SPECT both had an improved remission rate if concordant with operative findings, while CT had no effect (US p=0.04; sestamibi/SPECT p=0.01; CT p=0.50). The overall remission rate was 94% (CI=0.93–0.95); however, increasing the number of imaging modalities performed did not increase the remission rate (p=0.76) or concordance with operative findings (p=0.05). Despite having low concordance rates, US and sestamibi/SPECT that agreed with operative findings were associated with higher remission rates. Therefore, when imaging is to be used for localization, our data support the use of US and sestamibi/SPECT as the initial imaging modalities of choice for preoperative localization. 1Kuzminski SJ, Sosa JA, Hoang JK. Update in Parathyroid Imaging. Magn Reson Imaging Clin N Am. 2018;26(1):151–166.
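
Associations of the kind reported here (imaging concordance with operative findings versus remission) are commonly tested with a 2x2 contingency table. The sketch below uses a chi-square test with invented counts; it is only an illustration of the type of comparison that yields such p-values, not the study's SAS analysis.

```python
# Illustrative sketch: chi-square test of association between imaging
# concordance with operative findings and remission. Hypothetical counts.
from scipy.stats import chi2_contingency

#                      remission   no remission
table = [[960,  40],   # imaging concordant with operative findings
         [890,  70]]   # imaging discordant

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square={chi2:.2f}, p={p:.4f}")
```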


2012 ◽  
Vol 30 (15_suppl) ◽  
pp. 5094-5094
Author(s):  
Francesco Plotti ◽  
Marzio Angelo Zullo ◽  
Michela Angelucci ◽  
Irma Oronzi ◽  
Patrizio Damiani ◽  
...  

5094 Background: In endometrial cancer, there are no markers routinely used in clinical practice. This study prospectively investigates the sensitivity and specificity of the new marker HE4 in the detection of endometrial cancer. Methods: Serum samples were prospectively obtained 24 hours before surgery from 25 patients with endometrial cancer and from 25 patients with benign uterine pathology, operated on from January 2011 to October 2011 at University Campus Bio-Medico of Rome. Preoperative CA125 levels were evaluated by a one-step “sandwich” radioimmunoassay. HE4 levels were determined using the HE4 enzymatic immunoassay. The CA125 normal value is considered less than 35 U/mL. Two HE4 cut-offs are considered: less than 70 pmol/L and less than 150 pmol/L. The specificity analysis was performed using the parametric t-test for the HE4 series and the Mann-Whitney test for the CA125 series. The level of statistical significance was set at p < 0.05. Results: The sensitivity of CA125 in detecting endometrial cancer is 16%, whereas the sensitivity of HE4 is 48% and 28% for the 70 pmol/L and 150 pmol/L cut-offs, respectively. The specificity of HE4 is 100% (positive predictive value = 100%; negative predictive value = 65.79% and 58.14% for the two HE4 cut-offs, respectively), whereas the CA125 specificity is 72% (positive predictive value = 36.36%, negative predictive value = 46.15%) in the detection of endometrial cancer. Conclusions: HE4 has good sensitivity and a specificity of 100% in the detection of endometrial cancer and may be useful for detecting early-stage endometrial cancer. In particular, HE4 at the 70 pmol/L cut-off yields the best sensitivity and specificity.
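
The two analyses described here, comparing marker levels between cancer and benign groups (t-test for HE4, Mann-Whitney for CA125) and dichotomizing each marker at a cut-off to get sensitivity and specificity, can be sketched as below. The simulated marker distributions are assumptions made only to demonstrate the calculations.

```python
# A minimal sketch with simulated marker values: group comparison tests plus
# sensitivity/specificity at the stated cut-offs. Not study data.
import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind

rng = np.random.default_rng(3)
he4_cancer, he4_benign = rng.normal(90, 30, 25), rng.normal(45, 10, 25)                 # pmol/L (assumed)
ca125_cancer, ca125_benign = rng.lognormal(3.2, 0.6, 25), rng.lognormal(3.0, 0.5, 25)   # U/mL (assumed)

print("HE4 t-test p =", ttest_ind(he4_cancer, he4_benign).pvalue)
print("CA125 Mann-Whitney p =", mannwhitneyu(ca125_cancer, ca125_benign).pvalue)

def sens_spec(cancer_vals, benign_vals, cutoff):
    sensitivity = np.mean(cancer_vals >= cutoff)   # test-positive among cancers
    specificity = np.mean(benign_vals < cutoff)    # test-negative among benign
    return sensitivity, specificity

for name, cancer, benign, cutoff in [("HE4 (70 pmol/L)", he4_cancer, he4_benign, 70),
                                     ("HE4 (150 pmol/L)", he4_cancer, he4_benign, 150),
                                     ("CA125 (35 U/mL)", ca125_cancer, ca125_benign, 35)]:
    se, sp = sens_spec(cancer, benign, cutoff)
    print(f"{name}: sensitivity={se:.2f}, specificity={sp:.2f}")
```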

