Modified early warning score-based clinical decision support: cost impact and clinical outcomes in sepsis

JAMIA Open ◽  
2020 ◽  
Vol 3 (2) ◽  
pp. 261-268
Author(s):  
Devin J Horton ◽  
Kencee K Graves ◽  
Polina V Kukhareva ◽  
Stacy A Johnson ◽  
Maribel Cedillo ◽  
...  

Abstract Objective The objective of this study was to assess the clinical and financial impact of a quality improvement project that utilized a modified Early Warning Score (mEWS)-based clinical decision support intervention targeting early recognition of sepsis decompensation. Materials and Methods We conducted a retrospective, interrupted time series study of all adult patients who received a diagnosis of sepsis and were exposed to an acute care floor with the intervention. Primary outcomes (total direct cost, length of stay [LOS], and mortality) were aggregated for each study month for the post-intervention period (March 1, 2016–February 28, 2017, n = 2118 visits) and compared to the pre-intervention period (November 1, 2014–October 31, 2015, n = 1546 visits). Results The intervention was associated with a decrease in median total direct cost and hospital LOS of 23% (P = .047) and 0.63 days (P = .059), respectively. There was no significant change in mortality. Discussion The implementation of an mEWS-based clinical decision support system in eight acute care floors at an academic medical center was associated with reduced total direct cost and LOS for patients hospitalized with sepsis. This was seen without an associated increase in intensive care unit utilization or broad-spectrum antibiotic use. Conclusion An automated sepsis decompensation detection system has the potential to improve clinical and financial outcomes such as LOS and total direct cost. Further evaluation is needed to validate generalizability and to understand the relative importance of individual elements of the intervention.
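The abstract does not publish the study's exact mEWS thresholds. As a minimal sketch, the commonly cited MEWS bands (systolic blood pressure, heart rate, respiratory rate, temperature, and AVPU level) can illustrate how vital signs aggregate into a single decompensation score; the thresholds and the alert cutoff below are assumptions, not the study's implementation:

```python
# Hypothetical mEWS calculator. Thresholds follow the widely used MEWS
# bands and are illustrative only; the study's exact scoring is not
# described in this abstract.

def mews(sbp, hr, rr, temp_c, avpu):
    """Return an aggregate early warning score from vitals and AVPU level."""
    score = 0
    # Systolic blood pressure (mmHg)
    if sbp <= 70: score += 3
    elif sbp <= 80: score += 2
    elif sbp <= 100: score += 1
    elif sbp >= 200: score += 2
    # Heart rate (beats/min)
    if hr < 40: score += 2
    elif hr <= 50: score += 1
    elif hr <= 100: pass
    elif hr <= 110: score += 1
    elif hr <= 129: score += 2
    else: score += 3
    # Respiratory rate (breaths/min)
    if rr < 9: score += 2
    elif rr <= 14: pass
    elif rr <= 20: score += 1
    elif rr <= 29: score += 2
    else: score += 3
    # Temperature (Celsius)
    if temp_c < 35.0 or temp_c >= 38.5: score += 2
    # Level of consciousness (AVPU scale)
    score += {"alert": 0, "voice": 1, "pain": 2, "unresponsive": 3}[avpu]
    return score

# A septic patient trending toward decompensation crosses a CDS alert
# threshold (a cutoff of >= 4 is a common, but here assumed, choice):
print(mews(sbp=88, hr=118, rr=24, temp_c=38.9, avpu="voice"))  # 8
```

A CDS system of this kind recomputes the score on each new vitals entry and pages the care team when the cutoff is crossed.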

2021 ◽  
Author(s):  
Sarah Collins Rossetti ◽  
Patricia C. Dykes ◽  
Chris Knaplund ◽  
Min-Jeoung Kang ◽  
Kumiko Schnock ◽  
...  

BACKGROUND The overarching goal of the COmmunicating Narrative Concerns Entered by RNs (CONCERN) study is to implement and evaluate an early warning score system that provides clinical decision support (CDS) in electronic health record systems. The CONCERN CDS uses nursing documentation patterns as indicators of nurses’ increased surveillance to predict when patients are at risk of clinical deterioration. OBJECTIVE The objective of this cluster randomized pragmatic clinical trial is to evaluate the effectiveness and usability of the CONCERN CDS system at two different study sites. The specific aim is to decrease hospitalized patients’ negative health outcomes (in-hospital mortality, length of stay, cardiac arrest, unanticipated ICU transfers, and 30-day hospital readmission rates). METHODS A multiple time-series intervention consisting of three phases will be performed over a one-year period during the cluster randomized pragmatic clinical trial, spanning the full process from system release through evaluation. The system release includes CONCERN CDS implementation and user training. A mixed methods approach will then be used with end users to assess the system and capture clinician perspectives. RESULTS Study results are expected in 2022. CONCLUSIONS The CONCERN CDS will increase team-based situational awareness and shared understanding of patients predicted to be at risk for clinical deterioration and in need of intervention to prevent mortality and associated harm. CLINICALTRIAL ClinicalTrials.gov Identifier: NCT03911687
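CONCERN's actual model is not described in this abstract. The underlying intuition, that bursts of nursing documentation signal heightened surveillance, can be sketched with a toy proxy: flag a patient when their recent documentation rate exceeds a multiple of their own baseline rate. The window sizes and ratio below are assumptions for illustration only:

```python
# Toy proxy for the CONCERN idea (not the study's model): compare a
# patient's recent nursing-documentation rate against their own baseline.
from datetime import datetime, timedelta

def surveillance_flag(doc_times, now, window_h=4, baseline_h=24, ratio=2.0):
    """True if the documentation rate in the last `window_h` hours is at
    least `ratio` times the rate over the last `baseline_h` hours."""
    recent = [t for t in doc_times if now - timedelta(hours=window_h) <= t <= now]
    baseline = [t for t in doc_times if now - timedelta(hours=baseline_h) <= t <= now]
    recent_rate = len(recent) / window_h        # entries per hour, recent
    baseline_rate = len(baseline) / baseline_h  # entries per hour, baseline
    return baseline_rate > 0 and recent_rate >= ratio * baseline_rate
```

A real implementation would draw on richer features (note types, optionality of the documentation, time of day), but the comparison-to-baseline structure is the same.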


2013 ◽  
Vol 31 (31_suppl) ◽  
pp. 233-233
Author(s):  
Jeremy B. Shelton ◽  
Lee Ochotorena ◽  
Carol J. Bennett ◽  
Paul Shekelle ◽  
Caroline Goldzweig

233 Background: The value of PSA-based screening for prostate cancer is a topic of intense debate; however, the Veterans Health Administration's (VHA) national clinical policy is to use age as a proxy for life expectancy and avoid screening in men ≥ age 75. To facilitate this, we developed and implemented a highly specific computerized clinical decision support (CCDS) reminder to alert providers to current guidelines at the moment of entering an inappropriate PSA order. Methods: We defined a screening PSA as any PSA ordered on men, excluding those a) with a diagnosis of existing malignant prostate disease or “elevated prostate specific antigen”, b) who are using either enhancers or suppressors of testosterone, or c) who had a PSA of 2.5 ng/mL or greater on either of the two most recent PSA tests. We measured PSA-based prostate cancer screening rates using this definition on a monthly basis from 07/2011 to 07/2013. Using an interrupted time-series design, we turned the reminder on from 6/2012–8/2012 and then again from 1/2013–4/2013. Results: There were a total of 24,705 men eligible for screening during the two-year period of analysis, and 1,524 men were screened. The mean screening rate during the 12 months prior to the study period was 7.8%, and during the 12 months of the intervention period it was 4.3%. During the 12-month baseline period the screening rate declined by 29.3%. During the two periods when the CCDS tool was turned on, the screening rate fell by 59.7% and 29.8%, whereas during the two periods when it was off, it rose by 84.3% and 18.4%. Conclusions: The overall reduction in screening rate before and after the intervention period is likely substantially confounded by the secular event of the May 2012 release of the USPSTF grade D recommendation against all PSA-based screening and its substantial media coverage.
Despite this, the striking correlation between the rate of change in the screening rate and the turning on and off of the CCDS tool suggests that this highly specific CCDS tool was able to reduce inappropriate PSA-based screening, even in an era of significant public discussion of the merits of PSA-based prostate cancer screening.
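The screening-PSA definition above translates naturally into an exclusion filter. The sketch below encodes that definition; the field names, diagnosis strings, and medication list are hypothetical stand-ins for the coded EHR data the VHA logic actually ran against:

```python
# Sketch of the abstract's screening-PSA definition. All identifiers are
# hypothetical; the real CCDS evaluated coded diagnoses, pharmacy data,
# and lab results in the EHR.

PROSTATE_DX = {"prostate_cancer", "elevated_psa"}          # exclusion (a)
TESTOSTERONE_RX = {"testosterone", "leuprolide"}           # exclusion (b), illustrative

def is_screening_psa(diagnoses, medications, recent_psa_values):
    """True if a new PSA order counts as screening per the study definition."""
    if PROSTATE_DX & set(diagnoses):
        return False          # (a) known prostate disease or elevated PSA dx
    if TESTOSTERONE_RX & set(medications):
        return False          # (b) testosterone enhancer or suppressor
    if any(v >= 2.5 for v in recent_psa_values[-2:]):
        return False          # (c) PSA >= 2.5 ng/mL on either of last 2 tests
    return True

def should_fire_reminder(age, diagnoses, medications, recent_psa_values):
    """The CCDS reminder fires only for screening orders in men aged >= 75."""
    return age >= 75 and is_screening_psa(diagnoses, medications, recent_psa_values)
```

Restricting the reminder to this narrow definition is what made it "highly specific": it interrupts only orders that are screening by construction, not diagnostic follow-up.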


2020 ◽  
Vol 41 (S1) ◽  
pp. s92-s93
Author(s):  
Omar Elsayed-Ali ◽  
Swaminathan Kandaswamy ◽  
Andi Shane ◽  
Stephanie Jernigan ◽  
Patricia Lantis ◽  
...  

Background: Pediatric influenza vaccination rates remain <50% in the United States. Children with chronic medical conditions are at higher risk of morbidity and mortality from influenza, yet most experience missed opportunities for immunization in outpatient settings. In an adult cohort study, 74% of patients who had not received the influenza vaccine before or during hospitalization remained unvaccinated through the rest of the season. Thus, inpatient settings represent another important opportunity for vaccinating an especially susceptible population. In addition, 4 published studies have shown promise in improving inpatient pediatric influenza vaccination. However, these studies had limited effect sizes and included interventions requiring ongoing maintenance with dedicated staff. In this study, we hypothesized that a clinical decision support (CDS) intervention designed with user-centered design principles would increase inpatient influenza vaccine administration rates in the 2019–2020 influenza season. Methods: We performed a workflow analysis of different care settings to determine the optimal timing of influenza vaccine decision support. Through formative usability testing with frontline clinicians, we developed electronic health record (EHR) prototypes of an order set module containing a default influenza vaccine order. This module was dynamically incorporated into order sets for patients meeting the following criteria: ≥6 months old, no prior influenza vaccine in the current season in our medical system or the state immunization registry, and no prior anaphylaxis to the vaccine. We implemented the CDS into select order sets based on operational leader support. We compared the proportion of eligible hospitalized patients in whom the influenza vaccine was administered between our intervention period and the 2018–2019 season (historical controls).
To account for secular trends, we also compared the vaccination rates for hospitalized patients exposed to our CDS to those of patients who were not exposed during the intervention period (concurrent controls). Results: During the intervention period (September 5, 2019–November 1, 2019), the influenza vaccine was administered to 762 of 3,242 (24%) eligible patients, compared to 360 of 2,875 (13%) among historical controls (P < .0001). Among the 42% of patients exposed to the CDS, the vaccination rate was 33%, compared to 9% for concurrent controls (P < .0001). Our intervention was limited by end-user uptake, with some physicians or nurses discontinuing the default vaccine order. In addition, early in the intervention, some vaccines were ordered but not administered, leading to vaccine waste. Conclusions: CDS targeting eligible hospitalized patients for influenza vaccination, incorporated early into the workflow of nurses and ordering clinicians, can substantially improve influenza vaccination rates among this susceptible and hard-to-reach population. Funding: None. Disclosures: None.
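The three eligibility criteria that gated the default order (age ≥6 months, no vaccine this season in either the health system or the state registry, no prior anaphylaxis) can be expressed as a simple predicate. The function and parameter names below are hypothetical; the real rules ran inside the EHR's order-set logic:

```python
# Sketch of the order-set eligibility rules described above. Names are
# hypothetical; month arithmetic is deliberately coarse for illustration.
from datetime import date

def flu_vaccine_eligible(birth_date, today, vaccinated_this_season,
                         registry_vaccinated, anaphylaxis_to_flu_vaccine):
    """True if the default influenza vaccine order should appear in order sets."""
    age_months = (today.year - birth_date.year) * 12 + (today.month - birth_date.month)
    if age_months < 6:
        return False  # younger than 6 months
    if vaccinated_this_season or registry_vaccinated:
        return False  # already immunized this season (EHR or state registry)
    if anaphylaxis_to_flu_vaccine:
        return False  # contraindication
    return True
```

Because the order is included by default for eligible patients, a clinician must actively discontinue it, which is why end-user uptake (nurses or physicians removing the order) was the main limiting factor.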


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262193
Author(s):  
Monica I. Lupei ◽  
Danni Li ◽  
Nicholas E. Ingraham ◽  
Karyn D. Baum ◽  
Bradley Benson ◽  
...  

Objective To prospectively evaluate a logistic regression-based machine learning (ML) prognostic algorithm implemented in real time as a clinical decision support (CDS) system for symptomatic persons under investigation (PUI) for Coronavirus disease 2019 (COVID-19) in the emergency department (ED). Methods Within a 12-hospital system, we developed a model using training and validation cohorts, followed by a real-time assessment. LASSO-guided feature selection included demographics, comorbidities, home medications, and vital signs. We constructed a logistic regression-based ML algorithm to predict “severe” COVID-19, defined as requiring intensive care unit (ICU) admission, invasive mechanical ventilation, or death in or out of the hospital. Training data included 1,469 adult patients who tested positive for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) within 14 days of acute care. We performed: 1) temporal validation in 414 SARS-CoV-2-positive patients, 2) validation in a PUI set of 13,271 patients with a symptomatic SARS-CoV-2 test during an acute care visit, and 3) real-time validation in 2,174 ED patients with a PUI test or positive SARS-CoV-2 result. Subgroup analyses were conducted across race and gender to ensure equity in performance. Results The algorithm performed well in pre-implementation validations for predicting COVID-19 severity: 1) the temporal validation had an area under the receiver operating characteristic curve (AUROC) of 0.87 (95%-CI: 0.83, 0.91); 2) validation in the PUI population had an AUROC of 0.82 (95%-CI: 0.81, 0.83). The ED CDS system performed well in real time, with an AUROC of 0.85 (95%-CI: 0.83, 0.87). Zero patients in the lowest quintile developed “severe” COVID-19. Patients in the highest quintile developed “severe” COVID-19 in 33.2% of cases. The models performed without significant differences between genders and among races/ethnicities (all p-values > 0.05).
Conclusion A logistic regression model-based ML-enabled CDS can be developed, validated, and implemented with high performance across multiple hospitals while being equitable and maintaining performance in real-time validation.
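The modeling approach, L1-penalized (LASSO-style) feature selection feeding a logistic regression, evaluated by AUROC, can be sketched end to end on synthetic data. The data, feature count, and hyperparameters below are assumptions for illustration; the study's actual features were demographics, comorbidities, home medications, and vital signs:

```python
# Minimal sketch: L1-penalized logistic regression via proximal gradient
# descent, evaluated with AUROC. Data and settings are synthetic.
import numpy as np

def train_l1_logreg(X, y, lam=0.01, lr=0.1, steps=2000):
    """Fit logistic regression with an L1 penalty (soft-thresholded updates),
    which drives uninformative coefficients toward exactly zero."""
    w, b, n = np.zeros(X.shape[1]), 0.0, len(y)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        b -= lr * np.mean(p - y)
        w -= lr * (X.T @ (p - y) / n)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # L1 shrinkage
    return w, b

def auroc(y_true, scores):
    """AUROC via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                        # six synthetic features
true_w = np.array([2.0, -1.5, 0.0, 0.0, 1.0, 0.0])   # sparse ground truth
y = (rng.random(500) < 1 / (1 + np.exp(-(X @ true_w)))).astype(float)
w, b = train_l1_logreg(X, y)
print(round(auroc(y, X @ w + b), 2))                 # training-set AUROC
```

The L1 penalty is what lets one step serve as both feature selection and model fitting: coefficients on noise features shrink to zero rather than merely small values.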


2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S61-S61
Author(s):  
Anna Sick-Samuels ◽  
Jules Bergmann ◽  
Matthew Linz ◽  
James Fackler ◽  
Sean Berenholtz ◽  
...  

Abstract Background Clinicians obtain endotracheal aspirate (ETA) cultures from mechanically ventilated patients in the pediatric intensive care unit (PICU) for the evaluation of ventilator-associated infection (i.e., tracheitis or pneumonia). Positive cultures prompt clinicians to treat with antibiotics even though ETA cultures cannot distinguish bacterial colonization from infection. We undertook a quality improvement initiative to standardize the use of endotracheal cultures in the evaluation of ventilator-associated infections among hospitalized children. Methods A multidisciplinary team developed a clinical decision support algorithm to guide when to obtain ETA cultures from patients admitted to the PICU and ventilated for >1 day. We disseminated the algorithm to all bedside providers in the PICU in April 2018 and compared the rate of cultures one year before and after the intervention using Poisson regression and a quasi-experimental interrupted time-series analysis (ITSA). Charge savings were estimated based on an average charge of $220 per ETA culture. Results In the pre-intervention period, there was an average of 46 ETA cultures per month, a total of 557 cultures over 5,092 ventilator-days; after introduction of the algorithm, there were 19 cultures per month, a total of 231 cultures over 3,554 ventilator-days (incidence rate 10.9 vs. 6.5 per 100 ventilator-days, Figure 1). There was a 43% decrease in the monthly rate of cultures (IRR 0.57, 95% CI 0.50–0.67, P < 0.001). The ITSA revealed a pre-existing 2% monthly decline in the culture rate (IRR 0.98, 95% CI 0.97–1.00, P = 0.01), an immediate 44% drop (IRR 0.56, 95% CI 0.45–0.69, P = 0.02), and a stable rate in the post-intervention period (IRR 1.03, 95% CI 0.99–1.07, P = 0.09). The intervention led to an estimated $6,000 in monthly charge savings.
Conclusion Introduction of a clinical decision support algorithm to standardize the collection of ETA cultures from ventilated children was associated with a significant decline in the rate of ETA cultures. Additional investigation will assess the impact on balancing measures and secondary outcomes, including mortality, duration of ventilation, duration of admission, readmissions, and antibiotic prescribing. Disclosures All Authors: No reported Disclosures.
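The crude rates in the Results can be reproduced directly from the reported counts. Note that the abstract's IRR of 0.57 comes from a Poisson regression that adjusts for the pre-existing monthly trend, so the unadjusted ratio below differs slightly:

```python
# Crude incidence rates from the reported counts. The published IRR (0.57)
# is regression-adjusted, so the raw ratio here is close but not identical.
pre_cultures, pre_vent_days = 557, 5092      # pre-intervention year
post_cultures, post_vent_days = 231, 3554    # post-intervention year

pre_rate = 100 * pre_cultures / pre_vent_days    # per 100 ventilator-days
post_rate = 100 * post_cultures / post_vent_days
crude_irr = post_rate / pre_rate

print(round(pre_rate, 1), round(post_rate, 1), round(crude_irr, 2))
```

The charge estimate follows the same arithmetic: roughly 27 fewer cultures per month (46 minus 19) at $220 each is about $6,000 in monthly charges.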


2020 ◽  
Vol 41 (S1) ◽  
pp. s126-s127
Author(s):  
Sonya Kothadia ◽  
Samantha Blank ◽  
Tania Campagnoli ◽  
Mhd Hashem Rajabbik ◽  
Tiffany Wiksten ◽  
...  

Background: In an effort to reduce inappropriate testing for hospital-onset Clostridioides difficile infection (HO-CDI), we sequentially implemented 2 strategies: an electronic health record-based clinical decision support tool that alerted ordering physicians about potentially inappropriate testing without a hard stop (intervention period 1), later replaced by mandatory infectious diseases (ID) attending physician approval for any HO-CDI test order (intervention period 2). We analyzed appropriate HO-CDI testing rates in both intervention periods. Methods: We performed a retrospective study of patients 18 years or older who had an HO-CDI test (performed after hospital day 3) during 3 different periods: baseline (no intervention, September 2014–February 2015), intervention 1 (clinical decision support tool only, April 2015–September 2015), and intervention 2 (ID approval only, December 2017–September 2018). From each of the 3 periods, we randomly selected 150 patients who received HO-CDI testing (450 patients total). We restricted the study to the general medicine, bone marrow transplant, medical intensive care, and neurosurgical intensive care units. We assessed each HO-CDI test for appropriateness (see Table 1 for criteria), and we compared rates of appropriateness using the χ2 test or Kruskal-Wallis test, where appropriate. Results: In our cohort of 450 patients, the median age was 61 years, and the median hospital length of stay was 20 days. The median hospital day on which HO-CDI testing was performed differed among the 3 groups: 12 days at baseline, 10 days during intervention 1, and 8.5 days during intervention 2 (P < .001). Appropriateness of HO-CDI testing increased from baseline with both interventions, but mandatory ID approval was associated with the highest rate of testing appropriateness (Fig. 1). Reasons for inappropriate ordering did not differ among the periods, with <3 documented stools being the most common criterion for inappropriateness.
During intervention 2, among the 33 inappropriate tests, 8 (24%) occurred where no approval from an ID attending was recorded. HO-CDI test positivity rates during the 3 time periods were 12%, 11%, and 21%, respectively (P = .03). Conclusions: We found that both the clinical decision support tool and the mandatory ID attending physician approval intervention improved the appropriateness of HO-CDI testing, with mandatory ID attending physician approval leading to the highest appropriateness rate. Even with mandatory ID attending physician approval, some tests continued to be ordered inappropriately per retrospective chart review; we suspect this is partly explained by underdocumentation of criteria such as stool frequency. In healthcare settings where the appropriateness of HO-CDI testing is not optimal, mandatory ID attending physician approval may provide an option beyond clinical decision support tools. Funding: None. Disclosures: None.
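The χ2 comparison of appropriateness across the three periods follows the standard Pearson statistic for an R×C contingency table. The counts below are mostly illustrative: 150 tests per period and the 33 inappropriate tests in intervention 2 come from the abstract, but the baseline and intervention 1 splits are assumed:

```python
# Pearson chi-square statistic for an R x C contingency table. The first
# two rows are hypothetical splits; 150 per period and 33 inappropriate
# tests in intervention 2 are from the abstract.

def chi2_stat(table):
    """Sum over cells of (observed - expected)^2 / expected."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# columns: appropriate, inappropriate; rows: baseline, intervention 1, intervention 2
table = [[75, 75], [95, 55], [117, 33]]
print(round(chi2_stat(table), 1))
```

The statistic is then compared against a χ2 distribution with (rows−1)×(columns−1) = 2 degrees of freedom to obtain the P value.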


2013 ◽  
Vol 46 (2) ◽  
pp. 52
Author(s):  
CHRISTOPHER NOTTE ◽  
NEIL SKOLNIK
