A Redesigned Order Entry System for Reducing Low-Value Preprocedural Cardiology Consultations: Quality-Improvement Cohort Study

10.2196/17669 ◽  
2020 ◽  
Vol 3 (1) ◽  
pp. e17669
Author(s):  
David E Winchester ◽  
Leigh Cagino

Background Preprocedural cardiac evaluation is a common reason for outpatient cardiology visits. Many patients who are referred to cardiology clinics for preprocedural evaluation are at low risk of perioperative events and do not require any further management. Our facility treats patients over a large geographic area; avoiding low-value consultations reduces time and travel burdens for patients. Objective Our study objective was to assess the impact of a novel algorithm in the electronic order entry system aimed at guiding clinicians toward patients who may benefit from cardiovascular referral. Methods We retrospectively reviewed in-person consultations and electronic consultations (e-consults) to our cardiology service before and after implementation of the novel algorithm to assess changes in patterns of care. Data were stored in a custom electronic database on internal servers. Results We reviewed 603 consultations to our cardiology clinic and found that 89 (14.7%) were sent for preprocedural evaluation. Of these, 39 (43.8% of preprocedural consultations) were e-consults. After implementation, we reviewed 360 consultations. The proportion of consultations for preprocedural evaluation did not decrease (n=47, 13.0%; P=.39). We observed an absolute increase of 13.6% in the proportion of consultations ordered as e-consults (27/47, 57.4%). During the postintervention period, we received no remarks, concerns, or criticisms from ordering clinicians about the process change and no reports of adverse events. Conclusions Implementation of an ordering algorithm to reduce low-value preprocedural cardiology evaluations did not lead to a reduction in the number of overall preprocedural cardiology consultations. The number of patients seen electronically increased, potentially improving clinic access and reducing travel burden for patients.
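The pre/post comparison of proportions reported above can be illustrated with a minimal sketch in Python using the counts given in the abstract; the authors do not state which statistical test they used, so the p-value here is only expected to fall in the same non-significant range as the reported P=.39.

```python
# Illustrative two-proportion comparison of preprocedural consultations,
# pre- vs post-implementation, using counts from the abstract.
# The authors' actual statistical method is not reported.
from statsmodels.stats.proportion import proportions_ztest

preprocedural = [89, 47]   # preprocedural consultations (pre, post)
totals = [603, 360]        # all consultations reviewed (pre, post)

z, p = proportions_ztest(preprocedural, totals)
print(f"z={z:.2f}, p={p:.3f}")  # expect a non-significant p, broadly consistent with P=.39
```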

2020 ◽  
pp. 1-6
Author(s):  
Paul Park ◽  
Victor Chang ◽  
Hsueh-Han Yeh ◽  
Jason M. Schwalb ◽  
David R. Nerenz ◽  
...  

OBJECTIVE In 2017, Michigan passed new legislation designed to reduce opioid abuse. This study evaluated the impact of these new restrictive laws on preoperative narcotic use, short-term outcomes, and readmission rates after spinal surgery. METHODS Patient data from 1 year before and 1 year after initiation of the new opioid laws (beginning July 1, 2018) were queried from the Michigan Spine Surgery Improvement Collaborative database. Before and after implementation of the major elements of the new laws, 12,325 and 11,988 patients, respectively, were treated. RESULTS Patients before and after passage of the opioid laws had generally similar demographic and surgical characteristics. Notably, after passage of the opioid laws, the number of patients taking daily narcotics preoperatively decreased from 3783 (48.7%) to 2698 (39.7%; p < 0.0001). Three months postoperatively, there were no differences in minimum clinically important difference (56.0% vs 58.0%, p = 0.1068), numeric rating scale (NRS) score of back pain (3.5 vs 3.4, p = 0.1156), NRS score of leg pain (2.7 vs 2.7, p = 0.3595), satisfaction (84.4% vs 84.7%, p = 0.6852), or 90-day readmission rate (5.8% vs 6.2%, p = 0.3202) between groups. Although there was no difference in readmission rates, pain as a reason for readmission was marginally more common (0.86% vs 1.22%, p = 0.0323). CONCLUSIONS There was a meaningful decrease in preoperative narcotic use, but notably there was no apparent negative impact on postoperative recovery, patient satisfaction, or short-term outcomes after spinal surgery despite more restrictive opioid prescribing. Although the readmission rate did not significantly increase, pain as a reason for readmission was marginally more frequently observed.


2021 ◽  
Vol 108 (Supplement_7) ◽  
Author(s):  
Hannah Elkadi ◽  
Eleanor Dodd ◽  
Theodore Poulton ◽  
William Bolton ◽  
Joshua Burke ◽  
...  

Abstract Aims Although incision and drainage of simple subcutaneous abscesses is among the most common surgical procedures, wide variation exists in their management, with no national guideline describing best practice. During the COVID-19 pandemic, national guidelines promoted the use of regional or local anaesthetic (LA) instead of general anaesthesia (GA) to avoid the aerosol-generating intubation associated with GA. This study aimed to assess the impact of anaesthetic choice on outcomes following incision and drainage of subcutaneous abscesses. Methods Two cohorts of patients undergoing abscess incision and drainage at St. James’ University Hospital Leeds were retrospectively identified over a 14-week period before and after the introduction of the new COVID-19 anaesthetic guidelines. Surrogate endpoints for wound healing were used: i) total number of follow-up appointments and ii) attendance to healthcare services more than 30 days after I&D. Results 133 patients were included. Significantly more procedures were performed under LA after the intervention (84.1% vs 5.7%; p < 0.0001), with a significant reduction in wound packing (68.3% vs 87.1%; p=0.00473). Follow-up data showed no significant difference in the average number of follow-up appointments (7.46 vs 5.11; p = 0.0731) or in the number of patients who required ongoing treatment after 30 days (n = 14 vs n = 14, p = 0.921). Conclusion Drainage of simple subcutaneous abscesses under 5 cm is safe under local anaesthetic, with no significant difference in surrogate endpoints of wound healing observed in this patient cohort. Recurrent packing may not be required. Future work should explore patient-reported measures such as pain management and the health economics of this intervention.


2017 ◽  
Vol 4 (suppl_1) ◽  
pp. S346-S346 ◽  
Author(s):  
Kirre Wold ◽  
Jeff Brock ◽  
Kelly Percival ◽  
Lindsey Rearigh ◽  
Lucas Vocelka ◽  
...  

Abstract Background Asymptomatic bacteriuria (ASB) is a common clinical condition identified by the presence of bacteria in the urine of a patient without signs and symptoms of a urinary tract infection (UTI). Treatment of ASB leads to unnecessary antimicrobial use and can cause more harm than benefit in many patients. The aim of this study was to determine the impact of more stringent criteria for urinalysis with culture if indicated (UAC), implemented in September 2016, on the treatment of asymptomatic bacteriuria. Methods A pre-post descriptive study was conducted of patients with an order placed for UAC in the Emergency Department (ED) or hospital. Data were collected retrospectively via chart reviews. Data on ASB patients from November 2015 to April 2016 were compared with the post-implementation period of October 2016 to January 2017. The number of UAC orders and cultures were averaged for 6 months pre- and post-implementation of the criteria change. Results A total of 580 patient charts were assessed post-implementation of the UAC criteria change. A majority of the orders originated from the ED (N = 430, 72.8%). ASB was treated inappropriately at a rate of 60.4% (N = 64/106) pre-implementation and 65% (N = 41/63) post-implementation, P = 0.542. The total number of UAC ordered before and after implementation did not change (N = 2852 pre-intervention vs N = 2825 post-intervention, P = 0.744), as seen in Figure 1. However, the number of reflexed urine cultures did significantly decrease after the criteria change (N = 1056 pre-intervention vs. N = 603 post-intervention, P < 0.0001). In addition, the number of positive urine cultures also significantly decreased (N = 378 pre-intervention vs. N = 289 post-intervention, P = 0.0447). The impact of the criteria change on patient care lies in the number of potential antibiotic courses saved by reflexing fewer urine cultures off the UAC. Based on the decrease in positive urine cultures, an estimated 702 courses of inappropriate antibiotics for ASB could be saved per year (59/month). Conclusion More stringent criteria for reflex urine cultures significantly decreased the number of urine cultures performed, thereby decreasing the number of patients treated for ASB. Additional stewardship measures are necessary to reduce the treatment of ASB for patients who have cultures performed. Disclosures All authors: No reported disclosures.
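The reduction in reflexed cultures can be illustrated with a minimal sketch, assuming a simple chi-square test of reflexed cultures out of total UAC orders using the counts reported above; the abstract does not specify the authors' test.

```python
# Illustrative chi-square comparison: proportion of UAC orders that reflexed to
# culture, before vs after the criteria change. Counts are from the abstract;
# the authors' exact statistical method is not reported.
from scipy.stats import chi2_contingency

# rows: pre-implementation, post-implementation
# cols: orders that reflexed to culture, orders that did not
table = [
    [1056, 2852 - 1056],
    [603, 2825 - 603],
]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2e}")  # a very small p, consistent with the reported P < 0.0001
```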


CJEM ◽  
2016 ◽  
Vol 18 (4) ◽  
pp. 264-269 ◽  
Author(s):  
Andrew Gray ◽  
Christopher M.B. Fernandes ◽  
Kristine Van Aarsen ◽  
Melanie Columbus

Abstract Objectives Computerized provider order entry (CPOE) has been established as a method to improve patient safety by avoiding medication errors; however, its effect on emergency department (ED) flow remains undefined. We examined the impact of CPOE implementation on three measures of ED throughput: wait time (WT), length of stay (LOS), and the proportion of patients that left without being seen (LWBS). Methods We conducted a retrospective cohort study of all ED patients aged 18 years and older presenting to London Health Sciences Centre during July and August 2013 and 2014, before and after implementation of a CPOE system. The three primary variables were compared between time periods. Subgroup analyses were also conducted within each Canadian Triage and Acuity Scale (CTAS) level (1–5) individually, as well as for admitted patients only. Results After CPOE implementation, there was a significant increase in WT of 5 minutes (p=0.036), an increase in LOS of 10 minutes (p=0.001), and an increase in LWBS from 7.2% to 8.1% (p=0.002). Admitted patients’ LOS increased by 63 minutes (p<0.001), the WT of CTAS 3 and 5 patients increased by 6 minutes (p=0.001) and 39 minutes (p=0.005) respectively, and the LWBS proportion increased significantly for CTAS 3–5 patients, rising from 24.3% to 42.0% (p<0.001) for CTAS 5 patients specifically. Conclusions CPOE implementation detrimentally impacted all of the patient flow throughput measures that we examined. The most striking clinically relevant result was the increase in LOS of 63 minutes for admitted patients. This raises the question of whether the potential detrimental effects of CPOE implementation on patient safety outweigh its benefits.
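A sketch of how a shift in LOS might be tested nonparametrically is shown below; the minute-level data are simulated (the abstract reports only aggregate changes), and the authors' actual test is not stated.

```python
# Illustrative nonparametric comparison of ED length of stay (LOS) pre vs post
# CPOE. The minute-level data below are SIMULATED for demonstration only.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
los_pre = rng.gamma(shape=2.0, scale=120, size=5000)   # simulated pre-CPOE LOS, minutes
los_post = rng.gamma(shape=2.0, scale=125, size=5000)  # simulated post-CPOE LOS, minutes

stat, p = mannwhitneyu(los_pre, los_post, alternative="two-sided")
print(f"U={stat:.0f}, p={p:.4f}, "
      f"median shift={np.median(los_post) - np.median(los_pre):.1f} min")
```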


2020 ◽  
Vol 13 (1) ◽  
pp. 11-16
Author(s):  
Erkin N. Bilalov ◽  
Azamat F. Yusupov ◽  
Ahmadjon E. Nozimov ◽  
Okilkhon I. Oripov

The rationale for this research lies in the significance of dry eye syndrome (DES) in the development of pterygium recurrences, as well as in the need to investigate tear dysfunction and methods for its optimal correction in this patient population. Purpose of the study. To assess the impact of tear dysfunction indices on the development of recurrent pterygium. Materials and methods. We observed 60 patients (67 eyes) with recurrent pterygium. Patients were divided into four observation groups depending on the number of recurrences. To study the dynamics of DES manifestations during the postoperative period, pathogenetic therapy including a tear fluid substitute was used. All patients underwent a comprehensive assessment of subjective and objective DES indices before and after surgery. Results. Positive dynamics of the subjective manifestations and objective indices of DES under the action of the tear substitute after surgery were reliably confirmed. A decrease in the number of patients with type III and IV crystallization after surgery was also confirmed. Conclusion. The data obtained indicate an increase in the mucin content of the tear fluid, which leads to stabilization of the tear film and a decrease in DES intensity.


2020 ◽  
Author(s):  
Antonio Leon Justel ◽  
Jose Ignacio Morgado Garcia-Polavieja ◽  
Ana Isabel Alvarez Rios ◽  
Francisco Jose Caro Fernandez ◽  
Pedro Agustin Pajaro Merino ◽  
...  

Abstract BACKGROUND Heart failure (HF) is a major and growing medical and economic problem, with high prevalence and incidence rates worldwide. Cardiac biomarkers are emerging as a novel tool for improving the management of patients with HF. METHODS This is a real-world, before-and-after intervention trial that assesses the impact of a personalized follow-up procedure for HF on patient outcomes and care-associated costs, based on a clinical model of risk stratification and personalized management according to that risk. A total of 192 patients were enrolled and studied before and after the intervention. The primary outcome was the rate of readmissions due to an HF event post-intervention compared to pre-intervention. Secondary outcomes compared the rate of ED visits and the number of patients with a reduced NYHA score pre- and post-intervention. A cost analysis was also performed on these data. RESULTS Admission rates significantly decreased by 41% after the intervention (total length of stay was reduced by 55%). The rate of ED visits was reduced by 55%. Thirty-one percent of patients had an improved functional class score after the intervention, whereas only 7.8% worsened. The overall cost saving associated with the intervention was €139,717.65 for the whole group over 1 year. CONCLUSIONS A personalized follow-up of HF patients led to important outcome benefits and resulted in cost savings, mainly due to the reduction in hospital readmissions and a significant reduction in care-associated costs, suggesting that greater attention should be given to this high-risk cohort to minimize the risk of readmission.


2021 ◽  
Author(s):  
J Panovska-Griffiths ◽  
J Ross ◽  
S Elkhodair ◽  
C Baxter-Derrington ◽  
C Laing ◽  
...  

Abstract Background We compared the impact of three pre-COVID-19 interventions, of the UK COVID-19 epidemic, and of the first UK national lockdown on overcrowding within University College London Hospital Emergency Department (UCLH ED). The three interventions targeted the influx of patients to the ED (A), reduced the pressure on inpatient beds (B), and improved ED processes to improve the flow of patients out of the ED (C). Methods We analysed changes in overcrowding metrics (daily attendances, the proportion of people leaving within four hours of arrival (the four-hour target), and overall waiting time) across three analyses. The first analysis used data from 01/04/2017 to 31/12/2019 to calculate changes over a period of six months before and after the start of interventions A-C. The second and third analyses evaluated the impact of the COVID-19 epidemic, comparing the first 10 months of 2020 and 2019, and of the first national lockdown (23/03/2020-31/05/2020). Results Pre-COVID-19, all interventions led to small reductions in waiting time (17%, p<0.001 for A and C; 9%, p=0.322 for B) but also to a small decrease in the proportion of patients leaving within four hours of arrival (6.6%, 7.4%, and 6.2% for A-C respectively, p<0.001). In the presence of the COVID-19 pandemic, attendance and waiting time were reduced (by 40% and 8%; p<0.001), and the proportion of people leaving within four hours of arrival increased (by 6%, p<0.001). During the first lockdown, there was a 65% reduction in attendance, a 22% reduction in waiting time, and an 8% increase in the proportion of people leaving within four hours of arrival (p<0.001). Crucially, when the lockdown was lifted, there was an increase (6.5%, p<0.001) in the percentage of people leaving within four hours, together with a larger (12.5%, p<0.001) decrease in waiting time. This occurred despite a 49.6% increase (p<0.001) in attendance after lockdown ended. Conclusions The mixed results pre-COVID-19 (significant improvements in waiting time with some interventions but no improvement in the four-hour target) may be due to a ‘spill-over effect’ whereby clogging up one part of the ED system affects other parts. Hence, multifaceted interventions and a system-wide approach to improving the pathway of flow through the ED system are necessary. During 2020, in the presence of the COVID-19 epidemic, a shift in public behaviour, with anxiety over attending hospitals and greater use of virtual consultations, led to a notable drop in UCLH ED attendance and a consequent curbing of overcrowding. Importantly, once the lockdown was lifted, although there was an increase in arrivals at UCLH ED, overcrowding metrics were reduced. Thus, the combination of shifted public behaviour and the restructuring changes made during the COVID-19 epidemic may be able to curb future ED overcrowding, but analysis over a longer timeframe is required to confirm this.
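The six-month before/after window structure used in the first analysis can be sketched as follows; the dates, column names, and simulated values are hypothetical and serve only to illustrate how such a comparison might be set up.

```python
# Illustrative before/after window comparison of a daily overcrowding metric.
# All data and the intervention date below are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dates = pd.date_range("2018-01-01", "2019-12-31", freq="D")
ed = pd.DataFrame({"date": dates,
                   "mean_wait_min": rng.normal(180, 20, len(dates))})

start = pd.Timestamp("2019-01-01")  # hypothetical intervention start date
pre = ed[(ed["date"] >= start - pd.DateOffset(months=6)) & (ed["date"] < start)]
post = ed[(ed["date"] >= start) & (ed["date"] < start + pd.DateOffset(months=6))]

change = (post["mean_wait_min"].mean() - pre["mean_wait_min"].mean()) / pre["mean_wait_min"].mean()
print(f"relative change in mean daily waiting time: {change:+.1%}")
```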


Author(s):  
Anna C. Sick-Samuels ◽  
Sara Cosgrove ◽  
Clare Rock ◽  
Alejandra Salinas ◽  
Opeyemi Oladapo-Shittu ◽  
...  

Abstract Background: Nonadherence of healthcare workers (HCWs) to physical distancing recommendations is a risk factor for acquisition of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The study objective was to assess the impact of interventions to improve HCW physical distancing on the actual distance between HCWs in a real-life setting. Methods: HCWs voluntarily wore proximity beacons to measure the number and intensity of physical distancing interactions with one another in a pediatric intensive care unit. We compared interactions before and after implementing a bundle of interventions including changes to the layout of workstations, cognitive aids, and individual feedback from wearable proximity beacons. Results: Overall, we recorded 10,788 interactions within 6 feet (∼2 m) and lasting >5 seconds. The number of HCWs wearing beacons fluctuated daily and increased over the study period. On average, 13 beacons were worn daily (32% of possible staff; range, 2–32 per day). We recorded 3,218 interactions before the interventions and 7,570 interactions after the interventions began. Using regression analysis accounting for the maximum number of potential interactions if all staff had worn beacons on a given day, there was a 1% decline in the number of interactions per possible interaction in the postintervention period (incidence rate ratio, 0.99; 95% confidence interval, 0.98–1.00; P = .02), with fewer interactions occurring at nursing stations, in workrooms, and during morning rounds. Conclusions: Using quantitative data from wearable proximity beacons, we found an overall small decline in interactions within 6 feet between HCWs in a busy intensive care unit after a multifaceted bundle of interventions was implemented to improve physical distancing.
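The rate model described above (daily interaction counts modelled against an intervention-period indicator, with the maximum possible interactions as an offset, yielding an incidence rate ratio) can be sketched as follows; the data are simulated and the variable names are illustrative, not the authors' code or dataset.

```python
# Illustrative Poisson regression of daily interaction counts with an offset for
# the maximum possible interactions each day; exp(coefficient) gives the
# incidence rate ratio (IRR) for the post-intervention period. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
days = 60
df = pd.DataFrame({
    "post": (np.arange(days) >= 30).astype(int),     # 0 = pre, 1 = post period
    "beacons_worn": rng.integers(2, 33, size=days),  # daily staff wearing beacons
})
# maximum possible pairwise interactions among beacon wearers on each day
df["possible"] = df["beacons_worn"] * (df["beacons_worn"] - 1) / 2
df["interactions"] = rng.poisson(0.8 * df["possible"] * np.where(df["post"] == 1, 0.99, 1.0))

X = sm.add_constant(df[["post"]])
fit = sm.GLM(df["interactions"], X,
             family=sm.families.Poisson(),
             offset=np.log(df["possible"])).fit()
print(f"IRR={np.exp(fit.params['post']):.3f}")  # IRR < 1 means fewer interactions per possible interaction
```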


Blood ◽  
2008 ◽  
Vol 112 (11) ◽  
pp. 4704-4704
Author(s):  
David A Hanauer ◽  
Sung W Choi ◽  
Robert W Beasley ◽  
Ronald B Hirschl ◽  
Douglas W Blayney

Abstract No data are available concerning the impact of CPOE on inpatient leukemia and lymphoma care. CPOE may improve patient safety, reduce time between order entry and medication administration, and reduce medication and transcription errors. However, concerns have arisen about the potential increased time required to enter electronic orders compared to handwritten orders. Our hypothesis was that CPOE would require more order-related time from caregivers and reduce the amount of time for direct patient care. We studied the work patterns of three Physician Assistants (PAs) who worked under the supervision of faculty physicians and were the exclusive inpatient care providers. The PA-staffed hematology service was chosen to minimize the impact of rotating house staff on our results. Faculty, who were not studied, entered the few chemotherapy orders necessary, while PAs entered orders for hydration, antibiotics, supportive care and other medications, and for consultations and diagnostic tests. The UMHS Institutional Review Board reviewed the study protocol and waived the requirement for patient informed consent. We performed a direct observation time and motion study pre- and post-implementation of a commercial CPOE system (Sunrise Clinical Manager™ 4.5, Eclipsys, Boca Raton, Florida) on one inpatient hematology service at the UMHS University Hospital. The same three PAs were shadowed pre- and post-implementation. We also closely matched morning and afternoon observation times in order to reduce variability in activities taking place at different times of the day. Prior to CPOE implementation the PAs had a 4-hour general training session and a 1-hour chemotherapy training session. Pre-built order sets were routinely used by the PAs. A portable tablet computer was used by an independent observer to record data, using a data entry interface containing 63 individual activity categories modified from the Time and Motion database under “IT Tools” at http://www.ahrq.gov. Data were grouped into subcategories for analysis. We grouped 12 activities as ordering-related (e.g., writing orders, writing forms, clarifying orders). We observed the same three PAs for 85.4 hours (over 2 weeks) pre-CPOE and for 75.8 hours (over 4 weeks) starting 3 months post-CPOE. Mean patient census was 11.3 per day during the pre-implementation and 9.2 per day during the post-implementation observation periods. Overall time for order-related activities was unchanged, requiring 7.7% of total time pre- and 8.1% of total time post-CPOE, even though actual order writing took longer with CPOE compared to written orders (4.9% pre vs. 7.0% post). CPOE had almost no impact on direct patient care time (Figure), with PAs spending 38.2% of total time on direct patient care pre-CPOE compared to 38.4% post. A minimal difference was also found in the overall total for indirect patient care activities (37.1% pre vs. 38.7% post). Our results suggest that using CPOE on a busy hematology inpatient service has minimal impact on time spent by trained PAs using standard order sets 3 months after implementation. The decision to adopt CPOE for a busy hematology service should not be based on the hypothesis that there will be a change in workflow or task organization. More study is needed to determine if CPOE for hematology patients results in a change in the quality of patient care or safety.
Figure. Percentage of total time spent in 6 analysis categories both before and after implementation of a commercial CPOE system for an inpatient hematology service. These 6 categories represent 63 individual activity categories that were recorded in the time and motion study. Error bars represent 95% confidence intervals.
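The time-and-motion summary described above (individual timed activities mapped into broader analysis categories and expressed as a percentage of total observed time in each period) can be sketched as follows; the records, column names, and category mapping are hypothetical, not the study's data dictionary.

```python
# Illustrative summary of time-and-motion observations: each row is one timed
# activity, mapped to an analysis category and expressed as a percentage of
# total observed time per period. All values below are hypothetical.
import pandas as pd

obs = pd.DataFrame({
    "period":   ["pre", "pre", "post", "post"],
    "activity": ["writing orders", "direct patient care", "writing orders", "direct patient care"],
    "minutes":  [25, 190, 35, 185],
})

category_map = {
    "writing orders": "order-related",
    "direct patient care": "direct patient care",
}
obs["category"] = obs["activity"].map(category_map)

totals = obs.groupby("period")["minutes"].sum()
by_category = obs.groupby(["period", "category"])["minutes"].sum()
pct_of_total_time = by_category.div(totals, level="period") * 100
print(pct_of_total_time.round(1))
```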


2015 ◽  
Vol 2015 ◽  
pp. 1-6
Author(s):  
Danielle Creme ◽  
Kieran McCafferty

Objective. To identify the number of haemodialysis patients with diabetes in a large NHS Trust, their current glycaemic control, and the impact on other renal-specific outcomes. Design. Retrospective, observational, cross-sectional study. Methods. Data were collected from an electronic patient management system. Glycaemic control was assessed from HbA1c results that were then further adjusted for albumin (Alb) and haemoglobin (Hb). Interdialytic weight gains were analysed from weights recorded before and after dialysis, 2 weeks before and after the most recent HbA1c date. Amputations were identified from electronic records. Results. 39% of patients had poor glycaemic control (HbA1c > 8%). Adjusted HbA1c resulted in a greater number of patients with poor control (55%). Significant correlations were found with interdialytic weight gains (P<0.02, r=0.14), predialysis sodium (P<0.0001, r=-1.9), and predialysis bicarbonate (P<0.02, r=0.12). Trends were observed with albumin and C-reactive protein. Patients with diabetes had more amputations (24 versus 2). Conclusion. A large number of diabetic patients on haemodialysis have poor glycaemic control. This may lead to higher interdialytic weight gains, larger sodium and bicarbonate shifts, an increased number of amputations, and possibly increased inflammation and decreased nutritional status. Comprehensive guidelines and more accurate long-term tests for glycaemic control are needed.
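The correlation analysis described above can be illustrated with a minimal sketch; the HbA1c and interdialytic weight gain values below are hypothetical, since the abstract reports only the correlation coefficients and P values.

```python
# Illustrative Pearson correlation between HbA1c and interdialytic weight gain.
# The values below are hypothetical placeholders, not study data.
from scipy.stats import pearsonr

hba1c = [6.5, 7.2, 8.1, 9.0, 10.4, 7.8, 8.6, 11.2]             # % (hypothetical)
interdialytic_gain = [1.8, 2.1, 2.6, 2.4, 3.2, 2.0, 2.9, 3.5]  # kg (hypothetical)

r, p = pearsonr(hba1c, interdialytic_gain)
print(f"r={r:.2f}, p={p:.3f}")
```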

