Use of a Decision Support Tool to Predict and Plan the Safe Inter-Hospital Transfer of a University Affiliated Trauma Hospital Intensive Care Unit

Author(s):  
P.G. O'Callaghan ◽  
J. Kernick ◽  
M. Gowland ◽  
T. Jones
2016 ◽  
Vol 25 (6) ◽  
pp. 479-486 ◽  
Author(s):  
Stacy Hevener ◽  
Barbara Rickabaugh ◽  
Toby Marsh

Background: Little information is available on the use of tools in intensive care units to help nurses determine when to restrain a patient. Patients in medical-surgical intensive care units are often restrained for their safety, to prevent them from removing therapeutic devices. Research indicates that restraints do not necessarily prevent injuries or removal of devices by patients. Objectives: To decrease use of restraints in a medical-surgical intensive care unit and to determine whether a decision support tool helps bedside nurses decide whether or not to restrain a patient. Methods: A quasi-experimental design was used for this pilot study. Data were collected for each patient each shift, indicating whether therapeutic devices were removed and whether restraints were used. An online educational activity, supplemented by 1-on-1 discussion of proper use of restraints, alternatives, and use of a restraint decision tool, was provided. Frequency of restraint use was determined. Descriptive statistics and thematic analysis were used to examine nurses’ perceptions of the decision support tool. Results: Use of restraints was reduced by 32%. No unplanned extubations or life-threatening disruptions of therapeutic devices by unrestrained patients occurred. Conclusions: With implementation of the decision support tool, nurses decreased their use of restraints yet maintained patients’ safety. A decision support tool may help nurses who are undecided or who need reassurance about their decision to restrain or not restrain a patient.


2021 ◽  
Vol 6 ◽  
pp. 183
Author(s):  
Emily Simon Thomas ◽  
Bryony Peiris ◽  
Leon Di Stefano ◽  
Matthew J. Rowland ◽  
Dominic Wilkinson

Background: At the start of the coronavirus disease 2019 (COVID-19) pandemic there was widespread concern about potentially overwhelming demand for intensive care and the need for intensive care unit (ICU) triage. In March 2020, a draft United Kingdom (UK) guideline proposed a decision-support tool (DST). We sought to evaluate the accuracy of the tool in patients with COVID-19. Methods: We retrospectively identified patients in two groups, referred and not referred to intensive care, in a single UK National Health Service (NHS) trust in April 2020. Age, Clinical Frailty Scale (CFS) score, and co-morbidities were collected from patients’ records, along with ceilings of treatment and outcomes. We compared the DST, CFS, and age alone as predictors of mortality and of treatment ceiling decisions. Results: In total, 151 patients were included in the analysis: 75 in the ICU-referred group and 76 in the non-referred group. Age, clinical frailty, and DST score were each associated with increased mortality and a higher likelihood of treatment limitation (all p<0.001). A DST cut-off score of >8 had 65% (95% confidence interval (CI) 51%-79%) sensitivity and 63% (95% CI 54%-72%) specificity for predicting mortality. It had a sensitivity of 80% (95% CI 70%-88%) and specificity of 96% (95% CI 90%-100%) for predicting treatment limitation. The DST was more discriminative than age alone (p<0.001), and potentially more discriminative than the CFS (p=0.08), for predicting treatment ceiling decisions. Conclusions: During the first wave of the COVID-19 pandemic, in a hospital without severe resource limitations, a hypothetical decision support tool had limited predictive value for mortality but appeared sensitive and specific for predicting treatment limitation.
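The cut-off analysis above reduces to sensitivity and specificity computed from a 2×2 confusion table. A minimal sketch in Python, using hypothetical counts chosen only to reproduce the reported 65%/63% point estimates (the abstract does not give patient-level data), with Wald-style 95% confidence intervals:

```python
import math

def sens_spec(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity and specificity with 95% normal-approximation CIs
    from a 2x2 confusion table (counts here are hypothetical)."""
    def prop_ci(k, n):
        p = k / n
        half = 1.96 * math.sqrt(p * (1 - p) / n)  # Wald interval half-width
        return p, max(0.0, p - half), min(1.0, p + half)
    return prop_ci(tp, tp + fn), prop_ci(tn, tn + fp)

# Hypothetical: 20 deaths (13 scored >8), 100 survivors (63 scored <=8)
(sens, s_lo, s_hi), (spec, c_lo, c_hi) = sens_spec(tp=13, fn=7, tn=63, fp=37)
print(f"sensitivity {sens:.0%} (95% CI {s_lo:.0%}-{s_hi:.0%})")
print(f"specificity {spec:.0%} (95% CI {c_lo:.0%}-{c_hi:.0%})")
```

Note that the Wald interval is only one of several CI constructions; published studies often use the Wilson or Clopper-Pearson interval, so the interval widths here will not match the abstract's.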


Critical Care ◽  
2020 ◽  
Vol 24 (1) ◽  
Author(s):  
Christopher Bourdeaux ◽  
Erina Ghosh ◽  
Louis Atallah ◽  
Krishnamoorthy Palanisamy ◽  
Payaal Patel ◽  
...  

Background: Acute kidney injury (AKI) affects a large proportion of the critically ill and is associated with worse patient outcomes. Early identification of AKI can lead to earlier initiation of supportive therapy and better management. In this study, we evaluate the impact of a computerized AKI decision support tool integrated with the critical care clinical information system (CCIS) on patient outcomes. Specifically, we hypothesize that integration of AKI guidelines into the CCIS will decrease the proportion of patients with Stage 1 AKI deteriorating into higher stages of AKI. Methods: The study was conducted in two intensive care units (ICUs) at University Hospitals Bristol, UK, in a before (control) and after (intervention) design. The intervention consisted of the Acute Kidney Injury Network (AKIN) guidelines and an AKI care bundle, which included guidance for medication usage, an AKI advisory, and a dashboard with an AKI score. Clinical data and patient outcomes were collected for all patients admitted to the units. AKI stage was calculated using the AKIN guidelines. Maximum AKI stage per admission, change in AKI stage, and other metrics were calculated for the cohort. Adherence to eGFR-based enoxaparin dosing guidelines was evaluated as a proxy for clinician awareness of AKI. Results: Each phase of the study lasted a year, and a total of 5044 admissions were included for analysis, with equal numbers of patients in the control and intervention phases. The proportion of patients worsening from Stage 1 AKI decreased from 42% (control) to 33.5% (intervention), p = 0.002. The proportion of incorrect enoxaparin doses decreased from 1.72% (control) to 0.6% (intervention), p < 0.001. The prevalence of any AKI decreased from 43.1% (control) to 37.5% (intervention), p < 0.05. Conclusions: This observational study demonstrated a significant reduction in AKI progression from Stage 1 and a reduction in overall development of AKI. In addition, a reduction in incorrect enoxaparin dosing was observed, indicating increased clinical awareness. This study demonstrates that AKI guidelines, coupled with a newly designed AKI care bundle integrated into the CCIS, can positively affect patient outcomes.
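The before/after comparisons of proportions above are the kind of result a Pearson chi-square test on a 2×2 table produces. A stdlib-only sketch with hypothetical counts (the abstract reports only percentages, so 400 Stage 1 patients per phase is an assumption, not the study's denominator); for 1 degree of freedom the p-value equals erfc(sqrt(x/2)):

```python
import math

def chi2_2x2(a: int, b: int, c: int, d: int):
    """Pearson chi-square statistic (df=1) for a 2x2 table [[a, b], [c, d]],
    with the p-value via the chi-square(1) survival function erfc(sqrt(x/2))."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return stat, math.erfc(math.sqrt(stat / 2))

# Hypothetical: 400 Stage 1 patients per phase; 42% vs 33.5% worsened
stat, p = chi2_2x2(168, 232, 134, 266)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```

With these assumed denominators the difference is significant at the 5% level; the study's reported p = 0.002 implies its Stage 1 cohort was larger than this sketch assumes.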


2020 ◽  
Vol 41 (S1) ◽  
pp. s126-s127
Author(s):  
Sonya Kothadia ◽  
Samantha Blank ◽  
Tania Campagnoli ◽  
Mhd Hashem Rajabbik ◽  
Tiffany Wiksten ◽  
...  

Background: In an effort to reduce inappropriate testing for hospital-onset Clostridioides difficile infection (HO-CDI), we sequentially implemented 2 strategies: an electronic health record-based clinical decision support tool that alerted ordering physicians about potentially inappropriate testing without a hard stop (intervention period 1), later replaced by mandatory infectious diseases (ID) attending physician approval for any HO-CDI test order (intervention period 2). We analyzed appropriate HO-CDI testing rates in both intervention periods. Methods: We performed a retrospective study of patients 18 years or older who had an HO-CDI test (performed after hospital day 3) during 3 periods: baseline (no intervention, September 2014–February 2015), intervention 1 (clinical decision support tool only, April 2015–September 2015), and intervention 2 (ID approval only, December 2017–September 2018). From each of the 3 periods, we randomly selected 150 patients who received HO-CDI testing (450 patients total). We restricted the study to the general medicine, bone marrow transplant, medical intensive care, and neurosurgical intensive care units. We assessed each HO-CDI test for appropriateness (see Table 1 for criteria), and we compared rates of appropriateness using the χ2 test or Kruskal-Wallis test, where appropriate. Results: In our cohort of 450 patients, the median age was 61 years, and the median hospital length of stay was 20 days. The median hospital day on which HO-CDI testing was performed differed among the 3 groups: 12 at baseline, 10 during intervention 1, and 8.5 during intervention 2 (P < .001). Appropriateness of HO-CDI testing increased from baseline with both interventions, but mandatory ID approval was associated with the highest rate of testing appropriateness (Fig. 1). Reasons for inappropriate ordering did not differ among the periods, with <3 documented stools being the most common criterion for inappropriateness. During intervention 2, 8 (24%) of the 33 inappropriate tests were performed with no recorded approval from an ID attending. HO-CDI test positivity rates during the 3 periods were 12%, 11%, and 21%, respectively (P = .03). Conclusions: Both the clinical decision support tool and mandatory ID attending physician approval improved the appropriateness of HO-CDI testing, with mandatory ID approval yielding the highest appropriateness rate. Even with mandatory ID attending physician approval, some tests continued to be ordered inappropriately per retrospective chart review; we suspect this is partly explained by underdocumentation of criteria such as stool frequency. In healthcare settings where the appropriateness of HO-CDI testing is suboptimal, mandatory ID attending physician approval may provide an option beyond clinical decision support tools. Funding: None. Disclosures: None.
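The 3-period comparison of positivity rates above corresponds to a chi-square test on a 2×3 contingency table. A sketch with hypothetical positive counts scaled to the 150 tests sampled per period (18, 17, and 31 positives approximate the reported 12%, 11%, and 21% but are assumptions); conveniently, for 2 degrees of freedom the chi-square p-value is exactly exp(-x/2):

```python
import math

def chi2_2xk(pos: list, neg: list):
    """Pearson chi-square for a 2 x k table of positive/negative counts per group.
    Returns (statistic, p-value); the closed-form p-value exp(-stat/2) is exact
    only for df = 2, i.e. k = 3 groups."""
    totals = [p + n for p, n in zip(pos, neg)]
    pos_rate = sum(pos) / sum(totals)
    stat = 0.0
    for p_obs, n_obs, t in zip(pos, neg, totals):
        e_pos = t * pos_rate          # expected positives under independence
        e_neg = t - e_pos             # expected negatives
        stat += (p_obs - e_pos) ** 2 / e_pos + (n_obs - e_neg) ** 2 / e_neg
    df = len(pos) - 1
    pval = math.exp(-stat / 2) if df == 2 else None
    return stat, pval

# Hypothetical counts: 150 tests per period, ~12%, ~11%, ~21% positive
stat, p = chi2_2xk(pos=[18, 17, 31], neg=[132, 133, 119])
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```

The resulting p-value is in the neighborhood of the abstract's reported P = .03, which is consistent with the interpretation that the positivity-rate difference across periods is significant at the 5% level.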

