Evaluating a handheld decision support device in pediatric intensive care settings

JAMIA Open ◽  
2019 ◽  
Vol 2 (1) ◽  
pp. 49-61
Author(s):  
Tera L Reynolds ◽  
Patricia R DeLucia ◽  
Karen A Esquibel ◽  
Todd Gage ◽  
Noah J Wheeler ◽  
...  

Abstract Objective To evaluate end-user acceptance and the effect of a commercial handheld decision support device in pediatric intensive care settings. The technology, pac2, was designed to assist nurses in calculating medication dose volumes and infusion rates at the bedside. Materials and Methods The devices, manufactured by InformMed Inc., were deployed in the pediatric and neonatal intensive care units in 2 health systems. This mixed methods study assessed end-user acceptance, as well as pac2’s effect on the cognitive load associated with bedside dose calculations and the rate of administration errors. Towards this end, data were collected in both pre- and postimplementation phases, including through ethnographic observations, semistructured interviews, and surveys. Results Although participants desired a handheld decision support tool such as pac2, their use of pac2 was limited. The nature of the critical care environment, nurses’ risk perceptions, and the usability of the technology emerged as major barriers to use. Data did not reveal significant differences in cognitive load or administration errors after pac2 was deployed. Discussion and Conclusion Despite its potential for reducing adverse medication events, the commercial standalone device evaluated in the study was not used by the nursing participants and thus had very limited effect. Our results have implications for the development and deployment of similar mobile decision support technologies. For example, they suggest that integrating the technology into hospitals’ existing IT infrastructure and employing targeted implementation strategies may facilitate nurse acceptance. Ultimately, the usability of the design will be essential to reaping any potential benefits.

2016 ◽  
Vol 25 (6) ◽  
pp. 479-486 ◽  
Author(s):  
Stacy Hevener ◽  
Barbara Rickabaugh ◽  
Toby Marsh

Background Little information is available on the use of tools in intensive care units to help nurses determine when to restrain a patient. Patients in medical-surgical intensive care units are often restrained for their safety, to prevent them from removing therapeutic devices. Research indicates that restraints do not necessarily prevent injuries or removal of devices by patients. Objectives To decrease use of restraints in a medical-surgical intensive care unit and to determine if a decision support tool is useful in helping bedside nurses determine whether or not to restrain a patient. Methods A quasi-experimental study design was used for this pilot study. Data were collected for each patient each shift, indicating whether therapeutic devices were removed and whether restraints were used. An online educational activity, supplemented by 1-on-1 discussion about proper use of restraints, alternatives, and use of a restraint decision tool, was provided. Frequency of restraint use was determined. Descriptive statistics and thematic analysis were used to examine nurses’ perceptions of the decision support tool. Results Use of restraints was reduced by 32%. No unplanned extubations or disruptions of life-threatening therapeutic devices by unrestrained patients occurred. Conclusions With implementation of the decision support tool, nurses decreased their use of restraints yet maintained patients’ safety. A decision support tool may help nurses who are undecided or who need reassurance about their decision to restrain or not restrain a patient.


2020 ◽  
Author(s):  
Lisa Strifler ◽  
Jan M. Barnsley ◽  
Michael Hillmer ◽  
Sharon E. Straus

Abstract Background: Implementation theories, models and frameworks offer guidance when implementing and sustaining healthcare evidence-based interventions. However, selection can be challenging given the myriad of potential options. We propose to develop a decision support tool to facilitate the appropriate selection of an implementation theory, model or framework in practice. To inform tool development, this study aimed to explore barriers and facilitators to identifying and selecting implementation theories, models and frameworks in research and practice, as well as end-user preferences for features and functions of the proposed tool. Methods: We used an interpretive descriptive approach to conduct semi-structured interviews with implementation researchers and practitioners in Canada, the United States and Australia. Audio recordings were transcribed verbatim. Data were inductively coded by a single investigator, with a subset of 20% coded independently by a second investigator, and analyzed using thematic analysis. Results: Twenty-four individuals participated in the study. Categories of barriers/facilitators to inform tool development included characteristics of the individual or team conducting implementation and characteristics of the implementation theory, model or framework. Major barriers to selection included inconsistent terminology, poor fit with the implementation context and limited knowledge about and training in existing theories, models and frameworks. Major facilitators to selection included the importance of clear and concise language and evidence that the theory, model or framework was applied in a relevant health setting or context. Participants were enthusiastic about the development of a decision support tool that is user-friendly, accessible and practical.
Preferences for tool features included key questions about the implementation intervention or project (e.g., purpose, stage of implementation, intended target for change) and a comprehensive list of relevant theories, models and frameworks to choose from, along with a glossary of terms and the contexts in which they were applied. Conclusions: An easy-to-use decision support tool that addresses key barriers to selecting an implementation theory, model or framework in practice may be beneficial to individuals who facilitate implementation practice activities. Findings on end-user preferences for tool features and functions will inform tool development and design through a user-centered approach.


Critical Care ◽  
2020 ◽  
Vol 24 (1) ◽  
Author(s):  
Christopher Bourdeaux ◽  
Erina Ghosh ◽  
Louis Atallah ◽  
Krishnamoorthy Palanisamy ◽  
Payaal Patel ◽  
...  

Abstract Background Acute kidney injury (AKI) affects a large proportion of the critically ill and is associated with worse patient outcomes. Early identification of AKI can lead to earlier initiation of supportive therapy and better management. In this study, we evaluate the impact of a computerized AKI decision support tool integrated with the critical care clinical information system (CCIS) on patient outcomes. Specifically, we hypothesize that integration of AKI guidelines into the CCIS will decrease the proportion of patients with Stage 1 AKI deteriorating into higher stages of AKI. Methods The study was conducted in two intensive care units (ICUs) at University Hospitals Bristol, UK, in a before (control) and after (intervention) format. The intervention consisted of the AKIN guidelines and an AKI care bundle, which included guidance for medication usage, an AKI advisory, and a dashboard with the AKI score. Clinical data and patient outcomes were collected from all patients admitted to the units. AKI stage was calculated using the Acute Kidney Injury Network (AKIN) guidelines. Maximum AKI stage per admission, change in AKI stage and other metrics were calculated for the cohort. Adherence to eGFR-based enoxaparin dosing guidelines was evaluated as a proxy for clinician awareness of AKI. Results Each phase of the study lasted a year, and a total of 5044 admissions were included for analysis, with equal numbers of patients in the control and intervention stages. The proportion of patients worsening from Stage 1 AKI decreased from 42% (control) to 33.5% (intervention), p = 0.002. The proportion of incorrect enoxaparin doses decreased from 1.72% (control) to 0.6% (intervention), p < 0.001. The prevalence of any AKI decreased from 43.1% (control) to 37.5% (intervention), p < 0.05. Conclusions This observational study demonstrated a significant reduction in AKI progression from Stage 1 and a reduction in overall development of AKI.
In addition, a reduction in incorrect enoxaparin dosing was observed, indicating increased clinical awareness. This study demonstrates that AKI guidelines coupled with a newly designed AKI care bundle integrated into the CCIS can impact patient outcomes positively.



2021 ◽  
Author(s):  
Alex Rigby ◽  
Sopan Patil ◽  
Panagiotis Ritsos

Land Use Land Cover (LULC) change is widely recognised as one of the most important factors impacting river basin hydrology. It is therefore imperative that the hydrological impacts of various LULC changes are considered for effective flood management strategies and future infrastructure decisions within a catchment. The Soil and Water Assessment Tool (SWAT) has been used extensively to assess the hydrological impacts of LULC change. Areas with assumed homogeneous hydrologic properties, based on their LULC, soil type and slope, make up the basic computational units of SWAT, known as Hydrologic Response Units (HRUs). LULC changes in a catchment are typically modelled in SWAT through alterations to the input files that define the properties of these HRUs. However, to our knowledge, the process of making such changes to the SWAT input files is often cumbersome and non-intuitive. This affects the usability of SWAT as a decision support tool amongst a wider pool of applied users (e.g., engineering teams in environmental regulatory agencies and local authorities). In this study, we seek to address this issue by developing a user-friendly toolkit that will: (1) allow the end user to specify, through a Graphical User Interface (GUI), various types of LULC changes at multiple locations within their study catchment, (2) run the SWAT+ model (the latest version of SWAT) with the specified LULC changes, and (3) enable interactive visualisation of the different SWAT+ output variables to quantify the hydrological impacts of these scenarios. Importantly, our toolkit does not require the end user to have any operational knowledge of the SWAT+ model to use it as a decision support tool. Our toolkit will be trialled at 15 catchments in Gwynedd county, Wales, which has experienced multiple high flood events, and consequent economic damage, in the recent past. We anticipate this toolkit will be a valuable addition to the decision-making processes of Gwynedd County Council for the planning and development of future flood alleviation schemes as well as other infrastructure projects.
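The core file manipulation such a toolkit automates, repointing selected HRUs at a different land-use/management entry in the model's input files, might be sketched as below. The `hru-data.hru` file name, its two header lines, its whitespace-delimited layout, and the `lu_mgt` column name are assumptions about the SWAT+ input format made for illustration, not details taken from the abstract:

```python
def change_hru_landuse(hru_text: str, hru_ids: set, new_lu_mgt: str) -> str:
    """Return a copy of a (hypothetical) whitespace-delimited hru-data.hru
    table with the land-use/management column of selected HRUs replaced.

    Assumed layout: line 1 = file title, line 2 = column headers, first
    column = integer HRU id, and a column literally named 'lu_mgt'.
    """
    lines = hru_text.splitlines()
    lu_col = lines[1].split().index("lu_mgt")  # column position is assumed
    out = lines[:2]
    for row in lines[2:]:
        cols = row.split()
        if cols and int(cols[0]) in hru_ids:
            cols[lu_col] = new_lu_mgt  # e.g. switch agricultural -> urban
        out.append("  ".join(cols))
    return "\n".join(out)
```

A GUI front end as described in the abstract would collect the HRU ids and the replacement land-use entry from the user, rewrite the file with a function like this, and then invoke the SWAT+ executable on the modified project before visualising the outputs.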


2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S61-S61
Author(s):  
Anna Sick-Samuels ◽  
Jules Bergmann ◽  
Matthew Linz ◽  
James Fackler ◽  
Sean Berenholtz ◽  
...  

Abstract Background Clinicians obtain endotracheal aspirate (ETA) cultures from mechanically ventilated patients in the pediatric intensive care unit (PICU) for the evaluation of ventilator-associated infection (i.e., tracheitis or pneumonia). Positive cultures prompt clinicians to treat with antibiotics even though ETA cultures cannot distinguish bacterial colonization from infection. We undertook a quality improvement initiative to standardize the use of endotracheal cultures in the evaluation of ventilator-associated infections among hospitalized children. Methods A multidisciplinary team developed a clinical decision support algorithm to guide when to obtain ETA cultures from patients admitted to the PICU and ventilated for >1 day. We disseminated the algorithm to all bedside providers in the PICU in April 2018 and compared the rate of cultures one year before and after the intervention using Poisson regression and a quasi-experimental interrupted time-series analysis (ITSA). Charge savings were estimated based on an average charge of $220 for one ETA culture. Results In the pre-intervention period, there was an average of 46 ETA cultures per month, a total of 557 cultures over 5,092 ventilator-days; after introduction of the algorithm, there were 19 cultures obtained per month, a total of 231 cultures over 3,554 ventilator-days (incidence rate 10.9 vs. 6.5 per 100 ventilator-days, Figure 1). There was a 43% decrease in the monthly rate of cultures (IRR 0.57, 95% CI 0.50–0.67, P < 0.001). The ITSA revealed a pre-existing 2% decline in the monthly culture rate (IRR 0.98, 95% CI 0.97–1.00, P = 0.01), an immediate 44% drop (IRR 0.56, 95% CI 0.45–0.69, P = 0.02) and a stable rate in the post-intervention period (IRR 1.03, 95% CI 0.99–1.07, P = 0.09). The intervention led to an estimated $6000 in monthly charge savings.
Conclusion Introduction of a clinical decision support algorithm to standardize the collection of ETA cultures from ventilated children was associated with a significant decline in the rate of ETA cultures. Additional investigation will assess the impact on balancing measures and secondary outcomes including mortality, duration of ventilation, duration of admission, readmissions, and antibiotic prescribing. Disclosures All authors: No reported disclosures.
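The headline rates and charge estimate can be reproduced directly from the counts reported in the abstract:

```python
# Counts reported in the abstract
pre_cultures, pre_vent_days = 557, 5_092    # pre-intervention year
post_cultures, post_vent_days = 231, 3_554  # post-intervention year

pre_rate = pre_cultures / pre_vent_days * 100    # per 100 ventilator-days
post_rate = post_cultures / post_vent_days * 100

print(round(pre_rate, 1))   # matches the reported 10.9
print(round(post_rate, 1))  # matches the reported 6.5

# Crude rate ratio; the reported IRR of 0.57 comes from Poisson regression,
# so this unadjusted figure lands close to, but not exactly on, that value.
print(round(post_rate / pre_rate, 2))

# Charge savings: 46 - 19 = 27 fewer monthly cultures at $220 each,
# i.e. $5,940, consistent with the "estimated $6000" in the abstract.
print((46 - 19) * 220)
```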


2020 ◽  
Vol 41 (S1) ◽  
pp. s126-s127
Author(s):  
Sonya Kothadia ◽  
Samantha Blank ◽  
Tania Campagnoli ◽  
Mhd Hashem Rajabbik ◽  
Tiffany Wiksten ◽  
...  

Background: In an effort to reduce inappropriate testing of hospital-onset Clostridioides difficile infection (HO-CDI), we sequentially implemented 2 strategies: an electronic health record-based clinical decision support tool that alerted ordering physicians about potentially inappropriate testing without a hard stop (intervention period 1), replaced by mandatory infectious diseases attending physician approval for any HO-CDI test order (intervention period 2). We analyzed appropriate HO-CDI testing rates of both intervention periods. Methods: We performed a retrospective study of patients 18 years or older who had an HO-CDI test (performed after hospital day 3) during 3 different periods: baseline (no intervention, September 2014–February 2015), intervention 1 (clinical decision support tool only, April 2015–September 2015), and intervention 2 (ID approval only, December 2017–September 2018). From each of the 3 periods, we randomly selected 150 patients who received HO-CDI testing (450 patients total). We restricted the study to the general medicine, bone marrow transplant, medical intensive care, and neurosurgical intensive care units. We assessed each HO-CDI test for appropriateness (see Table 1 for criteria), and we compared rates of appropriateness using the χ2 test or Kruskal-Wallis test, where appropriate. Results: In our cohort of 450 patients, the median age was 61 years, and the median hospital length of stay was 20 days. The median hospital day that HO-CDI testing was performed differed among the 3 groups: 12 days at baseline, 10 days during intervention 1, and 8.5 days during intervention 2 (P < .001). Appropriateness of HO-CDI testing increased from the baseline with both interventions, but mandatory ID approval was associated with the highest rate of testing appropriateness (Fig. 1). Reasons for inappropriate ordering did not differ among the periods, with <3 documented stools being the most common criterion for inappropriateness.
During intervention 2, among the 33 inappropriate tests, 8 (24%) occurred where no approval from an ID attending was recorded. HO-CDI test positivity rates during the 3 time periods were 12%, 11%, and 21%, respectively (P = .03). Conclusions: We found that both the clinical decision support tool and mandatory ID attending physician approval interventions improved the appropriateness of HO-CDI testing, with mandatory ID attending physician approval leading to the highest appropriateness rate. Even with mandatory ID attending physician approval, some tests continued to be ordered inappropriately per retrospective chart review; we suspect that this is partly explained by underdocumentation of criteria such as stool frequency. In healthcare settings where appropriateness of HO-CDI testing is not optimal, mandatory ID attending physician approval may provide an option beyond clinical decision support tools. Funding: None. Disclosures: None

