A Closer Look at the “Right” Format for Clinical Decision Support: Methods for Evaluating a Storyboard BestPractice Advisory

2020 ◽  
Vol 10 (4) ◽  
pp. 142
Author(s):  
Brian J. Douthit ◽  
R. Clayton Musser ◽  
Kay S. Lytle ◽  
Rachel L. Richesson

(1) Background: The five rights of clinical decision support (CDS) are a well-known framework for planning the nuances of CDS, but recent advancements have given us more options to modify the format of the alert. One-size-fits-all assessments fail to capture the nuance of different BestPractice Advisory (BPA) formats. To demonstrate a tailored evaluation methodology, we assessed a BPA after implementation of Storyboard for changes in alert fatigue, behavior influence, and task completion; (2) Methods: Data from 19 weeks before and after implementation were used to evaluate differences in each domain. Individual clinics were evaluated for task completion and compared for changes pre- and post-redesign; (3) Results: The change in format was correlated with an increase in alert fatigue, a decrease in erroneous free text answers, and worsened task completion at a system level. At a local level, however, 14% of clinics had improved task completion; (4) Conclusions: While the change in BPA format was correlated with decreased performance, the changes may have been driven primarily by the COVID-19 pandemic. The framework and metrics proposed can be used in future studies to assess the impact of new CDS formats. Although the changes in this study seemed undesirable in aggregate, some positive changes were observed at the level of individual clinics. Personalized implementations of CDS tools based on local need should be considered.
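The pre/post comparison described above is straightforward to reproduce from per-clinic counts. Below is a minimal, illustrative sketch (not the authors' code) assuming hypothetical per-clinic counts of BPA firings, dismissals, and completed tasks for the two 19-week periods; the clinic names and numbers are invented.

```python
# Minimal sketch (not the authors' code) of a pre/post BPA evaluation; the
# clinic names and counts below are hypothetical.
from dataclasses import dataclass

@dataclass
class ClinicCounts:
    fired: int       # BPA firings in the 19-week period
    dismissed: int   # firings dismissed without action (one proxy for alert fatigue)
    completed: int   # firings for which the requested task was completed

def completion_rate(c):
    """Share of firings that ended with the task completed."""
    return c.completed / c.fired

# Hypothetical pre/post counts for two clinics.
pre  = {"clinic_a": ClinicCounts(1200, 900, 240), "clinic_b": ClinicCounts(800, 560, 200)}
post = {"clinic_a": ClinicCounts(1500, 1230, 255), "clinic_b": ClinicCounts(760, 510, 220)}

improved = [c for c in pre if completion_rate(post[c]) > completion_rate(pre[c])]
print(f"{len(improved)}/{len(pre)} clinics improved task completion: {improved}")
```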

2019 ◽  
Author(s):  
Devin Mann ◽  
Adam Szerencsy ◽  
Leora Horwitz ◽  
Simon Jones ◽  
Masha Kuznetsova ◽  
...  

BACKGROUND Clinical decision support (CDS) is a valuable feature of electronic health records (EHRs) designed to improve quality and safety. However, due to the complexities of system design and inconsistent results, CDS tools may inadvertently increase alert fatigue and contribute to physician burnout. A/B testing, or rapid-cycle randomized tests, is a useful method that can be applied to the EHR in order to understand and iteratively improve design choices embedded within CDS tools. OBJECTIVE This paper describes how rapid randomized controlled trials (RCTs) embedded within EHRs can be used to quickly ascertain the superiority of potential CDS tools to improve their usability, reduce alert fatigue and promote quality of care. METHODS A multi-step process combining tools from user-centered design, A/B testing and implementation science is used to understand, ideate, prototype, test, analyze and improve each candidate CDS. CDS engagement metrics (alert views, ignores, orders) are used to evaluate which CDS version is superior. RESULTS Two experiments are highlighted to demonstrate the impact of the process. First, after multiple rounds of usability testing, a revised CDS influenza alert was tested against usual care in a rapid RCT. The new alert text resulted in minimal impact but the failure triggered another round of testing that identified key issues and led to a 70% reduction in alert volume in the next round. In the second experiment, the process was used to test three versions (financial, quality, regulatory) of text supporting tobacco cessation alerts as well as three supporting images. Three rounds of RCTs showed that the financial framing was 5-10% more effective than the other two but that adding images did not have a positive impact. CONCLUSIONS These data support the potential for this new process to rapidly develop, deploy and improve CDS within an EHR. This approach may be an important tool for improving the impact and experience of CDS. CLINICALTRIAL Our flu alert trial was registered in January 2018 with ClinicalTrials.gov, registration number NCT03415425. Our tobacco alert trial was registered in October 2018 with ClinicalTrials.gov, registration number NCT03714191.
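As an illustration of the kind of EHR-embedded A/B assignment described above, the sketch below deterministically assigns providers to alert arms by hashing an identifier; this is an assumed mechanism shown for exposition only, not the randomization machinery used in the study.

```python
# Illustrative sketch only: one simple way to keep A/B arm assignment stable
# across sessions is to hash a provider identifier. The arm names and the
# provider_id values are assumptions, not details from the study.
import hashlib

ARMS = ["usual_care_alert", "revised_alert"]

def assign_arm(provider_id):
    """Map a provider deterministically to an arm so they always see the same CDS version."""
    digest = hashlib.sha256(provider_id.encode("utf-8")).hexdigest()
    return ARMS[int(digest, 16) % len(ARMS)]

for pid in ["prov-001", "prov-002", "prov-003"]:
    print(pid, "->", assign_arm(pid))
```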


2018 ◽  
Author(s):  
Sundas Khan ◽  
Safiya Richardson ◽  
Andrew Liu ◽  
Vinodh Mechery ◽  
Lauren McCullagh ◽  
...  

BACKGROUND Successful clinical decision support (CDS) tools can help use evidence-based medicine to effectively improve patient outcomes. However, the impact of these tools has been limited by low provider adoption due to overtriggering, leading to alert fatigue. We developed a tracking mechanism for monitoring trigger (percent of total visits for which the tool triggers) and adoption (percent of completed tools) rates of a complex CDS tool based on the Wells criteria for pulmonary embolism (PE). OBJECTIVE We aimed to monitor and evaluate the adoption and trigger rates of the tool and assess whether ongoing tool modifications would improve adoption rates. METHODS As part of a larger clinical trial, a CDS tool was developed using the Wells criteria to calculate pretest probability for PE at 2 tertiary centers’ emergency departments (EDs). The tool had multiple triggers: any order for D-dimer, computed tomography (CT) of the chest with intravenous contrast, CT pulmonary angiography (CTPA), ventilation-perfusion scan, or lower extremity Doppler ultrasound. A tracking dashboard was developed using Tableau to monitor real-time trigger and adoption rates. Based on initial low provider adoption rates of the tool, we conducted small focus groups with key ED providers to elicit barriers to tool use. We identified overtriggering of the tool for non-PE-related evaluations and inability to order CT testing for intermediate-risk patients. Thus, the tool was modified to allow CT testing for the intermediate-risk group and not to trigger for CT chest with intravenous contrast orders. A dialogue box, “Are you considering PE for this patient?” was added before the tool triggered to account for CTPAs ordered for aortic dissection evaluation. RESULTS In the ED of tertiary center 1, 95,295 patients visited during the academic year. The tool triggered for an average of 509 patients per month (average trigger rate 2036/30,234, 6.73%) before the modifications, reducing to 423 patients per month (average trigger rate 1629/31,361, 5.22%). In the ED of tertiary center 2, 88,956 patients visited during the academic year, with the tool triggering for about 473 patients per month (average trigger rate 1892/29,706, 6.37%) before the modifications and for about 400 per month (average trigger rate 1534/30,006, 5.12%) afterward. The modifications resulted in a significant 4.5- and 3-fold increase in provider adoption rates in tertiary centers 1 and 2, respectively. The modifications increased the average monthly adoption rate from 23.20/360 (6.5%) tools to 81.60/280.20 (29.3%) tools and 46.60/318.80 (14.7%) tools to 111.20/263.40 (42.6%) tools in centers 1 and 2, respectively. CONCLUSIONS Close postimplementation monitoring of CDS tools may help improve provider adoption. Adaptive modifications based on user feedback may increase targeted CDS with lower trigger rates, reducing alert fatigue and increasing provider adoption. Iterative improvements and a postimplementation monitoring dashboard can significantly improve adoption rates.
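The two dashboard metrics defined above (trigger rate as the percent of total visits for which the tool fires, adoption rate as the percent of fired tools that are completed) can be computed as in the sketch below; the monthly counts are illustrative round numbers, not the study's data, and the real-time dashboard in the study was built in Tableau.

```python
# Sketch of the two monitoring metrics; the monthly counts are illustrative.
def trigger_rate(triggers, visits):
    """Percent of ED visits for which the CDS tool fired."""
    return 100.0 * triggers / visits

def adoption_rate(completed, triggers):
    """Percent of fired tools that providers completed."""
    return 100.0 * completed / triggers

# Illustrative month before and after the modifications at one center.
before = {"visits": 30_000, "triggers": 2_000, "completed": 130}
after  = {"visits": 30_000, "triggers": 1_600, "completed": 470}

for label, m in (("before", before), ("after", after)):
    print(f"{label}: trigger {trigger_rate(m['triggers'], m['visits']):.2f}%, "
          f"adoption {adoption_rate(m['completed'], m['triggers']):.1f}%")
```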


10.2196/16651 ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. e16651
Author(s):  
Jonathan Austrian ◽  
Felicia Mendoza ◽  
Adam Szerencsy ◽  
Lucille Fenelon ◽  
Leora I Horwitz ◽  
...  

Background Clinical decision support (CDS) is a valuable feature of electronic health records (EHRs) designed to improve quality and safety. However, due to the complexities of system design and inconsistent results, CDS tools may inadvertently increase alert fatigue and contribute to physician burnout. A/B testing, or rapid-cycle randomized tests, is a useful method that can be applied to the EHR in order to rapidly understand and iteratively improve design choices embedded within CDS tools. Objective This paper describes how rapid randomized controlled trials (RCTs) embedded within EHRs can be used to quickly ascertain the superiority of potential CDS design changes to improve their usability, reduce alert fatigue, and promote quality of care. Methods A multistep process combining tools from user-centered design, A/B testing, and implementation science was used to understand, ideate, prototype, test, analyze, and improve each candidate CDS. CDS engagement metrics (alert views, acceptance rates) were used to evaluate which CDS version was superior. Results To demonstrate the impact of the process, 2 experiments are highlighted. First, after multiple rounds of usability testing, a revised CDS influenza alert was tested against usual care CDS in a rapid (~6 weeks) RCT. The new alert text had minimal impact on firings per patient per day, but this failure triggered another round of review that identified key technical improvements (ie, removal of the dismissal button and of firings in procedural areas) that led to a dramatic decrease in firings per patient per day (23.1 to 7.3). In the second experiment, the process was used to test 3 versions (financial, quality, regulatory) of text supporting tobacco cessation alerts as well as 3 supporting images. Based on 3 rounds of RCTs, there was no significant difference in acceptance rates based on the framing of the messages or the addition of images. Conclusions These experiments support the potential for this new process to rapidly develop, deploy, and rigorously evaluate CDS within an EHR. We also identified important considerations in applying these methods. This approach may be an important tool for improving the impact of and experience with CDS. Trial Registration Flu alert trial: ClinicalTrials.gov NCT03415425; https://clinicaltrials.gov/ct2/show/NCT03415425. Tobacco alert trial: ClinicalTrials.gov NCT03714191; https://clinicaltrials.gov/ct2/show/NCT03714191
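A natural way to compare the two arms of such a rapid RCT on an engagement metric like acceptance rate is a two-proportion z-test. The sketch below is a generic illustration with made-up counts, not the analysis reported in the paper.

```python
# Generic sketch (not the paper's analysis) of comparing alert acceptance rates
# between two randomized CDS arms with a two-proportion z-test; counts are made up.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(accepted_a, shown_a, accepted_b, shown_b):
    """Return (z statistic, two-sided p-value) for the difference in acceptance rates."""
    p_a, p_b = accepted_a / shown_a, accepted_b / shown_b
    pooled = (accepted_a + accepted_b) / (shown_a + shown_b)
    se = sqrt(pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: alerts accepted / alerts shown in each arm.
z, p = two_proportion_z(accepted_a=180, shown_a=1200, accepted_b=150, shown_b=1180)
print(f"z = {z:.2f}, p = {p:.3f}")
```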


Author(s):  
Mah Laka ◽  
Adriana Milazzo ◽  
Drew Carter ◽  
Tracy Merlin

Introduction Clinical decision support systems (CDSS) are being developed to support evidence-based antibiotic prescribing and reduce the risk of inappropriate prescribing or over-prescribing; however, adoption of CDSS into the health system is rarely sustained. We aimed to understand implementation challenges at the macro (policymakers), meso (organizational), and micro (individual practice) levels to identify the drivers of CDSS non-adoption. Methods We adopted a mixed-methods study design comprising: (i) a systematic review and meta-analysis to assess the impact of CDSS on appropriate antibiotic prescribing, (ii) an online survey of clinicians in Australian hospitals and primary care to identify drivers of CDSS adoption, and (iii) in-depth interviews with policymakers to evaluate policy-level challenges and opportunities for CDSS implementation. Results CDSS implementation can improve compliance with antibiotic prescribing guidelines, with relative decreases in mortality, the volume of antibiotic use, and length of hospital stay. However, CDSS provision alone is not enough to achieve these benefits. Important predictors of clinicians' perceptions regarding CDSS adoption include the seniority of clinical end-users (years of experience), use of CDSS, and the care setting. Clinicians in primary care and those with significant clinical experience are less likely to use CDSS due to a lack of trust in the system, fear of compromising professional autonomy, and patients' expectations. A lack of policy considerations for CDSS integration into a multi-stakeholder healthcare system has limited the organizational capacity to foster change and align processes to support the innovation. Conclusions These results, drawn from multiple lines of evidence, highlight the importance of a holistic approach to health technology management. System-wide guidance that integrates individual, organizational, and system-level factors is needed when implementing CDSS so that effective antibiotic stewardship can be facilitated.


2021 ◽  
Vol 12 (02) ◽  
pp. 199-207
Author(s):  
Liang Yan ◽  
Thomas Reese ◽  
Scott D. Nelson

Abstract Objective Increasingly, pharmacists provide team-based care that impacts patient care; however, the extent of recent clinical decision support (CDS) targeted to support the evolving roles of pharmacists is unknown. Our objective was to evaluate the literature to understand the impact of clinical pharmacists using CDS. Methods We searched MEDLINE, EMBASE, and Cochrane Central for randomized controlled trials, nonrandomized trials, and quasi-experimental studies that evaluated CDS tools developed for inpatient pharmacists as target users. The primary outcome of our analysis was the impact of CDS on patient safety, quality use of medications, and quality of care. Outcomes were scored as positive, negative, or neutral. The secondary outcome was the proportion of CDS developed for tasks other than medication order verification. Study quality was assessed using the Newcastle–Ottawa Scale. Results Of 4,365 potentially relevant articles, 15 were included. Five studies were randomized controlled trials. All included studies were rated as good quality. Of the studies evaluating inpatient pharmacists using a CDS tool, four showed significantly improved quality use of medications, four showed significantly improved patient safety, and three showed significantly improved quality of care. Six studies (40%) supported expanded roles of clinical pharmacists. Conclusion These results suggest that CDS can support clinical inpatient pharmacists in preventing medication errors and optimizing pharmacotherapy. Moreover, an increasing number of CDS tools have been developed for pharmacists' roles outside of order verification, thereby further supporting and establishing pharmacists as leaders in safe and effective pharmacotherapy.


2019 ◽  
Vol 144 (7) ◽  
pp. 869-877 ◽  
Author(s):  
Marios A. Gavrielides ◽  
Meghan Miller ◽  
Ian S. Hagemann ◽  
Heba Abdelal ◽  
Zahra Alipour ◽  
...  

Context.— Clinical decision support (CDS) systems could assist less experienced pathologists with certain diagnostic tasks for which subspecialty training or extensive experience is typically needed. The effect of decision support on pathologist performance for such diagnostic tasks has not been examined. Objective.— To examine the impact of a CDS tool for the classification of ovarian carcinoma subtypes by pathology trainees in a pilot observer study using digital pathology. Design.— Histologic review of 90 whole slide images from 75 ovarian cancer patients was conducted by 6 pathology residents using (1) unaided review of whole slide images, and (2) aided review, in which, in addition to whole slide images, observers used a CDS tool that provided information about the presence of 8 histologic features important for subtype classification that had been identified previously by an expert in gynecologic pathology. The reference standard of ovarian subtype consisted of majority consensus from a panel of 3 gynecologic pathology experts. Results.— Aided review improved pairwise concordance with the reference standard for 5 of 6 observers by 3.3% to 17.8% (for 2 observers, the increase was statistically significant) and improved mean interobserver agreement by 9.2% (not statistically significant). Observers benefited the most when the CDS tool prompted them to look for missed histologic features that were definitive for a certain subtype. Observer performance varied widely across cases with unanimous and nonunanimous reference classification, supporting the need to balance data sets in terms of case difficulty. Conclusions.— Findings showed the potential of CDS systems to close the knowledge gap between pathologists for complex diagnostic tasks.
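The two agreement measures reported above (pairwise concordance with the reference standard and mean interobserver agreement) reduce to simple percent-agreement calculations. The sketch below uses invented subtype labels purely for illustration.

```python
# Sketch of percent-agreement calculations on invented ovarian subtype labels
# (HGSC, CCC, EC, LGSC, MC are used here only as example categories).
from itertools import combinations

def percent_agreement(a, b):
    """Fraction of cases on which two label lists agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

reference = ["HGSC", "CCC", "EC", "HGSC", "MC"]   # stand-in for the expert consensus panel
observers = {
    "resident_1": ["HGSC", "CCC", "EC", "LGSC", "MC"],
    "resident_2": ["HGSC", "EC",  "EC", "HGSC", "MC"],
}

for name, labels in observers.items():
    print(name, "concordance with reference:", percent_agreement(labels, reference))

pairwise = [percent_agreement(observers[a], observers[b]) for a, b in combinations(observers, 2)]
print("mean interobserver agreement:", sum(pairwise) / len(pairwise))
```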


2009 ◽  
Vol 18 (01) ◽  
pp. 84-95 ◽  
Author(s):  
A. Y. S. Lau ◽  
G. Tsafnat ◽  
V. Sintchenko ◽  
F. Magrabi ◽  
E. Coiera

Summary Objectives To review the recent research literature in clinical decision support systems (CDSS). Methods A review of recent literature was undertaken, focussing on CDSS evaluation, consumers and public health, the impact of translational bioinformatics on CDSS design, and CDSS safety. Results In recent years, researchers have concentrated much less on the development of decision technologies and have focussed more on the impact of CDSS in the clinical world. Recent work highlights that traditional process measures of CDSS effectiveness, such as document relevance, are poor proxy measures for decision outcomes. Measuring the dynamics of decision making, for example via decision velocity, may produce a more accurate picture of effectiveness. Another trend is the broadening of the user base for CDSS beyond front-line clinicians. Consumers are now a major focus for biomedical informatics, as are public health officials, tasked with detecting and managing disease outbreaks at a health system rather than individual patient level. Bioinformatics is also changing the nature of CDSS. Apart from personalisation of therapy recommendations, translational bioinformatics is creating new challenges in the interpretation of the meaning of genetic data. Finally, there is much recent interest in the safety and effectiveness of computerised physician order entry (CPOE) systems, given that prescribing and administration errors are a significant cause of morbidity and mortality. Of note, there is still much controversy surrounding the contention that poorly designed, implemented or used CDSS may actually lead to harm. Conclusions CDSS research remains an active and evolving area of research, as CDSS penetrate more widely beyond their traditional domain into consumer decision support, and as decisions become more complex, for example by involving sequence-level genetic data.


2018 ◽  
Vol 56 (7) ◽  
pp. 1063-1070 ◽  
Author(s):  
Enrique Rodriguez-Borja ◽  
Africa Corchon-Peyrallo ◽  
Esther Barba-Serrano ◽  
Celia Villalba Martínez ◽  
Arturo Carratala Calvo

Abstract Background: We assessed the impact of several “send & hold” clinical decision support rules (CDSRs) within the electronic request system for vitamins A, E, K, B1, B2, B3, B6 and C for all outpatients at a large health department. Methods: When ordering through the electronic request system, providers (except for all our primary care physicians, who served as a non-intervention control group) were always asked to answer several compulsory questions regarding the main indication, symptomatology, suspected diagnosis, active vitamin treatments, etc., for each vitamin test using a drop-down list format. After sample arrival, tests were put on hold internally by our laboratory information system (LIS) until their appropriateness was reviewed by two staff pathologists according to the provided answers and LIS records (i.e. “send & hold”). The number of tests for each analyte was compared between the 10-month periods before and after CDSR implementation in both groups. Results: After implementation, vitamin test volumes decreased by 40% for vitamin A, 29% for vitamin E, 42% for vitamin K, 37% for vitamin B1, 85% for vitamin B2, 68% for vitamin B3, 65% for vitamin B6 and 59% for vitamin C (all p values 0.03 or lower except for vitamin B3), whereas in the control group the majority increased or remained stable. In patients with rejected vitamin tests, no new requests or comments reporting adverse clinical outcomes attributable to the rejection were identified. Conclusions: “Send & hold” CDSRs are a promising informatics tool that can support utilization management and enhance the pathologist’s leadership role as a test specialist.
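The before/after volume comparison underlying these results is a simple percent-change calculation per analyte. The sketch below applies the reported percentage decreases to hypothetical baseline volumes (the baselines are not from the paper).

```python
# Sketch of the per-analyte percent-change calculation; the baseline volumes are
# hypothetical, chosen so the decreases roughly match those reported in the abstract.
pre_volumes  = {"A": 500, "E": 620, "K": 310, "B1": 410, "B2": 260, "B3": 120, "B6": 340, "C": 290}
post_volumes = {"A": 300, "E": 440, "K": 180, "B1": 258, "B2": 39,  "B3": 38,  "B6": 119, "C": 119}

for vitamin, pre in pre_volumes.items():
    post = post_volumes[vitamin]
    change = 100.0 * (post - pre) / pre
    print(f"vitamin {vitamin}: {pre} -> {post} tests ({change:+.0f}%)")
```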


2018 ◽  
Vol 26 (1) ◽  
pp. 37-43 ◽  
Author(s):  
Skye Aaron ◽  
Dustin S McEvoy ◽  
Soumi Ray ◽  
Thu-Trang T Hickman ◽  
Adam Wright

Abstract Background Rule-based clinical decision support alerts are known to malfunction, but tools for discovering malfunctions are limited. Objective To investigate whether user override comments can be used to discover malfunctions. Methods We manually classified all rules in our database with at least 10 override comments into 3 categories based on a sample of override comments: “broken,” “not broken, but could be improved,” and “not broken.” We used 3 methods (frequency of comments, a cranky word list heuristic, and a Naïve Bayes classifier trained on a sample of comments) to automatically rank rules based on features of their override comments. We evaluated each ranking using the manual classification as truth. Results Of the rules investigated, 62 were broken, 13 could be improved, and the remaining 45 were not broken. Frequency of comments performed worse than a random ranking, with precision at 20 of 8 and AUC = 0.487. The cranky comments heuristic performed better, with precision at 20 of 16 and AUC = 0.723. The Naïve Bayes classifier had precision at 20 of 17 and AUC = 0.738. Discussion Override comments uncovered malfunctions in 26% of all rules active in our system. This is a lower bound on total malfunctions and much higher than expected. Even for low-resource organizations, reviewing comments identified by the cranky word list heuristic may be an effective and feasible way of finding broken alerts. Conclusion Override comments are a rich data source for finding alerts that are broken or could be improved. If possible, we recommend monitoring all override comments on a regular basis.
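To make the two automatic rankings concrete, the sketch below scores a handful of toy override comments with a "cranky word" count and a Naïve Bayes text classifier, then aggregates the scores per rule. The word list, comments, and labels are invented, and training and scoring on the same tiny sample is for illustration only; the study trained on a labeled sample of comments and evaluated rankings against the manual classification.

```python
# Toy sketch of ranking CDS rules by override-comment features; the comments,
# labels, and cranky word list are invented. Requires scikit-learn.
from collections import defaultdict

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# (rule_id, override comment) pairs; toy examples only.
comments = [
    ("rule_1", "this alert is wrong, patient already vaccinated"),
    ("rule_1", "fires every time, totally broken"),
    ("rule_2", "not applicable today"),
    ("rule_2", "patient declined"),
    ("rule_3", "irrelevant alert again, please fix"),
]

# Heuristic: count "cranky" words per rule (the word list is an assumption).
CRANKY = {"wrong", "broken", "irrelevant", "fix", "annoying"}
cranky_counts = defaultdict(int)
for rule, text in comments:
    cranky_counts[rule] += sum(w.strip(",.!?") in CRANKY for w in text.lower().split())

# Classifier: Naive Bayes over comment text (1 = comment suggests a malfunction),
# trained and scored on the same toy comments purely for illustration.
labels = [1, 1, 0, 0, 1]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(text for _, text in comments)
model = MultinomialNB().fit(X, labels)

rule_scores = defaultdict(list)
for (rule, _), p_broken in zip(comments, model.predict_proba(X)[:, 1]):
    rule_scores[rule].append(p_broken)

ranking = sorted(rule_scores, key=lambda r: -sum(rule_scores[r]) / len(rule_scores[r]))
print("cranky word counts:", dict(cranky_counts))
print("Naive Bayes ranking (most likely malfunctioning first):", ranking)
```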

