Alert dwell time: introduction of a measure to evaluate interruptive clinical decision support alerts

2015 ◽  
Vol 23 (e1) ◽  
pp. e138-e141 ◽  
Author(s):  
Robert B McDaniel ◽  
Jonathan D Burlison ◽  
Donald K Baker ◽  
Murad Hasan ◽  
Jennifer Robertson ◽  
...  

Abstract
Metrics for evaluating interruptive prescribing alerts have many limitations. Additional methods are needed to identify opportunities to improve alerting systems and prevent alert fatigue. In this study, the authors determined whether alert dwell time (the time elapsed from when an interruptive alert is generated to when it is dismissed) could be calculated using historical alert data from log files. Drug–drug interaction (DDI) alerts from 3 years of electronic health record data were queried. Alert dwell time was calculated for 25,965 alerts, including 777 unique DDIs. The median alert dwell time was 8 s (range, 1–4913 s). Resident physicians had longer median alert dwell times than other prescribers (P < .001). The 10 most frequent DDI alerts (n = 8759 alerts) had shorter median dwell times than alerts that occurred only once (P < .001). This metric can be used in future research to evaluate the effectiveness and efficiency of interruptive prescribing alerts.
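As described, the metric reduces to a timestamp difference over paired log events. Below is a minimal sketch of such a calculation using pandas; the file name, column names, and prescriber-role field are hypothetical, since the study derived equivalent fields from EHR log files.

```python
# Sketch: computing alert dwell time from interruptive-alert log data.
# Column names (generated_at, dismissed_at, prescriber_role) are hypothetical.
import pandas as pd

alerts = pd.read_csv(
    "ddi_alert_log.csv",  # hypothetical extract of EHR alert log records
    parse_dates=["generated_at", "dismissed_at"],
)

# Dwell time: seconds elapsed from alert generation to dismissal.
alerts["dwell_s"] = (
    alerts["dismissed_at"] - alerts["generated_at"]
).dt.total_seconds()

# Summary statistics analogous to those reported in the abstract.
print("median dwell (s):", alerts["dwell_s"].median())
print("range (s):", alerts["dwell_s"].min(), "-", alerts["dwell_s"].max())

# Group comparison analogous to residents vs. other prescribers.
print(alerts.groupby("prescriber_role")["dwell_s"].median())
```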

Author(s):  
Jessica M Schwartz ◽  
Amanda J Moy ◽  
Sarah C Rossetti ◽  
Noémie Elhadad ◽  
Kenrick D Cato

Abstract
Objective: The study sought to describe the prevalence and nature of clinical expert involvement in the development, evaluation, and implementation of clinical decision support systems (CDSSs) that use machine learning to analyze electronic health record data to assist nurses and physicians in prognostic and treatment decision making (ie, predictive CDSSs) in the hospital.
Materials and Methods: A systematic search of PubMed, CINAHL, and IEEE Xplore and hand-searching of relevant conference proceedings were conducted to identify eligible articles. Empirical studies of predictive CDSSs using electronic health record data for nurses or physicians in the hospital setting, published in the last 5 years in peer-reviewed journals or conference proceedings, were eligible for synthesis. Data from eligible studies regarding clinician involvement, stage in system design, predictive CDSS intention, and target clinician were charted and summarized.
Results: Eighty studies met eligibility criteria. Clinical expert involvement was most prevalent at the beginning and late stages of system design. Most articles (95%) described developing and evaluating machine learning models; 28% of these described involving clinical experts, and nearly half of that involvement (47%) served to verify the clinical correctness or relevance of the model.
Discussion: Involvement of clinical experts in predictive CDSS design should be explicitly reported in publications and evaluated for its potential to overcome predictive CDSS adoption challenges.
Conclusions: When present, clinical expert involvement is most prevalent when predictive CDSS specifications are made or when system implementations are evaluated. Clinical experts are involved less often in developmental stages, such as verifying clinical correctness, selecting model features, preprocessing data, or serving as a gold standard.


2019 ◽  
Vol 5 ◽  
pp. 237796081985097
Author(s):  
Reba Umberger ◽  
Chayawat “Yo” Indranoi ◽  
Melanie Simpson ◽  
Rose Jensen ◽  
James Shamiyeh ◽  
...  

Clinical research in sepsis patients often requires gathering large amounts of longitudinal information. The electronic health record can be used to identify patients with sepsis, improve recruitment of study participants, and extract data. Extracting data in a reliable and usable format remains challenging, even with standard programming languages. The aims of this project were to explore infrastructures for capturing electronic health record data and to apply criteria for identifying patients with sepsis. We conducted a prospective feasibility study to locate and abstract electronic health record data for future sepsis studies. We located parameters as displayed to providers within the system and then captured data transmitted through Health Level Seven® interfaces between electronic health record systems into a prototype database. We evaluated our ability to identify patients admitted with sepsis in the target intensive care unit (ICU) at two cross-sectional time points and then over a 2-month period. A majority of the selected parameters were accessible through an iterative process of locating and abstracting them into the prototype database. We successfully identified patients admitted with sepsis to a 20-bed ICU using four data interfaces. Retrospectively applying similar criteria to data captured for 319 patients admitted to the ICU over a 2-month period was less sensitive in identifying patients admitted directly to the ICU with sepsis. Agreement with manual chart review on classification into three admission categories (sepsis, no-sepsis, and other) was fair (kappa = .39). This project confirms reported barriers to data extraction. Data can be abstracted for future research, although more work is needed to refine and create customizable reports. We recommend that researchers engage their information technology departments to electronically apply research criteria, improving research screening at the point of ICU admission. Using clinical electronic health record data to classify patients with sepsis over time is complex and challenging.
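The fair agreement reported between electronic classification and manual chart review can be quantified with Cohen's kappa. A minimal sketch of that calculation, assuming paired labels for the same ICU admissions (the example labels are illustrative, not taken from the study's dataset):

```python
# Sketch: agreement between electronic sepsis classification and manual
# chart review over the three admission categories. Data are illustrative.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

CATEGORIES = ["sepsis", "no-sepsis", "other"]

# Hypothetical paired classifications for the same admissions.
electronic = ["sepsis", "no-sepsis", "other", "sepsis", "no-sepsis"]
manual = ["sepsis", "other", "other", "no-sepsis", "no-sepsis"]

# Cohen's kappa corrects raw agreement for chance; ~.39 was reported as fair.
kappa = cohen_kappa_score(electronic, manual, labels=CATEGORIES)
print(f"Cohen's kappa: {kappa:.2f}")
print(confusion_matrix(manual, electronic, labels=CATEGORIES))
```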


2017 ◽  
Vol 27 (11) ◽  
pp. 3271-3285 ◽  
Author(s):  
Grant B Weller ◽  
Jenna Lovely ◽  
David W Larson ◽  
Berton A Earnshaw ◽  
Marianne Huebner

Hospital-specific electronic health record systems are used to inform clinical practice about best practices and quality improvements. Many surgical centers have developed deterministic clinical decision rules to discover adverse events (e.g. postoperative complications) using electronic health record data. However, these data also provide opportunities to use probabilistic methods for early prediction of adverse health events, which may be more informative than deterministic algorithms. Electronic health record data from 9598 colorectal surgery cases from 2010 to 2014 were used to predict the occurrence of selected complications, including surgical site infection, ileus, and bleeding. Consistent with previous studies, we found a high rate of missing values for both covariates and complication information (4–90%). Several machine learning classification methods were trained on an 80% random sample of cases and tested on the remaining holdout set. Predictive performance varied by complication, although an area under the receiver operating characteristic curve as high as 0.86 on testing data was achieved for bleeding complications, and accuracy for all complications compared favorably with existing clinical decision rules. Our results confirm that electronic health records provide opportunities for improved risk prediction of surgical complications; however, attention to data quality and consistency standards is an important step in predictive modeling with such data.
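The evaluation setup described in the abstract (an 80% random training sample, a holdout test set, and area under the ROC curve) is straightforward to reproduce in outline. A minimal sketch follows; the file name, outcome column, imputation strategy, and classifier are assumptions, since the study compared several methods and the real extract would need cleaning first.

```python
# Sketch: probabilistic complication prediction with an 80/20 split and
# ROC AUC evaluation on the holdout set. All names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

cases = pd.read_csv("colorectal_cases.csv")  # hypothetical numeric extract
X = cases.drop(columns=["bleeding_complication"])
y = cases["bleeding_complication"]

# 80% random training sample, 20% holdout, as described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# EHR extracts carry many missing values (4-90% here), so impute before fitting.
model = make_pipeline(
    SimpleImputer(strategy="median"),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
model.fit(X_train, y_train)

# Area under the ROC curve on the holdout set (0.86 reported for bleeding).
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
```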


2011 ◽  
Vol 4 (0) ◽  
Author(s):  
Michael Klompas ◽  
Chaim Kirby ◽  
Jason McVetta ◽  
Paul Oppedisano ◽  
John Brownstein ◽  
...  

Author(s):  
José Carlos Ferrão ◽  
Mónica Duarte Oliveira ◽  
Daniel Gartner ◽  
Filipe Janela ◽  
Henrique M. G. Martins
