Transparent Reporting on Research Using Unstructured Electronic Health Record Data to Generate ‘Real World’ Evidence of Comparative Effectiveness and Safety

Drug Safety ◽  
2019 ◽  
Vol 42 (11) ◽  
pp. 1297-1309 ◽  
Author(s):  
Shirley V. Wang ◽  
Olga V. Patterson ◽  
Joshua J. Gagne ◽  
Jeffrey S. Brown ◽  
Robert Ball ◽  
...  
Medical Care ◽  
2013 ◽  
Vol 51 ◽  
pp. S30-S37 ◽  
Author(s):  
William R. Hersh ◽  
Mark G. Weiner ◽  
Peter J. Embi ◽  
Judith R. Logan ◽  
Philip R.O. Payne ◽  
...  

2020 ◽  
Vol 27 (7) ◽  
pp. 1173-1185 ◽  
Author(s):  
Seyedeh Neelufar Payrovnaziri ◽  
Zhaoyi Chen ◽  
Pablo Rengifo-Moreno ◽  
Tim Miller ◽  
Jiang Bian ◽  
...  

Abstract
Objective: To conduct a systematic scoping review of explainable artificial intelligence (XAI) models that use real-world electronic health record data, categorize these techniques according to different biomedical applications, identify gaps in current studies, and suggest future research directions.
Materials and Methods: We searched MEDLINE, IEEE Xplore, and the Association for Computing Machinery (ACM) Digital Library to identify relevant papers published between January 1, 2009 and May 1, 2019. We summarized these studies based on the year of publication, prediction tasks, machine learning algorithms, dataset(s) used to build the models, and the scope, category, and evaluation of the XAI methods. We further assessed the reproducibility of the studies in terms of the availability of data and code, and discussed open issues and challenges.
Results: Forty-two articles were included in this review. We reported the research trend and the most-studied diseases. We grouped the XAI methods into 5 categories: knowledge distillation and rule extraction (N = 13), intrinsically interpretable models (N = 9), data dimensionality reduction (N = 8), attention mechanisms (N = 7), and feature interaction and importance (N = 5).
Discussion: XAI evaluation is an open issue that requires deeper focus in the case of medical applications. We also discuss the importance of reproducibility of research work in this field, as well as the challenges and opportunities of XAI from the points of view of 2 medical professionals.
Conclusion: Based on our review, we found that XAI evaluation in medicine has not been adequately and formally practiced. Reproducibility remains a critical concern. Ample opportunities exist to advance XAI research in medicine.


Author(s):  
Mark J. Pletcher ◽  
Valy Fontil ◽  
Thomas Carton ◽  
Kathryn M. Shaw ◽  
Myra Smith ◽  
...  

Background: Uncontrolled blood pressure (BP) is a leading preventable cause of death that remains common in the US population despite the availability of effective medications. New technologies and program innovations have high potential to improve BP but may be expensive and burdensome for patients, clinicians, health systems, and payers, and may not produce the desired results or reduce existing disparities in BP control.
Methods and Results: The PCORnet Blood Pressure Control Laboratory is a platform designed to enable national surveillance and to facilitate quality improvement and comparative effectiveness research. The platform uses PCORnet, the National Patient-Centered Clinical Research Network, for engagement of health systems and collection of electronic health record data, and the Eureka Research Platform for eConsent and collection of patient-reported outcomes and mHealth data from wearable devices and smartphones. Three demonstration projects are underway: BP Track will conduct national surveillance of BP control and related clinical processes by measuring theory-derived pragmatic BP control metrics using electronic health record data, with a focus on tracking disparities over time; BP MAP will conduct a cluster-randomized trial comparing the effectiveness of 2 versions of a BP control quality improvement program; and BP Home will conduct an individual patient-level randomized trial comparing the effectiveness of smartphone-linked versus standard home BP monitoring. Thus far, BP Track has collected electronic health record data from over 826 000 eligible patients with hypertension who completed ≈3.1 million ambulatory visits. Preliminary results demonstrate substantial room for improvement in BP control (<140/90 mm Hg), which was 58% overall, and in the clinical processes relevant to BP control. For example, only 12% of patients with hypertension who had a high BP measurement during an ambulatory visit received an order for a new antihypertensive medication. A minimal sketch of such metrics appears below.
Conclusions: The PCORnet Blood Pressure Control Laboratory is designed to be a reusable platform for efficient surveillance and comparative effectiveness research; results from the demonstration projects are forthcoming.
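
As a rough illustration of the kind of pragmatic BP control metrics described in the abstract (control rate at <140/90 mm Hg and treatment intensification after a high reading), here is a minimal Python sketch over a simplified visit-level layout. The record fields, helper names, and example data are hypothetical and do not reflect the PCORnet common data model or the BP Track implementation.

```python
from dataclasses import dataclass

# Hypothetical, simplified EHR visit record; real PCORnet data models differ.
@dataclass
class Visit:
    patient_id: str
    systolic: int
    diastolic: int
    new_antihypertensive_ordered: bool = False

def bp_controlled(visit: Visit) -> bool:
    """BP control per the <140/90 mm Hg threshold used in the abstract."""
    return visit.systolic < 140 and visit.diastolic < 90

def control_rate(visits: list[Visit]) -> float:
    """Share of patients whose most recent visit shows controlled BP."""
    latest: dict[str, Visit] = {}
    for v in visits:  # assumes visits are supplied in chronological order
        latest[v.patient_id] = v
    controlled = sum(bp_controlled(v) for v in latest.values())
    return controlled / len(latest) if latest else 0.0

def intensification_rate(visits: list[Visit]) -> float:
    """Among visits with uncontrolled BP, share with a new antihypertensive order."""
    high = [v for v in visits if not bp_controlled(v)]
    ordered = sum(v.new_antihypertensive_ordered for v in high)
    return ordered / len(high) if high else 0.0

# Example usage with made-up data
visits = [
    Visit("p1", 150, 95, new_antihypertensive_ordered=False),
    Visit("p2", 128, 82),
    Visit("p3", 142, 88, new_antihypertensive_ordered=True),
]
print(f"BP control rate: {control_rate(visits):.0%}")
print(f"Treatment intensification rate: {intensification_rate(visits):.0%}")
```

In practice these metrics would be computed from longitudinal EHR extracts and stratified by demographic groups to track disparities over time, as the abstract describes.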


2019 ◽  
Vol 28 (9) ◽  
pp. 1267-1277 ◽  
Author(s):  
Sarah‐Jo Sinnott ◽  
Liam Smeeth ◽  
Elizabeth Williamson ◽  
Pablo Perel ◽  
Dorothea Nitsch ◽  
...  

Circulation ◽  
2020 ◽  
Vol 142 (Suppl_3) ◽  
Author(s):  
Dan Riskin ◽  
Keri L Monda ◽  
Ricardo Dent ◽  
A. Reshad Garan

Introduction: Real-world evidence (RWE) is increasingly used for regulatory and market access decision-making. In heart failure (HF), typical structured datasets have limitations in data accuracy and in identifying relevant patient characteristics. Understanding which characteristics require enhancement from unstructured data, and how to validly apply extraction methods, will improve the definition of complex patient cohorts.
Hypothesis: Augmenting structured with unstructured electronic health record (EHR) data may overcome challenges in accurately identifying relevant HF patient characteristics.
Methods: Using EHR data from 4,288 primary care encounters, 20 clinical concepts were defined a priori by 3 HF experts. A reference standard was generated through chart abstraction, with each record reviewed by at least 2 annotators. Inter-rater reliability (IRR) was measured by Cohen's kappa. EHR structured data (EHR-S) extracted with traditional query techniques and EHR unstructured data (EHR-U) extracted with artificial intelligence (AI) technologies were tested for accuracy against the reference standard.
Results: In EHR-S, recall ranged from 0% to 95.1% and precision from 52.9% to 100%. In EHR-U processed using AI, recall ranged from 80.4% to 99.7% and precision from 82.3% to 100%. Results demonstrated a 45.1% absolute difference and a 98.1% relative increase in F1-score (Table). Reference standard IRR was 95.3%. The calculation of these accuracy measures is sketched below.
Conclusions: The credibility and applicability of RWE rely on accurate identification of a patient cohort. This study suggests that readily available data sources may not accurately identify patient phenotypes in HF. Novel means of using AI with EHR-U may improve such efforts, particularly for conditions and symptoms. This approach offers a pathway for defining highly accurate HF cohorts that may be useful in studies with narrowly defined or complex phenotypes, such as those in which inclusion and exclusion criteria are specific and outcomes require validity.
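
To make the reported accuracy measures concrete, here is a minimal Python sketch of how precision, recall, and F1-score are computed against a chart-abstraction reference standard, and how an absolute and relative F1 difference between EHR-S and EHR-U would be derived. The confusion-matrix counts below are hypothetical; the abstract does not report per-concept counts.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Standard precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical counts for one clinical concept, scored against the
# chart-abstraction reference standard.
ehr_s = precision_recall_f1(tp=40, fp=5, fn=60)   # structured data, traditional queries
ehr_u = precision_recall_f1(tp=95, fp=10, fn=5)   # unstructured data, AI extraction

for label, (p, r, f1) in [("EHR-S", ehr_s), ("EHR-U", ehr_u)]:
    print(f"{label}: precision={p:.1%} recall={r:.1%} F1={f1:.1%}")

# Absolute and relative F1 differences, mirroring how the abstract reports the gain.
abs_diff = ehr_u[2] - ehr_s[2]
rel_increase = abs_diff / ehr_s[2]
print(f"F1 absolute difference: {abs_diff:.1%}, relative increase: {rel_increase:.1%}")
```

The same per-concept scoring would be repeated for each of the 20 clinical concepts, with Cohen's kappa used separately to quantify agreement between the human annotators who built the reference standard.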

