Deep LDL-EHR: Real-time Routine Clinical Application of Deep Neural Network for Estimating Low-Density Lipoprotein Cholesterol on Electronic Health Record (Preprint)

2021 ◽  
Author(s):  
Young Uh

BACKGROUND Previously, we constructed a deep neural network (DNN) model for estimating low-density lipoprotein (LDL) cholesterol (LDL-DNN). OBJECTIVE We applied the LDL-DNN model to an electronic health record (EHR) system in real time (deep LDL-EHR). METHODS The Korea National Health and Nutrition Examination Survey and the Wonju Severance Christian Hospital (WSCH) datasets were used as the training and testing datasets, respectively. We measured the model’s performance using four indices: bias, root mean square error, P10 to P30, and concordance. For transfer learning (TL), we pre-trained the DNN model on the training dataset and fine-tuned it on 30% of the testing dataset. RESULTS Based on the four accuracy criteria, the deep LDL-EHR model generated inaccurate results compared with other methods for LDL-C estimation. By comparing the training and testing datasets, we identified an overfitting problem. We therefore revised the LDL-DNN model using TL algorithms and randomly selected sub-data from the WSCH dataset. The resulting LDL-DNN-TL model exhibited the best performance among all methods compared. CONCLUSIONS The LDL-DNN-TL model is expected to be suitable for routine real-time clinical application of LDL-C estimation in the clinical laboratory.
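The four accuracy indices named in the METHODS can be sketched in plain Python. This is a minimal illustration, not the authors' code: the function name is hypothetical, and interpreting "concordance" as Lin's concordance correlation coefficient is an assumption on our part.

```python
import math

def ldl_accuracy_indices(measured, estimated):
    """Agreement indices between measured and model-estimated LDL-C:
    mean bias, RMSE, P10-P30 (% of estimates within 10-30% of the
    measured value), and Lin's concordance correlation coefficient."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(estimated) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(measured, estimated)) / n
    var_x = sum((x - mx) ** 2 for x in measured) / n
    var_y = sum((y - my) ** 2 for y in estimated) / n
    errors = [y - x for x, y in zip(measured, estimated)]
    # P-k: share of estimates whose absolute error is within k% of measured
    within = lambda pct: 100.0 * sum(
        abs(e) <= pct / 100.0 * x for e, x in zip(errors, measured)) / n
    return {
        "bias": sum(errors) / n,
        "rmse": math.sqrt(sum(e ** 2 for e in errors) / n),
        "P10": within(10), "P20": within(20), "P30": within(30),
        "ccc": 2 * cov / (var_x + var_y + (mx - my) ** 2),
    }
```

A perfectly calibrated estimator yields zero bias and RMSE, 100% for every P index, and a concordance of 1.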




Author(s):  
Wade L. Schulz ◽  
H. Patrick Young ◽  
Andreas Coppi ◽  
Bobak J. Mortazavi ◽  
Zhenqiu Lin ◽  
...  

Abstract Background The electronic health record (EHR) holds the prospect of providing more complete and timely access to clinical information for biomedical research, quality assessments, and quality improvement compared to other data sources, such as administrative claims. In this study, we sought to assess the completeness and timeliness of structured diagnoses in the EHR compared to computed diagnoses for hypertension (HTN), hyperlipidemia (HLD), and diabetes mellitus (DM). Methods We determined the amount of time for a structured diagnosis to be recorded in the EHR from when an equivalent diagnosis could be computed from other structured data elements, such as vital signs and laboratory results. We used EHR data for encounters from January 1, 2012 through February 10, 2019 from an academic health system. Diagnoses for HTN, HLD, and DM were computed for patients with at least two observations above threshold separated by at least 30 days, where the thresholds were outpatient blood pressure of ≥ 140/90 mmHg, any low-density lipoprotein ≥ 130 mg/dl, or any hemoglobin A1c ≥ 6.5%, respectively. The primary measure was the length of time between the computed diagnosis and the time at which a structured diagnosis could be identified within the EHR history or problem list. Results We found that 39.8% of those with HTN, 21.6% with HLD, and 5.2% with DM did not receive a corresponding structured diagnosis recorded in the EHR. For those who received a structured diagnosis, a mean of 389, 198, and 166 days elapsed before the patient had the corresponding diagnosis of HTN, HLD, or DM, respectively, recorded in the EHR. Conclusions We found a marked temporal delay between when a diagnosis can be computed or inferred and when an equivalent structured diagnosis is recorded within the EHR. 
These findings demonstrate the continued need for additional study of the EHR to avoid bias when using observational data and reinforce the need for computational approaches to identify clinical phenotypes.
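The study's computed-diagnosis rule (at least two above-threshold observations separated by at least 30 days) can be sketched for a single-analyte case such as LDL ≥ 130 mg/dl; blood pressure would need both components checked. The function name and data shape are our assumptions, not the authors' implementation.

```python
from datetime import date

def computed_diagnosis_date(observations, threshold, min_gap_days=30):
    """Earliest date on which the diagnosis can be *computed*: the first
    above-threshold observation at least `min_gap_days` after an earlier
    above-threshold observation. Returns None if the rule is never met.
    `observations` is a list of (date, value) pairs."""
    above = sorted(d for d, v in observations if v >= threshold)
    if not above:
        return None
    first = above[0]  # comparing to the earliest qualifying observation
    for d in above[1:]:
        if (d - first).days >= min_gap_days:
            return d
    return None
```

For example, LDL results of 150, 145, and 142 mg/dl on January 1, January 10, and February 15 yield a computed hyperlipidemia diagnosis on February 15, the first observation 30 or more days after the first elevated value.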



2020 ◽  
Author(s):  
Wade L. Schulz ◽  
H. Patrick Young ◽  
Andreas Coppi ◽  
Bobak J. Mortazavi ◽  
Zhenqiu Lin ◽  
...  

Abstract The electronic health record (EHR) holds the prospect of providing more complete and timely access to clinical information for studies, quality assessments, and quality improvement compared to other data sources, such as administrative claims. Our goal was to assess the completeness and timeliness of structured diagnoses in the EHR compared to computed diagnoses for hypertension (HTN), hyperlipidemia (HLD), and diabetes mellitus (DM). We determined the amount of time for a structured diagnosis to be recorded in the EHR from when an equivalent diagnosis could be computed from other structured data elements, such as vital signs and laboratory results. Using our local instance of EHR data in the PCORnet common data model (CDM) with encounters from January 1, 2012 through February 10, 2019, we identified patients with at least two observations above threshold separated by at least 30 days. The thresholds were outpatient blood pressure of ≥ 140/90 mmHg, any low-density lipoprotein ≥ 130 mg/dl, or any hemoglobin A1c ≥ 7%, respectively. The primary measure was the length of time between the computed diagnosis and the time at which a structured diagnosis could be identified within the EHR history or problem list. We found that 39.8% of those with HTN, 21.6% with HLD, and 1.0% with DM did not receive a corresponding structured diagnosis recorded in the EHR. For those who received a structured diagnosis, a mean of 389, 198, and 106 days elapsed before the patient had the corresponding diagnosis of HTN, HLD, or DM, respectively, recorded in the EHR. We identified a marked temporal delay between when a diagnosis can be computed or inferred and when an equivalent structured diagnosis is recorded within the EHR. These findings demonstrate the continued need for additional study of the EHR to avoid bias when using observational data and reinforce the need for computational approaches to identify clinical phenotypes.
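The study's two outcome measures, the share of patients who never receive a structured diagnosis and the mean recording delay among those who do, reduce to simple summaries over per-patient date pairs. A minimal sketch, with a hypothetical data shape:

```python
from datetime import date

def summarize_diagnosis_delays(pairs):
    """pairs: (computed_date, structured_date) per patient, where
    structured_date is None if no structured diagnosis was ever
    recorded. Returns (fraction with no structured diagnosis,
    mean delay in days among those who received one)."""
    missing = sum(s is None for _, s in pairs) / len(pairs)
    delays = [(s - c).days for c, s in pairs if s is not None]
    mean_delay = sum(delays) / len(delays) if delays else None
    return missing, mean_delay
```

Applied to a cohort, this would reproduce figures like "39.8% missing, mean 389 days" for HTN from the underlying date pairs.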





2020 ◽  
Vol 154 (3) ◽  
pp. 387-393
Author(s):  
Molly E Klein ◽  
Joseph W Rudolf ◽  
Maryna Tarbunova ◽  
Tanya Jorden ◽  
Susanna R Clark ◽  
...  

Abstract Objectives We sought to make pathologists’ intraoperative consultation (IOC) results immediately available to the surgical team, other clinicians, and laboratory medicine colleagues to improve communication and decrease postanalytic errors. Methods We created an IOC report in our stand-alone laboratory information system that could be signed out prior to, and independent of, the final report, and transfer immediately to the electronic health record (EHR) as a preliminary diagnosis. We evaluated two metrics: preliminary (IOC) result review in the EHR by clinicians and postanalytic errors. Results We assessed 2,886 IOC orders from the first 22 months after implementation. Clinicians reviewed 1,956 (68%) of the IOC results while in preliminary status, including 1,399 (48%) within the first 24 hours. We evaluated 150 cases preimplementation and 300 cases postimplementation for discrepancies between the pathologist’s IOC result and the IOC result recorded by the surgeon in the operative note. Discrepancies dropped from 12 of 150 preimplementation to 6 of 150 and 7 of 150 in postimplementation years 1 and 2. One of the 25 discrepancies had a major clinical impact. Conclusions Real-time reporting of IOC results to the EHR reliably transmits results immediately to clinical teams. This strategy reduces but does not eliminate postanalytic interpretive errors by clinical teams.
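The two review metrics in the RESULTS (68% of IOC results reviewed while preliminary, 48% within 24 hours) amount to proportions over per-order review delays. A minimal sketch, assuming a hypothetical audit-log shape where each order carries its first-review delay in hours:

```python
def ioc_review_rates(review_delays_hours):
    """Given, for each IOC order, the clinician's first-review delay in
    hours after the preliminary result posted (None if it was never
    opened while preliminary), return the fraction reviewed while
    preliminary and the fraction reviewed within 24 hours."""
    n = len(review_delays_hours)
    reviewed = [d for d in review_delays_hours if d is not None]
    return len(reviewed) / n, sum(d <= 24 for d in reviewed) / n
```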







2018 ◽  
Author(s):  
Rumeng Li ◽  
Baotian Hu ◽  
Feifan Liu ◽  
Weisong Liu ◽  
Francesca Cunningham ◽  
...  

BACKGROUND Bleeding events are common and critical and may cause significant morbidity and mortality. High incidences of bleeding events are associated with cardiovascular disease in patients on anticoagulant therapy. Prompt and accurate detection of bleeding events is essential to prevent serious consequences. As bleeding events are often described in clinical notes, automatic detection of bleeding events from electronic health record (EHR) notes may improve drug-safety surveillance and pharmacovigilance. OBJECTIVE We aimed to develop a natural language processing (NLP) system to automatically classify whether an EHR note sentence contains a bleeding event. METHODS We expert-annotated 878 EHR notes (76,577 sentences and 562,630 word-tokens) to identify bleeding events at the sentence level. This annotated corpus was used to train and validate our NLP systems. We developed an innovative hybrid convolutional neural network (CNN) and long short-term memory (LSTM) autoencoder (HCLA) model that integrates a CNN architecture with a bidirectional LSTM (BiLSTM) autoencoder model to leverage large unlabeled EHR data. RESULTS HCLA achieved the best area under the receiver operating characteristic curve (0.957) and F1 score (0.938) to identify whether a sentence contains a bleeding event, thereby surpassing the strong baseline support vector machines and other CNN and autoencoder models. CONCLUSIONS By incorporating a supervised CNN model and a pretrained unsupervised BiLSTM autoencoder, the HCLA achieved high performance in detecting bleeding events.
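The two headline metrics, AUROC and F1, can be computed from sentence-level labels and scores without any ML framework. A minimal sketch; the rank-based (Mann-Whitney) AUROC formulation is standard, not specific to this paper:

```python
def f1_score(y_true, y_pred):
    """F1 for binary sentence labels (1 = contains a bleeding event)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def auroc(y_true, scores):
    """AUROC via the rank formulation: the probability that a random
    positive sentence scores above a random negative one, ties = 1/2."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.957 thus means the model ranks a randomly chosen bleeding-event sentence above a randomly chosen non-event sentence about 95.7% of the time.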



2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Tanbir Ahmed ◽  
Md Momin Al Aziz ◽  
Noman Mohammed

Abstract According to a recent study, around 99% of hospitals across the US now use electronic health record (EHR) systems. One of the most common types of EHR data is unstructured text, and unlocking hidden details from this data is critical for improving current medical practices and research endeavors. However, these textual data contain sensitive information that could compromise patient privacy. Therefore, medical textual data cannot be released publicly without undergoing privacy-protective measures. De-identification is the process of detecting and removing all sensitive information present in EHRs, and it is a necessary step towards privacy-preserving EHR data sharing. Over the last decade, there have been several proposals to de-identify textual data using manual, rule-based, and machine learning methods. In this article, we propose new methods to de-identify textual data based on the self-attention mechanism and stacked Recurrent Neural Network. To the best of our knowledge, we are the first to employ these techniques for de-identification. Experimental results on three different datasets show that our model performs better than all state-of-the-art mechanisms irrespective of the dataset. Additionally, our proposed method is significantly faster than the existing techniques. Finally, we introduce three utility metrics to judge the quality of the de-identified data.
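For contrast with the neural approach proposed here, the rule-based methods the abstract mentions can be as simple as regex substitution. This is a deliberately naive sketch of that baseline (the pattern set and placeholders are illustrative, and such rules miss the irregular PHI that learned models are built to catch):

```python
import re

# Illustrative surface patterns for a few rigid PHI types only.
PHI_PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b"),
}

def deidentify(text):
    """Replace each matched PHI span with a [TYPE] placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Names, free-text addresses, and misspelled identifiers slip straight through such rules, which is why sequence models that learn PHI boundaries from annotated text outperform them.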


