Patient generated health data and electronic health record integration in oncologic surgery: A call for artificial intelligence and machine learning

2020 ◽  
Vol 123 (1) ◽  
pp. 52-60
Author(s):  
Laleh G. Melstrom ◽  
Andrei S. Rodin ◽  
Lorenzo A. Rossi ◽  
Paul Fu ◽  
Yuman Fong ◽  
...  

Author(s):  
Jeffrey G Klann ◽  
Griffin M Weber ◽  
Hossein Estiri ◽  
Bertrand Moal ◽  
Paul Avillach ◽  
...  

Abstract Introduction The Consortium for Clinical Characterization of COVID-19 by EHR (4CE) is an international collaboration addressing COVID-19 with federated analyses of electronic health record (EHR) data. Objective We sought to develop and validate a computable phenotype for COVID-19 severity. Methods Twelve 4CE sites participated. First, we developed an EHR-based severity phenotype consisting of six code classes, and we validated it on patient hospitalization data from the 12 4CE clinical sites against the outcomes of ICU admission and/or death. We also piloted an alternative machine-learning approach and compared selected predictors of severity to the 4CE phenotype at one site. Results The full 4CE severity phenotype had a pooled sensitivity of 0.73 and specificity of 0.83 for the combined outcome of ICU admission and/or death. The sensitivity of individual code categories for acuity was highly variable, differing by up to 0.65 across sites. At one pilot site, the expert-derived phenotype had a mean AUC of 0.903 (95% CI: 0.886, 0.921), compared to an AUC of 0.956 (95% CI: 0.952, 0.959) for the machine-learning approach. Billing codes were poor proxies of ICU admission, with precision and recall as low as 49% compared to chart review. Discussion We developed a severity phenotype using six code classes that proved resilient to coding variability across international institutions. In contrast, machine-learning approaches may overfit hospital-specific orders. Manual chart review revealed discrepancies even in the gold-standard outcomes, possibly due to heterogeneous pandemic conditions. Conclusion We developed an EHR-based severity phenotype for COVID-19 in hospitalized patients and validated it at 12 international sites.
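
The validation step described above, checking a code-class-based severity flag against the ICU-admission-or-death outcome, can be illustrated with a minimal sketch. This is a hypothetical illustration, not the 4CE code: the code-class names and data layout below are assumptions, and only the sensitivity/specificity arithmetic reflects what the abstract reports.

```python
# Hypothetical sketch of the validation arithmetic only; not the 4CE phenotype code.
import pandas as pd

# Stand-ins for the six severity code classes (the actual classes are not named here).
SEVERITY_CLASSES = [
    "severe_lab_flag", "ventilation_procedure", "vasopressor_medication",
    "dialysis_procedure", "severity_diagnosis", "icu_order",
]

def is_severe(row: pd.Series) -> bool:
    """Flag a hospitalization as severe if any of the code classes is present."""
    return any(bool(row[c]) for c in SEVERITY_CLASSES)

def sensitivity_specificity(df: pd.DataFrame) -> tuple[float, float]:
    """Compare the phenotype flag against the ICU-admission-or-death outcome."""
    pred = df.apply(is_severe, axis=1)
    truth = df["icu_or_death"].astype(bool)
    tp = int((pred & truth).sum())
    fp = int((pred & ~truth).sum())
    fn = int((~pred & truth).sum())
    tn = int((~pred & ~truth).sum())
    return tp / (tp + fn), tn / (tn + fp)
```

In a federated design such as 4CE, each site would run this kind of evaluation locally and share only the aggregate counts or rates, which is consistent with the pooled sensitivity and specificity reported above.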


2021 ◽  
Author(s):  
Kaio Bin ◽  
Adler Araújo Ribeiro Melo ◽  
José Guilherme Franco Da Rocha ◽  
Renata Pivi De Almeida ◽  
Vilson Cobello Junior ◽  
...  

BACKGROUND AIRA is an artificial intelligence system designed to reduce the time physicians spend filling out the electronic health record (EHR); it won the first edition of MIT Hacking Medicine held in Brazil in 2020. As a proof of concept, AIRA was implemented in an administrative process before being applied to a medical process. OBJECTIVE The aim of this study was to determine the impact of AIRA when it eliminates Medical Care Registration (MCR) in the EHR by an administrative officer. METHODS This is a comparative before-and-after study following the guidance "Evaluating digital health products" from Public Health England. AIRA was created and implemented at the CEAC (Employee Attention Center) of HCFMUSP. A total of 25,507 attendances during 2020 were evaluated to determine AIRA's impact. The total number of MCRs, the duration of health screening, and the time between the end of screening and the beginning of medical care were compared between the pre- and post-AIRA periods. RESULTS AIRA eliminated the need for MCR by an administrative officer in 92% of attendances (p<0.0001). The nurses' health screening time increased by 16% (p<0.0001) during implementation and by 13% (p<0.0001) in the first three months after implementation, but decreased by 4% (p<0.0001) thereafter. The mean and median total time to medical care after nurse screening decreased by 30% (p<0.0001) and 41% (p<0.0001), respectively. CONCLUSIONS By eliminating a non-value-added activity, Medical Care Registration in the EHR by an administrative officer, the implementation of AIRA reduced the time to medical care after nurse screening in an urgent care setting.
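
A before-and-after comparison of the kind described, contrasting the screening-to-care interval between the pre- and post-AIRA periods, might be sketched as follows. The column names and the choice of a Mann-Whitney U test are assumptions for illustration; the abstract does not specify which statistical test was used.

```python
# Hypothetical sketch of a pre/post comparison; not the AIRA system itself.
import pandas as pd
from scipy.stats import mannwhitneyu

def compare_time_to_care(df: pd.DataFrame) -> dict:
    """df has one row per attendance, with 'period' in {'pre', 'post'} and
    'minutes_to_care' = time from end of nurse screening to start of medical care."""
    pre = df.loc[df["period"] == "pre", "minutes_to_care"]
    post = df.loc[df["period"] == "post", "minutes_to_care"]
    _, p_value = mannwhitneyu(pre, post, alternative="two-sided")
    return {
        "median_pre": float(pre.median()),
        "median_post": float(post.median()),
        "median_reduction_pct": 100.0 * (1 - post.median() / pre.median()),
        "p_value": float(p_value),
    }
```

A nonparametric test is used here only because waiting times are typically skewed; the same structure applies to the screening-time and MCR-count comparisons reported in the abstract.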


Author(s):  
Emily Kogan ◽  
Kathryn Twyman ◽  
Jesse Heap ◽  
Dejan Milentijevic ◽  
Jennifer H. Lin ◽  
...  

Abstract Background Stroke severity is an important predictor of patient outcomes and is commonly measured with the National Institutes of Health Stroke Scale (NIHSS) scores. Because these scores are often recorded as free text in physician reports, structured real-world evidence databases seldom include the severity. The aim of this study was to use machine learning models to impute NIHSS scores for all patients with newly diagnosed stroke from multi-institution electronic health record (EHR) data. Methods NIHSS scores available in the Optum® de-identified Integrated Claims-Clinical dataset were extracted from physician notes by applying natural language processing (NLP) methods. The cohort analyzed in the study consists of the 7149 patients with an inpatient or emergency room diagnosis of ischemic stroke, hemorrhagic stroke, or transient ischemic attack and a corresponding NLP-extracted NIHSS score. A subset of these patients (n = 1033, 14%) was held out for independent validation of model performance, and the remaining patients (n = 6116, 86%) were used for training the model. Several machine learning models were evaluated, and their parameters were optimized using cross-validation on the training set. The model with optimal performance, a random forest model, was ultimately evaluated on the holdout set. Results Leveraging machine learning, we identified the main factors in electronic health record data for assessing stroke severity, including death within the same month as stroke occurrence, length of hospital stay following stroke occurrence, aphagia/dysphagia diagnosis, hemiplegia diagnosis, and whether a patient was discharged to home or self-care. Comparing the imputed NIHSS scores to the NLP-extracted NIHSS scores on the holdout data set yielded an R2 (coefficient of determination) of 0.57, an R (Pearson correlation coefficient) of 0.76, and a root-mean-squared error of 4.5. Conclusions Machine learning models built on EHR data can be used to determine proxies for stroke severity. This enables severity to be incorporated in studies of stroke patient outcomes using administrative and EHR databases.
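
The imputation workflow described, training a random forest on NLP-extracted NIHSS scores with a roughly 14% holdout and reporting R2, Pearson r, and root-mean-squared error, could look roughly like the sketch below. Feature names are hypothetical stand-ins for the predictors listed above, and the hyperparameters are assumptions; this is not the authors' exact pipeline.

```python
# Hypothetical sketch of NIHSS imputation with a random forest; feature names
# and hyperparameters are assumptions, not the authors' published pipeline.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

FEATURES = [
    "death_same_month", "length_of_stay_days", "dysphagia_dx",
    "hemiplegia_dx", "discharged_home_or_self_care",
]

def fit_and_evaluate(df: pd.DataFrame) -> dict:
    """Train on ~86% of patients and score the ~14% holdout, mirroring the study split."""
    X, y = df[FEATURES], df["nihss_nlp_extracted"]  # NLP-extracted NIHSS as the target
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.14, random_state=0
    )
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    return {
        "r2": r2_score(y_test, pred),
        "pearson_r": float(pearsonr(y_test, pred)[0]),
        "rmse": float(np.sqrt(mean_squared_error(y_test, pred))),
    }
```

In practice the model selection and hyperparameter tuning described in the Methods would be done with cross-validation on the training portion only, with the holdout set touched once for the final evaluation.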


2019 ◽  
Vol 6 (10) ◽  
pp. e688-e695 ◽  
Author(s):  
Julia L Marcus ◽  
Leo B Hurley ◽  
Douglas S Krakower ◽  
Stacey Alexeeff ◽  
Michael J Silverberg ◽  
...  
