Severity of Disease Estimation and Risk-Adjustment for Comparison of Outcomes in Mechanically Ventilated Patients Using Electronic Routine Care Data

2015 ◽  
Vol 36 (7) ◽  
pp. 807-815 ◽  
Author(s):  
Maaike S. M. van Mourik ◽  
Karel G. M. Moons ◽  
Michael V. Murphy ◽  
Marc J. M. Bonten ◽  
Michael Klompas ◽  
...  

BACKGROUND: Valid comparison between hospitals for benchmarking or pay-for-performance incentives requires accurate correction for underlying disease severity (case-mix). However, existing models are either very simplistic or require extensive manual data collection. OBJECTIVE: To develop a disease severity prediction model for risk-adjustment in mechanically ventilated patients based solely on data routinely available in electronic health records. DESIGN: Retrospective cohort study. PARTICIPANTS: Mechanically ventilated patients from a single tertiary medical center (2006–2012). METHODS: Predictors were extracted from electronic data repositories (demographic characteristics, laboratory tests, medications, microbiology results, procedure codes, and comorbidities) and assessed for feasibility and generalizability of data collection. Models for in-hospital mortality of increasing complexity were built using logistic regression. Estimated disease severity from these models was linked to rates of ventilator-associated events. RESULTS: A total of 20,028 patients were initiated on mechanical ventilation, of whom 3,027 died in hospital. For models of incremental complexity, the area under the receiver operating characteristic curve ranged from 0.83 to 0.88. A simple model including demographic characteristics, type of intensive care unit, time to intubation, blood culture sampling, 8 common laboratory tests, and surgical status achieved an area under the receiver operating characteristic curve of 0.87 (95% CI, 0.86–0.88) with adequate calibration. The estimated disease severity was associated with the occurrence of ventilator-associated events. CONCLUSIONS: Accurate estimation of disease severity in ventilated patients from electronic routine care data was feasible with simple models. These estimates may be useful for risk-adjustment in ventilated patients. Additional research is necessary to validate and refine these models. Infect. Control Hosp. Epidemiol. 2015;36(7):807–815.
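The modelling step described in this abstract (a logistic regression for in-hospital mortality on routinely collected predictors, assessed by AUROC and decile calibration) can be sketched roughly as follows. This is an illustrative example on synthetic data, not the authors' model: the predictor names, coefficients, and simulated outcome are assumptions made only to keep the demo self-contained.

```python
# Illustrative sketch only: a logistic-regression severity model evaluated by
# AUROC and decile calibration, loosely mirroring the approach described above.
# Predictors and data are synthetic; this is not the authors' actual model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(62, 15, n),                   # demographic characteristic
    "surgical_icu": rng.integers(0, 2, n),          # type of intensive care unit
    "hours_to_intubation": rng.exponential(24, n),  # time to intubation
    "blood_culture_drawn": rng.integers(0, 2, n),   # blood culture sampling
    "creatinine": rng.lognormal(0.0, 0.4, n),       # example routine laboratory test
    "lactate": rng.lognormal(0.3, 0.5, n),          # example routine laboratory test
})
# Simulated outcome so the example runs end to end.
logit = -5.5 + 0.03 * df["age"] + 0.6 * df["lactate"] + 0.4 * df["creatinine"]
df["died_in_hospital"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = df.drop(columns="died_in_hospital")
y = df["died_in_hospital"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict_proba(X_te)[:, 1]              # estimated disease severity
print(f"AUROC: {roc_auc_score(y_te, pred):.2f}")

# Calibration: observed vs predicted mortality across risk deciles.
obs, exp = calibration_curve(y_te, pred, n_bins=10, strategy="quantile")
for o, e in zip(obs, exp):
    print(f"predicted {e:.2f}  observed {o:.2f}")
```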

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Francesco Gavelli ◽  
Alexandra Beurton ◽  
Jean-Louis Teboul ◽  
Nello De Vita ◽  
Danila Azzolina ◽  
...  

Background: The end-expiratory occlusion (EEXPO) test detects preload responsiveness, but it is 15 s long and induces small changes in cardiac index (CI). It is doubtful whether the Starling bioreactance device, which averages CI over 24 s and refreshes the displayed value every 4 s (Starling-24.4), can detect the EEXPO-induced changes in CI (ΔCI). Our primary goal was to test whether this Starling device version detects preload responsiveness through EEXPO. We also tested whether shortening the averaging and refresh times to 8 s and 1 s, respectively (Starling-8.1), improves the accuracy of the device in detecting preload responsiveness using EEXPO. Methods: In 42 mechanically ventilated patients, during a 15-s EEXPO, we measured ΔCI through calibrated pulse contour analysis (CIpulse, PiCCO2 device) and using the Starling device. For the latter, we considered both CIStarling-24.4 from the commercial version and CIStarling-8.1 derived from the raw data. For relative ΔCIStarling-24.4 and ΔCIStarling-8.1 during EEXPO, we calculated the area under the receiver operating characteristic curve (AUROC) for detecting preload responsiveness, defined as an increase in CIpulse ≥ 10% during passive leg raising (PLR). For both methods, the correlation coefficient vs. ΔCIpulse was calculated. Results: Twenty-six patients were preload responders and sixteen were non-responders. The AUROC for ΔCIStarling-24.4 was significantly lower than for ΔCIStarling-8.1 (0.680 ± 0.086 vs. 0.899 ± 0.049, respectively; p = 0.027). A significant correlation was observed between ΔCIStarling-8.1 and ΔCIpulse (r = 0.42; p = 0.009), but not between ΔCIStarling-24.4 and ΔCIpulse. During PLR, both ΔCIStarling-24.4 and ΔCIStarling-8.1 reliably detected preload responsiveness. Conclusions: Shortening the averaging and refresh times of the bioreactance signal to 8 s and 1 s, respectively, increases the reliability of the Starling device in detecting EEXPO-induced ΔCI. Trial registration: No. IDRCB: 2018-A02825-50. Registered 13 December 2018.
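The signal-processing idea behind this result (a long averaging window dilutes a brief 15-s change in cardiac index, while a shorter window preserves it) can be shown with a toy simulation. The sketch below is not the Starling device's actual bioreactance algorithm; the signal, noise level, and EEXPO effect size are invented solely to demonstrate the effect of the averaging window.

```python
# Illustrative sketch only (not the Starling device's algorithm): a 24-s trailing
# average blunts the CI change induced by a 15-s end-expiratory occlusion,
# while an 8-s average preserves more of it. All values are simulated.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(120)                        # one simulated sample per second
ci_raw = np.full(t.size, 3.0)             # baseline CI, L/min/m2
ci_raw[60:75] += 0.25                     # simulated 15-s EEXPO effect
ci_raw += rng.normal(0, 0.05, t.size)     # measurement noise

def trailing_mean(x, window):
    """Trailing moving average over `window` samples (1 sample = 1 s here)."""
    out = np.empty_like(x)
    for i in range(x.size):
        out[i] = x[max(0, i - window + 1): i + 1].mean()
    return out

ci_24 = trailing_mean(ci_raw, 24)         # mimics the Starling-24.4 averaging window
ci_8 = trailing_mean(ci_raw, 8)           # mimics the Starling-8.1 averaging window

baseline, occlusion = slice(40, 60), slice(60, 75)
for name, sig in [("24-s average", ci_24), ("8-s average", ci_8)]:
    delta = 100 * (sig[occlusion].max() - sig[baseline].mean()) / sig[baseline].mean()
    print(f"{name}: apparent EEXPO-induced change in CI = {delta:.1f}%")
```

With any such brief step change, a 24-s window necessarily mixes pre-occlusion samples into every displayed value, so the apparent ΔCI is attenuated relative to the 8-s window.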


10.2196/24163 ◽  
2020 ◽  
Vol 8 (11) ◽  
pp. e24163
Author(s):  
Md Mohaimenul Islam ◽  
Hsuan-Chia Yang ◽  
Tahmina Nasrin Poly ◽  
Yu-Chuan Jack Li

Background: Laboratory tests are an essential part of patient safety, as patients' screening, diagnosis, and follow-up rely heavily on them. Diagnoses can be wrong, missed, or delayed if laboratory tests are ordered or performed erroneously. However, the value of correct laboratory test ordering remains underappreciated by policymakers and clinicians. Artificial intelligence methods such as machine learning and deep learning (DL) are now widely used as powerful tools for pattern recognition in large data sets. Developing an automated laboratory test recommendation tool from data available in electronic health records (EHRs) could therefore support current clinical practice. Objective: The objective of this study was to develop an artificial intelligence–based automated model that provides laboratory test recommendations based on simple variables available in EHRs. Methods: A retrospective analysis of the National Health Insurance database between January 1, 2013, and December 31, 2013, was performed. We reviewed the records of all patients who visited the cardiology department at least once and were prescribed laboratory tests. The data set was split into training and testing sets (80:20) to develop the DL model. For internal validation, 25% of the data were randomly selected from the training set to evaluate model performance. Results: We used the area under the receiver operating characteristic curve (AUROC), precision, recall, and Hamming loss as comparative measures. A total of 129,938 prescriptions were used in our model. The DL-based automated recommendation system for laboratory tests achieved a significantly higher AUROC (macro- and micro-averaged AUROC of 0.76 and 0.87, respectively). Using a low cutoff, the model identified appropriate laboratory tests with 99% sensitivity. Conclusions: The DL-based artificial intelligence model exhibited good discriminative capability for predicting laboratory tests from routinely collected EHR data. DL approaches can facilitate optimal laboratory test selection for patients, which may in turn improve patient safety. However, future studies should assess the cost-effectiveness of implementing this model in real-world clinical settings.
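The evaluation setup described here (multi-label prediction of which laboratory tests to order, scored by macro- and micro-averaged AUROC and Hamming loss) can be sketched as follows. The study used a deep learning model on National Health Insurance claims; the stand-in below trains a small multilayer perceptron on synthetic data purely to illustrate the pipeline and metrics, and every variable name and size is hypothetical.

```python
# Hedged sketch of a multi-label lab-test recommendation pipeline with the
# metrics named above (macro/micro AUROC, Hamming loss). Synthetic data and a
# small MLP stand in for the study's deep learning model and claims data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, hamming_loss

rng = np.random.default_rng(0)
n_visits, n_features, n_tests = 4000, 20, 10     # hypothetical sizes
X = rng.normal(size=(n_visits, n_features))      # e.g. age, sex, diagnoses, drugs
W = rng.normal(size=(n_features, n_tests))
Y = (1 / (1 + np.exp(-(X @ W))) > 0.5).astype(int)   # which lab tests were ordered

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_tr, Y_tr)                              # multi-label fit: Y is n x n_tests

proba = clf.predict_proba(X_te)                  # per-test probabilities
print("AUROC (macro):", roc_auc_score(Y_te, proba, average="macro"))
print("AUROC (micro):", roc_auc_score(Y_te, proba, average="micro"))

# A low decision cutoff favours sensitivity (few appropriate tests missed),
# at the cost of recommending more tests overall.
pred_low_cutoff = (proba >= 0.2).astype(int)
print("Hamming loss at cutoff 0.2:", hamming_loss(Y_te, pred_low_cutoff))
```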


Perfusion ◽  
2020 ◽  
Vol 35 (8) ◽  
pp. 802-805
Author(s):  
Hari Krishnan Kanthimathinathan ◽  
Sarah Webb ◽  
David Ellis ◽  
Margaret Farley ◽  
Timothy J Jones

Introduction: There is a need for a universal risk-adjustment model that can be used regardless of the indication for and nature of neonatal or paediatric extracorporeal membrane oxygenation support. The 'paediatric extracorporeal membrane oxygenation prediction' model appeared to be a promising candidate but required external validation. Methods: We performed a validation study using an institutional database of extracorporeal membrane oxygenation patients (2008-2019). We used the published paediatric extracorporeal membrane oxygenation prediction score calculator to derive model-estimated mortality for each patient in this cohort. We assessed model performance using the standardized mortality ratio, the area under the receiver operating characteristic curve, and the Hosmer-Lemeshow goodness-of-fit test across 10 deciles of risk. Results: We analysed 154 extracorporeal membrane oxygenation episodes in 150 patients. About 53% of the patients were full-term neonates (age ⩽30 days and gestation at birth ⩾37 weeks). The commonest category of extracorporeal membrane oxygenation support was cardiac (42%). Overall mortality was 37% (57/154) in the paediatric intensive care unit and 42% (64/154) in hospital. The distribution of estimated mortality risk was similar to that in the derivation study. The standardized mortality ratio was 0.81 based on the paediatric extracorporeal membrane oxygenation prediction model of risk-adjustment. The area under the receiver operating characteristic curve was 0.55 (0.45-0.64), and the Hosmer-Lemeshow test indicated poor fit (p < 0.001). Conclusion: This small single-centre study with a small number of events was unable to validate the paediatric extracorporeal membrane oxygenation prediction model of risk-adjustment. Although this remains the most promising of the available models, further validation in larger data sets and/or refinement may be required before widespread use.
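For readers unfamiliar with the two validation statistics used above, the sketch below computes a standardized mortality ratio (observed deaths divided by model-expected deaths) and a Hosmer-Lemeshow statistic across 10 risk deciles. The predicted risks and outcomes are simulated; this is not the paediatric extracorporeal membrane oxygenation prediction model itself.

```python
# Hedged sketch of the validation metrics described above: SMR and a
# Hosmer-Lemeshow test over 10 risk deciles, on simulated data.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 154                                        # number of episodes, as in the cohort above
predicted_risk = rng.beta(2, 3, n)             # stand-in for model-estimated mortality
observed_death = rng.random(n) < predicted_risk * 0.8   # simulated outcomes

# Standardized mortality ratio: observed deaths divided by expected deaths.
smr = observed_death.sum() / predicted_risk.sum()
print(f"SMR = {smr:.2f}")

# Hosmer-Lemeshow: compare observed and expected deaths within 10 risk deciles.
order = np.argsort(predicted_risk)
hl_stat = 0.0
for idx in np.array_split(order, 10):          # roughly equal-sized deciles of risk
    obs = observed_death[idx].sum()
    exp = predicted_risk[idx].sum()
    n_bin = idx.size
    hl_stat += (obs - exp) ** 2 / (exp * (1 - exp / n_bin))
p_value = chi2.sf(hl_stat, df=10 - 2)          # g - 2 degrees of freedom
print(f"Hosmer-Lemeshow chi-squared = {hl_stat:.1f}, p = {p_value:.3f}")
```

A small p value here means observed and expected deaths diverge across the deciles, i.e., the model is poorly calibrated in this cohort even if its SMR is close to 1.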


Author(s):  
Ramnath Subbaraman ◽  
Beena E Thomas ◽  
J Vignesh Kumar ◽  
Maya Lubeck-Schricker ◽  
Amit Khandewale ◽  
...  

Background: Nonadherence to tuberculosis medications is associated with poor outcomes. However, measuring adherence in practice is challenging. In this study, we evaluated the accuracy of multiple tuberculosis adherence measures. Methods: We enrolled adult Indians with drug-susceptible tuberculosis who were monitored using 99DOTS, a cellphone-based technology. During an unannounced home visit with each participant, we assessed adherence using a pill estimate, four-day dose recall, a last missed dose question, and urine isoniazid metabolite testing. We estimated the area under the receiver operating characteristic curve (AUC) for each alternate measure in comparison to urine testing. 99DOTS data were analyzed using patient-reported doses alone and using patient- and provider-reported doses, the latter reflecting how 99DOTS is implemented in practice. We assessed each measure's operating characteristics, with particular interest in specificity, i.e., the percentage of participants detected as nonadherent by each alternate measure among those who were nonadherent by urine testing. Results: Compared to urine testing, the alternate measures had the following characteristics: 99DOTS patient-reported doses alone (AUC 0.65; specificity 70%, 95% CI 58–81%), 99DOTS patient- and provider-reported doses (AUC 0.61; specificity 33%, 95% CI 22–45%), pill estimate (AUC 0.55; specificity 21%, 95% CI 12–32%), four-day recall (AUC 0.60; specificity 23%, 95% CI 14–34%), and last missed dose question (AUC 0.65; specificity 52%, 95% CI 40–63%). Conclusions: The alternate measures missed at least 30% of people who were nonadherent by urine testing. The last missed dose question performed similarly to 99DOTS using patient-reported doses alone. Tuberculosis programs should evaluate the feasibility of integrating more accurate, objective measures, such as urine testing, into routine care.
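As a rough illustration of the comparison framework in this study, the sketch below scores one hypothetical alternate adherence measure against urine testing as the reference standard, reporting AUC and specificity defined as in the abstract (the share of urine-test-nonadherent participants whom the alternate measure also flags as nonadherent). All data are simulated and the effect sizes are arbitrary.

```python
# Hedged sketch: scoring a hypothetical alternate adherence measure against
# urine isoniazid testing as the reference standard. Data are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
n = 600                                        # illustrative sample size
urine_adherent = rng.random(n) < 0.85          # reference standard (True = adherent)

# Hypothetical alternate measure (e.g. self-reported dosing), imperfectly
# correlated with the reference; higher score = looks more adherent.
alt_score = 0.5 * urine_adherent + rng.normal(0, 0.35, n)
alt_adherent = alt_score > 0.25

auc = roc_auc_score(urine_adherent, alt_score)

# Specificity here = among participants nonadherent by urine testing, the share
# that the alternate measure also classifies as nonadherent.
tn, fp, fn, tp = confusion_matrix(urine_adherent, alt_adherent).ravel()
specificity = tn / (tn + fp)
print(f"AUC = {auc:.2f}, specificity = {100 * specificity:.0f}%")
```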

