Developing a Predictive Model for Asthma-Related Hospital Encounters in Patients With Asthma in a Large, Integrated Health Care System: Secondary Analysis

10.2196/22689 ◽  
2020 ◽  
Vol 8 (11) ◽  
pp. e22689
Author(s):  
Gang Luo ◽  
Claudia L Nau ◽  
William W Crawford ◽  
Michael Schatz ◽  
Robert S Zeiger ◽  
...  

Background Asthma causes numerous hospital encounters annually, including emergency department visits and hospitalizations. To improve patient outcomes and reduce the number of these encounters, predictive models are widely used to prospectively pinpoint high-risk patients with asthma for preventive care via care management. However, previous models lack the accuracy needed to achieve this goal well. Adopting the modeling guideline of checking extensive candidate features, we recently constructed a machine learning model on Intermountain Healthcare data to predict asthma-related hospital encounters in patients with asthma. Although this model is more accurate than previous models, whether our modeling guideline generalizes to other health care systems remains unknown.

Objective This study aims to assess the generalizability of our modeling guideline to Kaiser Permanente Southern California (KPSC).

Methods The patient cohort included a random sample of 70.00% (397,858/568,369) of patients with asthma who were enrolled in a KPSC health plan for any duration between 2015 and 2018. We produced a machine learning model via a secondary analysis of 987,506 KPSC data instances from 2012 to 2017, checking 337 candidate features, to project asthma-related hospital encounters in the following 12-month period in patients with asthma.

Results Our model reached an area under the receiver operating characteristic curve of 0.820. When the cutoff point for binary classification was placed at the top 10.00% (20,474/204,744) of patients with asthma having the largest predicted risk, our model achieved an accuracy of 90.08% (184,435/204,744), a sensitivity of 51.90% (2259/4353), and a specificity of 90.91% (182,176/200,391).

Conclusions Our modeling guideline exhibited acceptable generalizability to KPSC and resulted in a model that is more accurate than those formerly built by others. After further enhancement, our model could be used to guide asthma care management.

International Registered Report Identifier (IRRID) RR2-10.2196/resprot.5039
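The cutoff-style metrics in the Results (flagging the top 10% of patients by predicted risk, then reporting accuracy, sensitivity, and specificity) can be computed from predicted risks and observed outcomes as follows. This is a generic sketch in Python/NumPy, not the study's code; the function and variable names are illustrative.

```python
import numpy as np

def topk_cutoff_metrics(y_true, risk_scores, top_fraction=0.10):
    """Flag the top_fraction of patients with the largest predicted risk
    as positive, then compute accuracy, sensitivity, and specificity."""
    y_true = np.asarray(y_true)
    risk_scores = np.asarray(risk_scores, dtype=float)
    n_flag = int(np.ceil(top_fraction * len(risk_scores)))

    # Boolean mask marking the n_flag highest-risk patients
    flagged = np.zeros(len(risk_scores), dtype=bool)
    flagged[np.argsort(risk_scores)[::-1][:n_flag]] = True

    tp = np.sum(flagged & (y_true == 1))
    tn = np.sum(~flagged & (y_true == 0))
    fp = np.sum(flagged & (y_true == 0))
    fn = np.sum(~flagged & (y_true == 1))

    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)   # true positives among all actual positives
    specificity = tn / (tn + fp)   # true negatives among all actual negatives
    return accuracy, sensitivity, specificity
```

Note that with a fixed top-k cutoff, sensitivity is bounded by how many true positives fit inside the flagged fraction, which is why a model with a high AUC can still show a sensitivity near 50% at a 10% cutoff.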





10.2196/21965 ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. e21965
Author(s):  
Gang Luo ◽  
Michael D Johnson ◽  
Flory L Nkoy ◽  
Shan He ◽  
Bryan L Stone

Background Asthma is a major chronic disease that poses a heavy burden on health care. To facilitate the allocation of care management resources aimed at improving outcomes for high-risk patients with asthma, we recently built a machine learning model to predict asthma hospital visits in the subsequent year in patients with asthma. Our model is more accurate than previous models. However, like most machine learning models, it offers no explanation of its prediction results. This creates a barrier to its use in care management, where interpretability is desired.

Objective This study aims to develop a method to automatically explain the prediction results of the model and recommend tailored interventions without lowering the performance measures of the model.

Methods Our data were imbalanced, with only a small portion of data instances linked to future asthma hospital visits. To handle imbalanced data, we extended our previous method of automatically offering rule-formed explanations for the prediction results of any machine learning model on tabular data without lowering the model’s performance measures. In a secondary analysis of the 334,564 data instances from Intermountain Healthcare between 2005 and 2018 used to form our model, we employed the extended method to automatically explain the prediction results of our model and recommend tailored interventions. The patient cohort consisted of all patients with asthma who received care at Intermountain Healthcare between 2005 and 2018 and resided in Utah or Idaho as recorded at the visit.

Results Our method explained the prediction results for 89.7% (391/436) of the patients with asthma who, per our model’s correct prediction, were likely to incur asthma hospital visits in the subsequent year.

Conclusions This study is the first to demonstrate the feasibility of automatically offering rule-formed explanations for the prediction results of any machine learning model on imbalanced tabular data without lowering the performance measures of the model. After further improvement, our asthma outcome prediction model, coupled with the automatic explanation function, could be used by clinicians to guide the allocation of limited asthma care management resources and the identification of appropriate interventions.
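A rule-formed explanation pairs a high-risk prediction with human-readable feature conditions and an associated intervention. As an illustration of the idea only (the rules, feature names, and thresholds below are invented for this sketch, not the authors' mined rules), such rules can be represented and matched against a patient's tabular record like this:

```python
# Illustrative sketch of rule-format explanations for tabular predictions.
# The rules, feature names, and thresholds are hypothetical.

RULES = [
    # Each rule: (list of (feature, op, threshold) conditions, explanation text)
    ([("ed_visits_last_year", ">=", 2), ("controller_med_adherence", "<", 0.5)],
     "Frequent recent ED visits combined with low controller-medication "
     "adherence; consider an adherence intervention."),
    ([("oral_steroid_courses", ">=", 3)],
     "Repeated oral corticosteroid courses suggest poorly controlled asthma."),
]

_OPS = {">=": lambda a, b: a >= b, "<": lambda a, b: a < b, ">": lambda a, b: a > b}

def explain(patient):
    """Return the explanation texts of every rule the patient record satisfies."""
    matched = []
    for conditions, text in RULES:
        if all(_OPS[op](patient[feat], thr) for feat, op, thr in conditions):
            matched.append(text)
    return matched
```

Under this framing, a prediction counts as "explained" when at least one rule fires; a coverage figure such as the 89.7% above would then be the fraction of correctly predicted high-risk patients for whom at least one rule fires.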


10.2196/24153 ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. e24153
Author(s):  
Gang Luo ◽  
Claudia L Nau ◽  
William W Crawford ◽  
Michael Schatz ◽  
Robert S Zeiger ◽  
...  

Background Asthma exerts a substantial burden on patients and health care systems. To facilitate preventive care for asthma management and improve patient outcomes, we recently developed two machine learning models, one on Intermountain Healthcare data and the other on Kaiser Permanente Southern California (KPSC) data, to forecast asthma-related hospital visits, including emergency department visits and hospitalizations, in the succeeding 12 months among patients with asthma. As is typical for machine learning approaches, these two models do not explain their forecasting results. To address the interpretability issue of black-box models, we designed an automatic method to offer rule-format explanations for the forecasting results of any machine learning model on imbalanced tabular data and to suggest customized interventions with no accuracy loss. Our method worked well for explaining the forecasting results of our Intermountain Healthcare model, but its generalizability to other health care systems remains unknown.

Objective The objective of this study is to evaluate the generalizability of our automatic explanation method to KPSC for forecasting asthma-related hospital visits.

Methods Through a secondary analysis of 987,506 data instances from 2012 to 2017 at KPSC, we used our method to explain the forecasting results of our KPSC model and to suggest customized interventions. The patient cohort covered a random sample of 70% of patients with asthma who had a KPSC health plan for any period between 2015 and 2018.

Results Our method explained the forecasting results for 97.57% (2204/2259) of the patients with asthma who were correctly forecasted to undergo asthma-related hospital visits in the succeeding 12 months.

Conclusions For forecasting asthma-related hospital visits, our automatic explanation method exhibited acceptable generalizability to KPSC.

International Registered Report Identifier (IRRID) RR2-10.2196/resprot.5039




PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0240200
Author(s):  
Miguel Marcos ◽  
Moncef Belhassen-García ◽  
Antonio Sánchez-Puente ◽  
Jesús Sampedro-Gomez ◽  
Raúl Azibeiro ◽  
...  

Background Efficient and early triage of hospitalized Covid-19 patients to detect those at higher risk of severe disease is essential for appropriate case management.

Methods We trained, validated, and externally tested a machine-learning model to identify, from clinical and laboratory features obtained at admission, patients who will die or require mechanical ventilation during hospitalization. A development cohort of 918 Covid-19 patients was used for training and internal validation, and 352 patients from another hospital were used for external testing. Performance of the model was evaluated by calculating the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity.

Results A total of 363 of 918 (39.5%) and 128 of 352 (36.4%) Covid-19 patients from the development and external testing cohorts, respectively, required mechanical ventilation or died during hospitalization. In the development cohort, the model obtained an AUC of 0.85 (95% confidence interval [CI], 0.82 to 0.87) for predicting severity of disease progression. Variables ranked according to their contribution to the model were the peripheral blood oxygen saturation (SpO2)/fraction of inspired oxygen (FiO2) ratio, age, estimated glomerular filtration rate, procalcitonin, C-reactive protein, updated Charlson comorbidity index, and lymphocytes. In the external testing cohort, the model achieved an AUC of 0.83 (95% CI, 0.81 to 0.85). The model is deployed in an open-source calculator, in which Covid-19 patients at admission are individually stratified as being at high or non-high risk for severe disease progression.

Conclusions This machine-learning model, applied at hospital admission, predicts the risk of severe disease progression in Covid-19 patients.
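An AUC with a 95% CI, as reported above, is commonly obtained with a percentile bootstrap over the evaluation cohort. The following is a dependency-light sketch (NumPy only; a generic recipe, not necessarily this paper's exact procedure), with the AUC itself computed via the Mann-Whitney U statistic:

```python
import numpy as np

def auc(y_true, y_score):
    """AUC via the Mann-Whitney U statistic; tied scores get averaged ranks."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    for v in np.unique(y_score):          # average the ranks of tied scores
        tied = y_score == v
        ranks[tied] = ranks[tied].mean()
    n_pos = int(np.sum(y_true == 1))
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def bootstrap_auc_ci(y_true, y_score, n_boot=1000, alpha=0.05, seed=0):
    """Point estimate plus a percentile-bootstrap (1 - alpha) CI for the AUC."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    rng = np.random.default_rng(seed)
    point = auc(y_true, y_score)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:
            continue                       # an AUC needs both classes present
        boot.append(auc(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return point, lo, hi
```

In practice one would use a library implementation (e.g. scikit-learn's `roc_auc_score`) for the point estimate; the bootstrap loop above shows where the CI comes from.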


Author(s):  
Winston T Wang ◽  
Charlotte L Zhang ◽  
Kang Wei ◽  
Ye Sang ◽  
Jun Shen ◽  
...  

Abstract For COVID-19 there is an urgent unmet need to predict, at the time of hospital admission, which patients will recover from the disease and how fast they will recover, in order to deliver personalized treatments and to properly allocate hospital resources so that health care systems do not become overwhelmed. To this end, we combined clinically salient CT imaging data synergistically with laboratory testing data in an integrative machine learning model to predict organ-specific recovery of patients from COVID-19. We trained and validated our model in 285 patients on each separate major organ system impacted by COVID-19, including the renal, pulmonary, immune, cardiac, and hepatic systems. To greatly enhance the speed and utility of our model, we applied an artificial intelligence method to segment and classify regions on CT imaging, from which interpretable data could be fed directly into the predictive machine learning model for overall recovery. Across all organ systems, we achieved validation set area under the receiver operating characteristic curve (AUC) values for organ-specific recovery ranging from 0.80 to 0.89, and significant overall recovery prediction in Kaplan-Meier analyses. This demonstrates that the synergistic use of an AI framework applied to CT lung imaging and a machine learning model that integrates laboratory test data with imaging data can accurately predict the overall recovery of COVID-19 patients from baseline characteristics.
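The Kaplan-Meier analyses mentioned above estimate, at each observed event time, the probability that the event (here, recovery) has not yet occurred, while correctly handling censored patients. A minimal, dependency-free sketch of the estimator itself (not the paper's pipeline; in practice a library such as lifelines would be used):

```python
import numpy as np

def kaplan_meier(durations, event_observed):
    """Kaplan-Meier estimator: survival probability after each distinct event time.

    durations      -- time to event (e.g. recovery) or to censoring, per patient
    event_observed -- 1 if the event was observed, 0 if the patient was censored
    """
    durations = np.asarray(durations, dtype=float)
    event_observed = np.asarray(event_observed)
    event_times = np.sort(np.unique(durations[event_observed == 1]))
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(durations >= t)                           # n_i
        events = np.sum((durations == t) & (event_observed == 1))  # d_i
        s *= 1.0 - events / at_risk       # product-limit update S(t) *= 1 - d_i/n_i
        surv.append(s)
    return event_times, np.array(surv)
```

Comparing the resulting curves between predicted risk groups (e.g. with a log-rank test) is what a "significant overall recovery prediction in Kaplan-Meier analyses" refers to.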



