Pediatric Severe Sepsis Prediction Using Machine Learning

2017 ◽  
Author(s):  
Thomas Desautels ◽  
Jana Hoffman ◽  
Christopher Barton ◽  
Qingqing Mao ◽  
Melissa Jay ◽  
...  

Early detection of pediatric severe sepsis is necessary to administer effective treatment. In this study, we assessed the efficacy of a machine-learning-based prediction algorithm applied to electronic health record (EHR) data for the prediction of severe sepsis onset. The resulting prediction performance was compared with the Pediatric Logistic Organ Dysfunction score (PELOD-2) and the pediatric Systemic Inflammatory Response Syndrome (SIRS) score using cross-validation and pairwise t-tests. EHR data were collected from a retrospective set of de-identified pediatric inpatient and emergency encounters drawn from the University of California San Francisco (UCSF) Medical Center, with encounter dates between June 2011 and March 2016. Patients (n = 11,127) were 2-17 years of age, and 103 (0.93%) were labeled severely septic. In four-fold cross-validation evaluations, the machine learning algorithm achieved an AUROC of 0.912 for discrimination between severely septic and control pediatric patients at onset, and an AUROC of 0.727 four hours before onset. Under the same measure, the prediction algorithm also significantly outperformed PELOD-2 (p < 0.05) and SIRS (p < 0.05) in the prediction of severe sepsis four hours before onset. This machine learning algorithm has the potential to deliver high-performance severe sepsis detection and prediction for pediatric inpatients.
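A minimal sketch of the evaluation procedure described above: per-fold AUROCs from stratified four-fold cross-validation, compared with a baseline score via a paired t-test. The gradient boosting classifier, the placeholder SIRS scorer, and the synthetic data are illustrative assumptions, not the study's actual model or cohort.

```python
# Illustrative cross-validated AUROC comparison; models and data are stand-ins.
import numpy as np
from scipy import stats
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def crossval_auroc(score_fn, X, y, n_splits=4, seed=0):
    """Return per-fold AUROCs for a scoring function over stratified folds."""
    aurocs = []
    for train_idx, test_idx in StratifiedKFold(n_splits, shuffle=True,
                                               random_state=seed).split(X, y):
        scores = score_fn(X[train_idx], y[train_idx], X[test_idx])
        aurocs.append(roc_auc_score(y[test_idx], scores))
    return np.array(aurocs)

def ml_scores(X_train, y_train, X_test):
    clf = GradientBoostingClassifier().fit(X_train, y_train)
    return clf.predict_proba(X_test)[:, 1]

def sirs_scores(X_train, y_train, X_test):
    # Placeholder: in practice this would compute the pediatric SIRS score
    # directly from vitals/labs rather than learn anything from training data.
    return X_test[:, 0]

# X: EHR feature matrix; y: 1 if the encounter met the severe sepsis label.
X, y = np.random.rand(1000, 10), np.random.randint(0, 2, 1000)  # synthetic stand-in
ml_auc, sirs_auc = crossval_auroc(ml_scores, X, y), crossval_auroc(sirs_scores, X, y)
t, p = stats.ttest_rel(ml_auc, sirs_auc)  # pairwise t-test across folds
print(ml_auc.mean(), sirs_auc.mean(), p)
```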

2018 ◽  
Author(s):  
Qingqing Mao ◽  
Melissa Jay ◽  
Jana Hoffman ◽  
Jacob Calvert ◽  
Christopher Barton ◽  
...  

Objectives: We validate a machine learning-based sepsis prediction algorithm (InSight) for the detection and prediction of three sepsis-related gold standards, using only six vital signs. We evaluate robustness to missing data, customization to site-specific data using transfer learning, and generalizability to new settings. Design: A machine learning algorithm with gradient tree boosting. Features for prediction were created from combinations of only six vital sign measurements and their changes over time. Setting: A mixed-ward retrospective data set from the University of California, San Francisco (UCSF) Medical Center (San Francisco, CA) as the primary source, an intensive care unit data set from the Beth Israel Deaconess Medical Center (Boston, MA) as a transfer learning source, and four additional institutions' datasets to evaluate generalizability. Participants: 684,443 total encounters, with 90,353 encounters from June 2011 to March 2016 at UCSF. Interventions: None. Primary and secondary outcome measures: Area under the receiver operating characteristic curve (AUROC) for the detection and prediction of sepsis, severe sepsis, and septic shock. Results: For detection of sepsis and severe sepsis, InSight achieves AUROCs of 0.92 (95% CI 0.90-0.93) and 0.87 (95% CI 0.86-0.88), respectively. Four hours before onset, InSight predicts septic shock with an AUROC of 0.96 (95% CI 0.94-0.98) and severe sepsis with an AUROC of 0.85 (95% CI 0.79-0.91). Conclusions: InSight outperforms existing sepsis scoring systems in identifying and predicting sepsis, severe sepsis, and septic shock. This is the first sepsis screening system to exceed an AUROC of 0.90 using only vital sign inputs. InSight is robust to missing data, can be customized to novel hospital data using a small fraction of site data, and retains strong discrimination across all institutions.
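A sketch of the kind of feature construction described in the Design section: raw vitals plus their hour-to-hour changes, fed to a gradient tree boosting model. The specific vital sign column names, the one- and two-hour deltas, and the scikit-learn estimator are assumptions for illustration, not the published InSight specification.

```python
# Build vitals-plus-deltas features per patient-hour; details are illustrative.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier  # handles NaN natively

VITALS = ["heart_rate", "resp_rate", "temperature", "spo2", "sbp", "dbp"]

def build_features(df):
    """df: one row per patient-hour with a MultiIndex (encounter_id, hour)."""
    feats = df[VITALS].copy()
    grouped = df.groupby(level="encounter_id")[VITALS]
    feats = feats.join(grouped.diff(1).add_suffix("_delta_1h"))  # change over 1 h
    feats = feats.join(grouped.diff(2).add_suffix("_delta_2h"))  # change over 2 h
    return feats  # missing measurements stay as NaN for the tree ensemble

# features = build_features(vitals_df)
# model = HistGradientBoostingClassifier().fit(features, labels)
```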


2021 ◽  
Author(s):  
Inger Persson ◽  
Andreas Östling ◽  
Martin Arlbrandt ◽  
Joakim Söderberg ◽  
David Becedas

BACKGROUND Despite decades of research, sepsis remains a leading cause of mortality and morbidity in ICUs worldwide. The key to effective management and patient outcome is early detection, yet no prospectively validated machine learning prediction algorithm is available for clinical use in Europe today. OBJECTIVE To develop a high-performance machine learning sepsis prediction algorithm based on routinely collected ICU data, designed to be implemented in Europe. METHODS The machine learning algorithm was developed using a convolutional neural network, trained on the MIMIC-III Clinical Database from the Massachusetts Institute of Technology Laboratory for Computational Physiology and focusing on ICU patients aged 18 years or older. Twenty variables are used for prediction, on an hourly basis. Onset of sepsis is defined in accordance with the international Sepsis-3 criteria. RESULTS The developed algorithm, NAVOY Sepsis, uses 4 hours of input and can predict, with high accuracy, which patients are at high risk of developing sepsis in the coming hours. Its prediction performance is superior to that of existing sepsis early warning scoring systems and competes well with previously published prediction algorithms designed to predict sepsis onset in accordance with the Sepsis-3 criteria, as measured by the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). NAVOY Sepsis yields AUROC = 0.90 and AUPRC = 0.62 for predictions up to 3 hours before sepsis onset. The predictive performance is externally validated on hold-out test data, where NAVOY Sepsis is confirmed to predict sepsis with high accuracy. CONCLUSIONS An algorithm with excellent predictive properties has been developed, based on variables routinely collected in ICUs. This algorithm will be further validated in an ongoing prospective randomized clinical trial and will be CE marked as Software as a Medical Device, intended for commercial use in European ICUs.
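A minimal sketch of a 1D convolutional risk model of the kind described: 4 hourly time steps by 20 routinely collected variables in, a sepsis risk score out. The layer sizes and kernel widths are illustrative assumptions and do not reflect the NAVOY Sepsis architecture.

```python
# Toy 1D-CNN over (variables x hours) input; architecture is an assumption.
import torch
import torch.nn as nn

class SepsisCNN(nn.Module):
    def __init__(self, n_vars=20, n_hours=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_vars, 32, kernel_size=2),  # convolve over the time axis
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=2),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (n_hours - 2), 1),      # logit: sepsis within the horizon
        )

    def forward(self, x):                          # x: (batch, n_vars, n_hours)
        return torch.sigmoid(self.net(x))

model = SepsisCNN()
risk = model(torch.randn(8, 20, 4))               # hourly predictions for 8 patients
print(risk.shape)                                  # torch.Size([8, 1])
```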


2017 ◽  
Author(s):  
Hamid Mohamadlou ◽  
Anna Lynn-Palevsky ◽  
Christopher Barton ◽  
Uli Chettipally ◽  
Lisa Shieh ◽  
...  

Background: A major problem in treating acute kidney injury (AKI) is that clinical criteria for recognition are markers of established kidney damage or impaired function; treatment before such damage manifests is desirable. Clinicians could intervene during what may be a crucial stage for preventing permanent kidney injury if patients with incipient AKI and those at high risk of developing AKI could be identified. Methods: We used a machine learning technique, boosted ensembles of decision trees, to train an AKI prediction tool on retrospective data from inpatients at Stanford Medical Center and intensive care unit patients at Beth Israel Deaconess Medical Center. We tested the algorithm's ability to detect AKI at onset, and to predict AKI 12, 24, 48, and 72 hours before onset, and compared its 3-fold cross-validation performance to the SOFA score for AKI identification in terms of Area Under the Receiver Operating Characteristic curve (AUROC). Results: The prediction algorithm achieves an AUROC of 0.872 (95% CI 0.867, 0.878) for AKI onset detection, superior to the SOFA score AUROC of 0.815 (P < 0.01). At 72 hours before onset, the algorithm achieves an AUROC of 0.728 (95% CI 0.719, 0.737), compared to the SOFA score AUROC of 0.720 (P < 0.01). Conclusions: The results of these experiments suggest that a machine-learning-based AKI prediction tool may offer important prognostic capabilities for determining which patients are likely to suffer AKI, potentially allowing clinicians to intervene before kidney damage manifests.
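A sketch of the look-ahead labeling implied by "predict AKI 12, 24, 48, and 72 hours before onset": each patient-hour is labeled positive if onset occurs within the next N hours, and a boosted tree ensemble is trained per horizon. Column names and the scikit-learn estimator are illustrative assumptions, not the study's exact pipeline.

```python
# Label each patient-hour by whether AKI onset falls within the horizon.
import pandas as pd

def lookahead_labels(df, horizon_hours):
    """df columns: encounter_id, hour, aki_onset_hour (NaN if AKI never occurs)."""
    time_to_onset = df["aki_onset_hour"] - df["hour"]
    return ((time_to_onset >= 0) & (time_to_onset <= horizon_hours)).astype(int)

# from sklearn.ensemble import GradientBoostingClassifier
# from sklearn.model_selection import cross_val_score
# for horizon in (0, 12, 24, 48, 72):
#     y = lookahead_labels(rows, horizon)
#     aurocs = cross_val_score(GradientBoostingClassifier(), X, y,
#                              cv=3, scoring="roc_auc")
```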


2009 ◽  
Vol 21 (4) ◽  
pp. 498-506 ◽  
Author(s):  
Sho Murakami ◽  
Takuo Suzuki ◽  
Akira Tokumasu ◽  
Yasushi Nakauchi

This paper proposes cooking support using ubiquitous sensors. We developed a machine learning algorithm that recognizes cooking procedures by taking into account widely varying sensor information and user behavior. To provide appropriate instructions to users, we developed a Markov-model-based behavior prediction algorithm. Using these algorithms, we developed a cooking support system that automatically displays cooking instruction videos based on the user's progress. Experimental results confirmed the feasibility of our proposed cooking support system.
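A compact illustration of a first-order Markov model for next-step prediction of the kind described: transition probabilities are estimated from observed step sequences, and the most likely next step follows from the currently recognized one. The step names and training sequences are invented for the example.

```python
# Fit a first-order Markov transition model and predict the next cooking step.
from collections import Counter, defaultdict

def fit_transitions(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    return {s: {t: c / sum(nxts.values()) for t, c in nxts.items()}
            for s, nxts in counts.items()}

def predict_next(transitions, current_step):
    options = transitions.get(current_step, {})
    return max(options, key=options.get) if options else None

observed = [["wash", "chop", "boil", "serve"],
            ["wash", "chop", "fry", "serve"],
            ["wash", "chop", "boil", "serve"]]
T = fit_transitions(observed)
print(predict_next(T, "chop"))   # -> "boil", the most frequent successor
```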


2020 ◽  
Vol 27 (1) ◽  
pp. e100109 ◽  
Author(s):  
Hoyt Burdick ◽  
Eduardo Pino ◽  
Denise Gabel-Comeau ◽  
Andrea McCoy ◽  
Carol Gu ◽  
...  

Background: Severe sepsis and septic shock are among the leading causes of death in the USA. While early prediction of severe sepsis can reduce adverse patient outcomes, sepsis remains one of the most expensive conditions to diagnose and treat. Objective: The purpose of this study was to evaluate the effect of a machine learning algorithm for severe sepsis prediction on in-hospital mortality, hospital length of stay and 30-day readmission. Design: Prospective clinical outcomes evaluation. Setting: Evaluation was performed on a multiyear, multicentre clinical data set of real-world data containing 75,147 patient encounters from nine hospitals across the continental USA, ranging from community hospitals to large academic medical centres. Participants: Analyses were performed for 17,758 adult patients who met two or more systemic inflammatory response syndrome criteria at any point during their stay (‘sepsis-related’ patients). Interventions: Machine learning algorithm for severe sepsis prediction. Outcome measures: In-hospital mortality, length of stay and 30-day readmission rates. Results: Hospitals saw an average 39.5% reduction of in-hospital mortality, a 32.3% reduction in hospital length of stay and a 22.7% reduction in 30-day readmission rate for sepsis-related patient stays when using the machine learning algorithm in clinical outcomes analysis. Conclusions: Reductions of in-hospital mortality, hospital length of stay and 30-day readmissions were observed in real-world clinical use of the machine learning-based algorithm. The predictive algorithm may be successfully used to improve sepsis-related outcomes in live clinical settings. Trial registration number: NCT03960203
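A minimal sketch of the rule used to flag ‘sepsis-related’ encounters: an adult meets SIRS when two or more of the conventional criteria are satisfied. The thresholds follow the standard adult SIRS definition; the function signature and field names are illustrative, not the study's implementation.

```python
# Count how many of the conventional adult SIRS criteria are met.
def sirs_criteria_met(temp_c, heart_rate, resp_rate, paco2_mmHg, wbc_k, bands_pct):
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,            # temperature
        heart_rate > 90,                            # tachycardia
        resp_rate > 20 or paco2_mmHg < 32,          # tachypnea or hypocapnia
        wbc_k > 12.0 or wbc_k < 4.0 or bands_pct > 10,  # white cell count / bands
    ]
    return sum(criteria)

# An encounter counts as sepsis-related if >= 2 criteria are met at any point.
print(sirs_criteria_met(38.6, 104, 24, 40, 13.5, 4) >= 2)   # True
```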


2018 ◽  
Author(s):  
Hoyt Burdick ◽  
Eduardo Pino ◽  
Denise Gabel-Comeau ◽  
Andrea McCoy ◽  
Carol Gu ◽  
...  

Objective: To validate the performance of a machine learning algorithm for severe sepsis determination up to 48 hours before onset, and to evaluate the effect of the algorithm on in-hospital mortality, hospital length of stay, and 30-day readmission. Setting: This cohort study includes a combined retrospective analysis and clinical outcomes evaluation: a dataset containing 510,497 patient encounters from 461 United States health centers for retrospective analysis, and a multiyear, multicenter clinical data set of real-world data containing 75,147 patient encounters from nine hospitals for clinical outcomes evaluation. Participants: For retrospective analysis, 270,438 adult patients with at least one documented measurement of five out of six vital sign measurements were included. For clinical outcomes analysis, 17,758 adult patients who met two or more Systemic Inflammatory Response Syndrome (SIRS) criteria at any point during their stay were included. Results: At severe sepsis onset, the MLA demonstrated an AUROC of 0.91 (95% CI 0.90, 0.92), which exceeded those of MEWS (0.71; P < .001), SOFA (0.74; P < .001), and SIRS (0.62; P < .001). For severe sepsis prediction 48 hours in advance of onset, the MLA achieved an AUROC of 0.77 (95% CI 0.73, 0.80). For the clinical outcomes study, when using the MLA, hospitals saw an average 39.5% reduction of in-hospital mortality, a 32.3% reduction in hospital length of stay, and a 22.7% reduction in 30-day readmission rate. Conclusions: The MLA accurately predicts severe sepsis onset up to 48 hours in advance using only readily available vital signs in retrospective validation. Reductions of in-hospital mortality, hospital length of stay, and 30-day readmissions were observed in real-world clinical use of the MLA. Results suggest this system may improve severe sepsis detection and patient outcomes over the use of rules-based sepsis detection systems.
KEY POINTS. Question: Is a machine learning algorithm capable of accurate severe sepsis prediction, and does its clinical implementation improve patient mortality rates, hospital length of stay, and 30-day readmission rates? Findings: In a retrospective analysis that included datasets containing a total of 585,644 patient encounters from 461 hospitals, the machine learning algorithm demonstrated an AUROC of 0.93 at time of severe sepsis onset, which exceeded those of MEWS (0.71), SOFA (0.74), and SIRS (0.62); and an AUROC of 0.77 for severe sepsis prediction 48 hours in advance of onset. In an analysis of real-world data from nine hospitals across 75,147 patient encounters, use of the machine learning algorithm was associated with a 39.5% reduction in in-hospital mortality, a 32.3% reduction in hospital length of stay, and a 22.7% reduction in 30-day readmission rate. Meaning: The accurate and predictive nature of this algorithm may encourage early recognition of patients trending toward severe sepsis, and therefore improve sepsis-related outcomes.
STRENGTHS AND LIMITATIONS OF THIS STUDY: A retrospective study of machine learning severe sepsis prediction from a dataset with 510,497 patient encounters demonstrates high accuracy up to 48 hours prior to onset. A multicenter clinical study of real-world data using this machine learning algorithm for severe sepsis alerts achieved reductions of in-hospital mortality, length of stay, and 30-day readmissions. The required presence of an ICD-9 code to classify a patient as severely septic in our retrospective analysis potentially limits our ability to accurately classify all patients. Only adults in US hospitals were included in this study. For the real-world section of the study, we cannot eliminate the possibility that implementation of a sepsis algorithm raised general awareness of sepsis within a hospital, which may lead to higher recognition of septic patients, independent of algorithm performance.


2021 ◽  
Vol 2069 (1) ◽  
pp. 012153
Author(s):  
Rania Labib

Architects often investigate the daylighting performance of hundreds of design solutions and configurations to ensure an energy-efficient solution for their designs. To shorten the time required for daylighting simulations, architects usually reduce the number of variables or parameters of the building and facade design. This practice usually results in the elimination of design variables that could contribute to an energy-optimized design configuration. Therefore, recent research has focused on incorporating machine learning algorithms that require the execution of only a relatively small subset of the simulations to predict the daylighting and energy performance of buildings. Although machine learning has been shown to be accurate, it remains time-consuming because a set of simulations must still be executed to generate training and validation data. Furthermore, to save time, designers often decide to use a small simulation subset, which leads to a poorly trained machine learning model that produces inaccurate results. Therefore, this study aims to introduce an automated framework that utilizes high performance computing (HPC) to execute the simulations necessary for the machine learning algorithm while saving time and effort. High performance computing facilitates the execution of thousands of tasks simultaneously for a time-efficient simulation process, thereby allowing designers to increase the size of the simulation subset. Pairing high performance computing with machine learning allows for accurate and nearly instantaneous building performance predictions.
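A sketch of the workflow described above under stated assumptions: many simulations are dispatched in parallel, and a surrogate model is fitted on their results to predict the remaining design configurations almost instantly. The simulate() stub stands in for a real daylighting engine run (e.g., Radiance), the design parameters are invented, and on an actual HPC cluster the parallel loop would typically be a scheduler job array rather than a local process pool.

```python
# Parallel "simulations" feeding a surrogate regressor; all details illustrative.
from concurrent.futures import ProcessPoolExecutor
from itertools import product
from sklearn.ensemble import RandomForestRegressor

def simulate(params):
    window_ratio, shade_depth, orientation = params
    # Placeholder objective: a real run would return e.g. spatial daylight autonomy.
    return 100 * window_ratio - 20 * shade_depth + 0.05 * orientation

if __name__ == "__main__":
    designs = list(product([0.2, 0.4, 0.6], [0.0, 0.3, 0.6], range(0, 360, 45)))
    with ProcessPoolExecutor() as pool:            # thousands of tasks in practice
        results = list(pool.map(simulate, designs))
    surrogate = RandomForestRegressor().fit(designs, results)
    print(surrogate.predict([(0.5, 0.15, 90)]))    # near-instant prediction
```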


This chapter presents the theory and procedures behind supervised machine learning and how genetic programming can be applied as an effective machine learning algorithm. Owing to the simple yet powerful concept of evolving computer programs, genetic programming can solve many supervised machine learning problems, especially regression and classification. The chapter starts with the theory of supervised machine learning by describing the three main groups of modelling: regression, binary classification, and multiclass classification. Through those kinds of modelling, the most important performance parameters and skill scores are introduced. The chapter also describes procedures for model evaluation and the construction of the confusion matrix for binary and multiclass classification. The second part describes in detail how to use genetic programming to build high-performance GP models for regression and classification. It also describes the procedure of generating computer programs for binary and multiclass classification problems by introducing the concept of a predefined root node.
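A compact illustration of genetic programming as a supervised learner: symbolic regression that evolves an expression tree over a predefined function set. The use of the gplearn library, the function set, and the target expression are assumptions for the example; the chapter's own GP system may differ.

```python
# Evolve a symbolic expression that fits the data (genetic programming).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = X[:, 0] ** 2 + X[:, 1] - 0.5          # target program to rediscover

gp = SymbolicRegressor(population_size=500, generations=20,
                       function_set=("add", "sub", "mul"),
                       parsimony_coefficient=0.01, random_state=0)
gp.fit(X, y)
print(gp._program)                         # best evolved expression tree
```

For binary classification, the same idea applies with a fixed transformation at the top of the tree (for example a sigmoid wrapped around the evolved expression), which mirrors the predefined root node concept mentioned in the chapter.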

