Advances in artificial intelligence and deep learning systems in ICU-related acute kidney injury

2021, Vol 27 (6), pp. 560-572
Author(s): Tezcan Ozrazgat-Baslanti, Tyler J. Loftus, Yuanfang Ren, Matthew M. Ruppert, Azra Bihorac

2019, Vol 35 (2), pp. 204-205
Author(s): Wim Van Biesen, Jill Vanmassenhove, Johan Decruyenaere

2020, Vol 11 (1)
Author(s): Xing Song, Alan S. L. Yu, John A. Kellum, Lemuel R. Waitman, Michael E. Matheny, ...

Abstract
Artificial intelligence (AI) has shown promise in predicting acute kidney injury (AKI); however, clinical adoption of these models requires interpretability and transportability. Non-interoperable data across hospitals are a major barrier to model transportability. Here, we leverage the US PCORnet platform to develop an AKI prediction model and assess its transportability across six independent health systems. Our work demonstrates that cross-site performance deterioration is likely, and reveals heterogeneity of risk factors across populations as its cause. Therefore, however accurate an AI model is at the source hospital, whether it can be adopted at target hospitals remains an open question. To fill this research gap, we derive a method for predicting the transportability of AI models, which can accelerate the adaptation of external AI models in hospitals.
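The transportability problem described above can be made concrete: train a model on one site's data and measure how much its discrimination degrades at another site whose risk-factor weights differ. A minimal sketch with scikit-learn on synthetic data (the sites, features, and coefficient shift are invented for illustration, not taken from the study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_site(n, coefs):
    # Synthetic "site": outcome risk driven by site-specific coefficients,
    # mimicking heterogeneity of risk factors across populations.
    X = rng.normal(size=(n, len(coefs)))
    p = 1 / (1 + np.exp(-(X @ coefs - 1.0)))
    y = rng.binomial(1, p)
    return X, y

source_coefs = np.array([1.5, 1.0, 0.2])   # hypothetical feature weights
target_coefs = np.array([0.3, 1.0, 1.4])   # same features, different weights

X_src, y_src = make_site(4000, source_coefs)
X_tgt, y_tgt = make_site(4000, target_coefs)

# Train at the "source hospital" only.
model = LogisticRegression().fit(X_src, y_src)

# Internal vs. external discrimination: the gap is the
# cross-site performance deterioration.
auc_internal = roc_auc_score(y_src, model.predict_proba(X_src)[:, 1])
auc_external = roc_auc_score(y_tgt, model.predict_proba(X_tgt)[:, 1])
print(f"internal AUC {auc_internal:.2f}, external AUC {auc_external:.2f}")
```

Because the target site weights the predictors differently, the externally evaluated AUC drops even though the model fits the source data well.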


Information, 2019, Vol 10 (2), pp. 51
Author(s): Melanie Mitchell

Today’s AI systems sorely lack the essence of human intelligence: understanding the situations we experience and grasping their meaning. The lack of humanlike understanding in machines is underscored by recent studies demonstrating the lack of robustness of state-of-the-art deep-learning systems. Deeper networks and larger datasets alone are not likely to break through AI’s “barrier of meaning”; instead, the field will need to embrace its original roots as an interdisciplinary science of intelligence.


Author(s): Angelica Martinez Ochoa

This paper explores how the categorization of images and the search methods in the Adobe Stock database are culturally situated practices; they are a form of politics, raising questions about who gets to decide what images mean and what kinds of social and political work those representations perform. Understanding the politics behind artificial intelligence, machine learning, and deep learning systems matters now more than ever, as Adobe already uses these technologies across all of its products.


2021, Vol 36 (Supplement_1)
Author(s): Iacopo Vagliano, Nicholas Chesnaye, Jan Hendrik Leopold, Kitty J Jager, Ameen Abu Hanna, ...

Abstract
Background and Aims: Acute kidney injury (AKI) contributes substantially to the global burden of chronic kidney disease. To assist physicians with the timely diagnosis of AKI, several prognostic models have been developed to improve early recognition across various patient populations, with varying degrees of predictive performance. In the prediction of AKI, machine learning (ML) techniques have been shown to improve on the predictive ability of existing models that rely on more conventional statistical methods. ML is a broad term covering various types of models: parametric models, such as linear or logistic regression, use a pre-specified model form believed to fit the data, and its parameters are estimated; non-parametric models, such as decision trees, random forests, and neural networks, may vary in complexity (e.g., the depth of a classification tree) based on the data; deep learning neural network models exploit temporal or spatial arrangements in the data to handle complex predictors. Given the rapid growth of ML methods and models for AKI prediction in recent years, this systematic review aims to appraise the current state of the art in ML models for the prediction of AKI. To this end, we focus on model performance, model development methods, model evaluation, and methodological limitations.

Method: We searched the PubMed and ArXiv digital libraries and selected studies that develop or validate an AKI-related multivariable ML prediction model. We extracted data using a form based on the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) and CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklists.

Results: Overall, 2,875 titles were screened and thirty-four studies were included. Of those, thirteen studies focused on intensive care, for which the US-derived MIMIC dataset was commonly used; thirty-one studies both developed and validated a model; twenty-one studies used single-centre data. Non-parametric ML methods were used more often than regression and deep learning. Random forests were the most popular method and often performed best in model comparisons. Deep learning was typically used, and also effective, when complex features such as text or time series were included. Internal validation was often applied, and the performance of ML models was usually compared against logistic regression. However, the simple training/test split was often used, which does not account for the variability of the training and test samples. Calibration, external validation, and interpretability of results were rarely considered. Comparisons of model performance against medical scores or clinicians were also rare. Reproducibility was limited, as data and code were usually unavailable.

Conclusion: There is a growing number of ML models for AKI, mostly developed in the intensive care setting, largely because of the availability of the MIMIC dataset. Most studies are single-centre and lack a prospective design. More complex models based on deep learning are emerging, with the potential to improve predictions for complex data such as time series, but with the disadvantage of being less interpretable. Future studies should use calibration measures and external validation, and improve model interpretability, in order to increase uptake in clinical practice. Finally, sharing data and code could improve the reproducibility of study findings.
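The review's criticism of the simple training/test split can be illustrated directly: repeated cross-validation yields a spread of performance estimates that a single split hides. A minimal sketch with scikit-learn on synthetic data (the dataset, class balance, and model choice are illustrative, not taken from any reviewed study):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic imbalanced binary outcome standing in for AKI vs. no AKI.
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.85, 0.15], random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)

# 5-fold CV repeated 4 times: 20 AUC estimates instead of the single
# number a one-off train/test split would give.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=4, random_state=0)
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")

print(f"AUC {aucs.mean():.3f} +/- {aucs.std():.3f} "
      f"(range {aucs.min():.3f}-{aucs.max():.3f})")
```

The spread across folds quantifies the sampling variability that a single held-out split ignores; any one of the 20 estimates could have been "the" reported AUC.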


2017, Vol 40
Author(s): Pierre-Yves Oudeyer

Abstract
Autonomous lifelong development and learning are fundamental capabilities of humans, differentiating them from current deep learning systems. However, other branches of artificial intelligence have designed crucial ingredients towards autonomous learning: curiosity and intrinsic motivation, social learning and natural interaction with peers, and embodiment. These mechanisms guide exploration and the autonomous choice of goals, and integrating them with deep learning opens stimulating perspectives.

