Using machine learning techniques to develop risk prediction models to predict graft failure following kidney transplantation: protocol for a retrospective cohort study

F1000Research ◽  
2020 ◽  
Vol 8 ◽  
pp. 1810
Author(s):  
Sameera Senanayake ◽  
Adrian Barnett ◽  
Nicholas Graves ◽  
Helen Healy ◽  
Keshwar Baboolal ◽  
...  

Background: A mechanism to predict graft failure before the actual kidney transplantation occurs is crucial to the clinical management of chronic kidney disease patients. Several kidney graft outcome prediction models, developed using machine learning methods, are available in the literature. However, most of those models used small datasets, and none of the machine learning-based prediction models available in the medical literature modelled time-to-event (survival) information; instead, they used the binary outcome of failure or not. The objective of this study is to develop two separate machine learning-based predictive models to predict graft failure following live and deceased donor kidney transplant, using time-to-event data in a large national dataset from Australia. Methods: The dataset provided by the Australia and New Zealand Dialysis and Transplant Registry will be used for the analysis. This retrospective dataset contains the cohort of patients who underwent a kidney transplant in Australia from January 1st, 2007, to December 31st, 2017. This includes 3,758 live donor transplants and 7,365 deceased donor transplants. Three machine learning methods (survival tree, random survival forest and survival support vector machine) and one traditional regression method, Cox proportional hazards regression, will be used to develop the two predictive models (for live donor and deceased donor transplants). The best predictive model will be selected based on the model’s performance. Discussion: This protocol describes the development of two separate machine learning-based predictive models to predict graft failure following live and deceased donor kidney transplant, using a large national dataset from Australia. These two models will be the most comprehensive kidney graft failure predictive models to have modelled survival data using machine learning techniques. They are therefore expected to provide valuable insight into the complex interactions between graft failure and donor and recipient characteristics.
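The protocol names four candidate approaches: survival tree, random survival forest, survival support vector machine and Cox proportional hazards regression. Below is a minimal sketch of that comparison, assuming the scikit-survival Python library and synthetic stand-in data; the protocol does not name a software stack, and the ANZDATA registry data are not public.

```python
# A minimal sketch, not the authors' code: scikit-survival is an assumption,
# and synthetic data stand in for the non-public ANZDATA registry dataset.
import numpy as np
from sksurv.util import Surv
from sksurv.tree import SurvivalTree
from sksurv.ensemble import RandomSurvivalForest
from sksurv.svm import FastSurvivalSVM
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_censored

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 10))               # stand-in donor/recipient features
time = rng.exponential(scale=5.0, size=n)  # years to graft failure or censoring
event = rng.random(n) < 0.4                # True = graft failure observed
y = Surv.from_arrays(event=event, time=time)

models = {
    "survival tree": SurvivalTree(max_depth=4),
    "random survival forest": RandomSurvivalForest(n_estimators=100, random_state=0),
    "survival SVM": FastSurvivalSVM(max_iter=1000, random_state=0),
    "Cox PH regression": CoxPHSurvivalAnalysis(),
}
for name, model in models.items():
    model.fit(X, y)
    # sksurv convention: a higher predicted score means higher risk
    cindex = concordance_index_censored(event, time, model.predict(X))[0]
    print(f"{name}: C-index = {cindex:.3f}")
```

In the protocol's design, the same loop would be run twice, once on the live donor cohort and once on the deceased donor cohort, to yield the two separate models.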


2011 ◽  
Vol 2011 ◽  
pp. 1-7 ◽  
Author(s):  
Douglas Scott Keith ◽  
James T. Patrie

Background. H-Y antigen incompatibility adversely impacts bone marrow transplants; however, the relevance of these antigens in kidney transplantation is uncertain. Three previous retrospective studies of kidney transplant databases have produced conflicting results. Methods. This study analyzed the Organ Procurement and Transplantation Network database between 1997 and 2009 using male deceased donor kidney transplant pairs in which the recipient genders were discordant. Death-censored graft survival at six months, five, and ten years, treated acute rejection at six months and one year, and rates of graft failure by cause were the primary endpoints analyzed. Results. Death-censored graft survival at six months was significantly worse for female recipients. Analysis of the causes of graft failure at six months revealed that the difference in death-censored graft survival was due primarily to nonimmunologic graft failures. The adjusted and unadjusted death-censored graft survivals at five and ten years were similar between the two genders, as were the rates of immunologic graft failure. No difference in the rates of treated acute rejection at six months and one year was seen between the two genders. Conclusions. Male donor to female recipient discordance had no discernible effect on immunologically mediated kidney graft outcomes in the era of modern immunosuppression.
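For readers who want to reproduce this style of analysis, the sketch below illustrates a death-censored graft-survival comparison between gender-discordant groups using Kaplan-Meier estimates and a log-rank test. It assumes the Python lifelines library and synthetic data; the OPTN database is not reproduced here, and all variable names are illustrative.

```python
# A hedged sketch of a death-censored graft-survival comparison like the one
# above, using lifelines on synthetic data; variable names are illustrative.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "female_recipient": rng.integers(0, 2, n).astype(bool),
    "years": rng.exponential(scale=8.0, size=n),
    # death-censored endpoint: only graft failure is an event; death with a
    # functioning graft is treated as censoring
    "graft_failure": rng.random(n) < 0.3,
})

fem = df[df.female_recipient]
mal = df[~df.female_recipient]
km_fem = KaplanMeierFitter().fit(fem.years, fem.graft_failure, label="M donor / F recipient")
km_mal = KaplanMeierFitter().fit(mal.years, mal.graft_failure, label="M donor / M recipient")
print(km_fem.predict(5.0), km_mal.predict(5.0))  # graft survival at five years

res = logrank_test(fem.years, mal.years,
                   event_observed_A=fem.graft_failure,
                   event_observed_B=mal.graft_failure)
print(f"log-rank p = {res.p_value:.3f}")
```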


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Sameera Senanayake ◽  
Sanjeewa Kularatna ◽  
Helen Healy ◽  
Nicholas Graves ◽  
Keshwar Baboolal ◽  
...  

Abstract Background Kidney graft failure risk prediction models assist evidence-based medical decision-making in clinical practice. Our objective was to develop and validate statistical and machine learning predictive models to predict death-censored graft failure following deceased donor kidney transplant, using time-to-event (survival) data in a large national dataset from Australia. Methods Data included donor and recipient characteristics (n = 98) of 7,365 deceased donor transplants conducted in Australia from January 1st, 2007 to December 31st, 2017. Seven variable selection methods were used to identify the most important independent variables included in the model. Predictive models were developed using: survival tree, random survival forest, survival support vector machine and Cox proportional hazards regression. The models were trained using 70% of the data and validated using the rest of the data (30%). The model with the best discriminatory power, assessed using the concordance index (C-index), was chosen as the best model. Results Two models, developed using Cox regression and random survival forest, had the highest C-index (0.67) in discriminating death-censored graft failure. The best-fitting Cox model used seven independent variables and showed a moderate level of prediction accuracy (calibration). Conclusion This index displays sufficient robustness to be used in pre-transplant decision-making and may perform better than currently available tools.
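The Methods amount to a standard train/validation workflow: fit each candidate on 70% of the data and keep the model with the best validation C-index. A minimal sketch under those assumptions follows (scikit-survival and synthetic stand-in data; the paper does not specify its software, and the registry data are not public).

```python
# A minimal sketch of the 70/30 split and C-index model selection described
# above; scikit-survival and the synthetic data are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sksurv.util import Surv
from sksurv.ensemble import RandomSurvivalForest
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_censored

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 7))                # seven selected predictors, as in the paper
time = rng.exponential(scale=6.0, size=n)
event = rng.random(n) < 0.35
X_tr, X_val, t_tr, t_val, e_tr, e_val = train_test_split(
    X, time, event, test_size=0.30, random_state=0)
y_tr = Surv.from_arrays(event=e_tr, time=t_tr)

candidates = {
    "Cox PH regression": CoxPHSurvivalAnalysis(),
    "random survival forest": RandomSurvivalForest(n_estimators=200, random_state=0),
}
best_name, best_cindex = None, -np.inf
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    # discrimination on the held-out 30%
    cindex = concordance_index_censored(e_val, t_val, model.predict(X_val))[0]
    print(f"{name}: validation C-index = {cindex:.3f}")
    if cindex > best_cindex:
        best_name, best_cindex = name, cindex
print("selected model:", best_name)
```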


2020 ◽  
Vol 16 ◽  
Author(s):  
Nitigya Sambyal ◽  
Poonam Saini ◽  
Rupali Syal

Background and Introduction: Diabetes mellitus is a metabolic disorder that has emerged as a serious public health issue worldwide. According to the World Health Organization (WHO), without interventions the number of people with diabetes is expected to reach at least 629 million by 2045. Uncontrolled diabetes gradually leads to progressive damage to the eyes, heart, kidneys, blood vessels and nerves. Method: The paper presents a critical review of existing statistical and Artificial Intelligence (AI) based machine learning techniques with respect to DM complications, namely retinopathy, neuropathy and nephropathy. The statistical and machine learning analytic techniques are used to structure the subsequent content review. Result: It has been inferred that statistical analysis can support only inferential and descriptive analysis, whereas AI-based machine learning models can also provide actionable prediction models for faster and more accurate diagnosis of the complications associated with DM. Conclusion: The integration of AI-based analytics techniques such as machine learning and deep learning in clinical medicine will result in improved disease management through faster disease detection and reduced treatment costs.
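The review's central contrast, descriptive/inferential statistics versus actionable prediction models, can be made concrete with a hedged scikit-learn sketch on a synthetic stand-in for a diabetic-complication cohort; all feature names and the outcome here are placeholders, not data from the review.

```python
# A hedged illustration of the review's contrast between inferential
# statistics and ML prediction models, on synthetic placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 1000
X = rng.normal(size=(n, 6))  # e.g. HbA1c, blood pressure, diabetes duration, ...
# non-linear synthetic outcome, e.g. retinopathy yes/no
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=n) > 1).astype(int)

# inferential view: logistic-regression coefficients read as odds ratios
lr = LogisticRegression(max_iter=1000).fit(X, y)
print("odds ratios:", np.round(np.exp(lr.coef_[0]), 2))

# predictive view: cross-validated discrimination of a non-linear ML model
auc_gb = cross_val_score(GradientBoostingClassifier(random_state=0), X, y,
                         cv=5, scoring="roc_auc").mean()
auc_lr = cross_val_score(lr, X, y, cv=5, scoring="roc_auc").mean()
print(f"AUC: gradient boosting = {auc_gb:.2f}, logistic regression = {auc_lr:.2f}")
```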


2021 ◽  
Vol 297 ◽  
pp. 01073
Author(s):  
Sabyasachi Pramanik ◽  
K. Martin Sagayam ◽  
Om Prakash Jena

Cancer has been described as a heterogeneous disease with several distinct subtypes that may occur simultaneously. As a result, early detection and prognosis of cancer types have become essential in cancer research, since they can help improve the clinical management of cancer patients. The importance of classifying cancer patients into high- or low-risk groups has prompted numerous research teams from the biomedical and genomics fields to investigate the use of machine learning (ML) algorithms in cancer diagnosis and treatment. These methods have accordingly been used to model the development and treatment of malignant disease in humans. Furthermore, the capacity of machine learning techniques to identify important characteristics in complex datasets underlines the significance of these technologies. These technologies include Bayesian networks and artificial neural networks, along with a number of other approaches. Decision trees and support vector machines, which have already been used extensively in cancer research to build predictive models, also support accurate decision-making. The application of machine learning techniques may undoubtedly enhance our knowledge of cancer development; nevertheless, a sufficient degree of validation is required before these approaches can be considered for use in daily clinical practice. This paper presents an overview of current machine learning approaches used to model cancer development. All of the supervised machine learning approaches described here, together with a variety of input features and data samples, are used to build the prediction models. In light of the increasing trend towards the use of machine learning methods in biomedical research, we review the most recent papers that have used these approaches to predict cancer risk or patient outcomes.


Author(s):  
Simon Ville ◽  
Marine Lorent ◽  
Clarisse Kerleau ◽  
Anders Asberg ◽  
Christophe Legendre ◽  
...  

Background The recognition that metabolism and immune function are regulated by an endogenous molecular clock generating circadian rhythms suggests that the magnitude of ischemia-reperfusion injury, and the subsequent inflammation, in kidney transplantation could be affected by the time of day. Methods Accordingly, we evaluated 5,026 first kidney transplant recipients of grafts from deceased heart-beating donors. In a cause-specific multivariable analysis, we compared delayed graft function (DGF) and graft survival according to the time of kidney clamping and declamping. Participants were divided into clamping between midnight and noon (AM clamping group, 65%) or clamping between noon and midnight (PM clamping group, 35%), and similarly into AM declamping (25%) or PM declamping (75%). Results DGF occurred among 550 participants (27%) with AM clamping and 339 (34%) with PM clamping (adjusted OR = 0.81, 95% CI 0.67 to 0.98, p = 0.03). No significant association of clamping time with overall death-censored graft survival was observed (HR = 0.92, 95% CI 0.77 to 1.10, p = 0.37). No significant association of declamping time with DGF or graft survival was observed. Conclusions Clamping between midnight and noon was associated with a lower incidence of DGF, whilst declamping time was not associated with kidney graft outcomes.
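The headline estimate is an adjusted odds ratio for DGF by clamping period from a multivariable model. The sketch below shows one hedged way to compute such an adjusted OR with logistic regression in statsmodels; the covariates and data are synthetic placeholders, not the study's actual adjustment set.

```python
# A hedged sketch of an adjusted odds ratio for DGF by clamping period via
# multivariable logistic regression; all data and covariates are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5026
df = pd.DataFrame({
    "am_clamping": rng.random(n) < 0.65,   # clamping between midnight and noon
    "donor_age": rng.normal(50, 15, n),
    "cold_ischemia_h": rng.normal(16, 5, n),
})
# synthetic outcome with a protective AM-clamping effect built in
lin_pred = -1.0 - 0.2 * df.am_clamping + 0.02 * (df.donor_age - 50)
df["dgf"] = (rng.random(n) < 1 / (1 + np.exp(-lin_pred))).astype(int)

fit = smf.logit("dgf ~ am_clamping + donor_age + cold_ischemia_h", data=df).fit(disp=0)
coef = fit.params["am_clamping[T.True]"]
lo, hi = np.exp(fit.conf_int().loc["am_clamping[T.True]"])
print(f"adjusted OR = {np.exp(coef):.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

The study itself used a cause-specific multivariable analysis rather than a plain logistic model, so this is an illustration of the OR-with-CI computation, not a reconstruction of the authors' method.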


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Georgios Kantidakis ◽  
Hein Putter ◽  
Carlo Lancia ◽  
Jacob de Boer ◽  
Andries E. Braat ◽  
...  

Abstract Background Predicting survival of recipients after liver transplantation is regarded as one of the most important challenges in contemporary medicine. Hence, improving on current prediction models is of great interest. Nowadays, there is a strong discussion in the medical field about machine learning (ML) and whether it has greater potential than traditional regression models when dealing with complex data. Criticism of ML relates to unsuitable performance measures and a lack of interpretability, which is important for clinicians. Methods In this paper, ML techniques such as random forests and neural networks are applied to a large dataset of 62,294 patients from the United States, with 97 predictors selected on clinical/statistical grounds from more than 600, to predict survival from transplantation. The identification of potential risk factors is also of particular interest. A comparison is performed between 3 different Cox models (with all variables, backward selection and LASSO) and 3 machine learning techniques: a random survival forest and 2 partial logistic artificial neural networks (PLANNs). For PLANNs, novel extensions to their original specification are tested. Emphasis is placed on the advantages and pitfalls of each method and on the interpretability of the ML techniques. Results Well-established predictive measures from the survival field (C-index, Brier score and Integrated Brier Score) are employed, and the strongest prognostic factors are identified for each model. The clinical endpoint is overall graft survival, defined as the time between transplantation and the date of graft failure or death. The random survival forest shows slightly better predictive performance than the Cox models based on the C-index. Neural networks show better performance than both the Cox models and the random survival forest based on the Integrated Brier Score at 10 years. Conclusion In this work, it is shown that machine learning techniques can be a useful tool for both prediction and interpretation in the survival context. Of the ML techniques examined here, PLANN with 1 hidden layer predicts survival probabilities the most accurately, being as well calibrated as the Cox model with all variables. Trial registration Retrospective data were provided by the Scientific Registry of Transplant Recipients under Data Use Agreement number 9477 for analysis of risk factors after liver transplantation.
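The paper scores models with the C-index and the Integrated Brier Score (IBS) at 10 years. The sketch below shows how those two measures can be computed for a fitted survival model, assuming scikit-survival and synthetic data; it illustrates the metrics only and does not reproduce the authors' models, including PLANN.

```python
# A minimal sketch of the two headline metrics above, the C-index and the
# Integrated Brier Score (IBS), with scikit-survival on synthetic data.
import numpy as np
from sksurv.util import Surv
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored, integrated_brier_score

rng = np.random.default_rng(4)
n = 1000
X = rng.normal(size=(n, 5))
time = rng.exponential(scale=7.0, size=n)
event = rng.random(n) < 0.4
y = Surv.from_arrays(event=event, time=time)
X_tr, X_te, y_tr, y_te = X[:700], X[700:], y[:700], y[700:]

rsf = RandomSurvivalForest(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# C-index: rank agreement between predicted risk and observed outcomes
cindex = concordance_index_censored(event[700:], time[700:], rsf.predict(X_te))[0]

# IBS over a grid up to 10 years, matching the paper's 10-year horizon
times = np.linspace(0.5, 10.0, 20)
surv_prob = np.vstack([fn(times) for fn in rsf.predict_survival_function(X_te)])
ibs = integrated_brier_score(y_tr, y_te, surv_prob, times)
print(f"C-index = {cindex:.3f}, IBS = {ibs:.3f}")
```

The C-index rewards correct risk ranking only, whereas the IBS also penalises poorly calibrated survival probabilities, which is why the paper reports both.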

