An Assessment of Federated Machine Learning for Translational Research

Author(s):
Manoj A. Thomas
Diya Suzanne Abraham
Dapeng Liu

Translational research (TR) is the harnessing of knowledge from basic science and clinical research to advance healthcare. As a sister discipline, translational informatics (TI) concerns the application of informatics theories, methods, and frameworks to TR. This chapter builds upon TR concepts and aims to advance the use of machine learning (ML) and data analytics for improving clinical decision support. A federated machine learning (FML) architecture is described that aggregates multiple ML endpoints and intermediate data analytic processes and products to deliver high-quality knowledge discovery and decision making. The proposed architecture is evaluated for its operational performance based on three propositions, and a case for clinical decision support in the prediction of adult sepsis is presented. The chapter illustrates contributions to the advancement of FML and TI.
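The abstract does not specify how the FML architecture aggregates its ML endpoints; a minimal sketch of one common approach, federated averaging of locally trained model weights, is shown below (all data and names here are hypothetical illustrations, not the chapter's method):

```python
# Minimal federated-averaging sketch: each endpoint trains locally,
# and a coordinator averages the resulting weight vectors,
# weighted by each endpoint's local sample count.

def federated_average(endpoint_updates):
    """endpoint_updates: list of (weights, n_samples) tuples,
    where each weights entry is a list of floats of equal length."""
    total = sum(n for _, n in endpoint_updates)
    dim = len(endpoint_updates[0][0])
    avg = [0.0] * dim
    for weights, n in endpoint_updates:
        for i, w in enumerate(weights):
            avg[i] += w * (n / total)
    return avg

# Three endpoints with different data volumes contribute local weights.
updates = [([1.0, 2.0], 100), ([3.0, 4.0], 300), ([2.0, 2.0], 100)]
global_weights = federated_average(updates)
print(global_weights)  # weighted mean of the local models
```

Weighting by local sample count lets endpoints with more data contribute proportionally to the global model, while raw patient data never leaves the endpoint.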

2020
Author(s):
Victor Silva
Amanda Days Ramos Novo
Damires Souza
Alex Rêgo

Clinical decision support systems are a research area in which Machine Learning (ML) techniques can be applied. Nevertheless, specifically in assisting pneumonia decision making, the use of ML has not been widely explored. To help matters, this work aims to contribute to the evolution of the intersection of these areas by presenting a Systematic Review of the Literature. It provides results that may help to identify, interpret, and evaluate how ML techniques have been applied, and highlights research enhancements yet to be done.


2021
Vol 11 (11)
pp. 5088
Author(s):
Anna Markella Antoniadi
Yuhan Du
Yasmine Guendouz
Lan Wei
Claudia Mazo
...

Machine Learning, and Artificial Intelligence (AI) more broadly, have great immediate and future potential for transforming almost all aspects of medicine. However, in many applications, even outside medicine, a lack of transparency in AI applications has become increasingly problematic. This is particularly pronounced where users need to interpret the output of AI systems. Explainable AI (XAI) provides a rationale that allows users to understand why a system has produced a given output, so that the output can be interpreted within a given context. One area in great need of XAI is that of Clinical Decision Support Systems (CDSSs). These systems support medical practitioners in their clinical decision-making, and in the absence of explainability may lead to under- or over-reliance. Providing explanations of how recommendations are arrived at will allow practitioners to make more nuanced, and in some cases life-saving, decisions. The need for XAI in CDSSs, and in the medical field in general, is amplified by the need for ethical and fair decision-making and by the fact that AI trained on historical data can reinforce historical actions and biases, which should be uncovered. We performed a systematic literature review of work to date on the application of XAI in CDSSs. XAI-enabled systems processing tabular data are the most common, while XAI-enabled CDSSs for text analysis are the least common in the literature. Developers show greater interest in providing local explanations, while post-hoc and ante-hoc explanations, as well as model-specific and model-agnostic techniques, are almost evenly balanced.
Studies reported benefits of XAI such as enhancing decision confidence for clinicians and generating hypotheses about causality, which ultimately lead to increased trustworthiness and acceptability of the system and potential for its incorporation into the clinical workflow. However, we found an overall distinct lack of application of XAI in the context of CDSSs and, in particular, a lack of user studies exploring the needs of clinicians. We propose some guidelines for the implementation of XAI in CDSSs and explore opportunities, challenges, and future research needs.
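As an illustration of the local, post-hoc, model-agnostic explanations the review discusses, here is a minimal occlusion-style attribution sketch (the toy model, instance, and baseline values are hypothetical; real XAI-enabled CDSSs would use established methods such as LIME or SHAP):

```python
# Simple post-hoc, model-agnostic local explanation: replace one
# feature at a time with a baseline value and measure how much the
# black-box model's output changes for a single instance.

def local_attribution(predict, instance, baseline):
    """Return per-feature attributions: the drop in the prediction
    when each feature is replaced by its baseline value."""
    base_score = predict(instance)
    attributions = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] = baseline[i]
        attributions.append(base_score - predict(perturbed))
    return attributions

# Toy "black-box" risk score: a weighted sum of two features.
model = lambda x: 0.7 * x[0] + 0.3 * x[1]
attr = local_attribution(model, [2.0, 4.0], [0.0, 0.0])
print(attr)  # feature 0 contributes more to this prediction
```

Because it only queries the model's predictions, this style of explanation is model-agnostic: the same routine works whether the underlying model is a linear score, a random forest, or a neural network.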


2020
Vol 22 (Supplement_2)
pp. ii135-ii136
Author(s):
John Lin
Michelle Mai
Saba Paracha

Abstract
Glioblastoma multiforme (GBM), the most common form of glioma, is a malignant tumor with a high risk of mortality. By providing accurate survival estimates, prognostic models have been identified as promising tools in clinical decision support. In this study, we produced and validated two machine learning-based models to predict survival time for GBM patients. Publicly available clinical and genomic data from The Cancer Genome Atlas (TCGA) and the Broad Institute GDAC Firehose were obtained through cBioPortal. Random forest and multivariate regression models were created to predict survival. Predictive accuracy was assessed and compared through mean absolute error (MAE) and root mean square error (RMSE) calculations. A total of 619 GBM patients were included in the dataset. There were 381 (62.9%) cases of recurrence/progression and 53 (8.7%) cases of disease-free survival. The MAE and RMSE values were 0.553 and 0.887 years, respectively, for the random forest regression model, and 1.756 and 2.451 years, respectively, for the multivariate regression model. Both models accurately predicted overall survival. Comparison of the models through MAE, RMSE, and visual analysis showed higher accuracy for random forest than for multivariate linear regression. Further investigation of feature selection and model optimization may improve predictive power. These findings suggest that using machine learning in GBM prognostic modeling will improve clinical decision support. *Co-first authors.
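The MAE and RMSE metrics used to compare the two survival models can be computed in a few lines; a minimal sketch with illustrative toy data (not the study's TCGA data):

```python
# Mean absolute error (MAE) and root mean square error (RMSE)
# between predicted and observed survival times, as used to compare
# the random forest and multivariate regression models.
import math

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

observed = [1.2, 0.8, 2.5, 1.0]   # survival times in years (toy data)
predicted = [1.0, 1.0, 2.0, 1.5]

print(round(mae(observed, predicted), 3))   # 0.35
print(round(rmse(observed, predicted), 3))  # 0.381
```

RMSE is always at least as large as MAE and penalizes large individual errors more heavily, which is why reporting both gives a fuller picture of a survival model's error profile.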


2018
Vol 35 (14)
pp. 2458-2465
Author(s):
Johanna Schwarz
Dominik Heider

Abstract
Motivation: Clinical decision support systems have been applied in numerous fields, ranging from cancer survival to drug resistance prediction. Nevertheless, clinical decision support systems typically have a caveat: many of them are perceived as black boxes by non-experts and, unfortunately, the obtained scores cannot usually be interpreted as class probability estimates. In probability-focused medical applications, it is not sufficient to perform well with regard to discrimination, and consequently various calibration methods have been developed to enable probabilistic interpretation. The aims of this study were (i) to develop a tool for fast and comparative analysis of different calibration methods, (ii) to demonstrate their limitations for use on clinical data and (iii) to introduce our novel method GUESS.
Results: We compared the performance of two state-of-the-art calibration methods, namely histogram binning and Bayesian Binning in Quantiles, as well as our novel method GUESS, on both simulated and real-world datasets. GUESS demonstrated calibration performance comparable to the state-of-the-art methods and always retained accurate class discrimination. GUESS showed superior calibration performance in small datasets and therefore may be an optimal calibration method for typical clinical datasets. Moreover, we provide a framework (CalibratR) for R, which can be used to identify the most suitable calibration method for novel datasets in a timely and efficient manner. Using calibrated probability estimates instead of original classifier scores will contribute to the acceptance and dissemination of machine learning-based classification models in cost-sensitive applications, such as clinical research.
Availability and implementation: GUESS, as part of CalibratR, can be downloaded from CRAN.
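Of the calibration methods compared, histogram binning is the simplest; a minimal sketch with toy scores follows (illustrative only, not the CalibratR implementation):

```python
# Histogram binning: raw classifier scores are grouped into fixed-width
# bins, and each bin's calibrated probability is the observed positive
# rate of the training scores that fell into it.

def fit_histogram_binning(scores, labels, n_bins=5):
    """Return per-bin calibrated probabilities from training scores."""
    bin_pos = [0] * n_bins
    bin_total = [0] * n_bins
    for s, y in zip(scores, labels):
        b = min(int(s * n_bins), n_bins - 1)  # clamp score 1.0 to last bin
        bin_total[b] += 1
        bin_pos[b] += y
    # Fall back to the bin midpoint when a bin received no training scores.
    return [bin_pos[b] / bin_total[b] if bin_total[b] else (b + 0.5) / n_bins
            for b in range(n_bins)]

def calibrate(score, bin_probs):
    n_bins = len(bin_probs)
    return bin_probs[min(int(score * n_bins), n_bins - 1)]

train_scores = [0.05, 0.15, 0.35, 0.45, 0.55, 0.65, 0.85, 0.95]
train_labels = [0,    0,    0,    1,    1,    1,    1,    1]
bins = fit_histogram_binning(train_scores, train_labels, n_bins=4)
print(calibrate(0.5, bins))  # calibrated probability for a raw score of 0.5
```

The method's main limitation on small clinical datasets is visible in the sketch: sparsely populated bins yield unreliable (or undefined) positive rates, which is the problem methods like Bayesian Binning in Quantiles and GUESS aim to mitigate.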


2021
Vol 37 (S1)
pp. 21-22
Author(s):
Carla Fernandez-Barceló
Elena Calvo-Cidoncha
Laura Sampietro-Colom

Introduction: In the past decade, health technology assessment (HTA) has narrowed its scope to the analysis of mainly clinical and economic benefits. However, twenty-first-century technology challenges call for more holistic assessments to obtain accurate recommendations for decision-making, as envisioned in HTA's foundations. The VALues In Doing Assessments of health TEchnologies (VALIDATE) methodology approaches complex technologies holistically, providing a deeper understanding of the problem through analysis of the heterogeneity of stakeholders' views and allowing for more comprehensive HTAs. This study aimed to assess a pharmaceutical clinical decision support system (CDSS) using VALIDATE.
Methods: A systematic review of the empirical evidence on CDSSs was conducted according to PRISMA guidelines. The PubMed, Cochrane Library, and Web of Science databases were searched for literature published between 2000 and 2020. Additionally, a review of grey literature and semi-structured interviews with different hospital stakeholders (pharmacists, physicians, computer engineers, etc.) were conducted. Content analysis was used for data integration.
Results: Preliminary literature results indicated consensus regarding the effectiveness of CDSSs. Nevertheless, when multistakeholder views were included, CDSSs appeared not to be fully accepted in clinical practice, mainly because of alert fatigue and disruption of workflow. Preliminary results based on information from the literature were contrasted with stakeholder interview responses.
Conclusions: Incorporating facts and stakeholder values into the problem definition and scoping of a health technology is essential to properly conduct HTAs. The lack of inclusive multistakeholder scoping can lead to inaccurate information and, in this particular case, to suboptimal CDSS implementation concerning decision-making for the technology being evaluated.
