Promoting Prognostic Model Application: A Review Based on Gliomas

2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Xisong Liang ◽  
Zeyu Wang ◽  
Ziyu Dai ◽  
Hao Zhang ◽  
Quan Cheng ◽  
...  

Malignant neoplasms are characterized by poor therapeutic efficacy, high recurrence rates, and extensive metastasis, leading to short survival. Previous methods for grouping prognostic risk are based on anatomic, clinical, and pathological features, which have lower discriminating capability than genetic signatures. Advances in sequencing techniques and machine learning have promoted the development of genetic-panel-based prognostic models, especially RNA-panel models. Gliomas harbor the most malignant features and the poorest survival among all tumors, and numerous glioma prognostic models have been reported. We systematically reviewed all 138 machine-learning-based genetic models and proposed novel criteria for assessing their quality. In addition, the biological and clinical significance of glioma markers that recur across these models is discussed. This study identified markers with strong prognostic potential and 27 models of high quality. In conclusion, we comprehensively reviewed 138 prognostic models built on glioma genetic panels and presented novel criteria for the development and assessment of clinically important prognostic models. This will help guide genetic models in cancer from laboratory-based research to clinical application and improve the prognostic management of glioma patients.

Author(s):  
Mihaela van der Schaar ◽  
Harry Hemingway

Machine learning offers an alternative to traditional methods for prognosis research in large and complex datasets and for delivering dynamic models of prognosis. It foregrounds the capacity to learn from large and complex data about the pathways, predictors, and trajectories of health outcomes in individuals. This reflects wider societal moves towards data-driven modelling, embedded and automated within powerful computers, to analyse large amounts of data. Machine learning derives algorithms that can learn from data and can give the data full freedom, for example by following a pragmatic approach to developing a prognostic model: rather than choosing factors for model development in advance, machine learning allows the data to reveal which features are important for which predictions. This chapter introduces key machine learning concepts relevant to each of the four prognosis research types, explains where machine learning may enhance prognosis research, and highlights its challenges.


Blood ◽  
2011 ◽  
Vol 118 (21) ◽  
pp. 499-499 ◽  
Author(s):  
Theresa Hahn ◽  
Philip L. McCarthy ◽  
Jeanette Carreras ◽  
Mei-Jie Zhang ◽  
Hillard M. Lazarus ◽  
...  

Abstract 499 AHCT is standard therapy for relapsed or refractory HL. Published prognostic models for HL patients based on factors measured at the time of AHCT have been limited by small sample sizes. HL prognostic models based on information from diagnosis may be difficult to use for AHCT outcomes, since diagnostic information is often not available to the tertiary transplant center or the tests were not uniformly performed by multiple referring physicians. Our goal was to develop a new prognostic model for PFS post-AHCT based on factors available at the time of AHCT. We analyzed a cohort of 728 relapsed or refractory HL patients receiving an AHCT between 1996 and 2007, reported to the CIBMTR by 162 centers, who had complete data for all significant factors previously reported in prognostic models. Patient characteristics at diagnosis: 40% male, 52% stage III-IV, 57% B symptoms, 34% extranodal disease. Patient characteristics at AHCT: median (range) age 33 (7–74) years; 74% KPS≥90 pre-AHCT; 40% had ≥3 prior chemotherapy regimens; 36% chemo-sensitive relapse, 27% CR2, 19% PR1, 12% chemo-resistant relapse, 6% primary refractory/resistant; median (range) time from diagnosis to AHCT 22 (3–368) months. Histologic types were: 74% nodular sclerosis, 14% mixed cellularity, 7% lymphocyte rich, 1% lymphocyte depleted, 4% other/unknown. High dose therapy regimens were primarily BEAM (71%) or CBV (13%). For the entire cohort, 3-year estimates of PFS and OS were 60% and 73%, respectively. Multivariate models for treatment failure (1-PFS) were built using a forward step-wise procedure with p<0.05 to enter the model. The following variables were considered: number of prior chemotherapy regimens; KPS; histology; B symptoms at diagnosis; disease status at AHCT; chemo-sensitivity at AHCT; serum LDH at AHCT; extranodal involvement any time prior to AHCT; size of largest mass prior to AHCT; time from diagnosis to AHCT.
A random subset of patients was used for model development (n=337) and the model was validated in the remaining cases (n=391). The final model is shown in the Table:

Risk Factor | RR (95% CI) | P | Score
# of prior chemotherapy regimens: (3,4,5) vs (0,1,2) | 1.80 (1.31–2.47) | 0.0003 | 2
Extranodal involvement any time prior to AHCT: Yes vs No | 1.77 (1.24–2.53) | 0.0018 | 2
KPS prior to AHCT: 0–80% vs 90–100% | 1.47 (1.04–2.07) | 0.0275 | 1
HL chemo-sensitivity at AHCT: Resistant vs Sensitive | 1.45 (1.01–2.07) | 0.0440 | 1

Patients were assigned a risk group based on the prognostic score: High risk (score = 4, 5, or 6); Intermediate risk (score = 1, 2, or 3); and Low risk (score = 0). Figure 1 shows the PFS curves for the model development, model verification, and combined groups, respectively. This CIBMTR Prognostic Model identifies patients at low, intermediate, and high risk for treatment failure (progression or death). These risk groups discriminate between patients with good post-AHCT outcomes and those who may benefit from other therapies, such as allogeneic HCT. Prospective evaluation of different treatment strategies based on this prognostic model is needed at a national or international level. Disclosures: Hahn: Novartis: stock. Montoto: Genentech: Research Funding; Roche: Honoraria.
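The additive scoring rule reported above (2 points each for ≥3 prior regimens and extranodal involvement, 1 point each for KPS ≤80% and chemo-resistant disease, with risk groups cut at 0, 1–3, and 4–6) can be sketched as a small function. The function name and input encoding are illustrative, not from the abstract:

```python
def cibmtr_risk_group(prior_regimens, extranodal, kps, chemo_resistant):
    """Prognostic score and risk group per the table above.

    Scoring: >=3 prior chemotherapy regimens -> 2 points; extranodal
    involvement any time prior to AHCT -> 2 points; KPS 0-80% -> 1 point;
    chemo-resistant disease at AHCT -> 1 point.
    """
    score = 0
    if prior_regimens >= 3:
        score += 2
    if extranodal:
        score += 2
    if kps <= 80:
        score += 1
    if chemo_resistant:
        score += 1
    group = "Low" if score == 0 else ("Intermediate" if score <= 3 else "High")
    return score, group
```

For example, a patient with four prior regimens and extranodal involvement but KPS 90 and chemo-sensitive disease scores 4 and falls in the high-risk group.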


2018 ◽  
pp. 1-13 ◽  
Author(s):  
Jorne L. Biccler ◽  
Sandra Eloranta ◽  
Peter de Nully Brown ◽  
Henrik Frederiksen ◽  
Mats Jerkeman ◽  
...  

Purpose Prognostic models for diffuse large B-cell lymphoma (DLBCL), such as the International Prognostic Index (IPI), are widely used in clinical practice. The models are typically developed with simplicity in mind and thus do not exploit the full potential of detailed clinical data. This study investigated whether nationwide lymphoma registries containing clinical data and machine learning techniques could prove useful for building modern prognostic tools. Patients and Methods This study was based on nationwide lymphoma registries from Denmark and Sweden, which include large amounts of clinicopathologic data. Using the Danish DLBCL cohort, a stacking approach was used to build a new prognostic model that leverages the strengths of different survival models. To compare the performance of the stacking approach with established prognostic models, cross-validation was used to estimate the concordance index (C-index), time-varying area under the curve, and integrated Brier score. Finally, generalizability was tested by applying the new model to the Swedish cohort. Results In total, 2,759 and 2,414 patients were included from the Danish and Swedish cohorts, respectively. In the Danish cohort, the stacking approach led to the lowest integrated Brier score, indicating that the survival curves obtained from the stacking model fitted the observed survival best. The C-index and time-varying area under the curve indicated that the stacked model (C-index: Denmark [DK], 0.756; Sweden [SE], 0.744) had good discriminative capabilities compared with the other considered prognostic models (IPI: DK, 0.662; SE, 0.661; and National Comprehensive Cancer Network–IPI: DK, 0.681; SE, 0.681). Furthermore, these results were reproducible in the independent Swedish cohort. Conclusion A new prognostic model based on machine learning techniques was developed and was shown to significantly outperform established prognostic indices for DLBCL.
The model is available at https://lymphomapredictor.org.
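The C-index used to compare these models is the fraction of comparable patient pairs in which the model assigns the higher risk to the patient with the shorter survival. A minimal stdlib sketch of Harrell's C-index (not the implementation used in the study; pairs with tied event times are skipped for simplicity):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: fraction of comparable pairs ordered correctly.

    A pair is comparable when the patient with the shorter observed time
    had an event (otherwise the true ordering is unknown due to
    censoring). Ties in risk score count as half-concordant.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            # order the pair so patient `a` has the shorter observed time
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if times[a] == times[b] or not events[a]:
                continue  # pair not comparable
            comparable += 1
            if risk_scores[a] > risk_scores[b]:
                concordant += 1.0
            elif risk_scores[a] == risk_scores[b]:
                concordant += 0.5
    return concordant / comparable
```

A model whose risk scores perfectly reverse-order the survival times yields 1.0; random scores hover around 0.5, which is why values such as 0.756 versus 0.662 represent a meaningful gain in discrimination.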


2021 ◽  
Vol 9 ◽  
Author(s):  
Young-Tak Kim ◽  
Hakseung Kim ◽  
Choel-Hui Lee ◽  
Byung C. Yoon ◽  
Jung Bin Kim ◽  
...  

Background: The inter- and intrarater variability of conventional computed tomography (CT) classification systems for evaluating the extent of ischemic-edematous insult following traumatic brain injury (TBI) may hinder the robustness of TBI prognostic models. Objective: This study aimed to employ fully automated quantitative densitometric CT parameters and a cutting-edge machine learning algorithm to construct a robust prognostic model for pediatric TBI. Methods: Fifty-eight pediatric patients with TBI who underwent brain CT were retrospectively analyzed. Intracranial densitometric information was derived from the supratentorial region as a distribution representing the proportion of Hounsfield units. Furthermore, a machine learning-based prognostic model based on gradient boosting (i.e., CatBoost) was constructed with leave-one-out cross-validation. At discharge, the outcome was assessed dichotomously with the Glasgow Outcome Scale (favorability: 1–3 vs. 4–5). In-hospital mortality, length of stay (>1 week), and need for surgery were further evaluated as alternative TBI outcome measures. Results: Densitometric parameters indicating reduced brain density due to subtle global ischemic changes were significantly different among the TBI outcome groups, except for need for surgery. The skewed intracranial densitometry of the unfavorable outcome became more distinguishable in the follow-up CT within 48 h. The prognostic model augmented by intracranial densitometric information achieved adequate AUCs for various outcome measures [favorability = 0.83 (95% CI: 0.72–0.94), in-hospital mortality = 0.91 (95% CI: 0.82–1.00), length of stay = 0.83 (95% CI: 0.72–0.94), and need for surgery = 0.71 (95% CI: 0.56–0.86)], and this model showed enhanced performance compared to the conventional CRASH-CT model. Conclusion: Densitometric parameters indicative of global ischemic changes during the acute phase of TBI are predictive of a worse outcome in pediatric patients.
The robustness and predictive capacity of conventional TBI prognostic models might be significantly enhanced by incorporating densitometric parameters and machine learning techniques.
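Leave-one-out cross-validation, as used in this small 58-patient cohort, holds each patient out once and fits the model on the rest. A stdlib sketch of the scheme, with a trivial nearest-centroid classifier standing in for the study's CatBoost model (both function names and the stand-in classifier are illustrative):

```python
from statistics import mean

def nearest_centroid_predict(train_X, train_y, x):
    """Stand-in classifier: predict the class whose feature centroid
    is closest to x (the study itself used CatBoost gradient boosting)."""
    centroids = {}
    for label in set(train_y):
        rows = [xi for xi, yi in zip(train_X, train_y) if yi == label]
        centroids[label] = [mean(col) for col in zip(*rows)]
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

def leave_one_out_accuracy(X, y):
    """Each patient is held out once; the model is fit on the rest."""
    correct = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        correct += nearest_centroid_predict(train_X, train_y, X[i]) == y[i]
    return correct / len(X)
```

With n folds and a single held-out case per fold, this makes maximal use of a small cohort at the cost of n model fits.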


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Muhammad Javed Iqbal ◽  
Zeeshan Javed ◽  
Haleema Sadia ◽  
Ijaz A. Qureshi ◽  
Asma Irshad ◽  
...  

Abstract Artificial intelligence (AI) is the use of mathematical algorithms to mimic human cognitive abilities and to address difficult healthcare challenges, including complex biological abnormalities such as cancer. The exponential growth of AI over the last decade shows its potential as a platform for optimal decision-making in settings where the human mind is limited in its ability to process huge amounts of data in a narrow time range. Cancer is a complex and multifaceted disorder with thousands of genetic and epigenetic variations. AI-based algorithms hold great promise for identifying these genetic mutations and aberrant protein interactions at a very early stage. Modern biomedical research is also focused on bringing AI technology to the clinic safely and ethically. AI-based assistance to pathologists and physicians could be a great leap forward in the prediction of disease risk, diagnosis, prognosis, and treatment. Clinical applications of AI and machine learning (ML) in cancer diagnosis and treatment are the future of medical guidance towards faster mapping of new treatments for every individual. By using AI-based systems, researchers can collaborate in real time and share knowledge digitally to potentially heal millions. In this review, we present game-changing technology for the future clinic by connecting biology with artificial intelligence, and explain how AI-based assistance can help oncologists deliver precise treatment.


Life ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 122
Author(s):  
Ruggiero Seccia ◽  
Silvia Romano ◽  
Marco Salvetti ◽  
Andrea Crisanti ◽  
Laura Palagi ◽  
...  

The course of multiple sclerosis begins with a relapsing-remitting phase, which evolves into a secondarily progressive form over an extremely variable period, depending on many factors, each with a subtle influence. To date, no prognostic factors or risk scores have been validated to predict disease course in individual patients. This is increasingly frustrating, since several treatments can prevent relapses and slow progression, even for a long time, although the possible adverse effects are significant, particularly for the more effective drugs. An early prediction of disease course would allow treatment to be tailored to the expected aggressiveness of the disease, reserving high-impact therapies for patients at greater risk. To increase prognostic capacity, approaches based on machine learning (ML) algorithms are being attempted, given the failure of other approaches. Here we review recent studies that have used clinical data, alone or with other types of data, to derive prognostic models. Several algorithms that have been used and compared are described. Although no study has yet proposed a clinically usable model, knowledge is building up and strong tools are likely to emerge in the future.


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Kara-Louise Royle ◽  
David A. Cairns

Abstract Background The United Kingdom Myeloma Research Alliance (UK-MRA) Myeloma Risk Profile is a prognostic model for overall survival. It was trained and tested on clinical trial data, aiming to improve the stratification of transplant ineligible (TNE) patients with newly diagnosed multiple myeloma. Missing data is a common problem that affects the development and validation of prognostic models, and decisions on how to address missingness have implications for the choice of methodology. Methods Model building The training and test datasets were the TNE pathways from two large randomised multicentre, phase III clinical trials. Potential prognostic factors were identified by expert opinion. Missing data in the training dataset was imputed using multiple imputation by chained equations. Univariate analysis fitted Cox proportional hazards models in each imputed dataset, with the estimates combined by Rubin's rules. Multivariable analysis applied penalised Cox regression models, with a fixed penalty term across the imputed datasets. The estimates from each imputed dataset and bootstrap standard errors were combined by Rubin's rules to define the prognostic model. Model assessment Calibration was assessed by visualising the observed and predicted probabilities across the imputed datasets. Discrimination was assessed by combining the prognostic separation D-statistic from each imputed dataset by Rubin's rules. Model validation The D-statistic was applied in a bootstrap internal validation process in the training dataset and an external validation process in the test dataset, where acceptable performance was pre-specified. Development of risk groups Risk groups were defined using the tertiles of the combined prognostic index, obtained by combining the prognostic index from each imputed dataset by Rubin's rules. Results The training dataset included 1852 patients, 1268 (68.47%) with complete case data. Ten imputed datasets were generated.
Five hundred twenty patients were included in the test dataset. The D-statistic for the prognostic model was 0.840 (95% CI 0.716–0.964) in the training dataset and 0.654 (95% CI 0.497–0.811) in the test dataset, and the corrected D-statistic was 0.801. Conclusion The decision to impute missing covariate data in the training dataset influenced the methods implemented to train and test the model. To extend the current literature and aid future researchers, we have presented a detailed example of one approach. Whilst our example is not without limitations, a benefit is that all of the patient information available in the training dataset was utilised to develop the model. Trial registration Both trials were registered: Myeloma IX, ISRCTN68454111, registered 21 September 2000; Myeloma XI, ISRCTN49407852, registered 24 June 2009.
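Rubin's rules, used repeatedly above to pool results across the ten imputed datasets, combine per-imputation estimates by averaging them and combine variances as the mean within-imputation variance plus the between-imputation variance inflated by (1 + 1/m). A minimal stdlib sketch (the function name is illustrative):

```python
from statistics import mean

def rubins_rules(estimates, variances):
    """Pool one parameter across m imputed datasets by Rubin's rules.

    Returns (pooled_estimate, total_variance), where
    total = W + (1 + 1/m) * B, with W the mean within-imputation
    variance and B the between-imputation variance of the estimates.
    """
    m = len(estimates)
    q_bar = mean(estimates)  # pooled point estimate
    w = mean(variances)      # mean within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    return q_bar, w + (1 + 1 / m) * b
```

The (1 + 1/m) factor inflates the between-imputation component to account for using a finite number of imputations, which is why total pooled uncertainty exceeds the naive average of the per-dataset variances whenever the estimates disagree.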


Author(s):  
Mythili K. ◽  
Manish Narwaria

Quality assessment of audiovisual (AV) signals is important from the perspective of system design, optimization, and management of a modern multimedia communication system. However, automatic prediction of AV quality via the use of computational models remains challenging. In this context, machine learning (ML) appears to be an attractive alternative to the traditional approaches. This is especially true when such assessment needs to be made in a no-reference (i.e., the original signal is unavailable) fashion. While the development of ML-based quality predictors is desirable, we argue that proper assessment and validation of such predictors is also crucial before they can be deployed in practice. To this end, we raise some fundamental questions about the current approach to ML-based model development for AV quality assessment, and for signal processing in multimedia communication in general. We also identify specific limitations of the current validation strategy that have implications for the analysis and comparison of ML-based quality predictors. These include a lack of consideration of: (a) data uncertainty, (b) domain knowledge, (c) the explicit learning ability of the trained model, and (d) the interpretability of the resultant model. The primary goal of this article is therefore to shed some light on these factors. Our analysis and proposed recommendations are of particular importance in light of the significant interest in ML methods for multimedia signal processing (specifically where human-labeled data is used) and the lack of discussion of these issues in the existing literature.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1044
Author(s):  
Yassine Bouabdallaoui ◽  
Zoubeir Lafhaj ◽  
Pascal Yim ◽  
Laure Ducoulombier ◽  
Belkacem Bennadji

The operation and maintenance of buildings has seen several advances in recent years. Multiple information and communication technology (ICT) solutions have been introduced to better manage building maintenance. However, maintenance practices in buildings remain inefficient and lead to significant energy waste. In this paper, a predictive maintenance framework based on machine learning techniques is proposed. This framework aims to provide guidelines for implementing predictive maintenance for building installations. The framework is organised into five steps: data collection, data processing, model development, fault notification, and model improvement. A sports facility was selected as a case study to demonstrate the framework. Data were collected from different heating, ventilation, and air conditioning (HVAC) installations using Internet of Things (IoT) devices and a building automation system (BAS). Then, a deep learning model was used to predict failures. The case study showed the potential of this framework to predict failures. However, multiple obstacles and barriers were observed related to data availability and feedback collection. The overall results of this paper can help provide guidelines for scientists and practitioners implementing predictive maintenance approaches in buildings.
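The five-step framework can be sketched as a pipeline skeleton. All class and method names, the rule-based scoring, and the threshold-feedback mechanism are illustrative assumptions, not details from the paper (which used a deep learning model for the prediction step):

```python
class PredictiveMaintenancePipeline:
    """Skeleton of the five-step framework: collect, process,
    predict, notify, improve (names are illustrative)."""

    def __init__(self, failure_threshold=0.8):
        self.failure_threshold = failure_threshold

    def collect(self, sensors):
        """Step 1: gather raw readings, e.g. from IoT/BAS endpoints."""
        return [s.copy() for s in sensors]

    def process(self, readings):
        """Step 2: drop incomplete records before modelling."""
        return [r for r in readings if r.get("value") is not None]

    def predict(self, records):
        """Step 3: score each record with a model (a fixed rule here,
        standing in for the paper's deep learning model)."""
        return [min(r["value"] / 100.0, 1.0) for r in records]

    def notify(self, scores):
        """Step 4: flag installations whose failure score is high."""
        return [i for i, s in enumerate(scores) if s >= self.failure_threshold]

    def improve(self, threshold_feedback):
        """Step 5: fold technician feedback back into the threshold."""
        if threshold_feedback:
            self.failure_threshold = sum(threshold_feedback) / len(threshold_feedback)
```

Separating the steps this way makes the data-availability and feedback-collection obstacles noted in the case study concrete: steps 1–2 fail without reliable sensor data, and step 5 fails without technician feedback.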

