External validation of prognostic models among cancer patients undergoing emergency colorectal surgery

2008 ◽  
Vol 195 (4) ◽  
pp. 439-441 ◽  
Author(s):  
Tamer Ertan ◽  
Omer Yoldas ◽  
Yusuf Alper Kılıc ◽  
Mehmet Kılıc ◽  
Erdal Göcmen ◽  
...  
2021 ◽  
Vol 9 ◽  
Author(s):  
Bingjie He ◽  
Weiye Chen ◽  
Lili Liu ◽  
Zheng Hou ◽  
Haiyan Zhu ◽  
...  

Objective: This work aims to systematically identify, describe, and appraise all prognostic models for cervical cancer and to provide a reference for clinical practice and future research. Methods: We systematically searched the PubMed, EMBASE, and Cochrane Library databases up to December 2020 and included studies developing, validating, or updating a prognostic model for cervical cancer. Two reviewers extracted information based on the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS) checklist and assessed risk of bias using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). Results: Fifty-six eligible articles were identified, describing the development of 77 prognostic models and 27 external validation efforts. The 77 prognostic models focused on cervical cancer patients at three stages of disease: early-stage cervical cancer (n = 29; 38%), locally advanced cervical cancer (n = 27; 35%), and all-stage cervical cancer (n = 21; 27%). Among the 77 models, the most frequently used predictors were lymph node status (n = 57; 74%), International Federation of Gynecology and Obstetrics (FIGO) stage (n = 42; 55%), histological type (n = 38; 49%), and tumor size (n = 37; 48%). The numbers of models that applied internal validation, presented a full equation, and assessed model calibration were 52 (68%), 16 (21%), and 45 (58%), respectively. Twenty-four models were externally validated, three of them twice. None of the models was assessed as having an overall low risk of bias.
The Prediction Model of Failure in Locally Advanced Cervical Cancer was externally validated twice, with acceptable performance, and appeared to be the most reliable. Conclusions: Methodological details, including internal validation, sample size, and handling of missing data, need greater emphasis, and external validation is needed to facilitate the application and generalization of models for cervical cancer.


Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 671
Author(s):  
Margherita Rimini ◽  
Pierfrancesco Franco ◽  
Berardino De Bari ◽  
Maria Giulia Zampino ◽  
Stefano Vagge ◽  
...  

Anal squamous cell carcinoma (SCC) is a rare tumor, and bio-humoral predictors of response to chemo-radiation (CT-RT) are lacking. We developed a prognostic score system based on laboratory inflammation parameters. We investigated the correlation between baseline clinical and laboratory variables and disease-free (DFS) and overall (OS) survival in anal SCC patients treated with CT-RT in five institutions. The bio-humoral parameters of significance were included in a new scoring system, which was tested with other significant variables in a Cox proportional hazards model. A total of 308 patients were included. We devised a prognostic model by combining baseline hemoglobin level, systemic immune-inflammation index (SII), and eosinophil count: the Hemo-Eosinophils Inflammation (HEI) Index. We stratified patients according to the HEI index into low- and high-risk groups. Median DFS was not reached for low-risk patients and was 79.5 months for high-risk cases (hazard ratio 3.22; 95% CI: 2.04–5.10; p < 0.0001). Following adjustment for the clinical covariates found significant at univariate analysis, multivariate analysis confirmed the HEI index as an independent prognostic factor for DFS and OS. The HEI index was shown to be a prognostic parameter for DFS and OS in anal cancer patients treated with CT-RT. External validation of the HEI index is mandatory before its use in clinical practice.
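The HEI index combines three routine laboratory values into a single risk stratum. A minimal sketch of how such a composite score could be computed is shown below; the cutoff values and the one-point-per-parameter scoring are hypothetical placeholders for illustration, since the abstract does not report the thresholds derived in the study.

```python
from dataclasses import dataclass

@dataclass
class Labs:
    hemoglobin_g_dl: float    # baseline hemoglobin
    sii: float                # systemic immune-inflammation index
    eosinophils_per_ul: float # absolute eosinophil count

# Hypothetical cutoffs for illustration only; the published HEI index
# derives its thresholds from the study cohort, which the abstract omits.
HB_CUTOFF = 12.0
SII_CUTOFF = 560.0
EOS_CUTOFF = 100.0

def hei_score(labs: Labs) -> int:
    """One point per adverse parameter (low Hb, high SII, low eosinophils)."""
    score = 0
    if labs.hemoglobin_g_dl < HB_CUTOFF:
        score += 1
    if labs.sii > SII_CUTOFF:
        score += 1
    if labs.eosinophils_per_ul < EOS_CUTOFF:
        score += 1
    return score

def risk_group(labs: Labs) -> str:
    """Dichotomize into the low-/high-risk strata compared for DFS and OS."""
    return "high" if hei_score(labs) >= 2 else "low"
```

In practice the published cutoffs, and the direction of each criterion, would have to be taken from the full paper before applying the score clinically.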


2021 ◽  
Vol 108 (Supplement_2) ◽  
Author(s):  
C Li ◽  
S Z Y Ooi ◽  
T Woo ◽  
P H M Chan

Abstract Aim: To identify the most relevant clinical factors in the National Bowel Cancer Audit (NBOCA) that contribute to the variation in the quality of care provided by different hospitals for colorectal cancer patients undergoing surgery. Method: Data from 36,116 patients with colorectal cancer who had undergone surgery were retrospectively collected from the NBOCA and analysed across 145 and 146 hospitals in two successive years. A validated multiple linear regression was performed to compare the identified clinical factors with various quality outcomes. The quality outcomes defined in this study were length of hospitalisation, 2-year mortality, readmission rate, 90-day mortality, and 18-month stoma rate. Results: Four clinical factors (laparoscopy rate, abdominoperineal resection of the rectum (APER), pre-operative radiotherapy, and distant metastases) were shown to have a significant (p < 0.05) impact on length of hospitalisation and the 18-month stoma rate. The 18-month stoma rate was also significantly associated with 2-year mortality. External validation of the regression model demonstrated a root-mean-square error of 0.811 for the 18-month stoma rate and 4.62 for 2-year mortality. Conclusions: Hospitals should monitor the four clinical factors for patients with colorectal cancer during perioperative care. Clinicians should consider these factors along with the individual patient's history when formulating a management plan for patients with colorectal cancer.
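The study's workflow, fitting a multiple linear regression of a quality outcome on hospital-level clinical factors and then reporting root-mean-square error on an external cohort, can be sketched as follows. The data here are synthetic and the coefficients illustrative; only the shape of the analysis mirrors the abstract, not the NBOCA data or the published coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hospital-level predictors standing in for the four factors:
# laparoscopy rate, APER rate, pre-op radiotherapy rate, distant-metastases rate.
n = 145
X = rng.uniform(0.0, 1.0, size=(n, 4))
true_beta = np.array([-3.0, 2.5, 1.0, 4.0])  # illustrative effect sizes
los = 8.0 + X @ true_beta + rng.normal(0.0, 1.0, size=n)  # mean length of stay

# Fit the multiple linear regression by ordinary least squares.
design = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(design, los, rcond=None)

# External validation: predict on a second year's hospitals and report RMSE,
# mirroring the study's use of root-mean-square error as the validation metric.
m = 146
X_val = rng.uniform(0.0, 1.0, size=(m, 4))
los_val = 8.0 + X_val @ true_beta + rng.normal(0.0, 1.0, size=m)
pred = np.column_stack([np.ones(m), X_val]) @ beta_hat
rmse = float(np.sqrt(np.mean((pred - los_val) ** 2)))
print(f"validation RMSE: {rmse:.2f}")
```

With well-estimated coefficients, the validation RMSE approaches the irreducible noise level; a much larger value on external data would signal poor transportability of the model.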


2020 ◽  
Vol 22 (Supplement_2) ◽  
pp. ii203-ii203
Author(s):  
Alexander Hulsbergen ◽  
Yu Tung Lo ◽  
Vasileios Kavouridis ◽  
John Phillips ◽  
Timothy Smith ◽  
...  

Abstract INTRODUCTION: Survival prediction in brain metastases (BMs) remains challenging. Current prognostic models have been created and validated almost exclusively with data from patients receiving radiotherapy only, leaving uncertainty about surgical patients. Therefore, the aim of this study was to build and validate a model predicting 6-month survival after BM resection using different machine learning (ML) algorithms. METHODS: An institutional database of 1062 patients who underwent resection for BM was split into an 80:20 training and testing set. Seven different ML algorithms were trained and assessed for performance. Moreover, an ensemble model was created incorporating random forest, adaptive boosting, gradient boosting, and logistic regression algorithms. Five-fold cross-validation was used for hyperparameter tuning. Model performance was assessed using the area under the receiver operating characteristic curve (AUC) and calibration, and was compared against the diagnosis-specific graded prognostic assessment (ds-GPA), the most established prognostic model in BMs. RESULTS: The ensemble model showed superior performance with an AUC of 0.81 in the hold-out test set, a calibration slope of 1.14, and a calibration intercept of -0.08, outperforming the ds-GPA (AUC 0.68). Patients were stratified into high-, medium-, and low-risk groups for death at 6 months; these strata strongly predicted both 6-month and longitudinal overall survival (p < 0.001). CONCLUSIONS: We developed and internally validated an ensemble ML model that accurately predicts 6-month survival after neurosurgical resection for BM, outperforms the most established model in the literature, and allows for meaningful risk stratification. Future efforts should focus on external validation of our model.
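A soft-voting ensemble over the four named base learners, evaluated on a held-out 20% split, can be sketched with scikit-learn. The data below are a synthetic stand-in for the institutional cohort, so the resulting AUC is illustrative only; the five-fold cross-validated hyperparameter tuning described in the abstract is omitted for brevity.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              GradientBoostingClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the 1062-patient institutional BM cohort.
X, y = make_classification(n_samples=1062, n_features=20, n_informative=8,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=42)

# Soft-voting ensemble over the four base learners named in the abstract.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("ada", AdaBoostClassifier(random_state=42)),
        ("gb", GradientBoostingClassifier(random_state=42)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average predicted probabilities across learners
)
ensemble.fit(X_tr, y_tr)

# Discrimination on the hold-out 20% test set.
auc = roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1])
print(f"hold-out AUC: {auc:.2f}")
```

Soft voting requires every base learner to expose `predict_proba`, which is why logistic regression fits naturally alongside the three tree ensembles here.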


The Prostate ◽  
2016 ◽  
Vol 77 (1) ◽  
pp. 105-113 ◽  
Author(s):  
Sami-Ramzi Leyh-Bannurah ◽  
Stéphanie Gazdovich ◽  
Lars Budäus ◽  
Emanuele Zaffuto ◽  
Paolo Dell'Oglio ◽  
...  

2020 ◽  
Author(s):  
Jenna Marie Reps ◽  
Ross Williams ◽  
Seng Chan You ◽  
Thomas Falconer ◽  
Evan Minty ◽  
...  

Abstract Objective: To demonstrate how the Observational Health Data Sciences and Informatics (OHDSI) collaborative network and standardization can be utilized to scale up external validation of patient-level prediction models by enabling validation across a large number of heterogeneous observational healthcare datasets. Materials & Methods: Five previously published prognostic models (ATRIA, CHADS2, CHADS2VASC, Q-Stroke, and Framingham) that predict future risk of stroke in patients with atrial fibrillation were replicated using the OHDSI frameworks. A network study was run that enabled the five models to be externally validated across nine observational healthcare datasets spanning three countries and five independent sites. Results: The five existing models were integrated into the OHDSI framework for patient-level prediction and obtained mean c-statistics ranging from 0.57 to 0.63 across the six databases with sufficient data to predict stroke within 1 year of initial atrial fibrillation diagnosis in females with atrial fibrillation. This was comparable with existing validation studies. The validation network study was run across nine datasets within 60 days once the models were replicated. An R package for the study was published at https://github.com/OHDSI/StudyProtocolSandbox/tree/master/ExistingStrokeRiskExternalValidation. Discussion: This study demonstrates the ability to scale up external validation of patient-level prediction models using a collaboration of researchers and a data standardization that enables models to be readily shared across data sites. External validation is necessary to understand the transportability and reproducibility of a prediction model, but without collaborative approaches it can take three or more years for a model to be validated by one independent researcher.
Conclusion: In this paper we show it is possible to both scale up and speed up external validation by showing how validation can be done across multiple databases in less than 2 months. We recommend that researchers developing new prediction models use the OHDSI network to externally validate their models.
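Of the five replicated models, CHADS2 is simple enough to sketch directly, together with the c-statistic used to report discrimination across databases. The scoring rules below follow the standard CHADS2 definition; the c-statistic implementation is a naive O(n²) version for illustration, not the optimized code in the published R package.

```python
def chads2(chf: bool, hypertension: bool, age: int,
           diabetes: bool, prior_stroke_tia: bool) -> int:
    """CHADS2 stroke-risk score: 1 point each for congestive heart failure,
    hypertension, age >= 75, and diabetes; 2 points for prior stroke/TIA."""
    return (int(chf) + int(hypertension) + int(age >= 75) + int(diabetes)
            + 2 * int(prior_stroke_tia))

def c_statistic(scores, outcomes):
    """Naive concordance (c-statistic): the probability that a patient who
    had the event scored higher than one who did not; ties count half."""
    pairs = 0
    concordant = 0.0
    for s_event, o_event in zip(scores, outcomes):
        if o_event != 1:
            continue
        for s_none, o_none in zip(scores, outcomes):
            if o_none != 0:
                continue
            pairs += 1
            if s_event > s_none:
                concordant += 1.0
            elif s_event == s_none:
                concordant += 0.5
    return concordant / pairs
```

Running the same fixed scoring function over each standardized database and comparing c-statistics is, in essence, what the network study automates at scale.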


2021 ◽  
Vol 28 (1) ◽  
pp. e100267
Author(s):  
Keerthi Harish ◽  
Ben Zhang ◽  
Peter Stella ◽  
Kevin Hauck ◽  
Marwa M Moussa ◽  
...  

Objectives: Predictive studies play important roles in the development of models informing care for patients with COVID-19. Our concern is that studies producing ill-performing models may lead to inappropriate clinical decision-making. Thus, our objective is to summarise and characterise the performance of prognostic models for COVID-19 on external data. Methods: We performed a validation of parsimonious prognostic models for patients with COVID-19 drawn from a literature search of published and preprint articles. Ten models meeting inclusion criteria were either (a) externally validated with our data against the model variables and weights or (b) rebuilt using the original features if no weights were provided. Nine studies had internally or externally validated models on cohorts of between 18 and 320 inpatients with COVID-19. One model used cross-validation. Our external validation cohort consisted of 4444 patients with COVID-19 hospitalised between 1 March and 27 May 2020. Results: Most models failed validation when applied to our institution's data. Included studies reported an average validation area under the receiver operating characteristic curve (AUROC) of 0.828. Models applied with their reported features averaged an AUROC of 0.66 when validated on our data. Models rebuilt with the same features averaged an AUROC of 0.755 when validated on our data. In both cases, models did not validate against their studies' reported AUROC values. Discussion: Published and preprint prognostic models for patients infected with COVID-19 performed substantially worse when applied to external data. Further inquiry is required to elucidate the mechanisms underlying these performance deviations. Conclusions: Clinicians should employ caution when applying models for clinical prediction without careful validation on local data.
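Option (a) above, applying a published model's variables and weights directly to local data, amounts to evaluating a fixed logistic model per patient and then scoring the resulting probabilities (e.g., by AUROC) against observed outcomes. A minimal sketch is given below; the coefficient names and values are hypothetical placeholders, not weights from any of the reviewed models.

```python
import math

def apply_published_model(coefs: dict, intercept: float, patient: dict) -> float:
    """Apply a published logistic model's weights to one local patient record:
    linear predictor -> predicted probability via the logistic function."""
    lp = intercept + sum(beta * patient[name] for name, beta in coefs.items())
    return 1.0 / (1.0 + math.exp(-lp))

# Hypothetical weights for illustration only -- not from any published model.
weights = {"age": 0.05, "crp": 0.01}
p = apply_published_model(weights, -4.0, {"age": 60, "crp": 100})
print(f"predicted risk: {p:.2f}")  # -> predicted risk: 0.50
```

Scoring these per-patient probabilities against local outcomes is what exposes the gap between a model's reported AUROC and its performance on external data.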

