clinical prediction model
Recently Published Documents


TOTAL DOCUMENTS: 255 (FIVE YEARS: 115)

H-INDEX: 27 (FIVE YEARS: 5)

2022 ◽  
Vol 9 ◽  
Author(s):  
Wenle Li ◽  
Shengtao Dong ◽  
Bing Wang ◽  
Haosheng Wang ◽  
Chan Xu ◽  
...  

Background: This study aimed to construct a clinical prediction model for osteosarcoma patients to evaluate the factors influencing the occurrence of lymph node metastasis (LNM).

Methods: In this retrospective study, a total of 1,256 patients diagnosed with chondrosarcoma were enrolled from the SEER (Surveillance, Epidemiology, and End Results) database (training cohort, n = 1,144) and a multicenter dataset (validation cohort, n = 112). Univariate and multivariable logistic regression analyses were performed to identify potential risk factors for LNM in osteosarcoma patients. Based on the results of the multivariable logistic regression analysis, a nomogram was established, and its predictive ability was assessed with calibration plots, receiver operating characteristic (ROC) curves, and decision curve analysis (DCA). In addition, Kaplan-Meier plots of overall survival (OS) were drawn, and a web calculator was built to visualize the nomogram.

Results: Five independent risk factors [chemotherapy, surgery, lung metastases, lymphatic metastases (M-stage), and tumor size (T-stage)] were identified by multivariable logistic regression analysis. Calibration plots showed good agreement in both the training and validation cohorts, and DCA demonstrated clinical utility. ROC curves indicated good predictive ability in the training cohort (AUC = 0.805) and the validation cohort (AUC = 0.808). Moreover, patients in the LNN group had significantly better survival than those in the LNP group in both the training and validation cohorts.

Conclusion: We constructed a nomogram based on these risk factors that performed well in predicting the risk of LNM in osteosarcoma patients. It may guide surgeons and oncologists in optimizing individual treatment and making better clinical decisions.
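As a rough illustration of the workflow this abstract describes (multivariable logistic regression for LNM followed by ROC evaluation in training and validation cohorts), the Python sketch below uses statsmodels and scikit-learn. The predictor names and the synthetic cohorts are hypothetical stand-ins so the example runs end to end; this is not the authors' data or code.

```python
# Minimal sketch of a multivariable logistic regression risk model with ROC
# evaluation. Predictors and data are synthetic placeholders, not SEER data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
predictors = ["chemotherapy", "surgery", "lung_mets", "m_stage", "t_stage"]

def simulate_cohort(n: int) -> pd.DataFrame:
    """Synthetic cohort with binary predictors and an LNM outcome."""
    X = pd.DataFrame(rng.integers(0, 2, size=(n, len(predictors))), columns=predictors)
    logit = -2.0 + X @ np.array([0.8, -0.6, 1.1, 1.3, 0.7])  # arbitrary effects
    X["lnm"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
    return X

train, valid = simulate_cohort(1144), simulate_cohort(112)  # cohort sizes from the abstract

# Multivariable logistic regression for lymph node metastasis (LNM)
fit = sm.Logit(train["lnm"], sm.add_constant(train[predictors])).fit(disp=0)

# Discrimination: area under the ROC curve in both cohorts
for name, cohort in [("training", train), ("validation", valid)]:
    pred = fit.predict(sm.add_constant(cohort[predictors]))
    print(f"{name} AUC = {roc_auc_score(cohort['lnm'], pred):.3f}")
```

In practice the fitted coefficients would also drive the nomogram axes and the web calculator, and calibration plots and decision curve analysis would be computed from the same predicted probabilities.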


2022 ◽  
Vol 104-B (1) ◽  
pp. 97-102
Author(s):  
Yasukazu Hijikata ◽  
Tsukasa Kamitani ◽  
Masayuki Nakahara ◽  
Shinji Kumamoto ◽  
Tsubasa Sakai ◽  
...  

Aims To develop and internally validate a preoperative clinical prediction model for acute adjacent vertebral fracture (AVF) after vertebral augmentation to support preoperative decision-making, named the after vertebral augmentation (AVA) score.

Methods In this prognostic study, a multicentre, retrospective single-level vertebral augmentation cohort of 377 patients from six Japanese hospitals was used to derive an AVF prediction model. Backward stepwise selection (p < 0.05) was used to select preoperative clinical and imaging predictors for acute AVF after vertebral augmentation for up to one month, from 14 predictors. We assigned a score to each selected variable based on the regression coefficient and developed the AVA scoring system. We evaluated sensitivity and specificity for each cut-off, area under the curve (AUC), and calibration as diagnostic performance. Internal validation was conducted using bootstrapping to correct the optimism.

Results Of the 377 patients used for model derivation, 58 (15%) had an acute AVF postoperatively. The following preoperative measures on multivariable analysis were summarized in the five-point AVA score: intravertebral instability (≥ 5 mm), focal kyphosis (≥ 10°), duration of symptoms (≥ 30 days), intravertebral cleft, and previous history of vertebral fracture. Internal validation showed a mean optimism of 0.019 with a corrected AUC of 0.77. A cut-off of ≤ one point was chosen to classify a low risk of AVF, for which only four of 137 patients (3%) had AVF with 92.5% sensitivity and 45.6% specificity. A cut-off of ≥ four points was chosen to classify a high risk of AVF, for which 22 of 38 (58%) had AVF with 41.5% sensitivity and 94.5% specificity.

Conclusion In this study, the AVA score was found to be a simple preoperative method for the identification of patients at low and high risk of postoperative acute AVF. This model could be applied to individual patients and could aid in the decision-making before vertebral augmentation.

Cite this article: Bone Joint J 2022;104-B(1):97–102.
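The AVA score is built by converting regression coefficients into integer points and summing them per patient. The sketch below illustrates that conversion step only; the coefficient values, variable names, and resulting one-point weights are illustrative assumptions, not the values reported in the paper.

```python
# Minimal sketch: turn logistic regression coefficients into an integer point
# score (AVA-style). The coefficients below are made up for illustration.
import pandas as pd

coefs = pd.Series({
    "intravertebral_instability_ge_5mm": 1.10,
    "focal_kyphosis_ge_10deg":           0.95,
    "symptom_duration_ge_30d":           0.90,
    "intravertebral_cleft":              1.30,
    "prior_vertebral_fracture":          1.05,
})

# Scale by the smallest coefficient and round, so each predictor contributes
# a small integer number of points (here every factor rounds to one point,
# giving a five-point score as in the AVA system).
points = (coefs / coefs.min()).round().astype(int)

def ava_score(patient: dict) -> int:
    """Sum the points for the risk factors present (0/1 flags per predictor)."""
    return int(sum(points[name] * int(flag) for name, flag in patient.items()))

example = {
    "intravertebral_instability_ge_5mm": 1,
    "focal_kyphosis_ge_10deg": 0,
    "symptom_duration_ge_30d": 1,
    "intravertebral_cleft": 0,
    "prior_vertebral_fracture": 1,
}
print("AVA score:", ava_score(example))  # 3 -> between the low (≤ 1) and high (≥ 4) risk cut-offs
```

Bootstrap optimism correction, as used for internal validation here, would refit the whole scoring procedure in each bootstrap sample and subtract the average optimism (bootstrap AUC minus original-sample AUC) from the apparent AUC.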


2021 ◽  
Author(s):  
Pui San Tan ◽  
Ashley Clift ◽  
Weiqi Liao ◽  
Martina Patone ◽  
Carol Coupland ◽  
...  

Background Pancreatic cancer continues to have an extremely poor prognosis, in part due to late diagnosis. Around 25% of pancreatic cancer patients have a prior diagnosis of diabetes, so identifying individuals at risk of pancreatic cancer among those with recently diagnosed type 2 diabetes may be a useful opportunity to select candidates for screening and early detection. In this study, we will comparatively evaluate regression- and machine learning-based clinical prediction models for estimating the individual risk of developing pancreatic cancer within two years of a type 2 diabetes diagnosis.

Methods In the development dataset, we will include adults aged 30-84 years with incident type 2 diabetes registered in the QResearch primary care database. Patients will be followed up from type 2 diabetes diagnosis to the first diagnosis of pancreatic cancer recorded in any of primary care records, hospital episode statistics, cancer registry data, or death records. Cox proportional hazards models will be used to develop a risk prediction model for estimating the individual risk of developing pancreatic cancer during up to 2 years of follow-up. Variable selection will combine clinical and statistical significance (hazard ratio < 0.9 or > 1.1 and p < 0.01). The linear predictor and the baseline survivor function at 2 years will be used to compute absolute risk predictions. An internal-external cross-validation (IECV) framework across geographical regions within England will be used to assess performance, pooled using random-effects meta-analysis, in terms of: (i) model fit, as the variation explained by the model (Royston and Sauerbrei's R²D); (ii) calibration slope and calibration-in-the-large; and (iii) discrimination, measured by Harrell's C and Royston and Sauerbrei's D statistic. We will also evaluate machine learning (ML) approaches for the clinical prediction model using neural networks (NN) and XGBoost, and compare their predictors and performance with those of the regression-based strategy.

Discussion The proposed study will develop and validate a novel risk prediction model to aid the early diagnosis of pancreatic cancer in patients with new-onset diabetes in primary care. An enhanced risk-assessment tool for use at the point of care by general practitioners may improve decision-making, so that at-risk patients are rapidly prioritized for investigation and pancreatic cancer is diagnosed earlier in patients with newly diagnosed diabetes.
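The absolute-risk computation the protocol describes combines each patient's Cox linear predictor with the baseline survivor function at 2 years, i.e. risk = 1 - S0(2)^exp(linear predictor). A minimal sketch of that calculation is shown below, assuming the lifelines package; the predictors, effect sizes, and survival times are entirely synthetic placeholders, not the QResearch variables.

```python
# Minimal sketch of a Cox model with 2-year absolute risk predictions.
# Covariates, effect sizes, and censoring are synthetic placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age":    rng.uniform(30, 84, n),   # hypothetical predictors only
    "bmi":    rng.normal(29, 5, n),
    "smoker": rng.integers(0, 2, n),
})

# Synthetic survival times with a hazard loosely tied to age and smoking,
# administratively censored at 2 years of follow-up
lp_true = 0.03 * (df["age"] - 55) + 0.5 * df["smoker"]
time_to_event = rng.exponential(scale=50 * np.exp(-lp_true))
df["event"] = (time_to_event <= 2.0).astype(int)
df["time"] = np.minimum(time_to_event, 2.0)

# Cox proportional hazards model
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print("Harrell's C:", round(cph.concordance_index_, 3))

# Absolute 2-year risk = 1 - S(2 | covariates); lifelines combines the
# baseline survivor function with each subject's linear predictor
surv_2y = cph.predict_survival_function(df[["age", "bmi", "smoker"]], times=[2.0])
risk_2y = 1.0 - surv_2y.loc[2.0]
print(risk_2y.describe())
```

For the IECV step, the same model would be refitted with one geographical region held out at a time, calibration and discrimination computed in the held-out region, and the region-level estimates pooled with random-effects meta-analysis.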


BioDrugs ◽  
2021 ◽  
Author(s):  
Pavine L. C. Lefevre ◽  
Parambir S. Dulai ◽  
Zhongya Wang ◽  
Leonardo Guizzetti ◽  
Brian G. Feagan ◽  
...  

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Sharmala Thuraisingam ◽  
Patty Chondros ◽  
Michelle M. Dowsey ◽  
Tim Spelman ◽  
Stephanie Garies ◽  
...  

Background The use of general practice electronic health records (EHRs) for research purposes is in its infancy in Australia. Given that these data were collected for clinical purposes, questions remain about data quality and whether they are suitable for use in prediction model development. In this study, we assess the quality of data recorded in 201,462 patient EHRs from 483 Australian general practices to determine their usefulness in the development of a clinical prediction model for total knee replacement (TKR) surgery in patients with osteoarthritis (OA).

Methods Variables to be used in model development were assessed for completeness and plausibility. Accuracy for the outcome and the competing risk was assessed through record-level linkage with two gold standard national registries, the Australian Orthopaedic Association National Joint Replacement Registry (AOANJRR) and the National Death Index (NDI). The validity of the EHR data was tested by comparison with participant characteristics from the 2014–15 Australian National Health Survey (NHS).

Results There were substantial missing data for body mass index and weight gain between early adulthood and middle age. TKR and death were recorded with good accuracy; however, year of TKR, year of death, and side of TKR were poorly recorded. Patient characteristics recorded in the EHR were comparable to participant characteristics from the NHS, except for OA medication and metastatic solid tumour.

Conclusions In this study, data relating to the outcome, the competing risk, and two predictors were unfit for prediction model development. This study highlights the need for more accurate and complete recording of patient data within EHRs if these data are to be used to develop clinical prediction models. In the meantime, data linkage with other gold standard datasets and registries may help overcome some of the current data quality challenges in general practice EHRs when developing prediction models.
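The completeness and plausibility checks described here can be operationalized as a simple per-variable report; the pandas sketch below shows one way to do this. The column names and plausible ranges are hypothetical examples, not the study's variable definitions.

```python
# Minimal sketch of per-variable completeness and plausibility checks on an
# EHR extract. Column names and plausible ranges are illustrative only.
import pandas as pd

PLAUSIBLE_RANGES = {
    "bmi":    (10, 80),    # kg/m^2
    "weight": (30, 250),   # kg
    "age":    (18, 110),   # years
}

def data_quality_report(ehr: pd.DataFrame) -> pd.DataFrame:
    """Completeness and share of out-of-range (implausible) values per variable."""
    rows = []
    for col, (lo, hi) in PLAUSIBLE_RANGES.items():
        values = ehr[col]
        rows.append({
            "variable": col,
            "completeness": round(values.notna().mean(), 3),
            # NaNs compare as False, so missing values are not counted as implausible
            "implausible_fraction": round(((values < lo) | (values > hi)).mean(), 3),
        })
    return pd.DataFrame(rows)

# Toy extract: one implausible BMI and several missing values
ehr = pd.DataFrame({
    "bmi":    [27.1, None, 300.0, 31.4],
    "weight": [82.0, None, None, 95.5],
    "age":    [64, 71, 59, 66],
})
print(data_quality_report(ehr))
```

Outcome accuracy, by contrast, requires record-level linkage to an external reference such as a joint replacement registry, which cannot be checked from the EHR extract alone.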


2021 ◽  
Author(s):  
Cynthia Yang ◽  
Jan A. Kors ◽  
Solomon Ioannou ◽  
Luis H. John ◽  
Aniek F. Markus ◽  
...  

Objectives This systematic review aims to provide further insight into the conduct and reporting of clinical prediction model development and validation over time. We focus on assessing the reporting of the information necessary to enable external validation by other investigators.

Materials and Methods We searched Embase, Medline, Web of Science, the Cochrane Library, and Google Scholar to identify studies that developed one or more multivariable prognostic prediction models using electronic health record (EHR) data and were published in the period 2009-2019.

Results We identified 422 studies that developed a total of 579 clinical prediction models using EHR data. We observed a steep increase over the years in the number of developed models, while the percentage of models externally validated in the same paper remained at around 10%. Throughout 2009-2019, code lists were provided for fewer than 20% of the models, for both the target population and the outcome definitions. For about half of the models developed using regression analysis, the final model was not completely presented.

Discussion Overall, we observed limited improvement over time in the conduct and reporting of clinical prediction model development and validation. In particular, the prediction problem definition was often not clearly reported, and the final model was often not completely presented.

Conclusion Improvement in the reporting of the information necessary to enable external validation by other investigators is still urgently needed to increase the clinical adoption of developed models.

