Developing clinical prediction models when adhering to minimum sample size recommendations: The importance of quantifying bootstrap variability in tuning parameters and predictive performance

2021 ◽  
pp. 096228022110463
Author(s):  
Glen P Martin ◽  
Richard D Riley ◽  
Gary S Collins ◽  
Matthew Sperrin

Recent minimum sample size formulae (Riley et al.) for developing clinical prediction models help ensure that development datasets are of sufficient size to minimise overfitting. While these criteria are known to avoid excessive overfitting on average, the extent of variability in overfitting at recommended sample sizes is unknown. We investigated this through a simulation study and an empirical example, developing logistic regression clinical prediction models using unpenalised maximum likelihood estimation and various post-estimation shrinkage or penalisation methods. While the mean calibration slope was close to the ideal value of one for all methods, penalisation further reduced the level of overfitting, on average, compared to unpenalised methods. This came at the cost of higher variability in predictive performance for penalisation methods in external data. We recommend that penalisation methods be used in data that meet, or surpass, minimum sample size requirements to further mitigate overfitting, and that the variability in predictive performance and any tuning parameters should always be examined as part of the model development process, since this provides additional information over average (optimism-adjusted) performance alone. Lower variability would give reassurance that the developed clinical prediction model will perform well in new individuals from the same population as was used for model development.
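The kind of bootstrap check the abstract recommends can be sketched as follows. This is an illustrative Python example with simulated data and assumed settings (sample size, number of predictors, ridge penalisation tuned by cross-validation), not the authors' code: within each bootstrap resample of the development data, a penalised logistic regression is tuned, and the variability of the tuned penalty and of the calibration slope in a large external sample is then summarised.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV

rng = np.random.default_rng(0)
n, p = 500, 10                                   # assumed development sample size / predictors
X = rng.normal(size=(n, p))
beta = rng.normal(scale=0.3, size=p)
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta))))

# A large "external" sample from the same population, for assessing calibration
X_ext = rng.normal(size=(5000, p))
y_ext = rng.binomial(1, 1 / (1 + np.exp(-(X_ext @ beta))))

def calibration_slope(model, X_val, y_val):
    """Slope from refitting the outcome on the linear predictor (ideal value: 1)."""
    lp = model.decision_function(X_val).reshape(-1, 1)
    return LogisticRegression(C=1e6).fit(lp, y_val).coef_[0, 0]

Cs, slopes = [], []
for _ in range(30):                              # bootstrap resamples of the development data
    idx = rng.integers(0, n, n)
    m = LogisticRegressionCV(Cs=5, cv=5, max_iter=1000).fit(X[idx], y[idx])
    Cs.append(m.C_[0])                           # tuned penalty (inverse regularisation strength)
    slopes.append(calibration_slope(m, X_ext, y_ext))

print("C:     median %.3g, IQR %.3g-%.3g" % tuple(np.percentile(Cs, [50, 25, 75])))
print("slope: median %.2f, IQR %.2f-%.2f" % tuple(np.percentile(slopes, [50, 25, 75])))
```

A wide interquartile range for the tuned penalty or the slope would be exactly the kind of instability the abstract warns about, even when the average slope looks close to one.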

2018 ◽  
Vol 22 (66) ◽  
pp. 1-294 ◽  
Author(s):  
Rachel Archer ◽  
Emma Hock ◽  
Jean Hamilton ◽  
John Stevens ◽  
Munira Essat ◽  
...  

Background: Rheumatoid arthritis (RA) is a chronic, debilitating disease associated with reduced quality of life and substantial costs. It is unclear which tests and assessment tools allow the best assessment of prognosis in people with early RA and whether or not variables predict the response of patients to different drug treatments.
Objective: To systematically review evidence on the use of selected tests and assessment tools in patients with early RA (1) in the evaluation of a prognosis (review 1) and (2) as predictive markers of treatment response (review 2).
Data sources: Electronic databases (e.g. MEDLINE, EMBASE, The Cochrane Library, Web of Science Conference Proceedings; searched to September 2016), registers, key websites, hand-searching of reference lists of included studies and key systematic reviews, and contact with experts.
Study selection: Review 1 – primary studies on the development, external validation and impact of clinical prediction models for selected outcomes in adult early RA patients. Review 2 – primary studies on the interaction between selected baseline covariates and treatment (conventional and biological disease-modifying antirheumatic drugs) on salient outcomes in adult early RA patients.
Results: Review 1 – 22 model development studies and one combined model development/external validation study reporting 39 clinical prediction models were included. Five external validation studies evaluating eight clinical prediction models for radiographic joint damage were also included. c-statistics from internal validation ranged from 0.63 to 0.87 for radiographic progression (different definitions, six studies) and from 0.78 to 0.82 for the Health Assessment Questionnaire (HAQ). Predictive performance in external validations varied considerably. Three models [(1) Active controlled Study of Patients receiving Infliximab for the treatment of Rheumatoid arthritis of Early onset (ASPIRE) C-reactive protein (ASPIRE CRP), (2) ASPIRE erythrocyte sedimentation rate (ASPIRE ESR) and (3) Behandelings Strategie (BeSt)] were externally validated using the same outcome definition in more than one population. Results of the random-effects meta-analysis suggested substantial uncertainty in the expected predictive performance of models in a new sample of patients. Review 2 – 12 studies were identified. Covariates examined included anti-citrullinated protein/peptide antibody (ACPA) status, smoking status, erosions, rheumatoid factor status, C-reactive protein level, erythrocyte sedimentation rate, swollen joint count (SJC), body mass index and vascularity of synovium on power Doppler ultrasound (PDUS). Outcomes examined included erosions/radiographic progression, disease activity, physical function and Disease Activity Score-28 remission. There was statistical evidence to suggest that ACPA status, SJC and PDUS status at baseline may be treatment effect modifiers, but not necessarily that they are prognostic of response for all treatments. Most of the results were subject to considerable uncertainty and were not statistically significant.
Limitations: The meta-analysis in review 1 was limited by the availability of only a small number of external validation studies. Studies rarely investigated the interaction between predictors and treatment.
Suggested research priorities: Collaborative research (including the use of individual participant data) is needed to further develop and externally validate the clinical prediction models. The clinical prediction models should be validated with respect to individual treatments. Future assessments of treatment by covariate interactions should follow good statistical practice.
Conclusions: Review 1 – uncertainty remains over the optimal prediction model(s) for use in clinical practice. Review 2 – in general, there was insufficient evidence that the effect of treatment depended on baseline characteristics.
Study registration: This study is registered as PROSPERO CRD42016042402.
Funding: The National Institute for Health Research Health Technology Assessment programme.


2021 ◽  
Author(s):  
Richard D. Riley ◽  
Thomas P. A. Debray ◽  
Gary S. Collins ◽  
Lucinda Archer ◽  
Joie Ensor ◽  
...  

Neurosurgery ◽  
2019 ◽  
Vol 85 (3) ◽  
pp. 302-311 ◽  
Author(s):  
Hendrik-Jan Mijderwijk ◽  
Ewout W Steyerberg ◽  
Hans-Jakob Steiger ◽  
Igor Fischer ◽  
Marcel A Kamp

Clinical prediction models in neurosurgery are increasingly reported. These models aim to provide an evidence-based approach to the estimation of the probability of a neurosurgical outcome by combining two or more prognostic variables. Model development and model reporting are often suboptimal. A basic understanding of the methodology of clinical prediction modeling is needed when interpreting these models. We address basic statistical background, seven modeling steps, and requirements of these models so that they may fulfill their potential for major impact on daily clinical practice and on future scientific work.
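As a minimal illustration of what such a model does (the predictors, coefficients, and data below are hypothetical, not taken from the article), two prognostic variables can be combined into an outcome probability with logistic regression:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300
age = rng.normal(60, 10, n)                 # hypothetical prognostic variable 1
tumour_size = rng.normal(30, 8, n)          # hypothetical prognostic variable 2
lp = -10 + 0.08 * age + 0.12 * tumour_size  # assumed "true" linear predictor
outcome = rng.binomial(1, 1 / (1 + np.exp(-lp)))

# Fit the prediction model on the two combined prognostic variables
X = np.column_stack([age, tumour_size])
model = LogisticRegression(max_iter=1000).fit(X, outcome)

# Estimated probability of the outcome for a new patient (age 65, size 35 mm)
p_new = model.predict_proba([[65.0, 35.0]])[0, 1]
print(f"predicted risk: {p_new:.2f}")
```

The modeling steps the authors describe (specifying predictors, estimating coefficients, validating, reporting) all operate around this basic structure.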


2020 ◽  
Vol 40 (1) ◽  
pp. 133-146 ◽  
Author(s):  
Lucinda Archer ◽  
Kym I. E. Snell ◽  
Joie Ensor ◽  
Mohammed T. Hudda ◽  
Gary S. Collins ◽  
...  

2021 ◽  
Author(s):  
Steven J. Staffa ◽  
David Zurakowski

Summary: Clinical prediction models in anesthesia and surgery research have many clinical applications, including preoperative risk stratification with implications for clinical decision-making, resource utilization, and costs. It is imperative that predictive algorithms and multivariable models are validated in a suitable and comprehensive way in order to establish the robustness of the model in terms of accuracy, predictive ability, reliability, and generalizability. The purpose of this article is to educate anesthesia researchers at an introductory level on important statistical concepts involved in the development and validation of multivariable prediction models for a binary outcome. Methods covered include assessments of discrimination and calibration through internal and external validation. An anesthesia research publication is examined to illustrate the process and presentation of multivariable prediction model development and validation for a binary outcome. Properly assessing the statistical and clinical validity of a multivariable prediction model is essential to ensure the generalizability and reproducibility of the published tool.
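A sketch of the validation metrics mentioned above, on simulated data with scikit-learn (the data, coefficients, and split are assumptions for illustration): discrimination is measured by the c-statistic (area under the ROC curve), and calibration by refitting the outcome on the model's linear predictor in held-out data, where the ideal slope is 1 and the ideal intercept is 0.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(800, 5))
true_beta = np.array([0.8, -0.5, 0.4, 0.0, 0.3])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_beta))))

# Split into development and held-out validation data (internal validation)
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

lp_val = model.decision_function(X_val)          # linear predictor in validation data
c_stat = roc_auc_score(y_val, lp_val)            # discrimination (0.5 = chance)

# Calibration: refit outcome on the linear predictor (joint fit is a simplification;
# the intercept is often estimated with the slope fixed at 1 instead)
recal = LogisticRegression(C=1e6, max_iter=1000).fit(lp_val.reshape(-1, 1), y_val)
slope, intercept = recal.coef_[0, 0], recal.intercept_[0]

print(f"c-statistic: {c_stat:.2f}  calibration slope: {slope:.2f}  intercept: {intercept:.2f}")
```

External validation follows the same recipe, but with the validation data drawn from a different setting or population rather than a random split of the development sample.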


2018 ◽  
Vol 28 (8) ◽  
pp. 2455-2474 ◽  
Author(s):  
Maarten van Smeden ◽  
Karel GM Moons ◽  
Joris AH de Groot ◽  
Gary S Collins ◽  
Douglas G Altman ◽  
...  

Binary logistic regression is one of the most frequently applied statistical approaches for developing clinical prediction models. Developers of such models often rely on an Events Per Variable (EPV) criterion, notably EPV ≥ 10, to determine the minimal sample size required and the maximum number of candidate predictors that can be examined. We present an extensive simulation study examining the influence of EPV, events fraction, number of candidate predictors, the correlations and distributions of candidate predictor variables, area under the ROC curve, and predictor effects on the out-of-sample predictive performance of prediction models. The out-of-sample performance (calibration, discrimination and probability prediction error) of developed prediction models was studied before and after regression shrinkage and variable selection. The results indicate that EPV does not have a strong relation with metrics of predictive performance, and is not an appropriate criterion for (binary) prediction model development studies. We show that out-of-sample predictive performance can be better approximated by considering the number of predictors, the total sample size and the events fraction. We propose that the development of new sample size criteria for prediction models should be based on these three parameters, and provide suggestions for improving sample size determination.
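The distinction the abstract draws can be made concrete with a toy calculation (the numbers are invented for illustration): two study designs with identical EPV can differ greatly in the total sample size and events fraction that the authors argue actually drive out-of-sample performance.

```python
def design_summary(n, p, events):
    """Summarise a development dataset by the three quantities highlighted above."""
    return {
        "n": n,                       # total sample size
        "p": p,                       # number of candidate predictors
        "events_fraction": events / n,
        "EPV": events / p,            # the traditional events-per-variable criterion
    }

# Two hypothetical designs with the same EPV of 10 but very different
# sample sizes and events fractions -- EPV alone cannot distinguish them.
small = design_summary(n=200, p=5, events=50)
large = design_summary(n=2000, p=5, events=50)
print(small)   # events_fraction 0.25
print(large)   # events_fraction 0.025
```

Under an EPV ≥ 10 rule both designs look equally adequate, yet they imply very different amounts of information per predictor, which is why the authors propose criteria built on all three parameters instead.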

