A systematic review of the quality of clinical prediction models in in vitro fertilisation

2020 ◽  
Vol 35 (1) ◽  
pp. 100-116 ◽  
Author(s):  
M B Ratna ◽  
S Bhattacharya ◽  
B Abdulrahim ◽  
D J McLernon

Abstract STUDY QUESTION What are the best-quality clinical prediction models in IVF (including ICSI) treatment to inform clinicians and their patients of their chance of success?

SUMMARY ANSWER The review recommends the McLernon post-treatment model for predicting the cumulative chance of live birth over and up to six complete cycles of IVF.

WHAT IS KNOWN ALREADY Prediction models in IVF have not found widespread use in routine clinical practice. This could be due to their limited predictive accuracy and clinical utility. A previous systematic review of IVF prediction models, published a decade ago and never updated, neither assessed the methodological quality of existing models nor provided recommendations on the best-quality models for use in clinical practice.

STUDY DESIGN, SIZE, DURATION The electronic databases OVID MEDLINE, OVID EMBASE and the Cochrane Library were searched systematically for primary articles published from 1978 to January 2019 using search terms on the development and/or validation (internal and external) of models predicting pregnancy or live birth. No language or other restrictions were applied.

PARTICIPANTS/MATERIALS, SETTING, METHODS The PRISMA flowchart was used for the inclusion of studies after screening. All studies reporting on the development and/or validation of IVF prediction models were included. Articles reporting on women whose treatment involved donor eggs or sperm, or surrogacy, were excluded. The CHARMS checklist was used to extract data and critically appraise the methodological quality of the included articles. We evaluated model performance by assessing c-statistics and calibration plots, and assessed reporting quality by calculating the percentage of the 22 TRIPOD checklist items met in each study.

MAIN RESULTS AND THE ROLE OF CHANCE We identified 33 publications reporting on 35 prediction models. Seventeen articles had been published since the last systematic review. The quality of models has improved over time with regard to clinical relevance, methodological rigour and utility. TRIPOD scores for the included studies ranged from 29% to 95%, and the c-statistics of all externally validated studies ranged between 0.55 and 0.77. Most of the models predicted the chance of pregnancy/live birth for a single fresh cycle. Six models aimed to predict the chance of pregnancy/live birth per individual treatment cycle, and three predicted more clinically relevant outcomes such as cumulative pregnancy/live birth. The McLernon (pre- and post-treatment) models predict the cumulative chance of live birth over multiple complete cycles of IVF per woman, where a complete cycle includes all fresh and frozen embryo transfers from the same episode of ovarian stimulation. The McLernon models were developed using national UK data and had the highest TRIPOD scores, and the post-treatment model performed best on external validation.

LIMITATIONS, REASONS FOR CAUTION To assess the reporting quality of all included studies we used the TRIPOD checklist, but many of the earlier IVF prediction models were developed and validated before the TRIPOD reporting guideline was published in 2015. It should also be noted that two of the authors of this systematic review are authors of the McLernon model article. However, we feel we have conducted our review and made our recommendations using a fair and transparent systematic approach.
WIDER IMPLICATIONS OF THE FINDINGS This study provides a comprehensive picture of the evolving quality of IVF prediction models. Clinicians should use the most appropriate model to suit their patients' needs. We recommend the McLernon post-treatment model as a counselling tool to inform couples of their predicted chance of success over and up to six complete cycles. However, it requires further external validation to assess its applicability in countries with different IVF practices and policies.

STUDY FUNDING/COMPETING INTEREST(S) The study was funded by the Elphinstone Scholarship Scheme and the Assisted Reproduction Unit, University of Aberdeen. Both D.J.M. and S.B. are authors of the McLernon model article, and S.B. is Editor in Chief of Human Reproduction Open. They have completed and submitted the ICMJE forms for disclosure of potential conflicts of interest. The other co-authors have no conflicts of interest to declare.

REGISTRATION NUMBER N/A
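
Two quantitative measures recur throughout this review: the c-statistic (the area under the ROC curve for a binary outcome such as live birth) and the percentage of the 22 TRIPOD checklist items met. A minimal Python sketch with hypothetical numbers (not data from any included study) illustrates how both are computed:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical predicted probabilities of live birth and observed outcomes
predicted_prob = np.array([0.18, 0.35, 0.22, 0.41, 0.29, 0.15, 0.38, 0.27])
observed_live_birth = np.array([0, 1, 0, 1, 0, 0, 1, 1])

# c-statistic: probability that a randomly chosen woman with a live birth
# received a higher predicted probability than one without (0.5 = chance, 1.0 = perfect)
c_statistic = roc_auc_score(observed_live_birth, predicted_prob)
print(f"c-statistic: {c_statistic:.2f}")

# TRIPOD reporting score: share of the 22 checklist items met (hypothetical count)
items_met = 19
print(f"TRIPOD score: {100 * items_met / 22:.0f}%")
```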

2020 ◽  
Author(s):  
Fernanda Gonçalves Silva ◽  
Leonardo Oliveira Pena Costa ◽  
Mark J Hancock ◽  
Gabriele Alves Palomo ◽  
Luciola da Cunha Menezes Costa ◽  
...  

Abstract Background: The prognosis of acute low back pain is generally favourable in terms of pain and disability; however, outcomes vary substantially between individual patients. Clinical prediction models help in estimating the likelihood of an outcome at a certain time point, and several existing models focus on prognosis for patients with low back pain. To date, only one previous systematic review has summarised the discrimination of validated clinical prediction models for prognosis in patients with low back pain of less than 3 months duration. The aim of this systematic review is to identify existing developed and/or validated clinical prediction models for the prognosis of patients with low back pain of less than 3 months duration, and to summarise their performance in terms of discrimination and calibration.

Methods: The MEDLINE, Embase and CINAHL databases will be searched from their inception until January 2020. Eligibility criteria will be: (1) prognostic model development studies with or without external validation, or prognostic external validation studies with or without model updating; (2) adults aged 18 or over with ‘recent onset’ low back pain (i.e. less than 3 months duration), with or without leg pain; (3) outcomes of pain, disability, sick leave or days absent from work, return to work status, and self-reported recovery; and (4) a follow-up of at least 12 weeks. The risk of bias of the included studies will be assessed with the Prediction model Risk Of Bias ASsessment Tool, and the overall quality of evidence will be rated using the Hierarchy of Evidence for Clinical Prediction Rules.

Discussion: This systematic review will identify, appraise and summarise evidence on the performance of existing prediction models for the prognosis of low back pain, and may help clinicians choose the most appropriate prediction model to better inform patients about their likely prognosis.

Systematic review registration: PROSPERO reference number CRD42020160988
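
For readers unfamiliar with the two performance dimensions this protocol plans to summarise, the sketch below uses simulated data (not taken from any study in the review) to show how discrimination (c-statistic) and calibration (intercept and slope of a logistic recalibration on the model's linear predictor) are typically assessed when externally validating a prognostic model for a binary outcome such as recovery at 12 weeks:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical linear predictor from a previously developed prognostic model
linear_predictor = rng.normal(0.0, 1.0, size=500)
# Simulated outcomes whose true intercept/slope differ from the model's (i.e. miscalibration)
true_prob = 1 / (1 + np.exp(-(-0.3 + 0.8 * linear_predictor)))
outcome = rng.binomial(1, true_prob)

# Discrimination: c-statistic of the linear predictor against the observed outcome
print("c-statistic:", round(roc_auc_score(outcome, linear_predictor), 2))

# Calibration: logistic regression of the outcome on the linear predictor;
# an ideal model has intercept ~0 and slope ~1 (C=1e9 keeps the fit effectively unpenalised)
recal = LogisticRegression(C=1e9, max_iter=1000).fit(linear_predictor.reshape(-1, 1), outcome)
print("calibration intercept:", round(float(recal.intercept_[0]), 2))
print("calibration slope:", round(float(recal.coef_[0][0]), 2))
```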


2021 ◽  
pp. postgradmedj-2020-139352
Author(s):  
Simon Allan ◽  
Raphael Olaiya ◽  
Rasan Burhan

Cardiovascular disease (CVD) is one of the leading causes of death worldwide. CVD can lead to angina, heart attacks, heart failure, strokes and, among many other serious conditions, eventually death. Early intervention in those at higher risk of developing CVD, typically with statin treatment, leads to better health outcomes. For this reason, clinical prediction models (CPMs) have been developed to identify people at high risk of developing CVD so that treatment can begin at an earlier stage. Currently, CPMs are built around statistical analysis of factors linked to developing CVD, such as body mass index and family history. The emerging field of machine learning (ML) in healthcare, which uses computer algorithms that learn from a dataset without explicit programming, has the potential to outperform the CPMs available today. ML has already shown exciting progress in the detection of skin malignancies, bone fractures and many other medical conditions. In this review, we analyse and explain the CPMs currently in use and compare them with their emerging ML counterparts. We found that although the newest non-ML CPMs are effective, ML-based approaches consistently outperform them. However, further improvements to the evidence base are needed before ML models should replace current CPMs.
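
As a purely illustrative sketch of the contrast this review draws, the Python snippet below fits a conventional logistic-regression CPM and an ML counterpart (gradient-boosted trees) on synthetic data and compares their discrimination; the features are hypothetical stand-ins for risk factors such as BMI or family history, not any published CVD score:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for risk factors (e.g. age, BMI, blood pressure, family history)
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Conventional CPM: logistic regression on the risk factors
cpm = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# ML counterpart: gradient-boosted decision trees on the same features
ml = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

for name, model in [("logistic-regression CPM", cpm), ("gradient boosting (ML)", ml)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```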


Endocrine ◽  
2021 ◽  
Author(s):  
Olivier Zanier ◽  
Matteo Zoli ◽  
Victor E. Staartjes ◽  
Federica Guaraldi ◽  
Sofia Asioli ◽  
...  

Abstract Purpose Biochemical remission (BR), gross total resection (GTR), and intraoperative cerebrospinal fluid (CSF) leaks are important metrics in transsphenoidal surgery for acromegaly, and prediction of their likelihood using machine learning would be clinically advantageous. We aim to develop and externally validate clinical prediction models for outcomes after transsphenoidal surgery for acromegaly.

Methods Using data from two registries, we develop and externally validate machine learning models for GTR, BR, and CSF leaks after endoscopic transsphenoidal surgery in acromegalic patients. A registry from Bologna, Italy, was used for model development, and external validation was performed using data from Zurich, Switzerland. Gender, age, prior surgery, and Hardy and Knosp classifications were used as input features. Discrimination and calibration metrics were assessed.

Results The derivation cohort consisted of 307 patients (43.3% male; mean [SD] age, 47.2 [12.7] years). GTR was achieved in 226 (73.6%) and BR in 245 (79.8%) patients. In the external validation cohort of 46 patients, 31 (75.6%) achieved GTR and 31 (77.5%) achieved BR. The area under the curve (AUC) at external validation was 0.75 (95% confidence interval: 0.59–0.88) for GTR, 0.63 (0.40–0.82) for BR, and 0.77 (0.62–0.91) for intraoperative CSF leaks. While prior surgery was the most important variable for the prediction of GTR, age and Hardy grading contributed most to the predictions of BR and CSF leaks, respectively.

Conclusions Gross total resection, biochemical remission, and CSF leaks remain hard to predict, but machine learning offers potential for helping to tailor surgical therapy. We demonstrate the feasibility of developing and externally validating clinical prediction models for these outcomes after surgery for acromegaly and lay the groundwork for the development of a multicenter model with more robust generalization.
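
The external-validation AUCs above are reported with 95% confidence intervals; a common way to obtain such intervals in a small validation cohort is the bootstrap. The following sketch uses simulated data (not the Bologna or Zurich registries) to illustrate the approach:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical external cohort of 46 patients: predicted probabilities and observed outcomes
n = 46
predicted = rng.uniform(0.2, 0.9, size=n)
observed = rng.binomial(1, predicted)  # simulated, roughly calibrated outcomes

auc = roc_auc_score(observed, predicted)

# Non-parametric bootstrap of the AUC
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    if observed[idx].min() == observed[idx].max():  # skip resamples with a single class
        continue
    boot.append(roc_auc_score(observed[idx], predicted[idx]))

lower, upper = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```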


2021 ◽  
Author(s):  
Thomas Stojanov ◽  
Linda Modler ◽  
Andreas M. Müller ◽  
Soheila Aghlmandi ◽  
Christian Appenzeller-Herzog ◽  
...  

Abstract Background Post-operative shoulder stiffness (POSS) is one of the most frequent complications after arthroscopic rotator cuff repair (ARCR). The factors included in clinical prediction models for the occurrence of POSS should rely on the literature and expert assessment. Our objective was to map prognostic factors for the occurrence of POSS in patients after an ARCR.

Methods Longitudinal studies of ARCR reporting prognostic factors for the occurrence of POSS with an endpoint of at least 6 months were included. We systematically searched Embase, Medline, and Scopus for articles published between January 1, 2014 and February 12, 2020, and screened the cited and citing literature of eligible records and identified reviews. The risk of bias of included studies and the quality of evidence were assessed using the Quality in Prognosis Studies tool and an adapted Grading of Recommendations, Assessment, Development and Evaluations framework. A database was implemented to report the results of individual studies. The review was registered on PROSPERO (CRD42020199257).

Results Seven cohort studies including 23 257 patients were included after screening 5013 records. POSS prevalence ranged from 0.51% to 8.75%, with endpoints ranging from 6 to 24 months. Due to the scarcity of data, no meta-analysis could be performed. The overall risk of bias was deemed high, and the quality of evidence low or very low. Twenty-two potential prognostic factors were identified. Increased age and male sex emerged as protective factors against POSS. Additional factors were reported but require further analysis to determine their prognostic value.

Discussion The available evidence pointed to male sex and increased age as probable protective factors against POSS after ARCR. To establish a reliable pre-specified set of factors for clinical prediction models, our review results need to be complemented by expert opinion.


Circulation ◽  
2018 ◽  
Vol 138 (Suppl_1) ◽  
Author(s):  
Jenica N Upshaw ◽  
Jason Nelson ◽  
Benjamin Wessler ◽  
Benjamin Koethe ◽  
Christine Lundquist ◽  
...  

Introduction: Most heart failure (HF) clinical prediction models (CPMs) have not been independently externally validated. We sought to test the performance of HF models in a diverse population using a systematic approach.

Methods: A systematic review identified CPMs predicting outcomes for patients with HF. Individual patient data from 5 large publicly available clinical trials enrolling patients with chronic HF were matched to published CPMs based on similarity in populations and the availability of outcome and predictor variables in the clinical trial databases. CPM performance was evaluated for discrimination (c-statistic, % relative change in c-statistic) and calibration (Harrell's E and E90, the mean and the 90% quantile of the error distribution from the smoothed loess observed values) for the original and recalibrated models.

Results: Out of 135 HF CPMs reviewed, we identified 45 CPM-trial pairs including 13 unique CPMs. The outcome was mortality for all of the models with a trial match. In the external validations, the median c-statistic was 0.595 (IQR 0.563 to 0.630), with a median relative decrease in the c-statistic of -57% (IQR -49% to -71%) compared with the c-statistic reported in the derivation cohort. Overall, the median Harrell's E was 0.09 (IQR 0.04 to 0.135) and E90 was 0.11 (IQR 0.07 to 0.21). Recalibration led to substantially improved calibration, with a median change in Harrell's E of -35% (IQR 0 to -75%) when updating the intercept alone and -56% (IQR -17% to -75%) when updating both intercept and slope. Refitting the model covariates improved the median c-statistic by 38% to 0.629 (IQR 0.613 to 0.649).

Conclusion: For HF CPMs, independent external validations demonstrate that CPMs perform significantly worse than originally presented, although with significant heterogeneity. Recalibration of the intercept and slope improved model calibration. These results underscore the need to carefully consider the derivation cohort characteristics when using published CPMs.
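
The recalibration step described in the results, re-estimating the intercept (calibration-in-the-large) and, optionally, the slope while keeping the published linear predictor fixed, can be illustrated with a short sketch on simulated data (not the trial data used in this abstract); a logistic model is used here as a simple analogue of the survival models in the review:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical linear predictor from the original (published) model
lp = rng.normal(0.0, 1.0, size=1000)
# Simulated outcomes in the new cohort; the true intercept/slope differ from the original model's
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.7 * lp))))

# (1) Intercept-only update: slope fixed at 1, the linear predictor enters as an offset
def nll_intercept(a):
    p = 1 / (1 + np.exp(-(a[0] + lp)))
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

new_intercept = minimize(nll_intercept, x0=[0.0]).x[0]

# (2) Intercept + slope update: ordinary logistic regression on the linear predictor
# (C=1e9 keeps the fit effectively unpenalised)
recal = LogisticRegression(C=1e9, max_iter=1000).fit(lp.reshape(-1, 1), y)

print(f"re-estimated intercept (slope fixed at 1): {new_intercept:.2f}")
print(f"re-estimated intercept and slope: {recal.intercept_[0]:.2f}, {recal.coef_[0][0]:.2f}")
```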

