P–419 On prognosis after unexplained recurrent pregnancy losses (RPL); a systematic review and external validation of clinical prediction models

2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
A Youssef

Abstract Study question Which models that predict pregnancy outcome in couples with unexplained RPL exist and what is the performance of the most used model? Summary answer We identified seven prediction models; none followed the recommended prediction model development steps. Moreover, the most used model showed poor predictive performance. What is known already RPL remains unexplained in 50–75% of couples. For these couples, there is no effective treatment option and clinical management rests on supportive care. An essential part of supportive care is counselling on the prognosis of subsequent pregnancies. Multiple prediction models exist; however, their quality and validity vary. In addition, the prediction model developed by Brigham et al. is the most widely used, but it has never been externally validated. Study design, size, duration We performed a systematic review to identify prediction models for pregnancy outcome after unexplained RPL. In addition, we externally validated the Brigham model in a retrospective cohort of 668 couples with unexplained RPL who visited our RPL clinic between 2004 and 2019. Participants/materials, setting, methods A systematic search was performed in December 2020 in Pubmed, Embase, Web of Science and the Cochrane Library to identify relevant studies. Eligible studies were selected and assessed according to the TRIPOD guidelines, covering topics such as model performance and validation. The performance of the Brigham model in predicting live birth was evaluated through calibration and discrimination, in which the observed pregnancy rates were compared to the predicted pregnancy rates. Main results and the role of chance Seven models were compared and assessed according to the TRIPOD statement. This resulted in two studies of low, three of moderate and two of above average reporting quality. These studies did not follow the recommended steps for model development and did not calculate a sample size. Furthermore, none of these models was internally or externally validated. We performed an external validation of the Brigham model. Calibration showed overestimation by the model and too extreme predictions, with a negative calibration intercept of –0.52 (95% CI –0.68 to –0.36) and a calibration slope of 0.39 (95% CI 0.07 to 0.71). The discriminative ability of the model was very low, with a concordance statistic of 0.55 (95% CI 0.50 to 0.59). Limitations, reasons for caution None of the studies were specifically named as prediction models; therefore, models may have been missed in the selection process. The external validation cohort used a retrospective design, in which only the first pregnancy after intake was registered. Follow-up time was not limited, which is important when counselling couples with unexplained RPL. Wider implications of the findings: Currently, there are no suitable models that predict pregnancy outcome after RPL. Moreover, a model is needed with several variables so that prognosis can be individualized, incorporating factors from both the female and the male partner to enable a couple-specific prognosis. Trial registration number Not applicable
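
For readers less familiar with the validation metrics reported above, the sketch below shows, on simulated data, how a calibration intercept, calibration slope and concordance (c) statistic are typically computed when externally validating a binary-outcome prediction model. It is an illustrative outline only, not the authors' analysis; the predicted probabilities, outcomes and the 0.8 miscalibration factor are invented placeholders.

```python
# Illustrative sketch (not the study's code): calibration intercept, calibration
# slope and c statistic for an external validation of a binary-outcome model.
# All data below are simulated placeholders.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 668                                   # validation cohort size in the abstract
p_pred = rng.uniform(0.3, 0.9, size=n)    # hypothetical predicted live-birth probabilities
y = rng.binomial(1, p_pred * 0.8)         # simulated outcomes (the model over-predicts)

# Linear predictor (log-odds) of the predictions under evaluation
lp = np.log(p_pred / (1 - p_pred))

# Calibration slope: logistic regression of the outcome on the linear predictor
slope_fit = sm.Logit(y, sm.add_constant(lp)).fit(disp=0)
cal_slope = slope_fit.params[1]

# Calibration intercept (calibration-in-the-large): intercept-only model with the
# linear predictor supplied as an offset
int_fit = sm.GLM(y, np.ones((n, 1)), family=sm.families.Binomial(), offset=lp).fit()
cal_intercept = int_fit.params[0]

# Discrimination: the c statistic equals the area under the ROC curve
c_stat = roc_auc_score(y, p_pred)

print(f"calibration intercept {cal_intercept:.2f}, slope {cal_slope:.2f}, c {c_stat:.2f}")
```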

2021 ◽  
Author(s):  
Wei-Ju Chang ◽  
Justine Naylor ◽  
Pragadesh Natarajan ◽  
Spiro Menounos ◽  
Masiath Monuja ◽  
...  

Abstract Background Prediction models for poor patient-reported surgical outcomes after total hip replacement (THR) and total knee replacement (TKR) may provide a method for improving appropriate surgical care for hip and knee osteoarthritis. There are concerns about methodological issues and the risk of bias of studies producing prediction models. A critical evaluation of the methodological quality of prediction modelling studies in THR and TKR is needed to ensure their clinical usefulness. This systematic review aims to: 1) evaluate and report the quality of risk stratification and prediction modelling studies that predict patient-reported outcomes after THR and TKR; 2) identify areas of methodological deficit and provide recommendations for future research; and 3) synthesise the evidence on prediction models associated with post-operative patient-reported outcomes after THR and TKR surgeries. Methods MEDLINE, EMBASE and CINAHL electronic databases will be searched to identify relevant studies. Title/abstract and full-text screening will be performed by two independent reviewers. We will include: 1) prediction model development studies without external validation; 2) prediction model development studies with external validation using independent data; 3) external model validation studies; and 4) studies updating a previously developed prediction model. Data extraction spreadsheets will be developed based on the CHARMS checklist and TRIPOD statement and piloted on two relevant studies. Study quality and risk of bias will be assessed using the PROBAST tool. Prediction models will be summarised qualitatively. Meta-analyses on the predictive performance of included models will be conducted if appropriate. Discussion This systematic review will evaluate the methodological quality and usefulness of prediction models for poor outcomes after THR or TKR. This information is essential to provide evidence-based healthcare for end-stage hip and knee osteoarthritis. Findings of this review will contribute to the identification of key areas for improvement in conducting prognostic research in this field and facilitate progress in evidence-based, tailored treatments for hip and knee osteoarthritis. Systematic review registration: Submitted to PROSPERO on 30 August 2021.


2021 ◽  
Author(s):  
Cynthia Yang ◽  
Jan A. Kors ◽  
Solomon Ioannou ◽  
Luis H. John ◽  
Aniek F. Markus ◽  
...  

Objectives This systematic review aims to provide further insights into the conduct and reporting of clinical prediction model development and validation over time. We focus on assessing the reporting of information necessary to enable external validation by other investigators. Materials and Methods We searched Embase, Medline, Web of Science, the Cochrane Library and Google Scholar to identify studies that developed one or more multivariable prognostic prediction models using electronic health record (EHR) data published between 2009 and 2019. Results We identified 422 studies that developed a total of 579 clinical prediction models using EHR data. We observed a steep increase over the years in the number of developed models. The percentage of models externally validated in the same paper remained at around 10%. Throughout 2009-2019, for both the target population and the outcome definitions, code lists were provided for less than 20% of the models. For about half of the models that were developed using regression analysis, the final model was not completely presented. Discussion Overall, we observed limited improvement over time in the conduct and reporting of clinical prediction model development and validation. In particular, the prediction problem definition was often not clearly reported, and the final model was often not completely presented. Conclusion Improvement in the reporting of information necessary to enable external validation by other investigators is still urgently needed to increase clinical adoption of developed models.
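
One reason incomplete presentation of the final model matters, as noted above, is that external validation by other investigators requires the published intercept and coefficients. The sketch below, using invented coefficient values and a hypothetical predicted_risk helper, illustrates how a fully reported logistic regression model lets a reader reproduce predicted risks on new patients.

```python
# Minimal sketch of why complete model reporting matters: with a published intercept
# and coefficients (the values below are invented for illustration), anyone can
# recompute predicted risks and attempt an external validation.
import math

published_model = {
    "intercept": -4.2,          # hypothetical published values
    "age_per_year": 0.05,
    "diabetes": 0.8,
    "ln_creatinine": 1.1,
}

def predicted_risk(age, diabetes, creatinine):
    """Predicted probability from the reported logistic model (illustrative)."""
    lp = (published_model["intercept"]
          + published_model["age_per_year"] * age
          + published_model["diabetes"] * int(diabetes)
          + published_model["ln_creatinine"] * math.log(creatinine))
    return 1.0 / (1.0 + math.exp(-lp))

# Example patient: 70 years old, diabetic, creatinine 1.3 mg/dL
print(f"predicted risk: {predicted_risk(70, True, 1.3):.2f}")
```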


2021 ◽  
Author(s):  
Xuecheng Zhang ◽  
Kehua Zhou ◽  
Jingjing Zhang ◽  
Ying Chen ◽  
Hengheng Dai ◽  
...  

Abstract Background Nearly a third of patients with acute heart failure (AHF) die or are readmitted within three months after discharge, accounting for the majority of costs associated with heart failure-related care. A considerable number of risk prediction models, which predict mortality and readmission outcomes, have been developed and validated for patients with AHF. These models could help clinicians stratify patients by risk level, improve decision making, and direct specialist care and resources to high-risk patients. However, clinicians are sometimes reluctant to use these models, possibly because of their poor reliability, the variety of available models, and/or the complexity of the statistical methodologies involved. Here, we describe a protocol to systematically review existing risk prediction models. We will describe their characteristics, compare their performance, and critically appraise their reporting transparency and methodological quality. Method Embase, Pubmed, Web of Science, and the Cochrane Library will be searched from their inception onwards. A backward citation search of derivation studies will be performed to find relevant external validation studies. Multivariable prognostic models predicting mortality and/or readmission in patients with AHF will be eligible for review. Two reviewers will independently conduct title and abstract screening, full-text review, and data extraction. Included models will be summarized qualitatively and quantitatively. We will also provide an overview of the critical appraisal of the methodological quality and reporting transparency of included studies using the Prediction model Risk of Bias Assessment Tool (PROBAST) and the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement. Discussion The results of this systematic review could help clinicians better understand and use prediction models for patients with AHF, as well as make standardized decisions about more precise, risk-adjusted management. Systematic review registration: PROSPERO registration number CRD42021256416.


2021 ◽  
Author(s):  
Jamie L. Miller ◽  
Masafumi Tada ◽  
Michihiko Goto ◽  
Nicholas Mohr ◽  
Sangil Lee

ABSTRACT Background Throughout 2020, coronavirus disease 2019 (COVID-19) has become a threat to public health at the national and global levels. There has been an immediate need for research to understand the clinical signs and symptoms of COVID-19 that can help predict deterioration, including mechanical ventilation, organ support, and death. Studies thus far have addressed the epidemiology of the disease, common presentations, and susceptibility to acquisition and transmission of the virus; however, an accurate prognostic model for severe manifestations of COVID-19 is still needed because of the limited healthcare resources available. Objective This systematic review aims to evaluate published reports of prediction models for severe illness caused by COVID-19. Methods Searches were developed by the primary author and a medical librarian using an iterative process of gathering and evaluating terms. Comprehensive strategies, including both index and keyword methods, were devised for PubMed and EMBASE. The data of confirmed COVID-19 patients from randomized controlled studies, cohort studies, and case-control studies published between January 2020 and July 2020 were retrieved. Studies were independently assessed for risk of bias and applicability using the Prediction Model Risk Of Bias Assessment Tool (PROBAST). We collected study type, setting, sample size, type of validation, and outcome, including intubation, ventilation, any other type of organ support, or death. The prediction models, scoring systems, predictive performance, and geographic locations were summarized. Results A primary review found 292 articles relevant based on title and abstract. After further review, 246 were excluded based on the defined inclusion and exclusion criteria. Forty-six articles were included in the qualitative analysis. Inter-observer agreement on inclusion was 0.86 (95% confidence interval: 0.79 - 0.93). When the PROBAST tool was applied, 44 of the 46 articles were identified as having high or unclear risk of bias, or high or unclear concern for applicability. Two studies reported prediction models, the 4C Mortality Score derived from hospital data and QCOVID derived from general population data in the UK, that were rated as having low risk of bias and low concern for applicability. Conclusion Several prognostic models are reported in the literature, but many of them had concerning risk of bias and applicability. For most of the studies, caution is needed before use, as many of them will require external validation before dissemination. However, the two articles found to have low risk of bias and low concern for applicability can be useful tools.


Pharmacy ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 64 ◽  
Author(s):  
Amanda Brady ◽  
Chris E. Curtis ◽  
Zahraa Jalal

In recent years, a number of studies have examined tools to identify elderly patients who are at increased risk of drug-related problems (DRPs). There has been interest in developing tools to prioritise patients for clinical pharmacist (CP) review. This systematic review (SR) aimed to identify published primary research in this area and critically evaluate the quality of prediction tools for identifying elderly patients at increased risk of DRPs and/or likely to need CP intervention. The PubMed, EMBASE, OVID HMIC, Cochrane Library, PsychInfo, CINAHL PLUS, Web of Science and ProQuest databases were searched. To capture recent research and citations, the reference lists of included articles were also searched to identify relevant studies. Eligible studies involved the development, utilisation and/or validation of a prediction tool. The protocol for this SR, CRD42019115673, was registered on PROSPERO. Data were extracted and systematically assessed for quality by considering the four key stages involved in accurate risk prediction models (development, validation, impact and implementation) and by following the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS). Nineteen studies met the inclusion criteria. Variations in study design, participant characteristics and outcomes made meta-analysis unsuitable. The tools varied in complexity. Most studies reported the sensitivity, specificity and/or discriminatory ability of the tool. Only four studies included external validation of the tool(s), namely of the BADRI model and the GerontoNet ADR Risk Score. The BADRI model demonstrated acceptable goodness of fit and good discrimination, whilst the GerontoNet ADR Risk Score showed poor reliability in external validation. None of the models met the four key stages required to create a quality risk prediction model. Further research is needed either to refine the tools developed to date or to develop new ones that perform well and have been externally validated before the potential impact and implementation of such tools are considered.


2017 ◽  
Vol 145 (9) ◽  
pp. 1738-1749 ◽  
Author(s):  
S. K. KUNUTSOR ◽  
M. R. WHITEHOUSE ◽  
A. W. BLOM ◽  
A. D. BESWICK

SUMMARY Accurate identification of individuals at high risk of surgical site infections (SSIs) or periprosthetic joint infections (PJIs) influences clinical decisions and the development of preventive strategies. We aimed to determine progress in the development and validation of risk prediction models for SSI or PJI using a systematic review. We searched MEDLINE, EMBASE, Web of Science and Cochrane databases, trial registers, and the reference lists of studies up to September 2016 for studies that developed or validated a risk prediction tool for SSI or PJI following joint replacement. Nine studies describing 16 risk scores for SSI or PJI were identified. The number of component variables in a risk score ranged from 4 to 45. The C-index ranged from 0·56 to 0·74, with only three risk scores reporting a discriminative ability of >0·70. Five risk scores were validated internally. The National Healthcare Safety Network SSI risk models for hip and knee arthroplasties (HPRO and KPRO) were the only scores to be externally validated. Except for HPRO, which shows some promise for use in a clinical setting (based on predictive performance and external validation), none of the identified risk scores can be considered ready for use. Further research is urgently warranted within the field.


2021 ◽  
Author(s):  
Pushpa Singh ◽  
Nicola J Adderley ◽  
Jonathan Hazlehurst ◽  
Malcolm Price ◽  
Abd A Tahrani ◽  
...  

Background Remission of type 2 diabetes following bariatric surgery is well established, but identifying patients who will go into remission is challenging. Purpose To perform a systematic review of currently available diabetes remission prediction models, compare their performance, and evaluate their applicability in clinical settings. Data sources A comprehensive systematic literature search of MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, EMBASE and the Cochrane Central Register of Controlled Trials was undertaken. The search was restricted to studies published in the last 15 years and in the English language. Study selection and data extraction All studies developing or validating a prediction model for diabetes remission in adults after bariatric surgery were included. The search identified 4165 references, of which 38 were included for data extraction. We identified 16 model development and 22 validation studies. Data synthesis Of the 16 model development studies, 11 developed scoring systems and 5 proposed logistic regression models. In the model development studies, 10 models showed excellent discrimination with an area under the curve (AUC) ≥ 0.800. Two of these prediction models, ABCD and DiaRem, were widely externally validated in different populations, across a variety of bariatric procedures, and for both short- and long-term diabetes remission. Newer prediction models showed excellent discrimination in test studies, but external validation was limited. Limitations and conclusions Amongst the prediction models identified, the ABCD and DiaRem models were the most widely validated and showed acceptable to excellent discrimination. More studies validating newer models and focusing on long-term diabetes remission are needed.


Author(s):  
Jianfeng Xie ◽  
Daniel Hungerford ◽  
Hui Chen ◽  
Simon T Abrams ◽  
Shusheng Li ◽  
...  

Summary Background The COVID-19 pandemic has developed rapidly and the ability to stratify the most vulnerable patients is vital. However, routinely used severity scoring systems are often low at the time of diagnosis, even in non-survivors. Therefore, clinical prediction models for mortality are urgently required. Methods We developed and internally validated a multivariable logistic regression model to predict inpatient mortality in COVID-19-positive patients using data collected retrospectively from Tongji Hospital, Wuhan (299 patients). External validation was conducted using a retrospective cohort from Jinyintan Hospital, Wuhan (145 patients). Nine variables commonly measured in these acute settings were considered for model development, including age, biomarkers and comorbidities. Backwards stepwise selection and bootstrap resampling were used for model development and internal validation. We assessed discrimination via the C statistic, and calibration using calibration-in-the-large, calibration slopes and plots. Findings The final model included age, lymphocyte count, lactate dehydrogenase and SpO2 as independent predictors of mortality. Discrimination of the model was excellent in both internal (c=0·89) and external (c=0·98) validation. Internal calibration was excellent (calibration slope=1). External validation showed some over-prediction of risk in low-risk individuals and under-prediction of risk in high-risk individuals prior to recalibration. Recalibration of the intercept and slope led to excellent performance of the model in independent data. Interpretation COVID-19 is a new disease and behaves differently from common critical illnesses. This study provides a new prediction model to identify patients with lethal COVID-19. Its practical reliance on commonly available parameters should improve the use of limited healthcare resources and patient survival rates. Funding This study was supported by the following funding: Key Research and Development Plan of Jiangsu Province (BE2018743 and BE2019749), National Institute for Health Research (NIHR) (PDF-2018-11-ST2-006), British Heart Foundation (BHF) (PG/16/65/32313) and Liverpool University Hospitals NHS Foundation Trust in the UK. Research in context Evidence before this study Since the outbreak of COVID-19, there has been a pressing need for the development of a prognostic tool that is easy for clinicians to use. Recently, a Lancet publication showed that in a cohort of 191 patients with COVID-19, age, SOFA score and D-dimer measurements were associated with mortality. No other publication involving prognostic factors or models has been identified to date. Added value of this study In our cohorts of 444 patients from two hospitals, SOFA scores were low in the majority of patients on admission. The relevance of D-dimer could not be verified, as it is not included in routine laboratory tests. In this study, we established a multivariable clinical prediction model using a development cohort of 299 patients from one hospital. After backwards selection, four variables (age, lymphocyte count, lactate dehydrogenase and SpO2) remained in the model to predict mortality. The model was validated internally and externally with a cohort of 145 patients from a different hospital. Discrimination of the model was excellent in both internal (c=0·89) and external (c=0·98) validation. Calibration plots showed excellent agreement between predicted and observed probabilities of mortality after recalibration of the model to account for underlying differences in the risk profiles of the datasets. This demonstrates that the model is able to make reliable predictions in patients from different hospitals. In addition, these variables agree with pathological mechanisms and the model is easy to use in all types of clinical settings. Implications of all the available evidence After further external validation in different countries, the model will enable better risk stratification and more targeted management of patients with COVID-19. With the nomogram, this model, which is based on readily available parameters, can help clinicians to stratify COVID-19 patients at diagnosis in order to use limited healthcare resources effectively and improve patient outcomes.
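
The recalibration step described above (re-estimating the intercept and slope on the external cohort while keeping the original model's linear predictor) can be sketched generically as follows. This is not the study's code; the cohort, linear predictors and outcomes are simulated for illustration.

```python
# Generic sketch of logistic recalibration on an external cohort: only a new
# intercept and slope are fitted, and the original linear predictor is reused.
# All data below are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_ext = 145                                  # external cohort size in the abstract
lp_orig = rng.normal(-1.0, 1.5, size=n_ext)  # linear predictor from the original model
true_p = 1 / (1 + np.exp(-(0.4 + 0.7 * lp_orig)))   # external data are miscalibrated
y_ext = rng.binomial(1, true_p)

# Fit outcome ~ a + b * lp_orig on the external cohort
recal = sm.Logit(y_ext, sm.add_constant(lp_orig)).fit(disp=0)
a, b = recal.params

def recalibrated_prob(lp):
    """Recalibrated predicted probability for a patient with linear predictor lp."""
    return 1 / (1 + np.exp(-(a + b * lp)))

print(f"recalibrated intercept {a:.2f}, slope {b:.2f}")
print(f"example: lp = 0 -> recalibrated risk {recalibrated_prob(0.0):.2f}")
```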


2021 ◽  
Author(s):  
Steven J. Staffa ◽  
David Zurakowski

Summary Clinical prediction models in anesthesia and surgery research have many clinical applications, including preoperative risk stratification, with implications for decision-making, resource utilization, and costs. It is imperative that predictive algorithms and multivariable models are validated in a suitable and comprehensive way in order to establish the robustness of the model in terms of accuracy, predictive ability, reliability, and generalizability. The purpose of this article is to educate anesthesia researchers at an introductory level on important statistical concepts involved in the development and validation of multivariable prediction models for a binary outcome. Methods covered include assessments of discrimination and calibration through internal and external validation. An anesthesia research publication is examined to illustrate the process and presentation of multivariable prediction model development and validation for a binary outcome. Properly assessing the statistical and clinical validity of a multivariable prediction model is essential for ensuring the generalizability and reproducibility of the published tool.
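
As an illustration of the internal validation methods such an article covers, the sketch below applies one common approach, bootstrap optimism correction of the c statistic, to a simulated binary-outcome model. It is a generic example under assumed data, not code from the cited publication.

```python
# Sketch of bootstrap optimism correction of the c statistic, a common form of
# internal validation for a binary-outcome model (simulated data; illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n, p = 400, 5
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))

model = LogisticRegression().fit(X, y)
apparent_c = roc_auc_score(y, model.predict_proba(X)[:, 1])

optimism = []
for _ in range(200):                         # bootstrap replicates
    idx = rng.integers(0, n, size=n)         # resample the cohort with replacement
    boot_model = LogisticRegression().fit(X[idx], y[idx])
    c_boot = roc_auc_score(y[idx], boot_model.predict_proba(X[idx])[:, 1])
    c_orig = roc_auc_score(y, boot_model.predict_proba(X)[:, 1])
    optimism.append(c_boot - c_orig)

corrected_c = apparent_c - np.mean(optimism)
print(f"apparent c {apparent_c:.3f}, optimism-corrected c {corrected_c:.3f}")
```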


BMJ Open ◽  
2019 ◽  
Vol 9 (8) ◽  
pp. e025579 ◽  
Author(s):  
Mohammad Ziaul Islam Chowdhury ◽  
Fahmida Yeasmin ◽  
Doreen M Rabi ◽  
Paul E Ronksley ◽  
Tanvir C Turin

Objective Stroke is a major cause of disability and death worldwide. People with diabetes are at a twofold to fivefold increased risk of stroke compared with people without diabetes. This study systematically reviews the literature on available stroke prediction models specifically developed or validated in patients with diabetes and assesses their predictive performance through meta-analysis. Design Systematic review and meta-analysis. Data sources A detailed search was performed in MEDLINE, PubMed and EMBASE (from inception to 22 April 2019) to identify studies describing stroke prediction models. Eligibility criteria All studies that developed stroke prediction models in populations with diabetes were included. Data extraction and synthesis Two reviewers independently identified eligible articles and extracted data. Random effects meta-analysis was used to obtain a pooled C-statistic. Results Our search retrieved 26 202 relevant papers and finally yielded 38 stroke prediction models, of which 34 were specifically developed for patients with diabetes and 4 were developed in general populations but validated in patients with diabetes. Among the models developed in those with diabetes, 9 reported stroke as their outcome, 23 reported a composite cardiovascular disease (CVD) outcome of which stroke was a component, and 2 did not initially report stroke as their outcome but were later validated for stroke as the outcome in other studies. C-statistics varied from 0.60 to 0.92, with a median C-statistic of 0.71 (for stroke as the outcome) and 0.70 (for stroke as part of a composite CVD outcome). Seventeen models were externally validated in diabetes populations, with a pooled C-statistic of 0.68. Conclusions Overall, the performance of these diabetes-specific stroke prediction models was not satisfactory. Research is needed to identify and incorporate new risk factors to improve the models' predictive ability, and further external validation of the existing models in diverse populations is needed to improve generalisability.
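
To illustrate how a pooled C-statistic such as the 0.68 reported above can be obtained, the sketch below performs a DerSimonian-Laird random-effects meta-analysis on the logit scale. The study-level C-statistics and standard errors are invented for illustration and are not the review's data.

```python
# Minimal DerSimonian-Laird random-effects pooling of c statistics on the logit
# scale (hypothetical example values, not the review's data).
import numpy as np

c = np.array([0.66, 0.70, 0.64, 0.72, 0.69])      # hypothetical study c statistics
se_c = np.array([0.02, 0.03, 0.025, 0.04, 0.03])  # hypothetical standard errors

# Transform to the logit scale (delta-method standard errors)
theta = np.log(c / (1 - c))
se_theta = se_c / (c * (1 - c))

# Fixed-effect weights and heterogeneity (Q) statistic
w = 1 / se_theta**2
theta_fe = np.sum(w * theta) / np.sum(w)
Q = np.sum(w * (theta - theta_fe) ** 2)
k = len(c)

# DerSimonian-Laird between-study variance
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled estimate, back-transformed to the c-statistic scale
w_re = 1 / (se_theta**2 + tau2)
theta_re = np.sum(w_re * theta) / np.sum(w_re)
pooled_c = 1 / (1 + np.exp(-theta_re))
print(f"pooled c statistic: {pooled_c:.2f}, tau^2 = {tau2:.3f}")
```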

