Prediction of transition to psychosis in patients with a clinical high risk for psychosis: a systematic review of methodology and reporting

2017 ◽  
Vol 47 (7) ◽  
pp. 1163-1178 ◽  
Author(s):  
E. Studerus ◽  
A. Ramyead ◽  
A. Riecher-Rössler

Background: To enhance indicated prevention in patients with a clinical high risk (CHR) for psychosis, recent research efforts have been increasingly directed towards estimating the risk of developing psychosis on an individual level using multivariable clinical prediction models. The aim of this study was to systematically review the methodological quality and reporting of studies developing or validating such models. Method: A systematic literature search was carried out (up to 14 March 2016) to find all studies that developed or validated a clinical prediction model predicting the transition to psychosis in CHR patients. Data were extracted using a comprehensive item list based on current methodological recommendations. Results: A total of 91 studies met the inclusion criteria. None of the retrieved studies performed a true external validation of an existing model. Only three studies (3.5%) had an events-per-variable ratio of at least 10, which is the recommended minimum to avoid overfitting. Internal validation was performed in only 14 studies (15%), and seven of these used biased internal validation strategies. Other frequently observed modeling approaches not recommended by methodologists included univariable screening of candidate predictors, stepwise variable selection, categorization of continuous variables, and poor handling and reporting of missing data. Conclusions: Our systematic review revealed that poor methods and reporting are widespread in prediction-of-psychosis research. Since most studies relied on small sample sizes, did not perform internal or external cross-validation, and used poor model development strategies, most published models are probably overfitted and their reported predictive accuracy is likely to be overoptimistic.
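
As a rough illustration of the events-per-variable (EPV) rule of thumb discussed in this abstract, the Python sketch below shows how the ratio is computed and how it caps the number of candidate predictors a development sample can support; all counts are hypothetical and are not figures from the reviewed studies.

```python
# Minimal sketch of the events-per-variable (EPV) rule of thumb.
# All numbers are hypothetical and only illustrate the calculation.

def events_per_variable(n_events: int, n_candidate_predictors: int) -> float:
    """EPV = number of outcome events / number of candidate predictor parameters."""
    return n_events / n_candidate_predictors

# Example: a hypothetical CHR cohort of 100 patients with a 25% transition rate
n_events = 25                 # observed transitions to psychosis
n_candidate_predictors = 8    # parameters considered for the model

epv = events_per_variable(n_events, n_candidate_predictors)
print(f"EPV = {epv:.1f}")     # 3.1 -> well below the conventional minimum of 10

# At the conventional minimum of EPV >= 10, this sample would support
# at most n_events / 10 = 2 candidate predictor parameters.
max_predictors = n_events // 10
print(f"Max candidate predictors at EPV >= 10: {max_predictors}")
```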

2020 ◽  
Vol 35 (1) ◽  
pp. 100-116 ◽  
Author(s):  
M B Ratna ◽  
S Bhattacharya ◽  
B Abdulrahim ◽  
D J McLernon

Abstract STUDY QUESTION What are the best-quality clinical prediction models in IVF (including ICSI) treatment to inform clinicians and their patients of their chance of success? SUMMARY ANSWER The review recommends the McLernon post-treatment model for predicting the cumulative chance of live birth over and up to six complete cycles of IVF. WHAT IS KNOWN ALREADY Prediction models in IVF have not found widespread use in routine clinical practice. This could be due to their limited predictive accuracy and clinical utility. A previous systematic review of IVF prediction models, published a decade ago and never updated, neither assessed the methodological quality of existing models nor provided recommendations on the best-quality models for use in clinical practice. STUDY DESIGN, SIZE, DURATION The electronic databases OVID MEDLINE, OVID EMBASE and the Cochrane Library were searched systematically for primary articles published from 1978 to January 2019, using search terms on the development and/or validation (internal and external) of models predicting pregnancy or live birth. No language or other restrictions were applied. PARTICIPANTS/MATERIALS, SETTING, METHODS The PRISMA flowchart was used for the inclusion of studies after screening. All studies reporting on the development and/or validation of IVF prediction models were included. Articles reporting on women whose treatment involved donor eggs or sperm, or surrogacy, were excluded. The CHARMS checklist was used to extract data and critically appraise the methodological quality of the included articles. We evaluated model performance by assessing the c-statistics and calibration plots reported in each study, and assessed reporting quality by calculating the percentage of the 22 TRIPOD checklist items met in each study. MAIN RESULTS AND THE ROLE OF CHANCE We identified 33 publications reporting on 35 prediction models. Seventeen articles had been published since the last systematic review. The quality of models has improved over time with regard to clinical relevance, methodological rigour and utility. The percentage of TRIPOD checklist items met in the included studies ranged from 29% to 95%, and the c-statistics of all externally validated models ranged between 0.55 and 0.77. Most of the models predicted the chance of pregnancy/live birth for a single fresh cycle. Six models aimed to predict the chance of pregnancy/live birth per individual treatment cycle, and three predicted more clinically relevant outcomes such as cumulative pregnancy/live birth. The McLernon (pre- and post-treatment) models predict the cumulative chance of live birth over multiple complete cycles of IVF per woman, where a complete cycle includes all fresh and frozen embryo transfers from the same episode of ovarian stimulation. The McLernon models were developed using national UK data and had the highest TRIPOD score, and the post-treatment model performed best on external validation. LIMITATIONS, REASONS FOR CAUTION To assess the reporting quality of all included studies we used the TRIPOD checklist, but many of the earlier IVF prediction models were developed and validated before the formal TRIPOD reporting guideline was published in 2015. It should also be noted that two of the authors of this systematic review are authors of the McLernon model article. However, we feel we have conducted our review and made our recommendations using a fair and transparent systematic approach.
WIDER IMPLICATIONS OF THE FINDINGS This study provides a comprehensive picture of the evolving quality of IVF prediction models. Clinicians should use the most appropriate model to suit their patients’ needs. We recommend the McLernon post-treatment model as a counselling tool to inform couples of their predicted chance of success over and up to six complete cycles. However, it requires further external validation to assess applicability in countries with different IVF practices and policies. STUDY FUNDING/COMPETING INTEREST(S) The study was funded by the Elphinstone Scholarship Scheme and the Assisted Reproduction Unit, University of Aberdeen. Both D.J.M. and S.B. are authors of the McLernon model article and S.B. is Editor in Chief of Human Reproduction Open. They have completed and submitted the ICMJE forms for Disclosure of potential Conflicts of Interest. The other co-authors have no conflicts of interest to declare. REGISTRATION NUMBER N/A
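
As a minimal sketch of the two performance measures used in this review, discrimination (the c-statistic, which for a binary outcome equals the AUC) and calibration (agreement between predicted and observed risk), the example below uses scikit-learn on simulated data; the outcome and probabilities are hypothetical and do not come from any of the reviewed IVF models.

```python
# Minimal sketch of assessing discrimination (c-statistic) and calibration
# for a binary-outcome prediction model. Data and names are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                           # observed live birth (1) or not (0)
y_pred = np.clip(y_true * 0.3 + rng.random(500) * 0.7, 0, 1)    # predicted probabilities

# Discrimination: for a binary outcome the c-statistic equals the AUC.
c_statistic = roc_auc_score(y_true, y_pred)
print(f"c-statistic: {c_statistic:.2f}")

# Calibration: compare mean predicted probability with observed frequency per risk decile.
observed, predicted = calibration_curve(y_true, y_pred, n_bins=10, strategy="quantile")
for p, o in zip(predicted, observed):
    print(f"predicted {p:.2f} vs observed {o:.2f}")
```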


2019 ◽  
Vol 58 ◽  
pp. 72-79 ◽  
Author(s):  
Magdalena Kotlicka-Antczak ◽  
Michał S. Karbownik ◽  
Konrad Stawiski ◽  
Agnieszka Pawełczyk ◽  
Natalia Żurner ◽  
...  

Abstract Objective: The predictive accuracy of the Clinical High Risk criteria for Psychosis (CHR-P) regarding the future development of the disorder remains suboptimal. It is therefore necessary to incorporate refined risk estimation tools which can be applied at the individual subject level. The aim of the study was to develop an easy-to-use, short, refined risk estimation tool to predict the development of psychosis in a new CHR-P cohort recruited in a European country with less established early detection services. Methods: A cohort of 105 CHR-P individuals was assessed with the Comprehensive Assessment of At Risk Mental States 12/2006 and then followed for a median period of 36 months (25th–75th percentile: 10–59 months) for transition to psychosis. A multivariate Cox regression model predicting transition was generated with preselected clinical predictors and was internally validated with 1000 bootstrap resamples. Results: Speech disorganization and unusual thought content were selected as potential predictors of conversion on the basis of the published literature. The prediction model was significant (p < 0.0001) and confirmed that both speech disorganization (HR = 1.69; 95% CI: 1.39–2.05) and unusual thought content (HR = 1.51; 95% CI: 1.27–1.80) were significantly associated with transition. The prognostic accuracy of the model was adequate (Harrell's c-index = 0.79), even after optimism correction through internal validation procedures (Harrell's c-index = 0.78). Conclusions: The clinical prediction model developed and internally validated herein to predict transition from CHR-P to psychosis may be a promising tool for use in clinical settings. It has been incorporated into an online tool available at https://link.konsta.com.pl/psychosis. Future external replication studies are needed.
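
A minimal sketch of the internal validation procedure described here, bootstrap optimism correction of Harrell's c-index for a Cox model, assuming the `lifelines` package; the simulated cohort, predictor names and number of resamples are illustrative only and are not the authors' data or code.

```python
# Minimal sketch of bootstrap optimism correction of Harrell's c-index
# for a Cox proportional hazards model. Assumes the `lifelines` package;
# the data frame and column names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
n = 105
df = pd.DataFrame({
    "speech_disorganization": rng.integers(0, 7, n),   # hypothetical symptom scores
    "unusual_thought_content": rng.integers(0, 7, n),
    "months_to_event": rng.exponential(36, n),
    "transition": rng.integers(0, 2, n),               # 1 = transition to psychosis
})

def fit_and_cindex(train: pd.DataFrame, test: pd.DataFrame) -> float:
    cph = CoxPHFitter().fit(train, duration_col="months_to_event", event_col="transition")
    # Higher partial hazard means shorter expected time to event, hence the minus sign.
    return concordance_index(test["months_to_event"],
                             -cph.predict_partial_hazard(test),
                             test["transition"])

apparent = fit_and_cindex(df, df)

optimism = []
for _ in range(200):                       # the paper used 1000 resamples; fewer here for speed
    boot = df.sample(n=len(df), replace=True)
    optimism.append(fit_and_cindex(boot, boot) - fit_and_cindex(boot, df))

corrected = apparent - np.mean(optimism)
print(f"apparent c-index {apparent:.2f}, optimism-corrected {corrected:.2f}")
```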


Author(s):  
Huayu Zhang ◽  
Ting Shi ◽  
Xiaodong Wu ◽  
Xin Zhang ◽  
Kun Wang ◽  
...  

Abstract Background: Accurate risk prediction of clinical outcome would usefully inform clinical decisions and intervention targeting in COVID-19. The aim of this study was to derive and validate risk prediction models for poor outcome and death in adult inpatients with COVID-19. Methods: Model derivation using data from Wuhan, China used logistic regression with death and poor outcome (death or severe disease) as outcomes. Predictors were demographic, comorbidity, symptom and laboratory test variables. The best performing models were externally validated in data from London, UK. Findings: 4.3% of the derivation cohort (n=775) died and 9.7% had a poor outcome, compared to 34.1% and 42.9% of the validation cohort (n=226). In derivation, prediction models based on age, sex, neutrophil count, lymphocyte count, platelet count, C-reactive protein and creatinine had excellent discrimination (death c-index=0.91, poor outcome c-index=0.88), with good-to-excellent calibration. Using two cut-offs to define low, high and very-high risk groups, derivation patients were stratified into groups with observed death rates of 0.34%, 15.0% and 28.3% and poor outcome rates of 0.63%, 8.9% and 58.5%. External validation discrimination was good (c-index death=0.74, poor outcome=0.72), as was calibration. However, observed rates of death were 16.5%, 42.9% and 58.4%, and of poor outcome 26.3%, 28.4% and 64.8%, in the predicted low, high and very-high risk groups. Interpretation: Our prediction model using demography and routinely available laboratory tests performed very well in internal validation in the lower-risk derivation population, but less well in the much higher-risk external validation population. Further external validation is needed. Collaboration to create larger derivation datasets, and to rapidly externally validate all proposed prediction models in a range of populations, is needed before routine implementation of any risk prediction tool in clinical care. Funding: MRC, Wellcome Trust, HDR-UK, LifeArc, participating hospitals, NNSFC, National Key R&D Program, Pudong Health and Family Planning Commission.

Research in context. Evidence before this study: Several prognostic models for predicting mortality risk, progression to severe disease, or length of hospital stay in COVID-19 have been published [1]. Commonly reported predictors of severe prognosis in patients with COVID-19 include age, sex, computed tomography scan features, C-reactive protein (CRP), lactic dehydrogenase, and lymphocyte count. Symptoms (notably dyspnoea) and comorbidities (e.g. chronic lung disease, cardiovascular disease and hypertension) are also reported to be associated with poor prognosis [2]. However, most studies have not described the study population or intended use of the prediction models, and external validation is rare and has to date been done using datasets originating from different Wuhan hospitals [3]. Given different patterns of testing and organisation of healthcare pathways, external validation in datasets from other countries is required. Added value of this study: This study used data from Wuhan, China to derive and internally validate multivariable models to predict poor outcome and death in COVID-19 patients after hospital admission, with external validation using data from King's College Hospital, London, UK. Mortality and poor outcome occurred in 4.3% and 9.7% of patients in Wuhan, compared to 34.1% and 42.9% of patients in London. Models based on age, sex and simple routinely available laboratory tests (lymphocyte count, neutrophil count, platelet count, CRP and creatinine) had good discrimination and calibration in internal validation, but performed only moderately well in external validation. Models based on age, sex, symptoms and comorbidity were adequate in internal validation for poor outcome (ICU admission or death) but had poor performance for death alone. Implications of all the available evidence: This study and others find that relatively simple risk prediction models using demographic, clinical and laboratory data perform well in internal validation but at best moderately in external validation, either because derivation and external validation populations are small (Xie et al. [3]) and/or because they vary greatly in casemix and severity (our study). There are three decision points where risk prediction may be most useful: (1) deciding who to test; (2) deciding which patients in the community are at high risk of poor outcomes; and (3) identifying patients at high risk at the point of hospital admission. Larger studies focusing on particular decision points, with rapid external validation in multiple datasets, are needed. A key gap is risk prediction tools for use in community triage (decisions to admit, or to keep at home with varying intensities of follow-up including telemonitoring) or in low-income settings where laboratory tests may not be routinely available at the point of decision-making. This requires systematic data collection in community and low-income settings to derive and evaluate appropriate models.
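
A minimal sketch of the general approach described in this abstract, a logistic regression on routinely available variables with two probability cut-offs defining low, high and very-high risk groups, using scikit-learn on simulated data; the variables, cut-off values and cohort are hypothetical and are not the study's actual model.

```python
# Minimal sketch: logistic regression on routinely available variables,
# with two probability cut-offs used to stratify patients into low, high
# and very-high risk groups. All data, names and cut-offs are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 775
X = pd.DataFrame({
    "age": rng.normal(55, 15, n),
    "sex_male": rng.integers(0, 2, n),
    "neutrophil_count": rng.normal(5, 2, n),
    "lymphocyte_count": rng.normal(1.5, 0.5, n),
    "platelet_count": rng.normal(250, 60, n),
    "crp": rng.exponential(30, n),
    "creatinine": rng.normal(80, 20, n),
})
# Simulated outcome loosely dependent on age and CRP (1 = poor outcome).
linear = 0.05 * (X["age"] - 60) + 0.02 * X["crp"] - 2
y = (rng.random(n) < 1 / (1 + np.exp(-linear))).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
risk = model.predict_proba(X)[:, 1]          # predicted probability of poor outcome

# Two cut-offs define three risk groups; the values here are illustrative only.
groups = pd.cut(risk, bins=[0, 0.1, 0.3, 1.0], labels=["low", "high", "very high"])
print(pd.DataFrame({"risk_group": groups, "outcome": y})
        .groupby("risk_group", observed=True)["outcome"].mean())
```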


2020 ◽  
Author(s):  
Fernanda Gonçalves Silva ◽  
Leonardo Oliveira Pena Costa ◽  
Mark J Hancock ◽  
Gabriele Alves Palomo ◽  
Luciola da Cunha Menezes Costa ◽  
...  

Abstract Background: The prognosis of acute low back pain is generally favourable in terms of pain and disability; however, outcomes vary substantially between individual patients. Clinical prediction models help in estimating the likelihood of an outcome at a certain time point, and several existing clinical prediction models focus on prognosis for patients with low back pain. To date, only one previous systematic review has summarised the discrimination of validated clinical prediction models for prognosis in patients with low back pain of less than 3 months duration. The aim of this systematic review is to identify existing developed and/or validated clinical prediction models for the prognosis of patients with low back pain of less than 3 months duration, and to summarise their performance in terms of discrimination and calibration. Methods: The MEDLINE, Embase and CINAHL databases will be searched from their inception until January 2020. Eligibility criteria will be: (1) prognostic model development studies with or without external validation, or prognostic external validation studies with or without model updating; (2) adults aged 18 or over with 'recent onset' low back pain (i.e. less than 3 months duration), with or without leg pain; (3) outcomes of pain, disability, sick leave or days absent from work or return-to-work status, and self-reported recovery; and (4) a follow-up of at least 12 weeks. The risk of bias of the included studies will be assessed with the Prediction model Risk Of Bias ASsessment Tool, and the overall quality of evidence will be rated using the Hierarchy of Evidence for Clinical Prediction Rules. Discussion: This systematic review will identify, appraise and summarise evidence on the performance of existing prediction models for the prognosis of low back pain, and may help clinicians choose the most appropriate prediction model to better inform patients about their likely prognosis. Systematic review registration: PROSPERO reference number CRD42020160988


2021 ◽  
Author(s):  
Cynthia Yang ◽  
Jan A. Kors ◽  
Solomon Ioannou ◽  
Luis H. John ◽  
Aniek F. Markus ◽  
...  

Objectives: This systematic review aims to provide further insights into the conduct and reporting of clinical prediction model development and validation over time. We focus on assessing the reporting of information necessary to enable external validation by other investigators. Materials and Methods: We searched Embase, Medline, Web of Science, the Cochrane Library and Google Scholar to identify studies that developed one or more multivariable prognostic prediction models using electronic health record (EHR) data and were published in the period 2009-2019. Results: We identified 422 studies that developed a total of 579 clinical prediction models using EHR data. We observed a steep increase over the years in the number of developed models. The percentage of models externally validated in the same paper remained at around 10%. Throughout 2009-2019, for both the target population and the outcome definitions, code lists were provided for less than 20% of the models. For about half of the models developed using regression analysis, the final model was not completely presented. Discussion: Overall, we observed limited improvement over time in the conduct and reporting of clinical prediction model development and validation. In particular, the prediction problem definition was often not clearly reported, and the final model was often not completely presented. Conclusion: Improvement in the reporting of information necessary to enable external validation by other investigators is still urgently needed to increase clinical adoption of developed models.
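
To make concrete what "completely presenting" a regression-based final model enables, the sketch below shows how a reader can reproduce an individual predicted probability once the intercept and all coefficients are published; the model and coefficient values are hypothetical, not taken from any of the reviewed studies.

```python
# Minimal sketch of why complete reporting of a final regression model matters:
# with the intercept and all coefficients published, any reader can reproduce
# an individual predicted probability. Coefficients below are hypothetical.
import math

# Hypothetical published logistic model: intercept plus per-predictor coefficients.
intercept = -4.2
coefficients = {"age_years": 0.05, "diabetes": 0.80, "systolic_bp": 0.01}

def predicted_probability(patient: dict) -> float:
    """p = 1 / (1 + exp(-(intercept + sum(coef * value))))"""
    linear_predictor = intercept + sum(coefficients[k] * patient[k] for k in coefficients)
    return 1.0 / (1.0 + math.exp(-linear_predictor))

# Example patient: linear predictor = -4.2 + 3.5 + 0.8 + 1.4 = 1.5, so p ~ 0.82
print(predicted_probability({"age_years": 70, "diabetes": 1, "systolic_bp": 140}))
```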


2021 ◽  
Vol 10 (1) ◽  
pp. 93
Author(s):  
Mahdieh Montazeri ◽  
Ali Afraz ◽  
Mitra Montazeri ◽  
Sadegh Nejatzadeh ◽  
Fatemeh Rahimi ◽  
...  

Introduction: Our aim in this study was to summarize information on the use of intelligent models for predicting and diagnosing Coronavirus disease 2019 (COVID-19), to support early and timely diagnosis of the disease. Materials and Methods: A systematic literature search included articles published up to 20 April 2020 in the PubMed, Web of Science, IEEE, ProQuest, Scopus, bioRxiv, and medRxiv databases. The search strategy consisted of two groups of keywords: (A) novel coronavirus, (B) machine learning. Two reviewers independently assessed original papers to determine eligibility for inclusion in this review. Studies were critically reviewed for risk of bias using the Prediction model Risk Of Bias ASsessment Tool. Results: We gathered 1650 articles through database searches. After full-text assessment, 31 articles were included. Neural networks and deep neural network variants were the most popular type of machine learning model. Of the five models that the authors claimed were externally validated, we judged only four to have undergone external validation. The area under the curve (AUC) in internal validation of prognostic models varied from 0.94 to 0.97. The AUC of diagnostic models varied from 0.84 to 0.99 in internal validation and from 0.73 to 0.94 in external validation. All but two studies were found to be at high risk of bias, for reasons such as small numbers of participants and lack of external validation. Conclusion: Diagnostic and prognostic models for COVID-19 show good to excellent discriminative performance. However, these models are at high risk of bias, mainly because of small numbers of participants and lack of external validation. Future studies should address these concerns. Sharing data and experiences for the development, validation, and updating of COVID-19-related prediction models is needed.


Author(s):  
Julio Vaquerizo-Serrano ◽  
Gonzalo Salazar de Pablo ◽  
Jatinder Singh ◽  
Paramala Santosh

Abstract Psychotic experiences can occur in autism spectrum disorders (ASD). Some of the ASD individuals with these experiences may fulfil Clinical High-Risk for Psychosis (CHR-P) criteria. A systematic literature search was performed to review the information on ASD and CHR-P, and a meta-analysis of the proportion of CHR-P in ASD was conducted. The systematic review included 13 studies. The mean age of ASD individuals across the included studies was 11.09 years. The Attenuated Psychosis Syndrome subgroup was the most frequently reported. Four studies were meta-analysed, showing that 11.6% of CHR-P individuals have an ASD diagnosis. Symptoms of prodromal psychosis may be present in individuals with ASD. The transition from CHR-P to psychosis is not affected by ASD.
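
A minimal sketch of a random-effects meta-analysis of proportions (logit transformation with a DerSimonian-Laird estimate of between-study variance), the kind of pooling used to obtain a proportion estimate such as the one reported here; the event counts and sample sizes are hypothetical, not the four meta-analysed studies' data, and the exact method used by the authors may differ.

```python
# Minimal sketch of a random-effects meta-analysis of proportions
# (logit transform + DerSimonian-Laird). All counts are hypothetical.
import numpy as np

events = np.array([12, 8, 20, 15])       # hypothetical ASD cases in each CHR-P sample
totals = np.array([90, 110, 150, 120])   # hypothetical CHR-P sample sizes

p = events / totals
logit = np.log(p / (1 - p))
var = 1 / events + 1 / (totals - events)  # variance of the logit of a proportion
w = 1 / var                               # fixed-effect (inverse-variance) weights

# DerSimonian-Laird estimate of the between-study variance tau^2
fixed = np.sum(w * logit) / np.sum(w)
q = np.sum(w * (logit - fixed) ** 2)
tau2 = max(0.0, (q - (len(p) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1 / (var + tau2)                   # random-effects weights
pooled_logit = np.sum(w_re * logit) / np.sum(w_re)
pooled = 1 / (1 + np.exp(-pooled_logit))  # back-transform to a proportion
print(f"pooled proportion: {pooled:.3f}")
```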

