Examining Bias and Reporting in Oral Health Prediction Modeling Studies

2020 · Vol 99 (4) · pp. 374-387
Author(s): M. Du, D. Haag, Y. Song, J. Lynch, M. Mittinty

Recent efforts to improve the reliability and efficiency of scientific research have caught the attention of researchers conducting prediction modeling studies (PMSs). Use of prediction models in oral health has become more common over the past decades for predicting the risk of diseases and treatment outcomes. Risk of bias and insufficient reporting present challenges to the reproducibility and implementation of these models. A recent tool for bias assessment and a reporting guideline—PROBAST (Prediction Model Risk of Bias Assessment Tool) and TRIPOD (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis)—have been proposed to guide researchers in the development and reporting of PMSs, but their application has been limited. Following the standards proposed in these tools and a systematic review approach, a literature search was carried out in PubMed to identify oral health PMSs published in dental, epidemiologic, and biostatistical journals. Risk of bias and transparency of reporting were assessed with PROBAST and TRIPOD. Among 2,881 papers identified, 34 studies containing 58 models were included. The most investigated outcomes were periodontal diseases (42%) and oral cancers (30%). Seventy-five percent of the studies were susceptible to at least 4 of 20 sources of bias, including measurement error in predictors (n = 12) and/or outcome (n = 7), omitting samples with missing data (n = 10), selecting variables based on univariate analyses (n = 9), overfitting (n = 13), and lack of model performance assessment (n = 24). Based on TRIPOD, at least 5 of 31 items were inadequately reported in 95% of the studies. These items included sampling approaches (n = 15), participant eligibility criteria (n = 6), and model-building procedures (n = 16). There was a general lack of transparent reporting and identification of bias across the studies. Application of the recommendations proposed in PROBAST and TRIPOD can benefit future research and improve the reproducibility and applicability of prediction models in oral health.
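
As an illustration of the kind of tallying behind figures such as "75% of studies susceptible to at least 4 of 20 sources of bias", a minimal Python sketch follows. The study identifiers, flag names, and counts are hypothetical, not data from the review.

```python
# Minimal sketch: tallying hypothetical PROBAST-style bias flags per study.
# Study identifiers and flagged items are illustrative, not data from the review.
bias_flags = {
    "study_01": {"predictor_measurement_error", "missing_data_excluded",
                 "univariate_selection", "overfitting", "no_performance_assessment"},
    "study_02": {"outcome_measurement_error", "overfitting"},
    "study_03": {"missing_data_excluded", "univariate_selection",
                 "overfitting", "no_performance_assessment"},
}

THRESHOLD = 4  # number of flagged sources that marks a study as "susceptible"
susceptible = [s for s, flags in bias_flags.items() if len(flags) >= THRESHOLD]
share = 100 * len(susceptible) / len(bias_flags)
print(f"{len(susceptible)}/{len(bias_flags)} studies with >= {THRESHOLD} bias sources ({share:.0f}%)")
```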

2021
Author(s): Esmee Venema, Benjamin S Wessler, Jessica K Paulus, Rehab Salah, Gowri Raman, ...

Abstract
Objective: To assess whether the Prediction model Risk Of Bias ASsessment Tool (PROBAST) and a shorter version of this tool can identify clinical prediction models (CPMs) that perform poorly at external validation.
Study Design and Setting: We evaluated risk of bias (ROB) on 102 CPMs from the Tufts CPM Registry, comparing PROBAST to a short form consisting of six PROBAST items anticipated to best identify high ROB. We then applied the short form to all CPMs in the Registry with at least 1 validation and assessed the change in discrimination (dAUC) between the derivation and the validation cohorts (n=1,147).
Results: PROBAST classified 98/102 CPMs as high ROB. The short form identified 96 of these 98 as high ROB (98% sensitivity), with perfect specificity. In the full CPM Registry, 529/556 CPMs (95%) were classified as high ROB, 20 (4%) as low ROB, and 7 (1%) as unclear ROB. The median change in discrimination was significantly smaller in low-ROB models (dAUC −0.9%, IQR −6.2% to 4.2%) than in high-ROB models (dAUC −11.7%, IQR −33.3% to 2.6%; p<0.001).
Conclusion: High ROB is pervasive among published CPMs. It is associated with poor performance at validation, supporting the application of PROBAST or a shorter version of it in CPM reviews.
What is new:
- High risk of bias is pervasive among published clinical prediction models.
- High risk of bias identified with PROBAST is associated with poorer model performance at validation.
- A subset of questions can distinguish between models with high and low risk of bias.
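
A minimal sketch of the two quantities reported above: agreement of the short form with full PROBAST (sensitivity and specificity) and the shift in discrimination (dAUC) between derivation and validation by ROB class. The counts in the first part follow the abstract; the dAUC values and the choice of a Mann-Whitney U test are assumptions for illustration, not the registry data or the authors' stated method.

```python
import numpy as np
from scipy import stats

# Agreement of the six-item short form with full PROBAST (counts from the abstract).
tp, fn = 96, 2   # high-ROB CPMs flagged / missed by the short form
tn, fp = 4, 0    # non-high-ROB CPMs correctly not flagged / wrongly flagged
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")

# Change in discrimination between derivation and validation, by ROB class.
# These dAUC values are made-up placeholders; the rank test is assumed.
dauc_low_rob = np.array([-0.05, 0.02, -0.01, 0.04, -0.09])
dauc_high_rob = np.array([-0.30, -0.12, -0.02, -0.25, 0.03, -0.40])
u_stat, p_value = stats.mannwhitneyu(dauc_low_rob, dauc_high_rob, alternative="two-sided")
print(f"median dAUC: low ROB {np.median(dauc_low_rob):+.3f}, "
      f"high ROB {np.median(dauc_high_rob):+.3f} (p = {p_value:.3f})")
```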


2020 · Vol 4 (s1) · pp. 34-34
Author(s): Lauren Saag Peetluk, Felipe Ridolfi, Valeria Rolla, Timothy Sterling

OBJECTIVES/GOALS: Many clinical prediction models have been developed to guide tuberculosis (TB) treatment, but their results and methods have not been formally evaluated. We aimed to identify and synthesize existing models for predicting TB treatment outcomes, including bias and applicability assessment. METHODS/STUDY POPULATION: Our review will adhere to methods developed specifically for systematic reviews of prediction model studies. We will search PubMed, Embase, Web of Science, and Google Scholar (first 200 citations) to identify studies that internally and/or externally validate a model for TB treatment outcomes (defined as one or more of cure, treatment completion, death, treatment failure, relapse, default, and loss to follow-up). Study screening, data extraction, and bias assessment will be conducted independently by two reviewers, with a third party to resolve discrepancies. Study quality will be assessed using the Prediction model Risk Of Bias Assessment Tool (PROBAST). RESULTS/ANTICIPATED RESULTS: Our search strategy yielded 6,242 articles in PubMed, 10,585 in Embase, 10,511 in Web of Science, and 200 from Google Scholar, totaling 27,538 articles. After de-duplication, 14,029 articles remain. After screening titles, abstracts, and full text, we will extract data from relevant studies, including publication details, study characteristics, methods, and results. Data will be summarized in a narrative review and in detailed tables with descriptive statistics. We anticipate finding disparate outcome definitions, contrasting predictors across models, and a high risk of bias in methods. Meta-analysis of performance measures for model validation studies will be performed if possible. DISCUSSION/SIGNIFICANCE OF IMPACT: TB outcome prediction models are important, but existing ones have not been rigorously evaluated. This systematic review will synthesize TB outcome prediction models and serve as guidance for future studies that aim to use or develop such models.
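
The de-duplication step described above (27,538 records reduced to 14,029) is typically done by matching records across databases on an identifier or a normalised title. A minimal sketch under that assumption; the records and field names below are made up for illustration.

```python
# Minimal sketch: de-duplicating search results from several databases by a
# normalised title key. Records and field names are illustrative only.
records = [
    {"source": "PubMed", "title": "A prediction model for TB treatment outcomes"},
    {"source": "Embase", "title": "A Prediction Model for TB Treatment Outcomes."},
    {"source": "Web of Science", "title": "Risk factors for loss to follow-up in TB care"},
]

def title_key(record: dict) -> str:
    # crude normalisation: lowercase and keep only alphanumeric characters
    return "".join(ch for ch in record["title"].lower() if ch.isalnum())

deduplicated = list({title_key(r): r for r in records}.values())
print(f"{len(records)} records -> {len(deduplicated)} after de-duplication")
```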


2021 · Vol 6 (1) · pp. e003451
Author(s): Arjun Chandna, Rainer Tan, Michael Carter, Ann Van Den Bruel, Jan Verbakel, ...

Introduction: Early identification of children at risk of severe febrile illness can optimise referral, admission and treatment decisions, particularly in resource-limited settings. We aimed to identify prognostic clinical and laboratory factors that predict progression to severe disease in febrile children presenting from the community.
Methods: We systematically reviewed publications retrieved from MEDLINE, Web of Science and Embase between 31 May 1999 and 30 April 2020, supplemented by hand search of reference lists and consultation with an expert Technical Advisory Panel. Studies evaluating prognostic factors or clinical prediction models in children presenting from the community with febrile illnesses were eligible. The primary outcome was any objective measure of disease severity ascertained within 30 days of enrolment. We calculated unadjusted likelihood ratios (LRs) for comparison of prognostic factors, and compared clinical prediction models using the area under the receiver operating characteristic curve (AUROC). Risk of bias and applicability of studies were assessed using the Prediction Model Risk of Bias Assessment Tool and the Quality In Prognosis Studies tool.
Results: Of 5,949 articles identified, 18 studies evaluating 200 prognostic factors and 25 clinical prediction models in 24,530 children were included. Heterogeneity between studies precluded formal meta-analysis. Malnutrition (positive LR range 1.56–11.13), hypoxia (2.10–8.11), altered consciousness (1.24–14.02), and markers of acidosis (1.36–7.71) and poor peripheral perfusion (1.78–17.38) were the most common predictors of severe disease. Clinical prediction model performance varied widely (AUROC range 0.49–0.97). Concerns regarding applicability were identified, and most studies were at high risk of bias.
Conclusions: Few studies address this important public health question. We identified prognostic factors from a wide range of geographic contexts that can help clinicians assess febrile children at risk of progressing to severe disease. Multicentre studies that include outpatients are required to explore generalisability and to develop data-driven tools to support patient prioritisation and triage at the community level.
PROSPERO registration number: CRD42019140542.
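
For context, an unadjusted positive likelihood ratio is computed from a 2x2 table as sensitivity / (1 − specificity), and model discrimination is summarised by the AUROC. A minimal sketch with made-up counts and scores, not data from the review:

```python
from sklearn.metrics import roc_auc_score

# Positive likelihood ratio for a single prognostic factor from a 2x2 table.
# Counts are illustrative, not from the review.
a, b = 40, 60    # factor present: severe outcome / no severe outcome
c, d = 20, 380   # factor absent:  severe outcome / no severe outcome
sensitivity = a / (a + c)    # P(factor present | severe disease)
specificity = d / (b + d)    # P(factor absent | no severe disease)
print(f"positive LR = {sensitivity / (1 - specificity):.2f}")

# AUROC for a clinical prediction model's risk scores (also illustrative).
y_true = [1, 0, 1, 1, 0, 0, 0, 1]
y_score = [0.9, 0.2, 0.6, 0.8, 0.4, 0.1, 0.7, 0.5]
print(f"AUROC = {roc_auc_score(y_true, y_score):.2f}")
```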


2021
Author(s): Beatriz Garcia Santa Cruz, Matías Nicolás Bossa, Jan Sölter, Andreas Dominik Husch

Abstract
Computer-aided diagnosis for COVID-19 based on chest X-ray suffers from weak bias assessment and limited quality control. Undetected bias induced by inappropriate use of datasets and improper consideration of confounders prevents the translation of prediction models into clinical practice. This study provides a systematic evaluation of publicly available COVID-19 chest X-ray datasets, determining their potential use and evaluating potential sources of bias.
Only 5 out of 256 identified datasets met at least the criteria for proper assessment of risk of bias and could be analysed in detail. Remarkably, almost all of the datasets utilised in 78 papers published in peer-reviewed journals are not among these 5 datasets, thus leading to models with a high risk of bias. This raises concerns about the suitability of such models for clinical use.
This systematic review highlights the limited description of datasets employed for modelling and aids researchers in selecting the most suitable datasets for their task.


2021
Author(s): Fariba Tohidinezhad, Dario Di Perri, Catharina M.L. Zegers, Jeanette Dijkstra, Monique Anten, ...

Abstract
Purpose: Although an increasing body of literature suggests a relationship between brain irradiation and deterioration of neurocognitive function, irradiation remains the standard therapeutic and prophylactic modality in patients with brain tumors. This review aimed to identify and evaluate prediction models for radiation-induced neurocognitive decline in patients with primary or secondary brain tumors.
Methods: MEDLINE was searched on October 31, 2021 for publications containing relevant truncated search terms and MeSH terms related to "radiotherapy", "brain", "prediction model", and "neurocognitive impairments". Risk of bias was assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST).
Results: Of 3,580 studies reviewed, 23 prediction models were identified. Age, tumor location, education level, baseline neurocognitive score, and radiation dose to the hippocampus were the most common predictors in the models. The Hopkins Verbal Learning Test (n=7) and the Trail Making Test (n=4) were the most frequent outcome assessment tools. All studies used regression (n=14 linear, n=8 logistic, and n=4 Cox) as the machine learning method. All models were judged to have a high risk of bias, mainly due to issues in the analysis.
Conclusion: Existing models have limited quality and are at high risk of bias. The following recommendations are outlined in this review to improve future models: develop a standardized instrument for neurocognitive assessment in patients with brain tumors; adhere to model development and validation guidelines; choose candidate predictors carefully, based on the literature and domain expert consensus; and consider radiation dose to brain substructures, as it can provide important information on specific neurocognitive impairments.
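
To make the modelling approach concrete, here is a minimal sketch of a logistic model of the kind described, fit on simulated data. The predictor names (age, education years, baseline score, mean hippocampal dose) follow the predictors listed above, but the data, coefficients, and variable names are assumptions for illustration, not the reviewed models.

```python
import numpy as np
import statsmodels.api as sm

# Simulated cohort; all values are made up purely for illustration.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.normal(55, 12, n),   # age (years)
    rng.normal(12, 3, n),    # education (years)
    rng.normal(25, 5, n),    # baseline neurocognitive score
    rng.normal(20, 8, n),    # mean hippocampal dose (Gy)
])
# Assumed data-generating process: decline more likely with higher age and dose.
logit = -3.5 + 0.04 * X[:, 0] - 0.05 * X[:, 1] - 0.02 * X[:, 2] + 0.08 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic regression predicting neurocognitive decline (1 = decline).
model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.params)   # intercept and log-odds coefficients for the four predictors
```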


2021
Author(s): Jamie L. Miller, Masafumi Tada, Michihiko Goto, Nicholas Mohr, Sangil Lee

Abstract
Background: Throughout 2020, coronavirus disease 2019 (COVID-19) became a threat to public health at the national and global levels. There has been an immediate need for research to understand the clinical signs and symptoms of COVID-19 that can help predict deterioration, including mechanical ventilation, organ support, and death. Studies thus far have addressed the epidemiology of the disease, common presentations, and susceptibility to acquisition and transmission of the virus; however, an accurate prognostic model for severe manifestations of COVID-19 is still needed because of the limited healthcare resources available.
Objective: This systematic review aims to evaluate published reports of prediction models for severe illness caused by COVID-19.
Methods: Searches were developed by the primary author and a medical librarian using an iterative process of gathering and evaluating terms. Comprehensive strategies, including both index and keyword methods, were devised for PubMed and EMBASE. The data of confirmed COVID-19 patients from randomized controlled studies, cohort studies, and case-control studies published between January 2020 and July 2020 were retrieved. Studies were independently assessed for risk of bias and applicability using the Prediction Model Risk Of Bias Assessment Tool (PROBAST). We collected study type, setting, sample size, type of validation, and outcome, including intubation, ventilation, any other type of organ support, or death. The prediction models, scoring systems, performance of the predictive models, and geographic locations were summarized.
Results: A primary review found 292 articles relevant based on title and abstract. After further review, 246 were excluded based on the defined inclusion and exclusion criteria. Forty-six articles were included in the qualitative analysis. Inter-observer agreement on inclusion was 0.86 (95% confidence interval: 0.79-0.93). When the PROBAST tool was applied, 44 of the 46 articles were identified as having high or unclear risk of bias, or high or unclear concern for applicability. Two studies reported prediction models rated as having low risk of bias and low concern for applicability: the 4C Mortality Score, derived from hospital data, and QCOVID, derived from general-population data in the UK.
Conclusion: Several prognostic models are reported in the literature, but many of them have concerning risk of bias and applicability. For most of the studies, caution is needed before use, as many will require external validation before dissemination. However, the two articles found to have low risk of bias and low concern for applicability can be useful tools.
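
Inter-observer agreement on inclusion is typically reported as Cohen's kappa, which adjusts raw agreement for chance. A minimal sketch with made-up screening decisions, not the review's data; the use of cohen_kappa_score is an assumption about how such a figure could be computed, not the authors' stated method.

```python
from sklearn.metrics import cohen_kappa_score

# Two reviewers' include (1) / exclude (0) decisions on the same set of articles.
# These decisions are illustrative, not the screening data from the review.
reviewer_a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0]
reviewer_b = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1]
print(f"Cohen's kappa = {cohen_kappa_score(reviewer_a, reviewer_b):.2f}")
```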


Medicina · 2021 · Vol 57 (6) · pp. 538
Author(s): Alexandru Burlacu, Adrian Iftene, Iolanda Valentina Popa, Radu Crisan-Dabija, Crischentian Brinza, ...

Background and objectives: cardiovascular complications (CVC) are the leading cause of death in patients with chronic kidney disease (CKD). Standard cardiovascular disease risk prediction models used in the general population are not validated in patients with CKD. We aim to systematically review the up-to-date literature on reported outcomes of computational methods such as artificial intelligence (AI) or regression-based models to predict CVC in CKD patients. Materials and methods: the electronic databases of MEDLINE/PubMed, EMBASE, and ScienceDirect were systematically searched. The risk of bias and reporting quality of each study were assessed against the transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD) statement and the prediction model risk of bias assessment tool (PROBAST). Results: sixteen papers were included in the present systematic review: 15 non-randomized studies and 1 ongoing clinical trial. Twelve studies performed AI- or regression-based predictions of CVC in CKD, through either single or composite endpoints. Four studies proposed computational solutions for other CV-related predictions in the CKD population. Conclusions: the identified studies show clear trends in clinically promising areas, with encouraging present-day performance. However, there is a clear need for more extensive application of rigorous methodologies. Following future prospective, randomized clinical trials and thorough external validation, computational solutions will fill the gap in cardiovascular predictive tools for chronic kidney disease.


1989 · Vol 3 (1) · pp. 3-6
Author(s): H. Löe

The celebration of the 40th anniversary of the National Institute of Dental Research (NIDR) provides an opportunity for reviewing the growth of dental research over the decades. The Institute owes its origin to public and professional concern over the dental health of Americans and the prospect that a Federal investment in dental research could pay off. The early years of the Institute were devoted to studies of fluoride and dental caries, with notable achievements in clinical trials of water fluoridation and caries microbiology. During the 1960s came the discovery that the periodontal diseases, like dental caries, were bacterial infections that could be prevented. Basic and clinical research expanded, and the research manpower pool grew with the addition of microbiologists, immunologists, salivary gland investigators, and other basic biomedical and behavioral scientists. The Institute created special broad-based Dental Research Institutes and Centers to foster interdisciplinary research, and continued to expand its research base. A national survey undertaken by NIDR in the late 1970s showed major declines in caries prevalence in schoolchildren. Recent NIDR surveys of adults and older Americans, as well as a second children's survey, have demonstrated overall improvements in oral health and a continued decline in childhood caries. There remain serious oral health problems among older Americans and among individuals and groups susceptible to disease. NIDR will focus on these high-risk individuals in future research aimed at eliminating edentulousness. At the same time, the Institute will continue the cell and molecular biology studies in the areas of development, oncology, bone research, and other basic and clinical fields that mark the emergence of dental research as a major force and contributor to biomedical advances today.

