Empirical evaluation of fundamental principles of evidence-based medicine: a meta-epidemiological study

Author(s):  
Benjamin Djulbegovic ◽  
Muhammad Muneeb Ahmed ◽  
Iztok Hozo ◽  
Despina Koletsi ◽  
Lars Hemkens ◽  
...  

Rationale, aims and objectives: Evidence-based medicine (EBM) holds that estimates of the effects of health interventions based on high-certainty evidence (CoE) are expected to change less frequently than effects generated in low-CoE studies. However, this foundational principle of EBM has never been empirically tested.
Methods: We reviewed all systematic reviews and meta-analyses in the Cochrane Database of Systematic Reviews from January 2016 through May 2021 (n=3,323). We identified 414 (207×2) and 384 (192×2) pairs of original and updated Cochrane reviews that assessed CoE and pooled treatment effect estimates. We appraised CoE using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) method. We assessed the difference in effect sizes between the original versus updated reviews as a function of change in CoE, which we report as a ratio of odds ratios (ROR). We compared RORs generated in the studies whose CoE changed from very low/low (VL/L) to moderate/high (M/H) versus from M/H to VL/L. We also assessed heterogeneity and inconsistency (using the tau and I² statistics), the change in precision of effect estimates (by calculating the ratio of standard errors, seR), and the absolute deviation in estimates of treatment effects (aROR).
Results: We found that CoE originally appraised as VL/L had 2.1 (95% CI: 1.19 to 4.12; p=0.0091) times higher odds of being changed in future studies than M/H CoE. However, the effect size did not differ when CoE changed from VL/L to M/H versus from M/H to VL/L [ROR=1.02 (95% CI: 0.74 to 1.39) vs. 1.02 (95% CI: 0.44 to 2.37); p=1 for the between-subgroup difference]. aROR was similar between the subgroups [median (IQR): 1.12 (1.07 to 1.57) vs. 1.21 (1.12 to 2.43)]. We observed large inconsistency (I²=99%) and imprecision in treatment effects (seR=1.09).
Conclusions: We provide the first empirical support for a foundational principle of EBM, showing that low-quality evidence changes more often than high-CoE evidence. However, the effect size did not differ between studies with low versus high CoE, a finding that indicates an urgent need to refine current EBM critical appraisal methods.
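The abstract's central statistic, the ratio of odds ratios (ROR), compares an updated review's pooled estimate against the original on the log-odds scale. A minimal sketch of that comparison, assuming standard errors of the log odds ratios are available for both reviews (the paper's exact computation may differ):

```python
import math

def ratio_of_odds_ratios(or_updated, se_updated, or_original, se_original):
    """Ratio of odds ratios (ROR) comparing an updated review's pooled OR
    with the original, with a 95% CI built on the log-odds scale.
    se_updated / se_original are standard errors of the log odds ratios."""
    log_ror = math.log(or_updated) - math.log(or_original)
    se = math.sqrt(se_updated ** 2 + se_original ** 2)
    ci = (math.exp(log_ror - 1.96 * se), math.exp(log_ror + 1.96 * se))
    return math.exp(log_ror), ci
```

An ROR near 1 means the updated pooled estimate barely moved; the study's finding was that RORs were close to 1 regardless of whether CoE rose or fell.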

2021 ◽  
Author(s):  
Sergey Roussakow

Abstract BACKGROUND: Evidence-based medicine (EBM) is in crisis, in part due to bad methods, understood here as misuse of statistics that is in itself considered correct. The correctness of the basic statistics related to effect size (ES) based on correlation (CBES) is questioned. METHODS: Monte Carlo simulation of two paired binary samples; mathematical, conceptual and bias analysis. RESULTS: Actual effect size and CBES are not related. CBES is a fallacy based on a misunderstanding of correlation and ES, and on confusion over 2 × 2 tables that makes no distinction between gross crosstabs (GCTs) and contingency tables (CTs). This leads to misapplication of Pearson’s Phi, designed for CTs, to GCTs, and to confusion of the resulting gross Pearson Phi, or mean-square effect half-size, with the implied Pearson mean square contingency coefficient. Generalizing this binary fallacy to continuous data and to correlation in general (Pearson’s r) produced flawed equations that express ES directly in terms of the correlation coefficient, which is impossible without including covariance; these equations, and the whole CBES concept, are therefore fundamentally wrong. The misconception of contingency tables (MCT) is a series of related misconceptions arising from confusion over 2 × 2 tables and misapplication of the related statistics. Problems arising from these fallacies are discussed, and the changes to the corpus of statistics needed to resolve the problem of correlation and ES in paired binary data are proposed. CONCLUSIONS: Two related common misconceptions in statistics have been exposed, CBES and MCT. The misconceptions are threatening because most of the findings from contingency tables, including meta-analyses, can be misleading. Since exposing these fallacies casts doubt on the reliability of the statistical foundations of EBM in general, they urgently need to be revised.
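For reference, the statistic at the centre of this abstract's argument is Pearson's Phi for a 2 × 2 table. The sketch below shows only the standard textbook formula; the abstract's claim is precisely that applying it to gross crosstabs rather than genuine contingency tables is a fallacy, so this is illustrative, not an endorsement of CBES:

```python
import math

def phi_coefficient(a, b, c, d):
    """Pearson's Phi for a 2x2 contingency table [[a, b], [c, d]]:
    phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d)).
    Valid for genuine contingency tables (two variables cross-classified
    on the same units), which is the distinction the abstract stresses."""
    return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
```

Perfect association ([[10, 0], [0, 10]]) gives Phi = 1; independence ([[5, 5], [5, 5]]) gives Phi = 0.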


2012 ◽  
Vol 101 (4) ◽  
pp. 352-353 ◽  
Author(s):  
Christiane Willhelm ◽  
Wolfgang Girisch ◽  
Ludwig Gortner ◽  
Sascha Meyer

2012 ◽  
Vol 21 (2) ◽  
pp. 151-153 ◽  
Author(s):  
A. Cipriani ◽  
C. Barbui ◽  
C. Rizzo ◽  
G. Salanti

Standard meta-analyses are an effective tool in evidence-based medicine, but one of their main drawbacks is that they can compare only two alternative treatments at a time. Moreover, if no trials exist which directly compare two interventions, it is not possible to estimate their relative efficacy. Multiple treatments meta-analyses use a meta-analytical technique that allows the incorporation of evidence from both direct and indirect comparisons from a network of trials of different interventions to estimate summary treatment effects as comprehensively and precisely as possible.
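A full multiple-treatments (network) meta-analysis pools direct and indirect evidence across the whole trial network, but the simplest building block is the Bucher-style indirect contrast: if trials compare A vs. B and C vs. B, an A vs. C estimate follows by subtraction on the log-odds scale. A sketch with hypothetical numbers (standard errors are of the log odds ratios):

```python
import math

def indirect_comparison(or_ab, se_ab, or_cb, se_cb):
    """Indirect comparison of A vs C through a common comparator B:
    log OR_AC = log OR_AB - log OR_CB, and the variances add.
    This is the simplest special case of combining evidence across a
    network of trials; full network meta-analysis generalises it."""
    log_or = math.log(or_ab) - math.log(or_cb)
    se = math.sqrt(se_ab ** 2 + se_cb ** 2)
    return math.exp(log_or), (math.exp(log_or - 1.96 * se),
                              math.exp(log_or + 1.96 * se))
```

Note the widened interval: the indirect estimate inherits uncertainty from both direct comparisons, which is why adding direct evidence, when it exists, improves precision.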


2018 ◽  
Vol 23 (1) ◽  
pp. 20-21 ◽  
Author(s):  
David Nunan ◽  
Carl Heneghan ◽  
Elizabeth A Spencer

This article is part of a series featuring the Catalogue of Bias introduced in this volume of BMJ Evidence-Based Medicine. It describes allocation bias, outlines its potential impact on research studies, and sets out the preventive steps that minimise its risk. Allocation bias is a type of selection bias and is relevant to clinical trials of interventions. Knowledge of interventions prior to group allocation can result in systematic differences in important characteristics that could influence study findings. Allocation bias can inflate estimates of effect size by up to 30%–40%. Sequentially numbered, opaque, sealed envelopes or containers, pharmacy-controlled randomisation, and central computer randomisation are methods to minimise allocation bias.
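Central computer randomisation, one of the concealment methods listed above, typically generates the allocation sequence in advance so that no recruiter can predict the next assignment. A minimal sketch of one common scheme, permuted-block allocation (the block size and arm labels here are illustrative, not from the article):

```python
import random

def permuted_block_allocation(n_blocks, block_size=4, arms=("A", "B"), seed=None):
    """Generate an allocation list via permuted blocks, the kind of
    sequence a central computer randomisation service might produce.
    Within each block every arm appears equally often, keeping group
    sizes balanced while the within-block order stays unpredictable."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)  # permute this block's assignments
        sequence.extend(block)
    return sequence
```

Concealment depends on the sequence being held centrally and revealed one assignment at a time, only after a participant is enrolled.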


2010 ◽  
Vol 37 (2) ◽  
pp. 153-156 ◽  
Author(s):  
Glória Maria de Oliveira ◽  
Fábio Trinca Camargo ◽  
Eduardo Costa Gonçalves ◽  
Carlos Vinicius Nascimento Duarte ◽  
Carlos Alberto Guimarães

This article presents a narrative review of systematic reviews of diagnostic test accuracy. We searched the Cochrane Methodology Reviews (Cochrane Reviews of Diagnostic Test Accuracy), Medline and LILACS, and hand-searched the reference lists of the articles included in the review. The search strategies, combining subject headings and free-text terms, were as follows: 1. in the Cochrane Methodology Reviews: accuracy study "Methodology"; 2. in PubMed: "Meta-Analysis"[Publication Type] AND "Evidence-Based Medicine"[Mesh] AND "Sensitivity and Specificity"[Mesh]; 3. in LILACS: (revisao sistematica) or "literatura de REVISAO como assunto" [Subject descriptor] and (sistematica) or "SISTEMATICA" [Subject descriptor] and (acuracia) or "SENSIBILIDADE e especificidade" [Subject descriptor]. In short, the methodological preparation and planning of systematic reviews of diagnostic tests came later than those employed in systematic reviews of therapeutic interventions. There are many sources of heterogeneity in the designs of diagnostic test studies, which greatly complicates the synthesis (meta-analysis) of their results. To address this problem, reporting standards, required by the main biomedical journals, now exist for submitting a manuscript on diagnostic tests.
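The accuracy measures that such reviews pool come from each primary study's 2 × 2 diagnostic table. A minimal sketch of the per-study calculation (the counts are illustrative):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity and diagnostic odds ratio from a 2x2
    diagnostic table: tp/fn count the diseased, fp/tn the non-diseased."""
    sensitivity = tp / (tp + fn)   # P(test+ | disease present)
    specificity = tn / (tn + fp)   # P(test- | disease absent)
    dor = (tp * tn) / (fp * fn)    # diagnostic odds ratio
    return sensitivity, specificity, dor
```

Because sensitivity and specificity are paired and typically trade off across thresholds, meta-analysis of these pairs needs bivariate or summary-ROC models rather than pooling each measure separately, which is one source of the extra methodological complexity the abstract mentions.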


Author(s):  
Sergey Roussakow

Evidence-based medicine (EBM) is in crisis, in part due to bad methods, understood here as misuse of statistics that is in itself considered correct. This article exposes two related common misconceptions in statistics: the effect size (ES) based on correlation (CBES), and the misconception of contingency tables (MCT). CBES is a fallacy based on a misunderstanding of correlation and ES, and on confusion over 2 × 2 tables that makes no distinction between gross crosstabs (GCTs) and contingency tables (CTs). This leads to misapplication of Pearson’s Phi, designed for CTs, to GCTs, and to confusion of the resulting gross Pearson Phi, or mean-square effect half-size, with the implied Pearson mean square contingency coefficient. Generalizing this binary fallacy to continuous data and to correlation in general (Pearson’s r) produced flawed equations that express ES directly in terms of the correlation coefficient, which is impossible without including covariance; these equations, and the whole CBES concept, are therefore fundamentally wrong. MCT is a series of related misconceptions arising from confusion over 2 × 2 tables and misapplication of the related statistics. The misconceptions are threatening because most of the findings from contingency tables, including CBES-based meta-analyses, can be misleading. Problems arising from these fallacies are discussed, and the changes to the corpus of statistics needed to resolve the problem of correlation and ES in paired binary data are proposed. Since exposing these fallacies casts doubt on the reliability of the statistical foundations of EBM in general, they urgently need to be revised.


2017 ◽  
Author(s):  
Herm J. Lamberink ◽  
Willem M. Otte ◽  
Michel R.T. Sinke ◽  
Daniël Lakens ◽  
Paul P. Glasziou ◽  
...  

Abstract
Background: Biomedical studies with low statistical power are a major concern in the scientific community and are one of the underlying reasons for the reproducibility crisis in science. If randomized clinical trials, which are considered the backbone of evidence-based medicine, also suffer from low power, this could affect medical practice.
Methods: We analysed the statistical power of 137 032 clinical trials published between 1975 and 2017, extracted from meta-analyses in the Cochrane Database of Systematic Reviews. We determined each study's power to detect standardized effect sizes according to Cohen, and for meta-analyses with a p-value below 0.05 we based power on the meta-analysed effect size. Average power, effect size and temporal patterns were examined.
Results: The number of trials with power ≥80% was low but increased over time: from 9% in 1975–1979 to 15% in 2010–2014. This increase was mainly due to increasing sample sizes, while effect sizes remained stable with a median Cohen’s h of 0.21 (IQR 0.12–0.36) and a median Cohen’s d of 0.31 (0.19–0.51). The proportion of trials with power of at least 80% to detect a standardized effect size of 0.2 (small), 0.5 (moderate) and 0.8 (large) was 7%, 48% and 81%, respectively.
Conclusions: This study demonstrates that insufficient power in clinical trials is still a problem, although the situation is slowly improving. Our data encourage further efforts to increase statistical power in clinical trials to guarantee rigorous and reproducible evidence-based medicine.
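The power calculations in this abstract can be approximated with a two-sample z-test under a normal approximation (a sketch only; the paper's exact method, based on Cohen's standardized effect sizes, may differ in detail):

```python
import math
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test to detect a
    standardized effect size d (Cohen's d) with n_per_group per arm."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)          # critical value, e.g. 1.96
    ncp = d * math.sqrt(n_per_group / 2.0)      # non-centrality parameter
    # power = P(reject) under the alternative, both rejection tails
    return nd.cdf(ncp - z_crit) + nd.cdf(-ncp - z_crit)
```

Roughly 64 participants per arm suffice for 80% power at d = 0.5, which makes the abstract's figures concrete: with a median observed d of about 0.31, many historical trials were far smaller than power considerations would demand.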

