Systematic reviews and meta-analyses of preclinical studies: Publication bias in laboratory animal experiments

2011 ◽  
Vol 45 (4) ◽  
pp. 225-230 ◽  
Author(s):  
D A Korevaar ◽  
L Hooft ◽  
G Ter Riet


2017 ◽  
Vol 6 (4) ◽  
pp. 19-37
Author(s):  
Atila Yüksel ◽  
Ekrem Tufan

This article examines whether studies with favorable or statistically significant outcomes are more likely to be published than studies with null results. If such a tendency to favor significant findings exists, the integrity of scientific conclusions and recommendations becomes questionable, particularly those drawn from meta-analyses and systematic reviews. Drawing on a sample of research articles, an examination was undertaken to determine whether studies reporting significant findings were published more often. Additional analyses examined the validity of the reject/support decisions made for the null hypotheses tested in these studies. The share of published articles in which null hypotheses were rejected was found to be much larger (81%). Interestingly, however, the calculated power levels and actual sample sizes of these studies were too small to confidently reject or support the null hypotheses. Implications for research are discussed in the concluding section of the article.
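The power problem this abstract describes can be illustrated with a quick calculation. The sketch below is not from the article; it uses a normal approximation to the power of a two-sided two-sample t-test, with an assumed effect size and sample size, to show how an underpowered design cannot justify a confident reject/support decision.

```python
import math

def norm_cdf(x):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(d, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided two-sample t-test
    (normal approximation) for standardized effect size d."""
    se = math.sqrt(2.0 / n_per_group)   # SE of the standardized mean difference
    ncp = d / se                        # noncentrality parameter
    return norm_cdf(ncp - z_crit) + norm_cdf(-ncp - z_crit)

# A small study (n = 20 per group) chasing a modest effect (d = 0.3):
print(round(approx_power(0.3, 20), 2))   # ≈ 0.16, far below the conventional 0.80
```

With 20 subjects per group and a true standardized effect of 0.3, the chance of detecting the effect is only about 16%; a non-significant result from such a study says little about whether the null hypothesis is true.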


2005 ◽  
Vol 20 (8) ◽  
pp. 550-553 ◽  
Author(s):  
José Luis R. Martin ◽  
Víctor Pérez ◽  
Montse Sacristán ◽  
Enric Álvarez

Systematic reviews in mental health have become useful tools for health professionals, given the massive amount and heterogeneous nature of biomedical information available today. It is therefore important to use a very strict methodology in systematic reviews, both to determine the risk of bias in the studies evaluated and to avoid bias when generalizing conclusions from the reviews. One bias which may affect the generalization of results is publication bias, which is driven by the nature and direction of study results. To control or minimize this type of bias, the authors of systematic reviews undertake comprehensive searches of medical databases and often expand on the findings by searching the grey literature (material which is not formally published). This paper attempts to show the consequences (and risk) of generalizing the role of grey literature in the control of publication bias, as was proposed in a recent systematic work. By repeating the analyses for the same outcome from three different systematic reviews that included both published and grey literature, our results showed that conflating grey literature with publication bias may affect the results of a concrete meta-analysis.


2017 ◽  
Author(s):  
Robbie Cornelis Maria van Aert ◽  
Jelte M. Wicherts ◽  
Marcel A. L. M. van Assen

Publication bias is a substantial problem for the credibility of research in general and of meta-analyses in particular, as it yields overestimated effects and may suggest the existence of non-existing effects. Although there is consensus that publication bias exists, how strongly it affects different scientific literatures is less well known. We examined evidence of publication bias in a large-scale data set of primary studies that were included in 83 meta-analyses published in Psychological Bulletin (representing meta-analyses from psychology) and 499 systematic reviews from the Cochrane Database of Systematic Reviews (CDSR; representing meta-analyses from medicine). Publication bias was assessed on all homogeneous subsets (3.8% of all subsets of meta-analyses published in Psychological Bulletin) of primary studies included in meta-analyses, because publication bias methods do not have good statistical properties if the true effect size is heterogeneous. A Monte Carlo simulation study revealed that the creation of homogeneous subsets resulted in challenging conditions for publication bias methods, since the number of effect sizes in a subset was rather small (the median number of effect sizes equaled 6). No evidence of bias was obtained using the publication bias tests. Overestimation was minimal but statistically significant, providing evidence of publication bias that appeared to be similar in both fields. These and other findings, in combination with the small percentages of statistically significant primary effect sizes (28.9% and 18.9% for subsets published in Psychological Bulletin and CDSR), led to the conclusion that evidence for publication bias in the studied homogeneous subsets is weak, but suggestive of mild publication bias in both psychology and medicine.
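The abstract does not name the publication bias tests it applied; a classic example of such a test is Egger's regression, sketched below on hypothetical effect-size data (all numbers are illustrative, not from the study). The standardized effect z_i = y_i / se_i is regressed on precision p_i = 1 / se_i; with no small-study effects the fitted line passes through the origin, while an intercept far from zero signals funnel-plot asymmetry.

```python
def egger_intercept(effects, ses):
    """Intercept of Egger's regression: standardized effect z_i = y_i / se_i
    regressed on precision p_i = 1 / se_i via ordinary least squares."""
    zs = [y / s for y, s in zip(effects, ses)]
    ps = [1.0 / s for s in ses]
    n = len(zs)
    mz, mp = sum(zs) / n, sum(ps) / n
    sxy = sum((p - mp) * (z - mz) for p, z in zip(ps, zs))
    sxx = sum((p - mp) ** 2 for p in ps)
    slope = sxy / sxx
    return mz - slope * mp

ses = [0.1, 0.2, 0.3, 0.4]               # hypothetical standard errors
unbiased = [0.5] * 4                     # every study estimates the true effect
biased = [0.5 + 0.2 * s for s in ses]    # small (imprecise) studies overestimate
print(round(egger_intercept(unbiased, ses), 3))   # ≈ 0.0: no asymmetry
print(round(egger_intercept(biased, ses), 3))     # ≈ 0.2: asymmetry detected
```

In practice the intercept is reported with a significance test, and, as the abstract notes, the test has little power when a subset contains only a handful of effect sizes.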


2019 ◽  
Vol 25 (2) ◽  
pp. 1.1-2
Author(s):  
Kaleb Fuller ◽  
Aaron Bowers ◽  
Matt Vassar

Publication bias can arise in systematic reviews when unpublished data are omitted, which can lead to inaccurate clinical decision making and adverse clinical outcomes. By searching clinical trial registries (CTRs), researchers can create more accurate systematic reviews and mitigate the risk of publication bias. The aims of this study were to evaluate CTR use in systematic reviews and meta-analyses within the minimally invasive surgical oncology (MISO) literature, and to search ClinicalTrials.gov for a subset of reviews to determine whether eligible trials existed that could have been used. This is a cross-sectional study of 197 systematic reviews and meta-analyses retrieved from PubMed. Of the 137 included studies, 18 (13.1%) reported searching a CTR. Our ClinicalTrials.gov search revealed that of the 25 randomly selected systematic reviews that failed to conduct a trial registry search, 16 (64.0%) would have identified additional data sources. MISO systematic reviews and meta-analyses do not regularly use CTRs in their data collection, despite eligible trials being freely available.


2006 ◽  
Vol 41 (7) ◽  
pp. 1245-1258 ◽  
Author(s):  
Jaime L. Peters ◽  
Alex J. Sutton ◽  
David R. Jones ◽  
Lesley Rushton ◽  
Keith R. Abrams

Author(s):  
Zahra Bahadoran ◽  
Parvin Mirmiran ◽  
Khosrow Kashfi ◽  
Asghar Ghasemi

Results of animal experiments are used for understanding the pathophysiology of diseases, assessing the safety and efficacy of newly developed drugs, and monitoring environmental health hazards, among other applications. Systematic reviews and meta-analyses of animal data are important tools to condense animal evidence and translate the data into practical clinical applications. Such studies are conducted to explore heterogeneity, to generate new hypotheses about pathophysiology and treatment, to design new clinical trial modalities, and to test the efficacy and safety of various interventions. Here, we provide an overview of the importance of systematic reviews and meta-analyses of animal data and discuss common challenges and their potential solutions. Current evidence highlights various problems that surround these issues, including the lack of generalizability of data obtained from animal models, failure in translating data from animals to humans, poor experimental design and reporting of animal studies, heterogeneity of the data collected, and methodologic weaknesses of animal systematic reviews and meta-analyses. Systematic reviews and meta-analyses of animal studies can catalyze translational processes more effectively if they focus on a well-defined hypothesis while addressing clear inclusion and exclusion criteria, publication bias, heterogeneity of the data, and a coherent and well-balanced assessment of study quality.


2019 ◽  
Vol 2 (2) ◽  
pp. p1
Author(s):  
Ilija Barukčić

Objective. Under certain circumstances, the results of multiple investigations, particularly rigorously designed trials, can be summarized by systematic reviews and meta-analyses. However, the results of properly conducted meta-analyses may, but need not, be stronger than those of single investigations if (publication) bias is not considered to the necessary extent. Methods. To assess the significance of publication bias due to study design, simple-to-handle statistical measures for quantifying publication bias are developed and discussed; these can serve as a characteristic of a meta-analysis. In addition, these measures may permit comparisons of publication bias between different meta-analyses. Results. Various properties and the performance of the new measures of publication bias are studied and illustrated using simulations and clearly described thought experiments. As a result, individual studies can be reviewed with a higher degree of certainty. Conclusions. Publication bias due to study design is a serious problem in scientific research that can affect the validity and generalizability of conclusions. The index of unfairness and the index of independence can be used to quantify publication bias and to improve the quality of systematic reviews and meta-analyses.


2015 ◽  
Vol 34 (20) ◽  
pp. 2781-2793 ◽  
Author(s):  
Michal Kicinski ◽  
David A. Springate ◽  
Evangelos Kontopantelis

2016 ◽  
Vol 123 (4) ◽  
pp. 1018-1025 ◽  
Author(s):  
Riley J. Hedin ◽  
Blake A. Umberham ◽  
Byron N. Detweiler ◽  
Lauren Kollmorgen ◽  
Matt Vassar
