Estimating the prevalence of missing experiments in a neuroimaging meta-analysis

2017 ◽  
Author(s):  
Pantelis Samartsidis ◽  
Silvia Montagna ◽  
Angela R. Laird ◽  
Peter T. Fox ◽  
Timothy D. Johnson ◽  
...  

Abstract
Coordinate-based meta-analyses (CBMA) allow researchers to combine the results from multiple fMRI experiments with the goal of obtaining results that are more likely to generalise. However, the interpretation of CBMA findings can be impaired by the file drawer problem, a type of publication bias that refers to experiments that are carried out but are not published. Using foci-per-contrast count data from the BrainMap database, we propose a zero-truncated modelling approach that allows us to estimate the prevalence of non-significant experiments. We validate our method with simulations and real coordinate data generated from the Human Connectome Project. Application of our method to the data from BrainMap provides evidence for the existence of a file drawer effect, with the rate of missing experiments estimated as at least 6 per 100 reported.
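The core of the zero-truncated idea can be sketched in a few lines: experiments that found no significant foci never enter the database, so the observed foci-per-contrast counts follow a zero-truncated distribution, and the fitted model's implied mass at zero estimates the file drawer. The sketch below assumes a plain Poisson likelihood for simplicity (the paper's model is considerably richer), and all data and function names are illustrative.

```python
# A minimal sketch of the zero-truncated approach, assuming a Poisson
# likelihood: fit lambda to the observed (all nonzero) counts, then read
# off the implied probability of an unreported zero-count experiment.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def zt_poisson_nll(lam, counts):
    """Negative log-likelihood of a zero-truncated Poisson."""
    y = np.asarray(counts, dtype=float)
    ll = y * np.log(lam) - lam - gammaln(y + 1) - np.log1p(-np.exp(-lam))
    return -ll.sum()

def missing_per_100_reported(counts):
    """Fit lambda by ML, then convert P(Y = 0) into missing-per-100-reported."""
    res = minimize_scalar(zt_poisson_nll, bounds=(1e-6, 1e3),
                          args=(counts,), method="bounded")
    p0 = np.exp(-res.x)                  # implied probability of zero foci
    return res.x, 100.0 * p0 / (1.0 - p0)

# Hypothetical foci-per-contrast counts (all >= 1 by construction)
counts = [1, 2, 2, 3, 1, 4, 2, 5, 1, 3]
lam_hat, rate = missing_per_100_reported(counts)
print(f"lambda = {lam_hat:.2f}; about {rate:.1f} missing per 100 reported")
```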

2017 ◽  
Author(s):  
Freya Acar ◽  
Ruth Seurinck ◽  
Simon B. Eickhoff ◽  
Beatrijs Moerkerke

Abstract
The importance of integrating research findings is incontrovertible, and coordinate-based meta-analyses have become a popular approach to combining the results of fMRI studies when only peaks of activation are reported. Like classical meta-analyses, coordinate-based meta-analyses may be subject to different forms of publication bias, which impacts results and possibly invalidates findings. We develop a tool that assesses the robustness of results to potential publication bias at the cluster level. We investigate the possible influence of the file-drawer effect, where studies that do not report certain results fail to get published, by determining the number of noise studies that can be added to an existing fMRI meta-analysis before the results are no longer statistically significant. In this paper we illustrate this tool through an example and test the effect of several parameters through extensive simulations. We provide an algorithm, for which code is freely available, that generates noise studies and enables users to determine the robustness of meta-analytical results.
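The loop at the heart of the tool parallels the classical fail-safe-N idea: keep appending simulated null ("noise") studies until the pooled result loses significance, then report how many were needed. The sketch below applies that loop to a simple Stouffer combination of study-level z-scores rather than to ALE cluster maps, so it illustrates the logic of the algorithm, not the authors' actual implementation; all numbers and names are illustrative.

```python
# A minimal sketch of the noise-study loop: the real tool operates on ALE
# maps at the cluster level; here the same add-noise-until-nonsignificant
# logic is applied to a Stouffer combination of study z-scores.
import numpy as np

def stouffer_z(z_scores):
    """Pooled z-statistic under equal weights (Stouffer's method)."""
    z = np.asarray(z_scores, dtype=float)
    return z.sum() / np.sqrt(z.size)

def noise_studies_until_nonsignificant(z_scores, z_crit=1.645,
                                       seed=0, max_noise=100_000):
    """Append N(0,1) 'noise' studies until the pooled z drops below z_crit."""
    rng = np.random.default_rng(seed)
    z = list(z_scores)
    n_noise = 0
    while stouffer_z(z) >= z_crit and n_noise < max_noise:
        z.append(rng.normal())   # a simulated study with no true effect
        n_noise += 1
    return n_noise

observed = [2.1, 1.8, 2.6, 1.4, 2.9]   # hypothetical study-level z-scores
print(noise_studies_until_nonsignificant(observed), "noise studies needed")
```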


2021 ◽  
pp. 263208432199622
Author(s):  
Tim Mathes ◽  
Oliver Kuss

Background: Meta-analysis of systematically reviewed studies on interventions is the cornerstone of evidence-based medicine. In the following, we introduce the common-beta beta-binomial (BB) model for meta-analysis with binary outcomes and elucidate its equivalence to panel count data models.
Methods: We present a variation of the standard “common-rho” BB model (BBST) for meta-analysis, namely a “common-beta” BB model. This model has an interesting connection to fixed-effect negative binomial regression models (FE-NegBin) for panel count data. Using this equivalence, it is possible to estimate an extension of the FE-NegBin with an additional multiplicative overdispersion term (RE-NegBin) while preserving a closed-form likelihood. An advantage of the connection to econometric models is that the models are easy to implement, because “standard” statistical software for panel count data can be used. We illustrate the methods with two real-world example datasets. Furthermore, we show the results of a small-scale simulation study that compares the new models to the BBST. The input parameters of the simulation were informed by actually performed meta-analyses.
Results: In both example datasets, the NegBin models, in particular the RE-NegBin, showed a smaller effect and had narrower 95%-confidence intervals. In our simulation study, median bias was negligible for all methods, but the upper quartile for median bias suggested that BBST is most affected by positive bias. Regarding coverage probability, BBST and the RE-NegBin model outperformed the FE-NegBin model.
Conclusion: For meta-analyses with binary outcomes, the considered common-beta BB models may be valuable extensions to the family of BB models.
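Because the “common-beta” BB likelihood coincides with a fixed-effect negative binomial for panel count data, off-the-shelf count-model routines can fit it; the conditional FE-NegBin of Hausman, Hall, and Griliches is available, for example, as Stata's xtnbreg. The sketch below approximates the panel view in Python with a dummy-variable negative binomial in statsmodels: each meta-analysis study contributes one row per arm, events are the counts, and group size enters as exposure. The data are invented for illustration, and the dummy-variable fit is a stand-in for the conditional likelihood, not the paper's exact model.

```python
# A rough sketch of the panel-count view of binary-outcome meta-analysis:
# study fixed effects via dummies, log(group size) as an offset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy meta-analysis: events and group sizes per arm in four studies
# (all numbers invented for illustration)
data = pd.DataFrame({
    "study":  [1, 1, 2, 2, 3, 3, 4, 4],
    "arm":    [1, 0, 1, 0, 1, 0, 1, 0],   # 1 = treatment, 0 = control
    "events": [12, 20, 5, 9, 30, 42, 8, 15],
    "total":  [100, 100, 60, 62, 250, 248, 90, 88],
})

model = smf.negativebinomial("events ~ arm + C(study)", data=data,
                             exposure=data["total"])
fit = model.fit(disp=False)
print(np.exp(fit.params["arm"]))   # pooled rate ratio, treatment vs control
```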


2021 ◽  
Author(s):  
Matt Tincani ◽  
Jason C Travers

Questionable research practices (QRPs) are a variety of research choices that introduce bias into the body of scientific literature. Researchers have documented the widespread presence of QRPs across disciplines and promoted practices aimed at preventing them. More recently, Single-Case Experimental Design (SCED) researchers have explored how QRPs could manifest in SCED research. In this chapter, we describe QRPs in participant selection, independent variable selection, procedural fidelity documentation, graphical depictions of behavior, and effect size measures and statistics. We also discuss QRPs in relation to the file drawer effect, publication bias, and meta-analyses of SCED research. We provide recommendations for researchers and the research community to promote practices for preventing QRPs in SCED.


2020 ◽  
Author(s):  
Sergio Guerra Garcia ◽  
Andrea Spadoni ◽  
Jennifer Mitchell ◽  
Irina A. Strigo

Abstract
Molecular mechanisms of the interaction between pain and the reward associated with pain relief in the human brain are still incompletely understood. This is partially due to the invasive nature of the available techniques to visualize and measure metabolic activity. Positron Emission Tomography (PET) radioligand studies are still the only modality available to date that allows for the investigation of these molecular mechanisms in the human brain. For pain and reward studies, the most commonly used PET radiotracers are [11C]-carfentanil (CFN) and [11C]- or [18F]-diprenorphine (DPN), which bind to opioid receptors, and [11C]-raclopride (RAC) and [18F]-fallypride (FAL), which bind to dopamine receptors. The current meta-analysis examines 15 pain-related studies using opioid radioligands and 8 studies using dopamine radioligands in an effort to consolidate the available data into the most likely activated regions. Our primary goal was to identify regions of shared opioid/dopamine neurotransmission during pain-related experiences. Seed-based d mapping (SDM) analysis of previously published voxel coordinate data showed that opioidergic activations were strongest in the bilateral caudate, thalamus, right putamen, cingulate gyrus, midbrain, inferior frontal gyrus, and left superior temporal gyrus. The dopaminergic studies showed the highest activations in the bilateral caudate, thalamus, right putamen, cingulate gyrus, and left putamen. We observed a clear overlap between opioid and dopamine activations in a majority of regions during pain-related processing, though there were some areas unique to dopaminergic activation, such as the left putamen. Regions unique to opioidergic activation included the midbrain, inferior frontal gyrus, and left superior temporal gyrus. By investigating the regions of dopaminergic and opioidergic activation, we can potentially provide more targeted treatment to these sets of receptors in patients with pain conditions. These findings could eventually assist in the development of more targeted medication to treat pain conditions while preventing physical dependency.


2012 ◽  
Vol 65 (2) ◽  
pp. 221-249 ◽  
Author(s):  
Dan R. Dalton ◽  
Herman Aguinis ◽  
Catherine M. Dalton ◽  
Frank A. Bosco ◽  
Charles A. Pierce

1987 ◽  
Vol 11 (2) ◽  
pp. 233-242 ◽  
Author(s):  
Barbara Sommer

The file drawer problem refers to a publication bias for positive results, which leads to studies that support the null hypothesis being relegated to the file drawer. The assumption is that researchers are unable to publish studies with nonsignificant findings. A survey of investigators studying the menstrual cycle showed this assumption to be unwarranted. Much of the research did not lend itself to a hypothesis-testing model. A more important contributor to the likelihood of publication was research productivity: researchers whose first study was published were more likely to have continued their work.


1997 ◽  
Vol 85 (2) ◽  
pp. 719-722 ◽  
Author(s):  
M. T. Bradley ◽  
R. D. Gupta

Although meta-analysis appears to be a useful technique to verify the existence of an effect and to summarize large bodies of literature, there are problems associated with its use and interpretation. Among these difficulties is the “file drawer problem”: a certain percentage of studies are not published, or are not available to be included in any given meta-analysis. We present a cautionary table to quantify the magnitude of this problem. The table shows that distortions exaggerating the effect size are substantial, and that the exaggeration is strongest when the true effect size approaches zero. A meta-analysis could be very misleading were the true effect size close to zero.
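The direction of the distortion is easy to reproduce in a small simulation: if only studies that cross a significance threshold are published, the mean published effect overstates the truth, and most severely (in relative terms) when the true effect is near zero. The sketch below uses a crude normal approximation for two-group standardized effects; the sample sizes and threshold are illustrative, not those behind the published table.

```python
# Illustrative simulation of file-drawer exaggeration (assumed parameters;
# the normal approximation to the two-sample test is deliberately crude).
import numpy as np

rng = np.random.default_rng(42)
n, n_studies = 30, 100_000   # per-group sample size, simulated studies

for true_d in [0.0, 0.1, 0.3, 0.5]:
    se = np.sqrt(2 / n)                 # approximate SE of Cohen's d
    d_obs = rng.normal(true_d, se, size=n_studies)
    z = d_obs / se                      # approximate two-sample z-statistic
    published = d_obs[z > 1.96]         # only "significant" positive results survive
    print(f"true d = {true_d:.1f} -> mean published d = {published.mean():.2f}")
```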

